Metaprogramming – writing programs that write or manipulate other programs (or themselves) as their data, or that do part of the work at compile time that would otherwise be done at runtime
Template metaprogramming – metaprogramming methods in which a compiler uses templates to generate temporary source code, which is merged by the compiler with the rest of the source code and then compiled
Reflective programming – metaprogramming methods in which a program modifies or extends itself
Pipeline programming – a simple syntax change that adds syntax for nesting function calls in a language originally designed with none
Rule-based programming – a network of rules of thumb that comprise a knowledge base and can be used for expert systems and problem deduction & resolution
Visual programming – manipulating program elements graphically rather than by specifying them textually (e.g. Simulink); also termed diagrammatic programming
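As an illustrative sketch of reflective programming in particular (not taken from the source), a Go program can inspect and modify its own data at run time through the standard reflect package:

package main

import (
	"fmt"
	"reflect"
)

func main() {
	x := 3
	v := reflect.ValueOf(&x).Elem() // reflect on x itself (addressable via its pointer)

	fmt.Println(v.Kind()) // the program examines its own data: prints "int"
	v.SetInt(42)          // ...and modifies it through the reflection API
	fmt.Println(x)        // prints 42
}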
== Overview ==
Programming paradigms come from computer science research into existing practices of software development. The findings allow for describing and comparing programming practices and the languages used to code programs. For perspective, other fields of research study software engineering processes and describe various methodologies for describing and comparing them.
A programming language can be described in terms of paradigms. Some languages support only one paradigm. For example, Smalltalk supports object-oriented and Haskell supports functional. Most languages support multiple paradigms. For example, a program written in C++, Object Pascal, or PHP can be purely procedural, purely object-oriented, or can contain aspects of both paradigms, or others.
When using a language that supports multiple paradigms, the developer chooses which paradigm elements to use. But, this choice may not involve considering paradigms per se. The developer often uses the features of a language as the language provides them and to the extent that the developer knows them. Categorizing the resulting code by paradigm is often an academic activity done in retrospect.
Languages categorized as imperative have two main features: they state the order in which operations occur, with constructs that explicitly control that order, and they allow side effects, in which state can be modified at one point in time, within one unit of code, and then later read at a different point in time inside a different unit of code. The communication between the units of code is not explicit.
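A minimal Go sketch (added for illustration) of both features: the statements run in the stated order, and a side effect set in one function is later read in another.

package main

import "fmt"

var counter int // shared state: the side effect lives here

func increment() { counter++ } // modifies state at one point in time...

func report() { fmt.Println(counter) } // ...which is later read in a different unit of code

func main() {
	increment() // explicit constructs control the order of operations
	increment()
	report() // prints 2; the communication through counter is not explicit
}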
In contrast, languages in the declarative paradigm do not state the order in which to execute operations. Instead, they supply a number of available operations in the system, along with the conditions under which each is allowed to execute. The implementation of the language's execution model tracks which operations are free to execute and chooses the order independently. More at Comparison of multi-paradigm programming languages.
In object-oriented programming, code is organized into objects that contain state that is owned by and (usually) controlled by the code of the object. Most object-oriented languages are also imperative languages.
In object-oriented programming, programs are treated as a set of interacting objects. In functional programming, programs are treated as a sequence of stateless function evaluations. When programming computers or systems with many processors, in process-oriented programming, programs are treated as sets of concurrent processes that act on a logical shared data structures.
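The first two styles can be sketched in Go (an illustrative example, not from the source): a method mutates state owned by its object, while a pure function simply maps inputs to outputs.

package main

import "fmt"

// Object-oriented style: the Counter owns and controls its state.
type Counter struct{ n int }

func (c *Counter) Inc() { c.n++ }

// Functional style: a stateless evaluation with no side effects.
func square(x int) int { return x * x }

func main() {
	c := Counter{}
	c.Inc()
	fmt.Println(c.n, square(4)) // prints: 1 16
}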
Many programming paradigms are as well known for the techniques they forbid as for those they support. For instance, pure functional programming disallows side-effects, while structured programming disallows the goto construct. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to older ones. Yet, avoiding certain techniques can make it easier to understand program behavior, and to prove theorems about program correctness.
Programming paradigms can also be compared with programming models, which allows invoking an execution model by using only an API. Programming models can also be classified into paradigms based on features of the execution model.
For parallel computing, using a programming model instead of a language is common. The reason is that details of the parallel hardware leak into the abstractions used to program the hardware. This causes the programmer to have to map patterns in the algorithm onto patterns in the execution model (which have been inserted due to leakage of hardware into the abstraction). As a consequence, no one parallel programming language maps well to all computation problems. Thus, it is more convenient to use a base sequential language and insert API calls to parallel execution models via a programming model. Such parallel programming models can be classified according to abstractions that reflect the hardware, such as shared memory, distributed memory with message passing, notions of place visible in the code, and so forth. These can be considered flavors of programming paradigm that apply to only parallel languages and programming models.
== Criticism ==
Some programming language researchers criticise the notion of paradigms as a classification of programming languages, e.g. Harper and Krishnamurthi. They argue that many programming languages cannot be strictly classified into one paradigm, but rather include features from several paradigms. See Comparison of multi-paradigm programming languages.
== History ==
Different approaches to programming have developed over time. Each approach was sometimes classified at the time it was first developed, but often not until some time later, retrospectively. An early approach consciously identified as such is structured programming, advocated since the mid-1960s. The concept of a programming paradigm as such dates at least to 1978, in the Turing Award lecture of Robert W. Floyd, entitled The Paradigms of Programming, which cites the notion of paradigm as used by Thomas Kuhn in his The Structure of Scientific Revolutions (1962). Early programming languages did not have clearly defined programming paradigms, and programs sometimes made extensive use of goto statements. Liberal use of goto led to spaghetti code, which is difficult to understand and maintain. This led to the development of structured programming paradigms that disallowed the use of goto statements, allowing only more structured programming constructs.
== Languages and paradigms ==
=== Machine code ===
Machine code is the lowest level of computer programming, as it consists of machine instructions that define behavior at the lowest level of abstraction possible for a computer. As the most prescriptive way to code, it is classified as imperative.
It is sometimes called the first-generation programming language.
=== Assembly ===
Assembly language introduced mnemonics for machine instructions and memory addresses. Assembly is classified as imperative and is sometimes called the second-generation programming language.
In the 1960s, assembly languages were developed to support library COPY and quite sophisticated conditional macro generation and preprocessing abilities, CALL to subroutine, external variables and common sections (globals), enabling significant code re-use and isolation from hardware specifics via the use of logical operators such as READ/WRITE/GET/PUT. Assembly was, and still is, used for time-critical systems and often in embedded systems as it gives the most control of what the machine does.
=== Procedural languages ===
Procedural languages, also called third-generation programming languages, were the first described as high-level languages. They support vocabulary related to the problem being solved. For example,
COmmon Business Oriented Language (COBOL) – uses terms like file, move and copy.
FORmula TRANslation (FORTRAN) – using mathematical language terminology, it was developed mainly for scientific and engineering problems.
ALGOrithmic Language (ALGOL) – focused on being an appropriate language to define algorithms, while using mathematical language terminology, targeting scientific and engineering problems, just like FORTRAN.
Programming Language One (PL/I) – a hybrid commercial-scientific general purpose language supporting pointers.
Beginner's All-purpose Symbolic Instruction Code (BASIC) – developed to enable more people to write programs.
C – a general-purpose programming language, initially developed by Dennis Ritchie between 1969 and 1973 at AT&T Bell Labs.
These languages are classified as belonging to the procedural paradigm. They directly control the step-by-step process that a computer program follows. The efficacy and efficiency of such a program are therefore highly dependent on the programmer's skill.
=== Object-oriented programming ===
In an attempt to improve on procedural languages, object-oriented programming (OOP) languages were created, such as Simula, Smalltalk, C++, Eiffel, Python, PHP, Java, and C#. In these languages, data and the methods to manipulate the data are in the same code unit, called an object. This encapsulation ensures that the only way to access an object's data is via the methods of the object that contains the data. Thus, an object's inner workings may be changed without affecting code that uses the object.
Alexander Stepanov, Richard Stallman, and other programmers have raised controversy concerning the efficacy of the OOP paradigm versus the procedural paradigm. The need for every object to have associative methods leads some skeptics to associate OOP with software bloat; an attempt to resolve this dilemma came through polymorphism.
Although most OOP languages are third-generation, it is possible to create an object-oriented assembler language. High Level Assembly (HLA) is an example of this that fully supports advanced data types and object-oriented assembly language programming – despite its early origins. Thus, differing programming paradigms can be seen rather like motivational memes of their advocates, rather than necessarily representing progress from one level to the next. Precise comparisons of competing paradigms' efficacy are frequently made more difficult because of new and differing terminology applied to similar entities and processes together with numerous implementation distinctions across languages.
=== Declarative languages ===
A declarative program describes what the problem is, not how to solve it. The program is structured as a set of properties to find in the expected result, not as a procedure to follow. Given a database or a set of rules, the computer tries to find a solution matching all the desired properties. Archetypes of declarative languages are the fourth-generation language SQL, the family of functional languages, and logic programming.
Functional programming is a subset of declarative programming. Programs written using this paradigm use functions, blocks of code intended to behave like mathematical functions. Functional languages discourage changes in the value of variables through assignment, making a great deal of use of recursion instead.
The logic programming paradigm views computation as automated reasoning over a body of knowledge. Facts about the problem domain are expressed as logic formulas, and programs are executed by applying inference rules over them until an answer to the problem is found, or the set of formulas is proved inconsistent.
=== Other paradigms ===
Symbolic programming is a paradigm that describes programs able to manipulate formulas and program components as data. Programs can thus effectively modify themselves, and appear to "learn", making them suited for applications such as artificial intelligence, expert systems, natural-language processing and computer games. Languages that support this paradigm include Lisp and Prolog.
Differentiable programming structures programs so that they can be differentiated throughout, usually via automatic differentiation.
Literate programming, as a form of imperative programming, structures programs as a human-centered web, as in a hypertext essay: documentation is integral to the program, and the program is structured following the logic of prose exposition, rather than compiler convenience.
Symbolic programming techniques such as reflective programming (reflection), which allow a program to refer to itself, might also be considered as a programming paradigm. However, this is compatible with the major paradigms and thus is not a real paradigm in its own right.
== See also ==
== References ==
== External links ==
Classification of the principal programming paradigms
How programming paradigms evolve and get adopted?
A programming tool or software development tool is a computer program that is used to develop another computer program, usually by helping the developer manage computer files. For example, a programmer may use a tool called a source code editor to edit source code files, and then a compiler to convert the source code into machine code files. They may also use build tools that automatically package executable program and data files into shareable packages or install kits.
A set of tools that are run one after another, with each tool feeding its output to the next one, is called a toolchain. An integrated development environment (IDE) integrates the function of several tools into a single program. Usually, an IDE provides a source code editor as well as other built-in or plug-in tools that help with compiling, debugging, and testing.
Whether a program is considered a development tool can be subjective. Some programs, such as the GNU compiler collection, are used exclusively for software development while others, such as Notepad, are not meant specifically for development but are nevertheless often used for programming.
== Categories ==
Notable categories of development tools:
Assembler – Converts assembly language into machine code
Bug tracking system – Software application that records software bugs
Build automation – Building software in an unattended fashion
Code review software – Software that supports one or more people checking a program's code
Compiler – Computer program which translates code from one programming language to another
Compiler-compiler – Program that generates parsers or compilers, a.k.a. parser generator
Debugger – Computer program used to test and debug other programs
Decompiler – Program translating executable to source code
Disassembler – Computer program to translate machine language into assembly language
Documentation generator – Automation technology for creating software documentation
Graphical user interface builder – Software development tool
Linker – Program that combines intermediate build files into an executable file
Memory debugger – Software memory problem finder
Minifier – Removal of unnecessary characters in code without changing its functionality
Pretty-printer – Formatting to make code or markup easier to read
Performance profiler – Measuring the time or resources used by a section of a computer program
Static code analyzer – Analysis of computer programs without executing them
Source code editor – Text editor specializing in software code
Source code generation – Type of computer programming
Version control system – Stores and tracks versions of files
== See also ==
Call graph – Structure in computing
Comparison of integrated development environments – Notable software packages that are nominal IDE
Computer aided software engineering – Domain of software tools
Git – Distributed version control software system
GitHub – Software development collaboration platform
Lint – Tool to flag poor computer code
List of software engineering topics – Overview of and topical guide to software engineering
List of unit testing frameworks
Manual memory management – Computer memory management methodology
Memory leak – When a computer program fails to release unnecessary memory
Reverse-engineering – Process of extracting design information from anything artificial
Revision Control System – Version-control system
Software development kit – Set of software development tools
Software engineering – Engineering approach to software development
SourceForge – Software discovery and hosting platform for B2B and open source software
SWIG – Open-source programming tool
Toolkits for User Innovation – Design method
Valgrind – Programming tool for profiling, memory debugging and memory leak detection
== References ==
== External links ==
Media related to Programming tools at Wikimedia Commons
Go is a high-level, general-purpose programming language that is statically typed and compiled. It is known for the simplicity of its syntax and the efficiency of development that it enables through a large standard library supplying many needs for common projects. It was designed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson, and publicly announced in November 2009. It is syntactically similar to C, but also has memory safety, garbage collection, structural typing, and CSP-style concurrency. It is often referred to as Golang to avoid ambiguity and because of its former domain name, golang.org, but its proper name is Go.
There are two major implementations:
The original, self-hosting compiler toolchain, initially developed inside Google;
A frontend written in C++, called gofrontend, originally a GCC frontend, providing gccgo, a GCC-based Go compiler; later extended to also support LLVM, providing an LLVM-based Go compiler called gollvm.
A third-party source-to-source compiler, GopherJS, transpiles Go to JavaScript for front-end web development.
== History ==
Go was designed at Google in 2007 to improve programming productivity in an era of multicore, networked machines and large codebases. The designers wanted to address criticisms of other languages in use at Google, but keep their useful characteristics:
Static typing and run-time efficiency (like C)
Readability and usability (like Python)
High-performance networking and multiprocessing
Its designers were primarily motivated by their shared dislike of C++.
Go was publicly announced in November 2009, and version 1.0 was released in March 2012. Go is widely used in production at Google and in many other organizations and open-source projects.
In retrospect the Go authors judged Go to be successful due to the overall engineering work around the language, including the runtime support for the language's concurrency feature.
Although the design of most languages concentrates on innovations in syntax, semantics, or typing, Go is focused on the software development process itself. ... The principal unusual property of the language itself—concurrency—addressed problems that arose with the proliferation of multicore CPUs in the 2010s. But more significant was the early work that established fundamentals for packaging, dependencies, build, test, deployment, and other workaday tasks of the software development world, aspects that are not usually foremost in language design.
=== Branding and styling ===
The Gopher mascot was introduced in 2009 for the open source launch of the language. The design, by Renée French, borrowed from a c. 2000 WFMU promotion.
In November 2016, the Go and Go Mono fonts were released by type designers Charles Bigelow and Kris Holmes specifically for use by the Go project. Go is a humanist sans-serif resembling Lucida Grande, and Go Mono is monospaced. Both fonts adhere to the WGL4 character set and were designed to be legible with a large x-height and distinct letterforms. Both Go and Go Mono adhere to the DIN 1450 standard by having a slashed zero, lowercase l with a tail, and an uppercase I with serifs.
In April 2018, the original logo was redesigned by brand designer Adam Smith. The new logo is a modern, stylized GO slanting right with trailing streamlines. (The Gopher mascot remained the same.)
=== Generics ===
The lack of support for generic programming in initial versions of Go drew considerable criticism. The designers expressed an openness to generic programming and noted that built-in functions were in fact type-generic, but are treated as special cases; Pike called this a weakness that might be changed at some point. The Google team built at least one compiler for an experimental Go dialect with generics, but did not release it.
In August 2018, the Go principal contributors published draft designs for generic programming and error handling and asked users to submit feedback. However, the error handling proposal was eventually abandoned.
In June 2020, a new draft design document was published that would add the necessary syntax to Go for declaring generic functions and types. A code translation tool, go2go, was provided to allow users to try the new syntax, along with a generics-enabled version of the online Go Playground.
Generics were finally added to Go in version 1.18 on March 15, 2022.
=== Versioning ===
Go 1 guarantees compatibility for the language specification and major parts of the standard library. All versions up through the current Go 1.24 release have maintained this promise.
Go uses a go1.[major].[patch] versioning format, such as go1.24.0, and each major Go release is supported until there are two newer major releases. Unlike most software, Go calls the second number in a version the major, i.e., in go1.24.0 the 24 is the major version. This is because Go plans to never reach 2.0, prioritizing backwards compatibility over potential breaking changes.
== Design ==
Go is influenced by C (especially the Plan 9 dialect), but with an emphasis on greater simplicity and safety. It consists of:
A syntax and environment adopting patterns more common in dynamic languages:
Optional concise variable declaration and initialization through type inference (x := 0 instead of var x int = 0; or var x = 0;)
Fast compilation
Remote package management (go get) and online package documentation
Distinctive approaches to particular problems:
Built-in concurrency primitives: light-weight processes (goroutines), channels, and the select statement
An interface system in place of virtual inheritance, and type embedding instead of non-virtual inheritance
A toolchain that, by default, produces statically linked native binaries without external Go dependencies
A desire to keep the language specification simple enough to hold in a programmer's head, in part by omitting features that are common in similar languages.
=== Syntax ===
Go's syntax includes changes from C aimed at keeping code concise and readable. A combined declaration/initialization operator was introduced that allows the programmer to write i := 3 or s := "Hello, world!", without specifying the types of variables used. This contrasts with C's int i = 3; and const char *s = "Hello, world!";. Go also removes the requirement to use parentheses in if statement conditions.
Semicolons still terminate statements, but are implicit when the end of a line occurs.
Methods may return multiple values, and returning a result, err pair is the conventional way a method indicates an error to its caller in Go. Go adds literal syntaxes for initializing struct parameters by name and for initializing maps and slices. As an alternative to C's three-statement for loop, Go's range expressions allow concise iteration over arrays, slices, strings, maps, and channels.
fmt.Println("Hello World!") is a statement.
In Go, statements are separated by ending a line (hitting the Enter key) or by a semicolon ";".
Hitting the Enter key adds ";" to the end of the line implicitly (does not show up in the source code).
The left curly bracket { cannot come at the start of a line.
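A short illustrative snippet (not from the article) combining these points: short variable declarations, an if without parentheses, a result, err pair, and braces that stay on the statement's line.

package main

import (
	"fmt"
	"strconv"
)

func main() {
	s := "Hello, world!"         // type inferred; no var or explicit type needed
	n, err := strconv.Atoi("42") // conventional result, err pair
	if err != nil {              // no parentheses around the condition
		fmt.Println("bad number:", err)
		return
	}
	fmt.Println(s, n) // each line end implicitly supplies the terminating ";"
}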
=== Types ===
Go has a number of built-in types, including numeric ones (byte, int64, float32, etc.), Booleans, and byte strings (string). Strings are immutable; built-in operators and keywords (rather than functions) provide concatenation, comparison, and UTF-8 encoding/decoding. Record types can be defined with the struct keyword.
For each type T and each non-negative integer constant n, there is an array type denoted [n]T; arrays of differing lengths are thus of different types. Dynamic arrays are available as "slices", denoted []T for some type T. These have a length and a capacity specifying when new memory needs to be allocated to expand the array. Several slices may share their underlying memory.
Pointers are available for all types, and the pointer-to-T type is denoted *T. Address-taking and indirection use the & and * operators, as in C, or happen implicitly through the method call or attribute access syntax. There is no pointer arithmetic, except via the special unsafe.Pointer type in the standard library.
For a pair of types K, V, the type map[K]V is the type mapping type-K keys to type-V values, though the Go Programming Language specification does not give any performance guarantees or implementation requirements for map types. Hash tables are built into the language, with special syntax and built-in functions. chan T is a channel that allows sending values of type T between concurrent Go processes.
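For illustration (not from the article), declarations of these built-in composite types might look like the following sketch.

package main

import "fmt"

type point struct{ x, y int } // record type defined with the struct keyword

func main() {
	var a [3]int                  // array type [3]int; its length is part of the type
	s := []int{1, 2, 3}           // slice with length 3 and some capacity
	m := map[string]int{"one": 1} // map from string keys to int values
	ch := make(chan int, 1)       // channel carrying int values, with a buffer of 1

	ch <- a[0] + s[1] + m["one"]   // 0 + 2 + 1
	fmt.Println(point{1, 2}, <-ch) // prints {1 2} 3
}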
Aside from its support for interfaces, Go's type system is nominal: the type keyword can be used to define a new named type, which is distinct from other named types that have the same layout (in the case of a struct, the same members in the same order). Some conversions between types (e.g., between the various integer types) are pre-defined and adding a new type may define additional conversions, but conversions between named types must always be invoked explicitly. For example, the type keyword can be used to define a type for IPv4 addresses, based on 32-bit unsigned integers as follows:
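The definition referred to is essentially a one-line named type (a sketch; the article's exact snippet may differ):

// ipv4addr is a distinct named type whose underlying type is uint32.
type ipv4addr uint32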
With this type definition, ipv4addr(x) interprets the uint32 value x as an IP address. Simply assigning x to a variable of type ipv4addr is a type error.
Constant expressions may be either typed or "untyped"; they are given a type when assigned to a typed variable if the value they represent passes a compile-time check.
Function types are indicated by the func keyword; they take zero or more parameters and return zero or more values, all of which are typed. The parameter and return values determine a function type; thus, func(string, int32) (int, error) is the type of functions that take a string and a 32-bit signed integer, and return a signed integer (of default width) and a value of the built-in interface type error.
Any named type has a method set associated with it. The IP address example above can be extended with a method for checking whether its value is a known standard:
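A sketch of such a method (the method name and the address being checked are illustrative):

// ZeroBroadcast reports whether the address is the limited broadcast address
// 255.255.255.255. The receiver type makes this a method on ipv4addr, not on uint32.
func (addr ipv4addr) ZeroBroadcast() bool {
	return addr == 0xFFFFFFFF
}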
Due to nominal typing, this method definition adds a method to ipv4addr, but not on uint32. While methods have special definition and call syntax, there is no distinct method type.
==== Interface system ====
Go provides two features that replace class inheritance.
The first is embedding, which can be viewed as an automated form of composition.
The second is its interfaces, which provide runtime polymorphism.: 266  Interfaces are a class of types and provide a limited form of structural typing in the otherwise nominal type system of Go. An object which is of an interface type is also of another type, much like C++ objects being simultaneously of a base and derived class. The design of Go interfaces was inspired by protocols from the Smalltalk programming language. Multiple sources use the term duck typing when describing Go interfaces. Although the term duck typing is not precisely defined and therefore not wrong, it usually implies that type conformance is not statically checked. Because conformance to a Go interface is checked statically by the Go compiler (except when performing a type assertion), the Go authors prefer the term structural typing.
The definition of an interface type lists required methods by name and type. Any object of type T for which functions exist matching all the required methods of interface type I is an object of type I as well. The definition of type T need not (and cannot) identify type I. For example, if Shape, Square and Circle are defined as
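a sketch along the following lines (the article's exact code may differ slightly),

package shapes

import "math"

// Shape lists the required method by name and type; nothing names its implementors.
type Shape interface {
	Area() float64
}

type Square struct { // no "implements" declaration
	side float64
}

func (s Square) Area() float64 { return s.side * s.side }

type Circle struct { // no "implements" declaration here either
	radius float64
}

func (c Circle) Area() float64 { return math.Pi * c.radius * c.radius }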
then both a Square and a Circle are implicitly a Shape and can be assigned to a Shape-typed variable.: 263–268 In formal language, Go's interface system provides structural rather than nominal typing. Interfaces can embed other interfaces with the effect of creating a combined interface that is satisfied by exactly the types that implement the embedded interface and any methods that the newly defined interface adds.: 270
The Go standard library uses interfaces to provide genericity in several places, including the input/output system that is based on the concepts of Reader and Writer.: 282–283
Besides calling methods via interfaces, Go allows converting interface values to other types with a run-time type check. The language constructs to do so are the type assertion, which checks against a single potential type, and the type switch, which checks against multiple types.
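Illustrative forms of both constructs (the function and variable names are placeholders, not from the article):

package main

import "fmt"

func describe(v interface{}) {
	// Type assertion: checks against a single potential type.
	if s, ok := v.(string); ok {
		fmt.Println("a string of length", len(s))
		return
	}
	// Type switch: checks against multiple types.
	switch x := v.(type) {
	case int:
		fmt.Println("an int:", x)
	case bool:
		fmt.Println("a bool:", x)
	default:
		fmt.Println("some other type")
	}
}

func main() {
	describe("hi")
	describe(42)
}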
The empty interface interface{} is an important base case because it can refer to an item of any concrete type. It is similar to the Object class in Java or C# and is satisfied by any type, including built-in types like int.: 284  Code using the empty interface cannot simply call methods (or built-in operators) on the referred-to object, but it can store the interface{} value, try to convert it to a more useful type via a type assertion or type switch, or inspect it with Go's reflect package. Because interface{} can refer to any value, it is a limited way to escape the restrictions of static typing, like void* in C but with additional run-time type checks.
The interface{} type can be used to model structured data of any arbitrary schema in Go, such as JSON or YAML data, by representing it as a map[string]interface{} (map of string to empty interface). This recursively describes data in the form of a dictionary with string keys and values of any type.
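For illustration (not from the article), arbitrary JSON can be decoded into such a map with the standard encoding/json package:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var doc map[string]interface{}
	raw := []byte(`{"name": "gopher", "tags": ["go", "json"], "age": 3}`)

	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	// Values come back as interface{}: strings, float64 numbers, nested slices and maps.
	fmt.Println(doc["name"], doc["age"])
}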
Interface values are implemented using a pointer to data and a second pointer to run-time type information. Like some other types implemented using pointers in Go, interface values are nil if uninitialized.
==== Generic code using parameterized types ====
Since version 1.18, Go supports generic code using parameterized types.
Functions and types now have the ability to be generic using type parameters. These type parameters are specified within square brackets, right after the function or type name. The compiler transforms the generic function or type into a non-generic one by substituting type arguments for the type parameters, supplied either explicitly by the user or via type inference by the compiler. This transformation process is referred to as type instantiation.
Interfaces can now define a set of types (known as a type set) using the | (union) operator, as well as a set of methods. These changes were made to support type constraints in generic code. For a generic function or type, a constraint can be thought of as the type of the type argument: a meta-type. The new ~T syntax is the first use of ~ as a token in Go; ~T means the set of all types whose underlying type is T.
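An illustrative generic function and constraint (the names Number and Sum are placeholders): the interface is used as a type set, and the function is instantiated for a particular element type either explicitly or by inference.

package main

import "fmt"

// Number is a type set: any type whose underlying type is int64 or float64.
type Number interface {
	~int64 | ~float64
}

// Sum has one type parameter T, declared in square brackets after the name.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]float64{1.5, 2.5}))     // T inferred as float64
	fmt.Println(Sum[int64]([]int64{1, 2, 3})) // explicit instantiation
}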
==== Enumerated types ====
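The section heading presumably refers to Go's idiom of const blocks with iota; a typical sketch (not from the article) looks like this.

package main

import "fmt"

type Weekday int

// iota increments within the const block, giving successive enumerated values.
const (
	Sunday Weekday = iota // 0
	Monday                // 1
	Tuesday               // 2
)

func main() {
	fmt.Println(Sunday, Monday, Tuesday) // prints 0 1 2
}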
=== Package system ===
In Go's package system, each package has a path (e.g., "compress/bzip2" or "golang.org/x/net/html") and a name (e.g., bzip2 or html). By default, references to another package's definitions must be prefixed with that package's name. However, the name used can be changed at the import site, and a package imported as _ is loaded only for its side effects and is not referenced at all. Only the capitalized names from other packages are accessible: io.Reader is public but bzip2.reader is not. The go get command can retrieve packages stored in a remote repository, and developers are encouraged to develop packages inside a base path corresponding to a source repository (such as example.com/user_name/package_name) to reduce the likelihood of name collision with future additions to the standard library or other external libraries.
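The import forms mentioned above can be sketched as follows (an illustrative example using only standard-library packages).

package main

import (
	"fmt"              // referenced with its package name: fmt.Println
	rnd "math/rand"    // renamed at the import site: rnd.Intn
	_ "net/http/pprof" // blank import: loaded only for its side effects
)

func main() {
	// Intn is exported (capitalized), so it is accessible from this package.
	fmt.Println(rnd.Intn(10))
}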
=== Concurrency: goroutines and channels ===
The Go language has built-in facilities, as well as library support, for writing concurrent programs. The runtime is asynchronous: program execution that performs for example a network read will be suspended until data is available to process, allowing other parts of the program to perform other work. This is built into the runtime and does not require any changes in program code. The go runtime also automatically schedules concurrent operations (goroutines) across multiple CPUs; this can achieve parallelism for a properly written program.
The primary concurrency construct is the goroutine, a type of green thread.: 280–281  A function call prefixed with the go keyword starts a function in a new goroutine. The language specification does not specify how goroutines should be implemented, but current implementations multiplex a Go process's goroutines onto a smaller set of operating-system threads, similar to the scheduling performed in Erlang and Haskell's GHC runtime implementation.: 10
While a standard library package featuring most of the classical concurrency control structures (mutex locks, etc.) is available,: 151–152 idiomatic concurrent programs instead prefer channels, which send messages between goroutines. Optional buffers store messages in FIFO order: 43 and allow sending goroutines to proceed before their messages are received.: 233
Channels are typed, so that a channel of type chan T can only be used to transfer messages of type T. Special syntax is used to operate on them; <-ch is an expression that causes the executing goroutine to block until a value comes in over the channel ch, while ch <- x sends the value x (possibly blocking until another goroutine receives the value). The built-in switch-like select statement can be used to implement non-blocking communication on multiple channels; see below for an example. Go has a memory model describing how goroutines must use channels or other operations to safely share data.
The existence of channels does not by itself set Go apart from actor model-style concurrent languages like Erlang, where messages are addressed directly to actors (corresponding to goroutines). In the actor model, channels are themselves actors, therefore addressing a channel just means to address an actor. The actor style can be simulated in Go by maintaining a one-to-one correspondence between goroutines and channels, but the language allows multiple goroutines to share a channel or a single goroutine to send and receive on multiple channels.: 147
From these tools one can build concurrent constructs like worker pools, pipelines (in which, say, a file is decompressed and parsed as it downloads), background calls with timeout, "fan-out" parallel calls to a set of services, and others. Channels have also found uses further from the usual notion of interprocess communication, like serving as a concurrency-safe list of recycled buffers, implementing coroutines (which helped inspire the name goroutine), and implementing iterators.
Concurrency-related structural conventions of Go (channels and alternative channel inputs) are derived from Tony Hoare's communicating sequential processes model. Unlike previous concurrent programming languages such as Occam or Limbo (a language on which Go co-designer Rob Pike worked), Go does not provide any built-in notion of safe or verifiable concurrency. While the communicating-processes model is favored in Go, it is not the only one: all goroutines in a program share a single address space. This means that mutable objects and pointers can be shared between goroutines; see § Lack of data race safety, below.
==== Suitability for parallel programming ====
Although Go's concurrency features are not aimed primarily at parallel processing, they can be used to program shared-memory multi-processor machines. Various studies have been done into the effectiveness of this approach. One of these studies compared the size (in lines of code) and speed of programs written by a seasoned programmer not familiar with the language and corrections to these programs by a Go expert (from Google's development team), doing the same for Chapel, Cilk and Intel TBB. The study found that the non-expert tended to write divide-and-conquer algorithms with one go statement per recursion, while the expert wrote distribute-work-synchronize programs using one goroutine per processor core. The expert's programs were usually faster, but also longer.
==== Lack of data race safety ====
Go's approach to concurrency can be summarized as "don't communicate by sharing memory; share memory by communicating". There are no restrictions on how goroutines access shared data, making data races possible. Specifically, unless a program explicitly synchronizes via channels or other means, writes from one goroutine might be partly, entirely, or not at all visible to another, often with no guarantees about ordering of writes. Furthermore, Go's internal data structures like interface values, slice headers, hash tables, and string headers are not immune to data races, so type and memory safety can be violated in multithreaded programs that modify shared instances of those types without synchronization. Instead of language support, safe concurrent programming thus relies on conventions; for example, Chisnall recommends an idiom called "aliases xor mutable", meaning that passing a mutable value (or pointer) over a channel signals a transfer of ownership over the value to its receiver.: 155  The gc toolchain has included an optional data race detector since version 1.1 that can check for unsynchronized access to shared memory at runtime; additionally, since version 1.6 of the gc runtime, a best-effort race detector for accesses to the map data type is included by default.
=== Binaries ===
The linker in the gc toolchain creates statically linked binaries by default; therefore all Go binaries include the Go runtime.
=== Omissions ===
Go deliberately omits certain features common in other languages, including (implementation) inheritance, assertions, pointer arithmetic, implicit type conversions, untagged unions, and tagged unions. The designers added only those facilities that all three agreed on.
Of the omitted language features, the designers explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language, encouraging instead the use of interfaces to achieve dynamic dispatch and composition to reuse code. Composition and delegation are in fact largely automated by struct embedding; according to researchers Schmager et al., this feature "has many of the drawbacks of inheritance: it affects the public interface of objects, it is not fine-grained (i.e., no method-level control over embedding), methods of embedded objects cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse it to the extent that programmers in other languages are reputed to overuse inheritance.
Exception handling was initially omitted in Go due to lack of a "design that gives value proportionate to the complexity". An exception-like panic/recover mechanism that avoids the usual try-catch control structure was proposed and released in the March 30, 2010 snapshot. The Go authors advise using it for unrecoverable errors such as those that should halt an entire program or server request, or as a shortcut to propagate errors up the stack within a package. Across package boundaries, Go includes a canonical error type, and multi-value returns using this type are the standard idiom.
== Style ==
The Go authors put substantial effort into influencing the style of Go programs:
Indentation, spacing, and other surface-level details of code are automatically standardized by the gofmt tool. It uses tabs for indentation and blanks for alignment. Alignment assumes that an editor is using a fixed-width font. golint does additional style checks automatically, but has been deprecated and archived by the Go maintainers.
Tools and libraries distributed with Go suggest standard approaches to things like API documentation (godoc), testing (go test), building (go build), package management (go get), and so on.
Go enforces rules that are recommendations in other languages, for example banning cyclic dependencies, unused variables or imports, and implicit type conversions.
The omission of certain features (for example, functional-programming shortcuts like map and Java-style try/finally blocks) tends to encourage a particular explicit, concrete, and imperative programming style.
On day one the Go team published a collection of Go idioms, and later also collected code review comments, talks, and official blog posts to teach Go style and coding philosophy.
== Tools ==
The main Go distribution includes tools for building, testing, and analyzing code:
go build, which builds Go binaries using only information in the source files themselves, no separate makefiles
go test, for unit testing and microbenchmarks as well as fuzzing
go fmt, for formatting code
go install, for retrieving and installing remote packages
go vet, a static analyzer looking for potential errors in code
go run, a shortcut for building and executing code
go doc, for displaying documentation
go generate, a standard way to invoke code generators
go mod, for creating a new module, adding dependencies, upgrading dependencies, etc.
go tool, for invoking developer tools (added in Go version 1.24)
It also includes profiling and debugging support, fuzzing capabilities to detect bugs, runtime instrumentation (for example, to track garbage collection pauses), and a data race detector.
Another tool maintained by the Go team but not included in Go distributions is gopls, a language server that provides IDE features such as intelligent code completion to editors compatible with the Language Server Protocol.
An ecosystem of third-party tools adds to the standard distribution, such as gocode, which enables code autocompletion in many text editors, goimports, which automatically adds/removes package imports as needed, and errcheck, which detects code that might unintentionally ignore errors.
== Examples ==
=== Hello world ===
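A minimal program of this kind, using the fmt.Println statement shown earlier:

package main

import "fmt"

func main() {
	fmt.Println("Hello World!")
}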
where "fmt" is the package for formatted I/O, similar to C's C file input/output.
=== Concurrency ===
The following simple program demonstrates Go's concurrency features to implement an asynchronous program. It launches two lightweight threads ("goroutines"): one waits for the user to type some text, while the other implements a timeout. The select statement waits for either of these goroutines to send a message to the main routine, and acts on the first message to arrive (example adapted from David Chisnall's book).: 152
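A sketch consistent with that description (the five-second timeout and the names are illustrative, not the book's original code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"time"
)

func main() {
	input := make(chan string)
	timeout := make(chan bool)

	// Goroutine 1: wait for the user to type a line of text.
	go func() {
		line, _ := bufio.NewReader(os.Stdin).ReadString('\n')
		input <- line
	}()

	// Goroutine 2: implement a timeout.
	go func() {
		time.Sleep(5 * time.Second)
		timeout <- true
	}()

	// Act on whichever message arrives first.
	select {
	case line := <-input:
		fmt.Print("You typed: ", line)
	case <-timeout:
		fmt.Println("Timed out.")
	}
}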
=== Testing ===
The testing package provides support for automated testing of Go packages. Target function example:
Test code (note that an assert keyword is missing in Go; tests live in <filename>_test.go in the same package):
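A minimal sketch of such a pair, with illustrative names rather than the article's originals: a target function in one file and its test in a companion _test.go file, both in the same package.

// add.go — the target function
package adder

// Add returns the sum of two integers.
func Add(a, b int) int {
	return a + b
}

// add_test.go — the test; the file name must end in _test.go
package adder

import "testing"

// TestAdd reports failures through *testing.T, since Go has no assert keyword.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d, want 5", got)
	}
}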
It is possible to run tests in parallel.
=== Web app ===
The net/http package provides support for creating web applications.
This example would show "Hello world!" when localhost:8080 is visited.
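A sketch matching that description (the handler name is illustrative):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello world!")
}

func main() {
	http.HandleFunc("/", helloHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}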
== Applications ==
Go has found widespread adoption in various domains due to its robust standard library and ease of use.
Popular applications include: Caddy, a web server that automates the process of setting up HTTPS, Docker, which provides a platform for containerization, aiming to ease the complexities of software development and deployment, Kubernetes, which automates the deployment, scaling, and management of containerized applications, CockroachDB, a distributed SQL database engineered for scalability and strong consistency, and Hugo, a static site generator that prioritizes speed and flexibility, allowing developers to create websites efficiently.
== Reception ==
The interface system, and the deliberate omission of inheritance, were praised by Michele Simionato, who likened these characteristics to those of Standard ML, calling it "a shame that no popular language has followed [this] particular route".
Dave Astels at Engine Yard wrote in 2009:
Go is extremely easy to dive into. There are a minimal number of fundamental language concepts and the syntax is clean and designed to be clear and unambiguous.
Go is still experimental and still a little rough around the edges.
Go was named Programming Language of the Year by the TIOBE Programming Community Index in its first year, 2009, for having a larger 12-month increase in popularity (in only 2 months, after its introduction in November) than any other language that year, and reached 13th place by January 2010, surpassing established languages like Pascal. By June 2015, its ranking had dropped to below 50th in the index, placing it lower than COBOL and Fortran. But as of January 2017, its ranking had surged to 13th, indicating significant growth in popularity and adoption. Go was again awarded TIOBE Programming Language of the Year in 2016.
Bruce Eckel has stated:
The complexity of C++ (even more complexity has been added in the new C++), and the resulting impact on productivity, is no longer justified. All the hoops that the C++ programmer had to jump through in order to use a C-compatible language make no sense anymore -- they're just a waste of time and effort. Go makes much more sense for the class of problems that C++ was originally intended to solve.
A 2011 evaluation of the language and its gc implementation in comparison to C++ (GCC), Java and Scala by a Google engineer found:
Go offers interesting language features, which also allow for a concise and standardized notation. The compilers for this language are still immature, which reflects in both performance and binary sizes.
The evaluation got a rebuttal from the Go development team. Ian Lance Taylor, who had improved the Go code for Hundt's paper, had not been aware of the intention to publish his code, and says that his version was "never intended to be an example of idiomatic or efficient Go"; Russ Cox then optimized the Go code, as well as the C++ code, and got the Go code to run almost as fast as the C++ version and more than an order of magnitude faster than the code in the paper.
Go's nil combined with the lack of algebraic types leads to difficulty handling failures and base cases.
Go does not allow an opening brace to appear on its own line, which forces all Go programmers to use the same brace style.
Go has been criticized for focusing on simplicity of implementation rather than correctness and flexibility; as an example, the language uses POSIX file semantics on all platforms, and therefore provides incorrect information on platforms such as Windows (which do not follow the aforementioned standard).
A study showed that it is as easy to make concurrency bugs with message passing as with shared memory, sometimes even more.
== Naming dispute ==
On November 10, 2009, the day of the general release of the language, Francis McCabe, developer of the Go! programming language (note the exclamation point), requested a name change of Google's language to prevent confusion with his language, which he had spent 10 years developing. McCabe raised concerns that "the 'big guy' will end up steam-rollering over" him, and this concern resonated with the more than 120 developers who commented on Google's official issues thread saying they should change the name, with some even saying the issue contradicts Google's motto of: Don't be evil.
On October 12, 2010, the filed public issue ticket was closed by Google developer Russ Cox (@rsc) with the custom status "Unfortunate" accompanied by the following comment: "There are many computing products and services named Go. In the 11 months since our release, there has been minimal confusion of the two languages."
== See also ==
Fat pointer
Comparison of programming languages
== Notes ==
== References ==
== Further reading ==
== External links ==
Official website
Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize (minimize or maximize) a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming.
"Programming" in this context refers to a formal procedure for solving mathematical problems. This usage dates to the 1940s and is not specifically tied to the more recent notion of "computer programming." To avoid confusion, some practitioners prefer the term "optimization" — e.g., "quadratic optimization."
== Problem formulation ==
The quadratic programming problem with n variables and m constraints can be formulated as follows.
Given:
a real-valued, n-dimensional vector c,
an n×n-dimensional real symmetric matrix Q,
an m×n-dimensional real matrix A, and
an m-dimensional real vector b,
the objective of quadratic programming is to find an n-dimensional vector x that will
$$ \text{minimize}\quad \tfrac{1}{2}\mathbf{x}^{\top}Q\mathbf{x}+\mathbf{c}^{\top}\mathbf{x} \qquad \text{subject to}\quad A\mathbf{x}\preceq \mathbf{b}, $$
where xT denotes the vector transpose of x, and the notation Ax ⪯ b means that every entry of the vector Ax is less than or equal to the corresponding entry of the vector b (component-wise inequality).
=== Constrained least squares ===
As a special case when Q is symmetric positive-definite, the cost function reduces to least squares:
$$ \text{minimize}\quad \tfrac{1}{2}\lVert R\mathbf{x}-\mathbf{d}\rVert^{2} \qquad \text{subject to}\quad A\mathbf{x}\preceq \mathbf{b}, $$
where Q = RTR follows from the Cholesky decomposition of Q and c = −RT d. Conversely, any such constrained least squares program can be equivalently framed as a quadratic programming problem, even for a generic non-square R matrix.
=== Generalizations ===
When minimizing a function f in the neighborhood of some reference point x0, Q is set to its Hessian matrix H(f(x0)) and c is set to its gradient ∇f(x0). A related programming problem, quadratically constrained quadratic programming, can be posed by adding quadratic constraints on the variables.
== Solution methods ==
For general problems a variety of methods are commonly used, including
interior point,
active set,
augmented Lagrangian,
conjugate gradient,
gradient projection,
extensions of the simplex algorithm.
In the case in which Q is positive definite, the problem is a special case of the more general field of convex optimization.
=== Equality constraints ===
Quadratic programming is particularly simple when Q is positive definite and there are only equality constraints; specifically, the solution process is linear. By using Lagrange multipliers and seeking the extremum of the Lagrangian, it may be readily shown that the solution to the equality constrained problem
{\displaystyle {\text{Minimize}}\quad {\tfrac {1}{2}}\mathbf {x} ^{\mathrm {T} }Q\mathbf {x} +\mathbf {c} ^{\mathrm {T} }\mathbf {x} }
{\displaystyle {\text{subject to}}\quad E\mathbf {x} =\mathbf {d} }
is given by the linear system
{\displaystyle {\begin{bmatrix}Q&E^{\top }\\E&0\end{bmatrix}}{\begin{bmatrix}\mathbf {x} \\\lambda \end{bmatrix}}={\begin{bmatrix}-\mathbf {c} \\\mathbf {d} \end{bmatrix}}}
where λ is a set of Lagrange multipliers which come out of the solution alongside x.
The easiest means of approaching this system is direct solution (for example, LU factorization), which for small problems is very practical. For large problems, the system poses some unusual difficulties, most notably that the problem is never positive definite (even if Q is), making it potentially very difficult to find a good numeric approach, and there are many approaches to choose from, depending on the problem.
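A minimal sketch of the direct approach, assuming small dense data: form the KKT block matrix above and hand it to a dense LU-based solver. The values of Q, c, E, and d are illustrative only.

```python
# Direct solution of the equality-constrained QP via the KKT system,
# using numpy.linalg.solve (an LU-based dense solver).
import numpy as np

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c = np.array([1.0, -1.0])
E = np.array([[1.0, 1.0]])        # single constraint: x1 + x2 = 1
d = np.array([1.0])

n, m = Q.shape[0], E.shape[0]
kkt = np.block([[Q, E.T],
                [E, np.zeros((m, m))]])
rhs = np.concatenate([-c, d])

sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
print("x =", x, "lambda =", lam)
```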
If the constraints don't couple the variables too tightly, a relatively simple attack is to change the variables so that constraints are unconditionally satisfied. For example, suppose d = 0 (generalizing to nonzero is straightforward). Looking at the constraint equations:
{\displaystyle E\mathbf {x} =0}
introduce a new variable y defined by
{\displaystyle Z\mathbf {y} =\mathbf {x} }
where y has dimension of x minus the number of constraints. Then
{\displaystyle EZ\mathbf {y} =\mathbf {0} }
and if Z is chosen so that EZ = 0 the constraint equation will be always satisfied. Finding such Z entails finding the null space of E, which is more or less simple depending on the structure of E. Substituting into the quadratic form gives an unconstrained minimization problem:
{\displaystyle {\tfrac {1}{2}}\mathbf {x} ^{\top }Q\mathbf {x} +\mathbf {c} ^{\top }\mathbf {x} \quad \implies \quad {\tfrac {1}{2}}\mathbf {y} ^{\top }Z^{\top }QZ\mathbf {y} +\left(Z^{\top }\mathbf {c} \right)^{\top }\mathbf {y} }
the solution of which is given by:
{\displaystyle Z^{\top }QZ\mathbf {y} =-Z^{\top }\mathbf {c} }
Under certain conditions on Q, the reduced matrix ZᵀQZ will be positive definite. It is possible to write a variation on the conjugate gradient method which avoids the explicit calculation of Z.
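A sketch of the null-space approach just described, assuming SciPy is available: scipy.linalg.null_space supplies a matrix Z with EZ = 0, the reduced system ZᵀQZ y = −Zᵀc is solved, and x = Zy then satisfies the constraints by construction. The data are toy values with d = 0, as in the text.

```python
# Null-space method for the equality-constrained QP with d = 0.
import numpy as np
from scipy.linalg import null_space

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c = np.array([1.0, -1.0])
E = np.array([[1.0, 1.0]])         # constraint Ex = 0

Z = null_space(E)                  # columns span the null space of E, so EZ = 0
y = np.linalg.solve(Z.T @ Q @ Z, -Z.T @ c)
x = Z @ y                          # satisfies Ex = 0 by construction

print("x =", x, "Ex =", E @ x)
```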
== Lagrangian duality ==
The Lagrangian dual of a quadratic programming problem is also a quadratic programming problem. To see this let us focus on the case where c = 0 and Q is positive definite. We write the Lagrangian function as
{\displaystyle L(x,\lambda )={\tfrac {1}{2}}x^{\top }Qx+\lambda ^{\top }(Ax-b).}
Defining the (Lagrangian) dual function g(λ) as
{\displaystyle g(\lambda )=\inf _{x}L(x,\lambda )}
, we find an infimum of L, using
{\displaystyle \nabla _{x}L(x,\lambda )=0}
and positive-definiteness of Q:
{\displaystyle x^{*}=-Q^{-1}A^{\top }\lambda .}
Hence the dual function is
{\displaystyle g(\lambda )=-{\tfrac {1}{2}}\lambda ^{\top }AQ^{-1}A^{\top }\lambda -\lambda ^{\top }b,}
and so the Lagrangian dual of the quadratic programming problem is
{\displaystyle {\text{maximize}}_{\lambda \geq 0}\quad -{\tfrac {1}{2}}\lambda ^{\top }AQ^{-1}A^{\top }\lambda -\lambda ^{\top }b.}
Besides the Lagrangian duality theory, there are other duality pairings (e.g. Wolfe, etc.).
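As a small numerical illustration of the derivation above (with c = 0 and Q positive definite), the sketch below evaluates the closed-form dual function at an arbitrary λ ≥ 0 and checks weak duality against the primal objective at a feasible point; all data are made up for the example.

```python
# Weak-duality check for the dual function g(lambda) = -0.5*l'AQ^{-1}A'l - l'b
# in the c = 0, Q positive definite case. Data are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])                    # positive definite
A = rng.standard_normal((3, 2))
x_feas = rng.standard_normal(2)
b = A @ x_feas + 0.1                          # makes x_feas strictly feasible (Ax < b)

lam = np.abs(rng.standard_normal(3))          # any lambda >= 0
Qinv = np.linalg.inv(Q)
g = -0.5 * lam @ A @ Qinv @ A.T @ lam - lam @ b
primal = 0.5 * x_feas @ Q @ x_feas            # primal objective with c = 0

assert g <= primal                            # weak duality
print("dual value:", g, "primal value at feasible point:", primal)
```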
== Run-time complexity ==
=== Convex quadratic programming ===
For positive definite Q, when the problem is convex, the ellipsoid method solves the problem in (weakly) polynomial time.
Ye and Tse present a polynomial-time algorithm, which extends Karmarkar's algorithm from linear programming to convex quadratic programming. On a system with n variables and L input bits, their algorithm requires O(Ln) iterations, each of which can be done using O(Ln^3) arithmetic operations, for a total runtime complexity of O(L^2 n^4).
Kapoor and Vaidya present another algorithm, which requires O(L · log L · n^3.67 · log n) arithmetic operations.
=== Non-convex quadratic programming ===
If Q is indefinite (so the problem is non-convex), then the problem is NP-hard. A simple way to see this is to consider the non-convex quadratic
|
https://en.wikipedia.org/wiki/Quadratic_programming
|
constraint xᵢ² = xᵢ. This constraint is equivalent to requiring that xᵢ is in {0,1}, that is, xᵢ is a binary integer variable. Therefore, such constraints can be used to model any integer program with binary variables, which is known to be NP-hard.
Moreover, these non-convex problems might have several stationary points and local minima. In fact, even if Q has only one negative eigenvalue, the problem is (strongly) NP-hard.
Moreover, finding a KKT point of a non-convex quadratic program is CLS-hard.
== Mixed-integer quadratic programming ==
There are some situations where one or more elements of the vector x will need to take on integer values. This leads to the formulation of a mixed-integer quadratic programming (MIQP) problem. Applications of MIQP include water resources and the construction of index funds.
== Solvers and scripting (programming) languages ==
== Extensions ==
Polynomial optimization is a more general framework,
|
https://en.wikipedia.org/wiki/Quadratic_programming
|
in which the constraints can be polynomial functions of any degree, not only 2.
== See also ==
Sequential quadratic programming
Linear programming
Critical line method
== References ==
== Further reading ==
Cottle, Richard W.; Pang, Jong-Shi; Stone, Richard E. (1992). The linear complementarity problem. Computer Science and Scientific Computing. Boston, MA: Academic Press, Inc. pp. xxiv+762 pp. ISBN 978-0-12-192350-1. MR 1150683.
Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 978-0-7167-1045-5. A6: MP2, pg.245.
Gould, Nicholas I. M.; Toint, Philippe L. (2000). "A Quadratic Programming Bibliography" (PDF). RAL Numerical Analysis Group Internal Report 2000-1.
== External links ==
A page about quadratic programming
NEOS Optimization Guide: Quadratic Programming
Quadratic Programming Archived 2023-04-08 at the Wayback Machine
|
https://en.wikipedia.org/wiki/Quadratic_programming
|
Cubic programming and beyond, in Operations Research stack exchange
|
https://en.wikipedia.org/wiki/Quadratic_programming
|
Constraint programming (CP) is a paradigm for solving combinatorial problems that draws on a wide range of techniques from artificial intelligence, computer science, and operations research. In constraint programming, users declaratively state the constraints on the feasible solutions for a set of decision variables. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found. In addition to constraints, users also need to specify a method to solve these constraints. This typically draws upon standard methods like chronological backtracking and constraint propagation, but may use customized code like a problem-specific branching heuristic.
Constraint programming takes its root from and can be expressed in the form of constraint logic programming, which embeds constraints into a logic program. This variant of logic programming is due to Jaffar and
|
https://en.wikipedia.org/wiki/Constraint_programming
|
Lassez, who extended in 1987 a specific class of constraints that were introduced in Prolog II. The first implementations of constraint logic programming were Prolog III, CLP(R), and CHIP.
Instead of logic programming, constraints can be mixed with functional programming, term rewriting, and imperative languages.
Programming languages with built-in support for constraints include Oz (functional programming) and Kaleidoscope (imperative programming). Mostly, constraints are implemented in imperative languages via constraint solving toolkits, which are separate libraries for an existing imperative language.
== Constraint logic programming ==
Constraint programming is an embedding of constraints in a host language. The first host languages used were logic programming languages, so the field was initially called constraint logic programming. The two paradigms share many important features, like logical variables and backtracking. Today
|
https://en.wikipedia.org/wiki/Constraint_programming
|
most Prolog implementations include one or more libraries for constraint logic programming.
The difference between the two is largely in their styles and approaches to modeling the world. Some problems are more natural (and thus, simpler) to write as logic programs, while some are more natural to write as constraint programs.
The constraint programming approach is to search for a state of the world in which a large number of constraints are satisfied at the same time. A problem is typically stated as a state of the world containing a number of unknown variables. The constraint program searches for values for all the variables.
Temporal concurrent constraint programming (TCC) and non-deterministic temporal concurrent constraint programming (MJV) are variants of constraint programming that can deal with time.
== Constraint satisfaction problem ==
A constraint is a relation between multiple variables that limits the values these variables can take simultaneously.
|
https://en.wikipedia.org/wiki/Constraint_programming
|
Three categories of constraints exist:
extensional constraints: constraints are defined by enumerating the set of values that would satisfy them;
arithmetic constraints: constraints are defined by an arithmetic expression, i.e., using <, >, ≤, ≥, =, ≠, ...;
logical constraints: constraints are defined with an explicit semantics, i.e., AllDifferent, AtMost,...
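The three categories can be illustrated with a small model. The sketch below assumes the third-party python-constraint package (any finite-domain toolkit would do): the extensional constraint is an explicit table of allowed pairs, the arithmetic constraint is an ordinary comparison, and AllDifferent plays the role of a logical (global) constraint.

```python
# Sketch of the three constraint categories, assuming the python-constraint package.
from constraint import Problem, AllDifferentConstraint

problem = Problem()
problem.addVariables(["x", "y", "z"], range(1, 5))

# Extensional constraint: enumerate the pairs (x, y) that satisfy it.
allowed_pairs = {(1, 2), (2, 3), (3, 4)}
problem.addConstraint(lambda x, y: (x, y) in allowed_pairs, ["x", "y"])

# Arithmetic constraint: an ordinary comparison, y < z.
problem.addConstraint(lambda y, z: y < z, ["y", "z"])

# Logical (global) constraint: all variables take pairwise different values.
problem.addConstraint(AllDifferentConstraint())

print(problem.getSolutions())
```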
Assignment is the association of a variable to a value from its domain. A partial assignment is when a subset of the variables of the problem has been assigned. A total assignment is when all the variables of the problem have been assigned.
During the search of the solutions of a CSP, a user can wish for:
|
https://en.wikipedia.org/wiki/Constraint_programming
|
finding a solution (satisfying all the constraints);
finding all the solutions of the problem;
proving the unsatisfiability of the problem.
== Constraint optimization problem ==
A constraint optimization problem (COP) is a constraint satisfaction problem associated to an objective function.
An optimal solution to a minimization (maximization) COP is a solution that minimizes (maximizes) the value of the objective function.
During the search of the solutions of a COP, a user can wish for:
finding a solution (satisfying all the constraints);
finding the best solution with respect to the objective;
proving the optimality of the best found solution;
proving the unsatisfiability of the problem.
== Perturbation vs refinement models ==
Languages for constraint-based programming follow one of two approaches:
Refinement model: variables in the problem are initially unassigned, and each variable is assumed to be able to contain
|
https://en.wikipedia.org/wiki/Constraint_programming
|
any value included in its range or domain. As computation progresses, values in the domain of a variable are pruned if they are shown to be incompatible with the possible values of other variables, until a single value is found for each variable.
Perturbation model: variables in the problem are assigned a single initial value. At different times one or more variables receive perturbations (changes to their old value), and the system propagates the change trying to assign new values to other variables that are consistent with the perturbation.
Constraint propagation in constraint satisfaction problems is a typical example of a refinement model, and formula evaluation in spreadsheets is a typical example of a perturbation model.
The refinement model is more general: as it does not restrict variables to have a single value, it can lead to several solutions to the same problem. However, the perturbation model is more intuitive for programmers using mixed imperative constraint object-oriented languages.
|
https://en.wikipedia.org/wiki/Constraint_programming
|
== Domains ==
The constraints used in constraint programming are typically over some specific domains. Some popular domains for constraint programming are:
Boolean domains, where only true/false constraints apply (SAT problem)
integer domains, rational domains
interval domains, in particular for scheduling problems
linear domains, where only linear functions are described and analyzed (although approaches to non-linear problems do exist)
finite domains, where constraints are defined over finite sets
mixed domains, involving two or more of the above
Finite domains is one of the most successful domains of constraint programming. In some areas (like operations research) constraint programming is often identified with constraint programming over finite domains.
== Constraint propagation ==
Local consistency conditions are properties of constraint satisfaction problems related to the consistency of subsets of variables or constraints.
|
https://en.wikipedia.org/wiki/Constraint_programming
|
They can be used to reduce the search space and make the problem easier to solve. Various kinds of local consistency conditions are leveraged, including node consistency, arc consistency, and path consistency.
Every local consistency condition can be enforced by a transformation that changes the problem without changing its solutions. Such a transformation is called constraint propagation. Constraint propagation works by reducing domains of variables, strengthening constraints, or creating new ones. This leads to a reduction of the search space, making the problem easier to solve by some algorithms. Constraint propagation can also be used as an unsatisfiability checker, incomplete in general but complete in some particular cases.
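The following is a minimal sketch of propagation by domain reduction over binary constraints, in the spirit of arc consistency (AC-3); it is written from the description above rather than modeled on any particular solver, and the names are chosen for the example.

```python
# Domain reduction over binary constraints (AC-3 style), illustrative only.
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x that have no supporting value in y's domain."""
    removed = False
    for vx in list(domains[x]):
        if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False              # a domain became empty: unsatisfiable
            # re-examine arcs pointing at x
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Example: X < Y with domains {1, 2, 3}; propagation prunes X = 3 and Y = 1.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
constraints = {("X", "Y"): lambda a, b: a < b,
               ("Y", "X"): lambda a, b: b < a}
print(ac3(domains, constraints), domains)
```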
== Constraint solving ==
There are three main algorithmic techniques for solving constraint satisfaction problems: backtracking search, local search, and dynamic programming.
|
https://en.wikipedia.org/wiki/Constraint_programming
|
=== Backtracking search ===
Backtracking search is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
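A generic backtracking search can be sketched in a few lines; the version below, applied to the 4-queens problem, is purely illustrative and omits the variable- and value-ordering heuristics that real solvers rely on.

```python
# Generic backtracking search over a CSP: extend a partial assignment one
# variable at a time and abandon ("backtrack") any inconsistent candidate.
def backtrack(assignment, variables, domains, consistent):
    if len(assignment) == len(variables):
        return assignment                       # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if consistent(candidate):               # prune inconsistent candidates
            result = backtrack(candidate, variables, domains, consistent)
            if result is not None:
                return result
    return None                                 # no value works: backtrack

# Example: 4-queens, one queen per column; queens must not attack each other.
variables = [0, 1, 2, 3]
domains = {v: range(4) for v in variables}

def consistent(a):
    return all(a[i] != a[j] and abs(a[i] - a[j]) != abs(i - j)
               for i in a for j in a if i < j)

print(backtrack({}, variables, domains, consistent))
```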
=== Local search ===
Local search is an incomplete method for finding a solution to a problem. It is based on iteratively improving an assignment of the variables until all constraints are satisfied. In particular, local search algorithms typically modify the value of a variable in an assignment at each step. The new assignment is close to the previous one in the space of assignment, hence the name local search.
=== Dynamic programming ===
Dynamic programming is both a mathematical optimization method and a computer programming method.
|
https://en.wikipedia.org/wiki/Constraint_programming
|
It refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure.
== Example ==
The syntax for expressing constraints over finite domains depends on the host language. The classical alphametic puzzle SEND+MORE=MONEY can be solved by a Prolog program in constraint logic programming.
The interpreter creates a variable for each letter in the puzzle. The operator ins is used to specify the domains of these variables, so that they range over the set of values {0,1,2,3, ..., 9}. The constraints S#\=0 and M#\=0 mean that these two variables cannot take the value zero.
|
https://en.wikipedia.org/wiki/Constraint_programming
|
When the interpreter evaluates these constraints, it reduces the domains of these two variables by removing the value 0 from them. Then, the constraint all_different(Digits) is considered; it does not reduce any domain, so it is simply stored. The last constraint specifies that the digits assigned to the letters must be such that "SEND+MORE=MONEY" holds when each letter is replaced by its corresponding digit. From this constraint, the solver infers that M=1. All stored constraints involving variable M are awakened: in this case, constraint propagation on the all_different constraint removes value 1 from the domain of all the remaining variables. Constraint propagation may solve the problem by reducing all domains to a single value; it may prove that the problem has no solution by reducing a domain to the empty set; but it may also terminate without proving satisfiability or unsatisfiability.
|
https://en.wikipedia.org/wiki/Constraint_programming
|
The label literals are used to actually perform search for a solution.
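The Prolog listing itself is not reproduced in this extract; a comparable finite-domain model can be written in Python, assuming the third-party python-constraint package. The formulation below mirrors the description above: one variable per letter with domain 0–9, an all-different constraint, the leading-digit restrictions, and the arithmetic equation.

```python
# SEND + MORE = MONEY as a finite-domain model, assuming python-constraint.
from constraint import Problem, AllDifferentConstraint

problem = Problem()
problem.addVariables(list("SENDMORY"), range(10))   # one variable per distinct letter
problem.addConstraint(AllDifferentConstraint())
problem.addConstraint(lambda s, m: s != 0 and m != 0, ["S", "M"])
problem.addConstraint(
    lambda s, e, n, d, m, o, r, y:
        (1000 * s + 100 * e + 10 * n + d) +
        (1000 * m + 100 * o + 10 * r + e) ==
        (10000 * m + 1000 * o + 100 * n + 10 * e + y),
    list("SENDMORY"))

print(problem.getSolution())
```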
== See also ==
Combinatorial optimization
Concurrent constraint logic programming
Constraint logic programming
Heuristic algorithms
List of constraint programming languages
Mathematical optimization
Nurse scheduling problem
Regular constraint
Satisfiability modulo theories
Traveling tournament problem
== References ==
== External links ==
Association for Constraint Programming
CP Conference Series
Guide to Constraint Programming
The Mozart Programming System at archive.today (archived December 5, 2012), an Oz-based free software (X Window System style)
Cork Constraint Computation Centre at archive.today (archived January 7, 2013)
|
https://en.wikipedia.org/wiki/Constraint_programming
|
Software consists of computer programs that instruct the execution of a computer. Software also includes design documents and specifications.
The history of software is closely tied to the development of digital computers in the mid-20th century. Early programs were written in the machine language specific to the hardware. The introduction of high-level programming languages in 1958 allowed for more human-readable instructions, making software development easier and more portable across different computer architectures. Software in a programming language is run through a compiler or interpreter to execute on the architecture's hardware. Over time, software has become complex, owing to developments in networking, operating systems, and databases.
Software can generally be categorized into two main types:
operating systems, which manage hardware resources and provide services for applications
application software, which performs specific tasks for users
|
https://en.wikipedia.org/wiki/Software
|
The rise of cloud computing has introduced the new software delivery model Software as a Service (SaaS). In SaaS, applications are hosted by a provider and accessed over the Internet.
The process of developing software involves several stages. The stages include software design, programming, testing, release, and maintenance. Software quality assurance and security are critical aspects of software development, as bugs and security vulnerabilities can lead to system failures and security breaches. Additionally, legal issues such as software licenses and intellectual property rights play a significant role in the distribution of software products.
== History ==
The first use of the word software to describe computer programs is credited to mathematician John Wilder Tukey in 1958.
The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language. Machine language is difficult to debug and not portable across different computers.
|
https://en.wikipedia.org/wiki/Software
|
Initially, hardware resources were more expensive than human resources. As programs became complex, programmer productivity became the bottleneck. The introduction of high-level programming languages in 1958 hid the details of the hardware and expressed the underlying algorithms in code. Early languages include Fortran, Lisp, and COBOL.
== Types ==
There are two main types of software:
Operating systems are "the layer of software that manages a computer's resources for its users and their applications". There are three main purposes that an operating system fulfills:
Allocating resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory.
Providing an interface that abstracts the details of accessing hardware details (like physical memory) to make things easier for programmers.
Offering common services, such as an interface for accessing network and disk devices.
|
https://en.wikipedia.org/wiki/Software
|
This enables an application to be run on different hardware without needing to be rewritten.
Application software runs on top of the operating system and uses the computer's resources to perform a task. There are many different types of application software because the range of tasks that can be performed with modern computers is so large. Applications account for most software and require the environment provided by an operating system, and often other applications, in order to function.
Software can also be categorized by how it is deployed. Traditional applications are purchased with a perpetual license for a specific version of the software, downloaded, and run on hardware belonging to the purchaser. The rise of the Internet and cloud computing enabled a new model, software as a service (SaaS), in which the provider hosts the software (usually built on top of rented infrastructure or platforms) and provides the use of the
|
https://en.wikipedia.org/wiki/Software
|
software to customers, often in exchange for a subscription fee. By 2023, SaaS products, which are usually delivered via a web application, had become the primary method that companies deliver applications.
== Software development and maintenance ==
Software companies aim to deliver a high-quality product on time and under budget. A challenge is that software development effort estimation is often inaccurate. Software development begins by conceiving the project, evaluating its feasibility, analyzing the business requirements, and making a software design. Most software projects speed up their development by reusing or incorporating existing software, either in the form of commercial off-the-shelf (COTS) or open-source software. Software quality assurance is typically a combination of manual code review by other engineers and automated software testing. Due to time constraints, testing
|
https://en.wikipedia.org/wiki/Software
|
cannot cover all aspects of the software's intended functionality, so developers often focus on the most critical functionality. Formal methods are used in some safety-critical systems to prove the correctness of code, while user acceptance testing helps to ensure that the product meets customer expectations. There are a variety of software development methodologies, which vary from completing all steps in order to concurrent and iterative models. Software development is driven by requirements taken from prospective users, as opposed to maintenance, which is driven by events such as a change request.
Frequently, software is released in an incomplete state when the development team runs out of time or funding. Despite testing and quality assurance, virtually all software contains bugs where the system does not work as intended. Post-release software maintenance is necessary to remediate these bugs when they are found and keep the software working as the environment changes over time.
|
https://en.wikipedia.org/wiki/Software
|
New features are often added after the release. Over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market. As software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it. Over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost.
Completing a software project involves various forms of expertise, not just in software programmers but also testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising.
== Quality and security ==
Software quality is defined as meeting the stated requirements as well as customer expectations. Quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability,
|
https://en.wikipedia.org/wiki/Software
|
or the ease of modification. It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Software failures in safety-critical systems can be very serious, including death. By some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales. Despite developers' goal of delivering a product that works entirely as intended, virtually all software contains bugs.
The rise of the Internet also greatly increased the need for computer security as it enabled malicious actors to conduct cyberattacks remotely. If a bug creates a security risk, it is called a vulnerability. Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have
|
https://en.wikipedia.org/wiki/Software
|
not been patched are still liable for exploitation. Vulnerabilities vary in their ability to be exploited by malicious actors, and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. Although some vulnerabilities can only be used for denial of service attacks that compromise a system's availability, others allow the attacker to inject and run their own code (called malware), without the user being aware of it. To thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack. Despite efforts to ensure security, a significant fraction of computers are infected with malware.
== Encoding and execution ==
=== Programming languages ===
Programming languages are the format in which software is written. Since the 1950s, thousands of different programming languages have been invented; some have been in use for decades, while others
|
https://en.wikipedia.org/wiki/Software
|
have fallen into disuse. Some definitions classify machine code (the exact instructions directly implemented by the hardware) and assembly language (a more human-readable alternative to machine code whose statements can be translated one-to-one into machine code) as programming languages. Programs written in the high-level programming languages used to create software share a few main characteristics: knowledge of machine code is not necessary to write them, they can be ported to other computer systems, and they are more concise and human-readable than machine code. They must be both human-readable and capable of being translated into unambiguous instructions for computer hardware.
=== Compilation, interpretation, and execution ===
The invention of high-level programming languages was simultaneous with the compilers needed to translate them automatically into machine code. Most programs
|
https://en.wikipedia.org/wiki/Software
|
do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Once compiled, the program can be saved as an object file and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which makes interpreted programs 10 to 100 times slower than their compiled equivalents.
== Legal issues ==
=== Liability ===
Software is often released with the knowledge that it is incomplete or contains bugs. Purchasers knowingly buy it in this state, which has led to a legal regime where liability for software products is significantly curtailed compared to other products.
=== Licenses ===
Since the mid-1970s, software and its source code have been protected by copyright law
|
https://en.wikipedia.org/wiki/Software
|
that vests the owner with the exclusive right to copy the code. The underlying ideas or algorithms are not protected by copyright law, but are sometimes treated as a trade secret and concealed by such methods as non-disclosure agreements. A software copyright is often owned by the person or company that financed or made the software (depending on their contracts with employees or contractors who helped to write it). Some software is in the public domain and has no restrictions on who can use it, copy or share it, or modify it; a notable example is software written by the United States Government. Free and open-source software also allow free use, sharing, and modification, perhaps with a few specified conditions. The use of some software is governed by an agreement (software license) written by the copyright holder and imposed on the user. Proprietary software is usually sold under a
|
https://en.wikipedia.org/wiki/Software
|
restrictive license that limits its use and sharing. Some free software licenses require that modified versions must be released under the same license, which prevents the software from being sold
or distributed under proprietary restrictions.
=== Patents ===
Patents give an inventor an exclusive, time-limited license for a novel product or process. Ideas about what software could accomplish are not protected by law and concrete implementations are instead covered by copyright law. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid. Software patents have been historically controversial. Before the 1998 case State Street Bank & Trust Co. v. Signature Financial Group, Inc., software patents were generally not recognized in the United States. In that case, the Court of Appeals for the Federal Circuit decided that business processes could be patented.
|
https://en.wikipedia.org/wiki/Software
|
Patent applications are complex and costly, and lawsuits involving patents can drive up the cost of products. Unlike copyrights, patents generally only apply in the jurisdiction where they were issued.
== Impact ==
Engineer Capers Jones writes that "computers and software are making profound changes to every aspect of human life: education, work, warfare, entertainment, medicine, law, and everything else". It has become ubiquitous in everyday life in developed countries. In many cases, software augments the functionality of existing technologies such as household appliances and elevators. Software also spawned entirely new technologies such as the Internet, video games, mobile phones, and GPS. New methods of communication, including email, forums, blogs, microblogging, wikis, and social media, were enabled by the Internet. Massive amounts of knowledge exceeding any paper-based library are now available with a quick web search. Most creative professionals have switched to
|
https://en.wikipedia.org/wiki/Software
|
software-based tools such as computer-aided design, 3D modeling, digital image editing, and computer animation. Almost every complex device is controlled by software.
== References ==
=== Sources ===
|
https://en.wikipedia.org/wiki/Software
|
Programming productivity (also called software productivity or development productivity) describes the degree of the ability of individual programmers or development teams to build and evolve software systems. Productivity traditionally refers to the ratio between the quantity of software produced and the cost spent for it. Here the delicacy lies in finding a reasonable way to define software quantity.
== Terminology ==
Productivity is an important topic investigated in disciplines as various as manufacturing, organizational psychology, industrial engineering, strategic management, finance, accounting, marketing and economics. Levels of analysis include the individual, the group, divisional, organizational and national levels. Due to this diversity, there is no clear-cut definition of productivity and its influencing factors, although research has been conducted for more than a century. As in software engineering, this lack of common agreement on what actually constitutes productivity
|
https://en.wikipedia.org/wiki/Programming_productivity
|
is perceived as a major obstacle for a substantiated discussion of productivity. The following definitions describe the best consensus on the terminology.
=== Productivity ===
While there is no commonly agreed on definition of productivity, there appears to be an agreement that productivity describes the ratio between output and input:
Productivity = Output / Input
However, across the various disciplines different notions and, particularly, different measurement units for input and output can be found. The manufacturing industry typically uses a straightforward relation between the number of units produced and the number of units consumed. Non-manufacturing industries usually use man-hours or similar units to enable comparison between outputs and inputs.
One basic agreement is that the meaning of productivity and the means for measuring it vary depending on what context is under
|
https://en.wikipedia.org/wiki/Programming_productivity
|
evaluation. In a manufacturing company the possible contexts are:
the individual machine or manufacturing system;
the manufacturing function, for example assembly;
the manufacturing process for a single product or group of related products;
the factory; and
the company's entire factory system
As long as classical production processes are considered, a straightforward metric of productivity is simple: how many units of a product of specified quality are produced at what cost. For intellectual work, productivity is much trickier. How do we measure the productivity of authors, scientists, or engineers? Due to the rising importance of knowledge work (as opposed to manual work), many researchers tried to develop productivity measurement means that can be applied in a non-manufacturing context. It is commonly agreed that the nature of knowledge work fundamentally differs from manual work and, hence, factors besides the simple output/input ratio
|
https://en.wikipedia.org/wiki/Programming_productivity
|
need to be taken into account, e.g. quality, timeliness, autonomy, project success, customer satisfaction and innovation. However, the research communities in neither discipline have been able to establish broadly applicable and accepted means for productivity measurement yet. The same holds for the more specific area of programming productivity.
=== Profitability ===
Profitability and productivity are closely linked and are, in fact, often confused. However, profitability is usually defined as the ratio between revenue and cost:
Profitability = Revenue / Cost
It has a wider scope than productivity, i.e. the number of factors that influence profitability is greater than the number of factors that influence productivity. In particular, profitability can change without any change to productivity, e.g. due to external conditions like cost or price inflation. Besides that, the interdependency between productivity and profitability is usually delayed, i.e. gains in productivity are rarely
|
https://en.wikipedia.org/wiki/Programming_productivity
|