as in forming a tuple literal; as a whole, the results are then put on the left-hand side of the equal sign in an assignment statement. This statement expects an iterable object on the right-hand side of the equal sign to produce the same number of values as the writable expressions on the left-hand side; while iterating, the statement assigns each of the values produced on the right to the corresponding expression on the left.
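For instance, a minimal sketch of such an unpacking assignment (an illustration, not taken from the article):

point = (3, 4)        # the right-hand side may be any iterable
x, y = point          # x == 3, y == 4
x, y = y, x           # swap two variables without a temporary; x == 4, y == 3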
Python has a "string format" operator % that functions analogously to printf format strings in the C language—e.g. "spam=%s eggs=%d" % ("blah", 2) evaluates to "spam=blah eggs=2". In Python 2.6+ and 3+, this operator was supplemented by the format() method of the str class, e.g., "spam={0} eggs={1}".format("blah", 2). Python 3.6 added "f-strings": spam = "blah"; eggs = 2; f'spam={spam} eggs={eggs}'.
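The three mechanisms side by side (an illustrative sketch):

spam, eggs = "blah", 2
"spam=%s eggs=%d" % (spam, eggs)          # printf-style % operator
"spam={0} eggs={1}".format(spam, eggs)    # str.format (Python 2.6+ and 3+)
f"spam={spam} eggs={eggs}"                # f-string (Python 3.6+)
# each of the three expressions evaluates to 'spam=blah eggs=2'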
Strings in Python can be concatenated by "adding" them (using the same operator as for adding integers and floats); e.g., "spam" + "eggs" returns "spameggs". If strings contain numbers, they are concatenated as strings rather than as integers, e.g. "2" + "2" returns "22".
Python supports string literals in several ways:
Delimited by single or double quotation marks; single and double quotation marks have equivalent functionality (unlike in Unix shells, Perl, and Perl-influenced languages). Both marks use the backslash (\) as an escape character. String interpolation became available in Python 3.6 as "formatted string literals".
Triple-quoted, i.e., starting and ending with three single or double quotation marks; this may span multiple lines and function like here documents in shells, Perl, and Ruby.
Raw string varieties, denoted by prefixing the string literal with r. Escape sequences are not interpreted; hence raw strings are useful where literal backslashes are common, such as in regular expressions and Windows-style paths. (Compare "@-quoting" in C#.)
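The forms described above, shown together (an illustrative sketch, not from the article):

single = 'spam'
double = "spam"                       # single and double quotes are equivalent
interpolated = f"{single} and eggs"   # formatted string literal (Python 3.6+)
triple = """A string
spanning several lines"""             # triple-quoted, similar to a here document
raw = r"C:\new\folder"                # raw string: backslashes are not escape sequences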
Python has array index and array slicing expressions in lists, which are written as a[key], a[start:stop] or a[start:stop:step]. Indexes are zero-based, and negative indexes are relative to the end. Slices take elements from the start index up to, but not including, the stop index. The (optional) third slice parameter, called step or stride, allows elements to be skipped or reversed. Slice indexes may be omitted—for example, a[:] returns a copy of the entire list. Each element of a slice is a shallow copy.
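A few slicing expressions as a quick illustration (not from the article):

a = [10, 20, 30, 40, 50]
a[0]        # 10: indexes are zero-based
a[-1]       # 50: negative indexes count from the end
a[1:3]      # [20, 30]: from start up to, but not including, stop
a[::2]      # [10, 30, 50]: every second element
a[::-1]     # [50, 40, 30, 20, 10]: a negative step reverses
b = a[:]    # shallow copy of the whole list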
In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality, for example:
List comprehensions vs. for-loops
Conditional expressions vs. if blocks
The eval() vs. exec() built-in functions (in Python 2, exec is a statement); the former function is for expressions, while the latter is for statements
A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement.
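A small illustration of the duplicated functionality (my own sketch):

n = 3
squares = [x * x for x in range(5)]          # list comprehension: an expression

squares = []                                 # equivalent for-loop: statements
for x in range(5):
    squares.append(x * x)

parity = "even" if n % 2 == 0 else "odd"     # conditional expression instead of an if block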
=== Methods ===
Methods of objects are functions attached to the object's class; the syntax for normal methods and functions, instance.method(argument), is syntactic sugar for Class.method(instance, argument). Python methods have an explicit self parameter to access instance data, in contrast to the implicit self (or this) parameter in some object-oriented programming languages (e.g., C++, Java, Objective-C, Ruby). Python also provides methods, often called dunder methods (because their names begin and end with double underscores); these methods allow user-defined classes to modify how they are handled by native operations including length, comparison, arithmetic, and type conversion.
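Both points can be illustrated with a short sketch (not from the article): the explicit self parameter and a few dunder methods on a user-defined class.

class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __len__(self):           # customizes len(v)
        return 2

    def __eq__(self, other):     # customizes v == w
        return (self.x, self.y) == (other.x, other.y)

    def __add__(self, other):    # customizes v + w
        return Vector(self.x + other.x, self.y + other.y)

v, w = Vector(1, 2), Vector(3, 4)
v + w                    # normal syntax...
Vector.__add__(v, w)     # ...is sugar for the explicit class-level call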
=== Typing ===
Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them.
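A minimal illustration of this strong typing (my own sketch):

"2" + "2"     # '22': string concatenation
2 + 2         # 4: integer addition
try:
    "2" + 2                  # mixing str and int
except TypeError as exc:
    print(exc)               # Python raises rather than guessing a conversion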
Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection.
Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style.
Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Mypy also supports a Python compiler called mypyc, which leverages type annotations for optimization.
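A small sketch of an annotated function (illustrative; the annotation has no effect at run time, but an external checker such as mypy would flag the second call):

def double(x: int) -> int:
    return x * 2

double(3)       # 6
double("ab")    # 'abab': runs despite violating the annotation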
=== Arithmetic operations ===
Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0; it also offers the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively.
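A few of these operators in action (an illustrative sketch):

7 // 3       # 2: floor division
-7 // 3      # -3: floor division rounds toward negative infinity
4 % -3       # -2: the remainder takes the sign of the divisor
5 ** 3       # 125: exponentiation
9 ** 0.5     # 3.0
# the @ operator applies to objects that define matrix multiplication, such as NumPy arrays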
Division between integers produces floating-point results. The behavior of division has changed significantly over time:
The current version of Python (i.e., since 3.0) changed the / operator to always represent floating-point division, e.g., 5/2 == 2.5.
The floor division // operator was introduced. Thus 7//3 == 2, -7//3 == -3, 7.5//3 == 2.0, and -7.5//3 == -3.0. In the outdated Python 2.7, adding the from __future__ import division statement causes a module to use Python 3.0 rules for division instead (see above).
In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division.
Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. The rounding also implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative.
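The identity can be checked directly (an illustrative sketch):

a, b = 7, -3
a // b                       # -3
a % b                        # -2, lying in the interval (b, 0] because b is negative
b * (a // b) + a % b == a    # True for any combination of signs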
Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round to even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0.
Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c.
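For example (an illustrative sketch):

a, b, c = 1, 2, 3
a < b < c     # True: equivalent to (a < b) and (b < c)
# In C, a < b would first evaluate to 0 or 1, and that value would then be compared with c.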
Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers.
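A brief sketch of the three kinds of numbers (illustrative only):

from decimal import Decimal, getcontext
from fractions import Fraction

2 ** 100                           # exact arbitrary-precision integer arithmetic
getcontext().prec = 50             # Decimal works to a user-chosen precision
Decimal(1) / Decimal(7)
Fraction(1, 3) + Fraction(1, 6)    # Fraction(1, 2): exact rational arithmetic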
Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation.
=== Function syntax ===
Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. Here is an example of a function that prints its inputs:
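The example referred to here is not preserved in this extract; a minimal stand-in (my own sketch):

def print_inputs(a, b):
    print(a)
    print(b)

print_inputs("spam", "eggs")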
To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header.
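For instance (an illustrative sketch, not from the article):

def greet(name, greeting="Hello"):      # greeting has a default value
    print(greeting + ", " + name + "!")

greet("world")              # prints: Hello, world!
greet("world", "Howdy")     # prints: Howdy, world!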
== Code examples ==
"Hello, World!" program:
Program to calculate the factorial of a positive integer:
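Neither listing survives in this extract; minimal stand-ins for the two programs just named (my own sketches):

# "Hello, World!" program
print("Hello, World!")

# Factorial of a positive integer, computed iteratively
n = int(input("Enter a positive integer: "))
factorial = 1
for i in range(2, n + 1):
    factorial *= i
print(factorial)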
== Libraries ==
Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing.
Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations.
As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. These have a wide range of functionality, including the following:
== Development environments ==
Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately.
Python is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners.
Other shells, including IDLE and IPython, add additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting.
Standard desktop IDEs include PyCharm, IntelliJ IDEA, and Visual Studio Code; there are also web browser-based IDEs, such as the following environments:
SageMath, for developing science- and math-related programs;
Jupyter Notebooks, an open-source interactive computing platform;
PythonAnywhere, a browser-based IDE and hosting environment; and
Canopy IDE, a commercial IDE that emphasizes scientific computing.
== Implementations ==
=== Reference implementation ===
CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard (since version 3.11, older versions use the C89 standard with several select C99 features), but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python.
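The intermediate bytecode for an individual function can be inspected with the standard-library dis module (a small illustration, not from the article):

import dis

def add(a, b):
    return a + b

dis.dis(add)    # prints the CPython bytecode instructions the function compiles to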
CPython is available for many platforms, including Windows and most modern Unix-like systems such as macOS (with support for Apple M1 Macs since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there was unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms.
All current Python versions (since 3.7) support only operating systems that feature multithreading; far fewer operating systems are supported now than in the past, as many outdated platforms have been dropped.
=== Other implementations ===
All alternative implementations have at least slightly different semantics; for example, an alternative implementation may have unordered dictionaries, in contrast to current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full CPython C API. Alternative implementations include the following:
PyPy is a fast, compliant interpreter of Python 2.7 and 3.10. PyPy's just-in-time compiler often improves speed significantly relative to CPython, but PyPy does not support some libraries written in C. PyPy offers support for the RISC-V instruction-set architecture, for example.
Codon is an implementation with an ahead-of-time (AOT) compiler, which compiles a statically-typed Python-like language whose "syntax and semantics are nearly identical to Python's", though "there are some notable differences". For example, Codon uses 64-bit machine integers for speed, rather than Python's arbitrary-precision integers; Codon developers claim that speedups over CPython are usually on the order of ten to a hundred times. Codon compiles to machine code (via LLVM) and supports native multithreading. Codon can also compile to Python extension modules that can be imported and used from Python.
MicroPython and CircuitPython are Python 3 variants that are optimized for microcontrollers, including the Lego Mindstorms EV3.
Pyston is a variant of the Python runtime that uses just-in-time compilation to speed up execution of Python programs.
Cinder is a performance-oriented fork of CPython 3.8 that features a number of optimizations, including bytecode inline caching, eager evaluation of coroutines, a method-at-a-time JIT, and an experimental bytecode compiler.
The Snek embedded computing language "is Python-inspired, but it is not Python. It is possible to write Snek programs that run under a full Python system, but most Python programs will not run under Snek." Snek is compatible with 8-bit AVR microcontrollers such as ATmega 328P-based Arduino, as well as larger microcontrollers that are compatible with MicroPython. Snek is an imperative language that (unlike Python) omits object-oriented programming. Snek supports only one numeric data type, which features 32-bit single precision (resembling JavaScript numbers, though smaller).
=== Unsupported implementations ===
Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version.
Just-in-time Python compilers have been developed, but are now unsupported:
Google began a project named Unladen Swallow in 2009: this project aimed to speed up the Python interpreter five-fold by using LLVM, and improve multithreading capability for scaling to thousands of cores, while typical implementations are limited by the global interpreter lock.
Psyco is a discontinued just-in-time specializing compiler, which integrates with CPython and transforms bytecode to machine code at runtime. The emitted code is specialized for certain data types and is faster than standard Python code. Psyco does not support Python 2.7 or later.
PyS60 was a Python 2 interpreter for Series 60 mobile phones, which was released by Nokia in 2005. The interpreter implemented many modules from Python's standard library, as well as additional modules for integration with the Symbian operating system. The Nokia N900 also supports Python through the GTK widget library, allowing programs to be written and run on the target device.
=== Cross-compilers to other languages ===
There are several compilers/transpilers to high-level object languages; the source language is unrestricted Python, a subset of Python, or a language similar to Python:
Brython, Transcrypt, and Pyjs compile Python to JavaScript. (The latest release of Pyjs was in 2012.)
Cython compiles a superset of Python to C. The resulting code can be used with Python via direct C-level API calls into the Python interpreter.
PyJL compiles/transpiles a subset of Python to "human-readable, maintainable, and high-performance Julia source code". Despite the developers' performance claims, this is not possible for arbitrary Python code; that is, compiling to a faster language or machine code is known to be impossible in the general case. The semantics of Python might potentially be changed, but in many cases speedup is possible with few or no changes in the Python code. The faster Julia source code can then be used from Python or compiled to machine code.
Nuitka compiles Python into C. This compiler works with Python 3.4 to 3.12 (and 2.6 and 2.7) for Python's main supported platforms (and Windows 7 or even Windows XP) and for Android. The compiler developers claim full support for Python 3.10, partial support for Python 3.11 and 3.12, and experimental support for Python 3.13. Nuitka supports macOS including Apple Silicon-based versions. The compiler is free of cost, though it has commercial add-ons (e.g., for hiding source code).
Numba is a JIT compiler that is used from Python; the compiler translates a subset of Python and NumPy code into fast machine code. This tool is enabled by adding a decorator to the relevant Python code (a minimal usage sketch appears after this list).
Pythran compiles a subset of Python 3 to C++ (C++11).
RPython can be compiled to C, and it is used to build the PyPy interpreter for Python.
The Python → 11l → C++ transpiler compiles a subset of Python 3 to C++ (C++17).
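As referenced in the Numba entry above, a minimal sketch of its decorator-driven workflow (assuming the third-party numba and numpy packages are installed; illustrative only):

import numpy as np
from numba import njit       # "nopython"-mode JIT decorator

@njit
def total(values):
    s = 0.0
    for v in values:         # this loop is compiled to machine code on first call
        s += v
    return s

total(np.arange(1_000_000, dtype=np.float64))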
There are also specialized compilers:
MyHDL is a Python-based hardware description language (HDL) that converts MyHDL code to Verilog or VHDL code.
Some older projects existed, as well as compilers not designed for use with Python 3.x and related syntax:
Google's Grumpy transpiles Python 2 to Go. The latest release was in 2017.
IronPython allows running Python 2.7 programs with the .NET Common Language Runtime. An alpha version (released in 2021) is available for "Python 3.4, although features and behaviors from later versions may be included."
Jython compiles Python 2.7 to Java bytecode, allowing the use of Java libraries from a Python program.
Pyrex (last released in 2010) and Shed Skin (last released in 2013) compile to C and C++ respectively.
=== Performance ===
A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game.
There are several approaches to optimizing Python performance, given the inherent slowness of an interpreted language. These approaches include the following strategies or tools:
Just-in-time compilation: Dynamically compiling Python code just before it is executed. This technique is used in libraries such as Numba and PyPy.
Static compilation: Python code is compiled into machine code sometime before execution. An example of this approach is Cython, which compiles Python into C.
Concurrency and parallelism: Multiple tasks can be run simultaneously. Python contains modules such as `multiprocessing` to support this form of parallelism. Moreover, this approach helps to overcome limitations of the Global Interpreter Lock (GIL) in CPU tasks.
Efficient data structures: Performance can also be improved by using data types such as Set for membership tests, or deque from collections for queue operations.
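Two brief sketches of the last two strategies above (illustrative only, not from the article). First, running CPU-bound work in separate processes with the standard multiprocessing module:

from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # worker processes, each with its own interpreter and GIL
        results = pool.map(square, range(10))
    print(results)

Second, choosing efficient built-in data structures such as set and collections.deque:

from collections import deque

allowed = {"alice", "bob", "carol"}    # set: constant-time average membership tests
"bob" in allowed                       # True

jobs = deque(["job1", "job2"])
jobs.append("job3")                    # O(1) append on the right
jobs.popleft()                         # O(1) pop from the left: 'job1'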
== Language Development ==
Python's development is conducted largely through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council.
Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017.
CPython's public releases have three types, distinguished by which part of the version number is incremented:
Backward-incompatible versions, where code is expected to break and must be manually ported. The first part of the version number is incremented. These releases happen infrequently—version 3.0 was released 8 years after 2.0. According to Guido van Rossum, a version 4.0 will probably never exist.
Major or "feature" releases are largely compatible with the previous version but introduce new features. The second part of the version number is incremented. Starting with Python 3.9, these releases are expected to occur annually. Each major version is supported by bug fixes for several years after its release.
Bug fix releases, which introduce no new features, occur approximately every three months; these releases are made when a sufficient number of bugs have been fixed upstream since the last release. Security vulnerabilities are also patched in these releases. The third and final part of the version number is incremented.
Many alpha, beta, and release-candidate versions are also released as previews and for testing before final releases. Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development.
The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies.
== API documentation generators ==
Tools that can generate documentation for the Python API include pydoc (available as part of the standard library), Sphinx, Pdoc and its forks, Doxygen, and Graphviz.
== Naming ==
Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
The affix Py is often used when naming Python applications or libraries. Some examples include the following:
Pygame, a binding of Simple DirectMedia Layer to Python (commonly used to create games);
PyQt and PyGTK, which bind Qt and GTK to Python respectively;
PyPy, a Python implementation originally written in Python;
NumPy, a Python library for numerical processing.
== Popularity ==
Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index; as of December 2022, Python was the most popular language. Python was selected as Programming Language of the Year (for "the highest rise in ratings in a year") in 2007, 2010, 2018, and 2020 (the only language to have done so four times as of 2020). In the TIOBE Index, monthly rankings are based on the volume of searches for programming languages on Google, Amazon, Wikipedia, Bing, and 20 other platforms. Python has shown a marked upward trend since the early 2000s, eventually passing more established languages such as C, C++, and Java. This trend can be attributed to Python's readable syntax, comprehensive standard library, and application in data science and machine learning fields.
Large organizations that use Python include Wikipedia, Google, Yahoo!, CERN, NASA, Facebook, Amazon, Instagram, Spotify, and some smaller entities such as Industrial Light & Magic and ITA. The social news networking site Reddit was developed mostly in Python. Organizations that partly use Python include Discord and Baidu.
== Types of Use ==
Python has many uses, including the following:
Scripting for web applications
Scientific computing
Artificial-intelligence and machine-learning projects
Graphical user interfaces and desktop environments
Embedded scripting in software and hardware products
Operating systems
Information security
Python can serve as a scripting language for web applications, e.g., via the mod_wsgi module for the Apache web server. With Web Server Gateway Interface, a standard API has evolved to facilitate these applications. Web frameworks such as Django, Pylons, Pyramid, TurboGears, web2py, Tornado, Flask, Bottle, and Zope support developers in the design and maintenance of complex applications. Pyjs and IronPython can be used to develop the client-side of Ajax-based applications. SQLAlchemy can be used as a data mapper to a relational database. Twisted is a framework to program communication between computers; this framework is used by Dropbox, for example.
Libraries such as NumPy, SciPy and Matplotlib allow the effective use of Python in scientific computing, with specialized libraries such as Biopython and Astropy providing domain-specific functionality. SageMath is a computer algebra system with a notebook interface that is programmable in Python; the SageMath library covers many aspects of mathematics, including algebra, combinatorics, numerical mathematics, number theory, and calculus. OpenCV has Python bindings with a rich set of features for computer vision and image processing.
Python is commonly used in artificial-intelligence and machine-learning projects, with support from libraries such as TensorFlow, Keras, Pytorch, scikit-learn and ProbLog (a logic language). As a scripting language with a modular architecture, simple syntax, and rich text processing tools, Python is often used for natural language processing.
The combination of Python and Prolog has proven useful for AI applications, with Prolog providing knowledge representation and reasoning capabilities. The Janus system, in particular, exploits similarities between these two languages, in part because of their dynamic typing and their simple, recursive data structures. This combination is typically applied to natural language processing, visual query answering, geospatial reasoning, and handling semantic web data.
The Natlog system, implemented in Python, uses Definite Clause Grammars (DCGs) to create prompts for two types of generators: text-to-text generators such as GPT3, and text-to-image generators such as DALL-E or Stable Diffusion.
Python can be used for graphical user interfaces (GUIs), by using libraries such as Tkinter. Similarly, for the One Laptop per Child XO computer, most of the Sugar desktop environment is written in Python (as of 2008).
Python is embedded in many software products (and some hardware products) as a scripting language. These products include the following:
finite element method software such as Abaqus,
3D parametric modelers such as FreeCAD,
3D animation packages such as 3ds Max, Blender, Cinema 4D, Lightwave, Houdini, Maya, modo, MotionBuilder, Softimage,
the visual effects compositor Nuke,
2D imaging programs such as GIMP, Inkscape, Scribus and Paint Shop Pro, and
musical notation programs such as scorewriter and capella.
Similarly, GNU Debugger uses Python as a pretty printer to show complex structures such as C++ containers. Esri promotes Python as the best choice for writing scripts in ArcGIS. Python has also been used in several video games, and it has been adopted as the first of the three programming languages available in Google App Engine (the other two being Java and Go). LibreOffice includes Python, and its developers plan to replace Java with Python; LibreOffice's Python Scripting Provider has been a core feature since version 4.0 (from 7 February 2013).
Among hardware products, the Raspberry Pi single-board computer project has adopted Python as its main user-programming language.
Many operating systems include Python as a standard component. Python ships with most Linux distributions, AmigaOS 4 (using Python 2.7), FreeBSD (as a package), NetBSD, and OpenBSD (as a package); it can be used from the command line (terminal). Many Linux distributions use installers written in Python: Ubuntu uses the Ubiquity installer, while Red Hat Linux and Fedora Linux use the Anaconda installer. Gentoo Linux uses Python in its package management system, Portage.
Python is used extensively in the information security industry, including in exploit development.
== Languages influenced by Python ==
Python's design and philosophy have influenced many other programming languages:
Boo uses indentation, a similar syntax, and a similar object model.
Cobra uses indentation and a similar syntax; its Acknowledgements document lists Python first among influencing languages.
CoffeeScript, a programming language that cross-compiles to JavaScript, has a Python-inspired syntax.
ECMAScript–JavaScript borrowed iterators and generators from Python.
GDScript, a Python-like scripting language that is built into the Godot game engine.
Go is designed for "speed of working in a dynamic language like Python"; Go shares Python's syntax for slicing arrays.
Groovy was motivated by a desire to incorporate the Python design philosophy into Java.
Julia was designed to be "as usable for general programming as Python".
Mojo is a non-strict superset of Python (e.g., omitting classes, and adding struct).
Nim uses indentation and a similar syntax.
Ruby's creator, Yukihiro Matsumoto, said that "I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python. That's why I decided to design my own language."
Swift, a programming language developed by Apple, has some Python-inspired syntax.
Kotlin blends Python and Java features, which minimizes boilerplate code and enhances developer efficiency.
Python's development practices have also been emulated by other languages. For example, Python requires a document that describes the rationale and context for any language change; this document is known as a Python Enhancement Proposal or PEP. This practice is also used by the developers of Tcl, Erlang, and Swift.
== See also ==
Python syntax and semantics
pip (package manager)
List of programming languages
History of programming languages
Comparison of programming languages
== Notes ==
== References ==
=== Sources ===
"Python for Artificial Intelligence". Python Wiki. 19 July 2012. Archived from the original on 1 November 2012. Retrieved 3 December 2012.
Paine, Jocelyn, ed. (August 2005). "AI in Python". AI Expert Newsletter. Amzi!. Archived from the original on 26 March 2012. Retrieved 11 February 2012.
"PyAIML 0.8.5 : Python Package Index". Pypi.python.org. Retrieved 17 July 2013.
Russell, Stuart J. & Norvig, Peter (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-604259-4.
== Further reading ==
Downey, Allen (July 2024). Think Python: How to Think Like a Computer Scientist (3rd ed.). O'Reilly Media. ISBN 978-1098155438.
Lutz, Mark (2013). Learning Python (5th ed.). O'Reilly Media. ISBN 978-0-596-15806-4.
Summerfield, Mark (2009). Programming in Python 3 (2nd ed.). Addison-Wesley Professional. ISBN 978-0-321-68056-3.
Ramalho, Luciano (May 2022). Fluent Python. O'Reilly Media. ISBN 978-1-4920-5632-4.
== External links ==
Official website
The Python Tutorial
In computer science, array programming refers to solutions that allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings.
Modern programming languages that support array programming (also known as vector or multidimensional languages) have been engineered specifically to generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. These include APL, J, Fortran, MATLAB, Analytica, Octave, R, Cilk Plus, Julia, Perl Data Language (PDL), and Raku. In these languages, an operation that operates on entire arrays can be called a vectorized operation, regardless of whether it is executed on a vector processor, which implements vector instructions. Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language one-liners that require several pages of object-oriented code.
== Concepts of array ==
The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations.
Kenneth E. Iverson described the rationale behind array programming (actually referring to APL) as follows:
most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician.
The thesis is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation. It is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.
[...]
Users of computers and programming languages are often concerned primarily with the efficiency of execution of algorithms, and might, therefore, summarily dismiss many of the algorithms presented here. Such dismissal would be short-sighted since a clear statement of an algorithm can usually be used as a basis from which one may easily derive a more efficient algorithm.
The basis behind array programming and thinking is to find and exploit the properties of data where individual elements are similar or adjacent. Unlike object orientation which implicitly breaks down data to its constituent parts (or scalar quantities), array orientation looks to group data and apply a uniform handling.
Function rank is an important concept to array programming languages in general, by analogy to tensor rank in mathematics: functions that operate on data may be classified by the number of dimensions they act on. Ordinary multiplication, for example, is a scalar ranked function because it operates on zero-dimensional data (individual numbers). The cross product operation is an example of a vector rank function because it operates on vectors, not scalars. Matrix multiplication is an example of a 2-rank function, because it operates on 2-dimensional objects (matrices). Collapse operators reduce the dimensionality of an input data array by one or more dimensions. For example, summing over elements collapses the input array by 1 dimension.
== Uses ==
Array programming is very well suited to implicit parallelization, a topic of much research nowadays. Further, Intel and compatible CPUs developed and produced after 1997 contained various instruction set extensions, starting from MMX and continuing through SSSE3 and 3DNow!, which include rudimentary SIMD array capabilities. This has continued into the 2020s with instruction sets such as AVX-512, making modern CPUs sophisticated vector processors. Array processing is distinct from parallel processing in that one physical processor performs operations on a group of items simultaneously while parallel processing aims to split a larger problem into smaller ones (MIMD) to be solved piecemeal by numerous processors. Processors with multiple cores and GPUs with thousands of general computing cores are common as of 2023.
== Languages ==
The canonical examples of array programming languages are Fortran, APL, and J. Others include: A+, Analytica, Chapel, IDL, Julia, K, Klong, Q, MATLAB, GNU Octave, Scilab, FreeMat, Perl Data Language (PDL), R, Raku, S-Lang, SAC, Nial, ZPL, Futhark, and TI-BASIC.
=== Scalar languages ===
In scalar languages such as C and Pascal, operations apply only to single values, so a+b expresses the addition of two numbers. In such languages, adding one array to another requires indexing and looping, the coding of which is tedious.
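The article's original example is written in C and is not preserved in this extract; a rough Python analogue of the same loop-based, element-by-element style (my own sketch):

a = [[1, 2], [3, 4]]
b = [[10, 20], [30, 40]]
n = 2
for i in range(n):              # nested loops spell out the element-wise addition
    for j in range(n):
        a[i][j] += b[i][j]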
In array-based languages, for example in Fortran, the nested for-loop above can be written in array-format in one line,
or alternatively, to emphasize the array nature of the objects,
While scalar languages like C do not have native array programming elements as part of the language proper, this does not mean programs written in these languages never take advantage of the underlying techniques of vectorization (i.e., utilizing a CPU's vector-based instructions if it has them or by using multiple CPU cores). Some C compilers like GCC at some optimization levels detect and vectorize sections of code that their heuristics determine would benefit from it. Another approach is given by the OpenMP API, which allows one to parallelize applicable sections of code by taking advantage of multiple CPU cores.
=== Array languages ===
In array languages, operations are generalized to apply to both scalars and arrays. Thus, a+b expresses the sum of two scalars if a and b are scalars, or the sum of two arrays if they are arrays.
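As an illustration outside the article's own examples, the same idea in Python using the NumPy extension library (mentioned later in this article):

import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])
a + b      # array([11, 22, 33, 44]): one expression, no explicit loop
a + 1      # array([2, 3, 4, 5]): the scalar is broadcast across the array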
An array language simplifies programming but possibly at a cost known as the abstraction penalty. Because the additions are performed in isolation from the rest of the coding, they may not produce the most efficient code. (For example, additions of other elements of the same array may be subsequently encountered during the same execution, causing unnecessary repeated lookups.) Even the most sophisticated optimizing compiler would have an extremely hard time amalgamating two or more apparently disparate functions which might appear in different program sections or sub-routines, even though a programmer could do this easily, aggregating sums on the same pass over the array to minimize overhead.
==== Ada ====
The previous C code would become the following in the Ada language, which supports array-programming syntax.
==== APL ====
APL uses single character Unicode symbols with no syntactic sugar.
This operation works on arrays of any rank (including rank 0), and on a scalar and an array. Dyalog APL extends the original language with augmented assignments:
==== Analytica ====
Analytica provides the same economy of expression as Ada.
A := A + B;
==== BASIC ====
Dartmouth BASIC had MAT statements for matrix and array manipulation in its third edition (1966).
==== Mata ====
Stata's matrix programming language Mata supports array programming. Below, we illustrate addition, multiplication, addition of a matrix and a scalar, element by element multiplication, subscripting, and one of Mata's many inverse matrix functions.
==== MATLAB ====
The implementation in MATLAB allows the same economy allowed by using the Fortran language.
A variant of the MATLAB language is the GNU Octave language, which extends the original language with augmented assignments:
Both MATLAB and GNU Octave natively support linear algebra operations such as matrix multiplication, matrix inversion, and the numerical solution of systems of linear equations, even using the Moore–Penrose pseudoinverse.
The Nial example of the inner product of two arrays can be implemented using the native matrix multiplication operator. If a is a row vector of size [1 n] and b is a corresponding column vector of size [n 1], the inner product is:
a * b;
By contrast, the entrywise product is implemented as:
a .* b;
The inner product between two matrices having the same number of elements can be implemented with the auxiliary operator (:), which reshapes a given matrix into a column vector, and the transpose operator ':
A(:)' * B(:);
==== rasql ====
The rasdaman query language is a database-oriented array-programming language. For example, two arrays could be added with the following query:
==== R ====
The R language supports array paradigm by default. The following example illustrates a process of multiplication of two matrices followed by an addition of a scalar (which is, in fact, a one-element vector) and a vector:
==== Raku ====
Raku supports the array paradigm via its Metaoperators. The following example demonstrates the addition of arrays @a and @b using the Hyper-operator in conjunction with the plus operator.
== Mathematical reasoning and language notation ==
The matrix left-division operator concisely expresses some semantic properties of matrices. As in the scalar equivalent, if the (determinant of the) coefficient (matrix) A is not null then it is possible to solve the (vectorial) equation A * x = b by left-multiplying both sides by the inverse of A: A−1 (in both MATLAB and GNU Octave languages: A^-1). The following mathematical statements hold when A is a full rank square matrix:
A^-1 *(A * x)==A^-1 * (b)
(A^-1 * A)* x ==A^-1 * b (matrix-multiplication associativity)
x = A^-1 * b
where == is the equivalence relational operator.
The previous statements are also valid MATLAB expressions if the third one is executed before the others (numerical comparisons may be false because of round-off errors).
If the system is overdetermined – so that A has more rows than columns – the pseudoinverse A+ (in MATLAB and GNU Octave languages: pinv(A)) can replace the inverse A−1, as follows:
pinv(A) *(A * x)==pinv(A) * (b)
(pinv(A) * A)* x ==pinv(A) * b (matrix-multiplication associativity)
x = pinv(A) * b
However, these solutions are neither the most concise ones (e.g., the need to notationally differentiate overdetermined systems still remains) nor the most computationally efficient. The latter point is easy to understand when considering again the scalar equivalent a * x = b, for which the solution x = a^-1 * b would require two operations instead of the more efficient x = b / a.
The problem is that generally matrix multiplications are not commutative as the extension of the scalar solution to the matrix case would require:
(a * x)/ a ==b / a
(x * a)/ a ==b / a (commutativity does not hold for matrices!)
x * (a / a)==b / a (associativity also holds for matrices)
x = b / a
The MATLAB language introduces the left-division operator \ to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness:
A \ (A * x)==A \ b
(A \ A)* x ==A \ b (associativity also holds for matrices, commutativity is no more required)
x = A \ b
This is not only an example of terse array programming from the coding point of view but also from the computational efficiency perspective, which in several array programming languages benefits from quite efficient linear algebra libraries such as ATLAS or LAPACK.
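For comparison, a rough Python/NumPy equivalent of the left-division idiom (an illustration, not from the article):

import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)                        # square, full-rank case: the analogue of A \ b

M = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])   # overdetermined: more rows than columns
y = np.array([1.0, 2.0, 2.0])
x_ls, *_ = np.linalg.lstsq(M, y, rcond=None)     # least-squares solution, analogous to using pinv(M)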
Returning to the previous quotation of Iverson, the rationale behind it should now be evident: it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.
== Third-party libraries ==
The use of specialized and efficient libraries to provide more terse abstractions is also common in other programming languages. In C++ several linear algebra libraries exploit the language's ability to overload operators. In some cases a very terse abstraction in those languages is explicitly influenced by the array programming paradigm, as the NumPy extension library to Python, Armadillo and Blitz++ libraries do.
== See also ==
Array slicing
List of array programming languages
Automatic vectorization
== References ==
== External links ==
"No stinking loops" programming
Discovering Array Languages
"Types of Arrays" programming
Defensive programming is a form of defensive design intended to develop programs that are capable of detecting potential security abnormalities and making predetermined responses. It ensures the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety, or security is needed.
Defensive programming is an approach to improve software and source code, in terms of:
General quality – reducing the number of software bugs and problems.
Making the source code comprehensible – the source code should be readable and understandable so it is approved in a code audit.
Making the software behave in a predictable manner despite unexpected inputs or user actions.
Overly defensive programming, however, may safeguard against errors that will never be encountered, thus incurring run-time and maintenance costs.
== Secure programming ==
Secure programming is the subset of defensive programming concerned with computer security. Security is the concern, not necessarily safety or availability (the software may be allowed to fail in certain ways). As with all kinds of defensive programming, avoiding bugs is a primary objective; however, the motivation is not as much to reduce the likelihood of failure in normal operation (as if safety were the concern), but to reduce the attack surface – the programmer must assume that the software might be misused actively to reveal bugs, and that bugs could be exploited maliciously.
The function will result in undefined behavior when the input is over 1000 characters. Some programmers may not feel that this is a problem, supposing that no user will enter such a long input. This particular bug demonstrates a vulnerability which enables buffer overflow exploits. Here is a solution to this example:
== Offensive programming ==
Offensive programming is a category of defensive programming, with the added emphasis that certain errors should not be handled defensively. In this practice, only errors from outside the program's control are to be handled (such as user input); the software itself, as well as data from within the program's line of defense, are to be trusted in this methodology.
=== Trusting internal data validity ===
Overly defensive programming
Offensive programming
=== Trusting software components ===
Overly defensive programming
Offensive programming
== Techniques ==
Here are some defensive programming techniques:
=== Intelligent source code reuse ===
If existing code is tested and known to work, reusing it may reduce the chance of bugs being introduced.
However, reusing code is not always good practice. Reuse of existing code, especially when widely distributed, can allow for exploits to be created that target a wider audience than would otherwise be possible and brings with it all the security and vulnerabilities of the reused code.
When considering using existing source code, a quick review of the modules (sub-sections such as classes or functions) will help eliminate or make the developer aware of any potential vulnerabilities and ensure it is suitable to use in the project.
==== Legacy problems ====
Before reusing old source code, libraries, APIs, configurations and so forth, it must be considered if the old work is valid for reuse, or if it is likely to be prone to legacy problems.
Legacy problems are problems inherent when old designs are expected to work with today's requirements, especially when the old designs were not developed or tested with those requirements in mind.
Many software products have experienced problems with old legacy source code; for example:
Legacy code may not have been designed under a defensive programming initiative, and might therefore be of much lower quality than newly designed source code.
Legacy code may have been written and tested under conditions which no longer apply. The old quality assurance tests may have no validity any more.
Example 1: legacy code may have been designed for ASCII input but now the input is UTF-8.
Example 2: legacy code may have been compiled and tested on 32-bit architectures, but when compiled on 64-bit architectures, new arithmetic problems may occur (e.g., invalid signedness tests, invalid type casts, etc.).
Example 3: legacy code may have been targeted for offline machines, but becomes vulnerable once network connectivity is added.
Legacy code is not written with new problems in mind. For example, source code written in 1990 is likely to be prone to many code injection vulnerabilities, because most such problems were not widely understood at that time.
Notable examples of the legacy problem:
BIND 9, presented by Paul Vixie and David Conrad as "BINDv9 is a complete rewrite", "Security was a key consideration in design", naming security, robustness, scalability and new protocols as key concerns for rewriting old legacy code.
Microsoft Windows suffered from "the" Windows Metafile vulnerability and other exploits related to the WMF format. Microsoft Security Response Center describes the WMF-features as "Around 1990, WMF support was added... This was a different time in the security landscape... were all completely trusted", not being developed under the security initiatives at Microsoft.
Oracle is combating legacy problems, such as old source code written without addressing concerns of SQL injection and privilege escalation, resulting in many security vulnerabilities which have taken time to fix and also generated incomplete fixes. This has given rise to heavy criticism from security experts such as David Litchfield, Alexander Kornbrust, and Cesar Cerrudo. An additional criticism is that default installations (largely a legacy from old versions) are not aligned with their own security recommendations, such as Oracle Database Security Checklist, which is hard to amend as many applications require the less secure legacy settings to function correctly.
=== Canonicalization ===
Malicious users are likely to invent new kinds of representations of incorrect data. For example, if a program attempts to reject accessing the file "/etc/passwd", a cracker might pass another variant of this file name, like "/etc/./passwd". Canonicalization libraries can be employed to avoid bugs due to non-canonical input.
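A minimal sketch of that idea in Python (my own illustration; the article does not prescribe a particular library):

import os.path

ALLOWED_DIR = "/var/data"

def open_data_file(user_supplied_path):
    # Resolve "." and ".." segments, symlinks, and duplicate separators first,
    # then compare against the canonical form of the allowed directory.
    canonical = os.path.realpath(user_supplied_path)
    if not canonical.startswith(os.path.realpath(ALLOWED_DIR) + os.sep):
        raise ValueError("access outside the allowed directory")
    return open(canonical, "rb")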
=== Low tolerance against "potential" bugs ===
Assume that code constructs that appear to be problem prone (similar to known vulnerabilities, etc.) are bugs and potential security flaws. The basic rule of thumb is: "I'm not aware of all types of security exploits. I must protect against those I do know of and then I must be proactive!".
=== Other ways of securing code ===
One of the most common problems is unchecked use of constant-size or pre-allocated structures for dynamic-size data such as inputs to the program (the buffer overflow problem). This is especially common for string data in C. C library functions like gets should never be used, since the maximum size of the input buffer is not passed as an argument. C library functions like scanf can be used safely, but require the programmer to take care with the selection of safe format strings, by sanitizing them before use.
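For illustration (a minimal C sketch, not taken from the original article), a fixed-size buffer can be filled with fgets, which takes the buffer size as an argument, instead of gets, which does not:
<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

int main(void) {
    char name[64];

    /* gets(name) could write past the end of name on long input.
       fgets is bounded by the buffer size passed as its second argument.
       (With scanf, a field width serves the same purpose: scanf("%63s", name).) */
    if (fgets(name, sizeof name, stdin) != NULL) {
        name[strcspn(name, "\n")] = '\0';   /* strip the trailing newline, if any */
        printf("Hello, %s\n", name);
    }
    return 0;
}
</syntaxhighlight>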
Encrypt/authenticate all important data transmitted over networks. Do not attempt to implement your own encryption scheme, use a proven one instead. Message checking with a hash or similar technology will also help secure data sent over a network.
==== The three rules of data security ====
All data is important until proven otherwise.
All data is tainted until proven otherwise.
All code is insecure until proven otherwise.
You cannot prove the security of any code in userland; this is more commonly known as "never trust the client".
These three rules about data security describe how to handle any data, internally or externally sourced:
All data is important until proven otherwise - means that all data must be verified as garbage before being destroyed.
All data is tainted until proven otherwise - means that all data must be handled in a way that does not expose the rest of the runtime environment without verifying integrity.
All code is insecure until proven otherwise - while a slight misnomer, does a good job reminding us to never assume our code is secure as bugs or undefined behavior may expose the project or system to attacks such as common SQL injection attacks.
==== More Information ====
If data is to be checked for correctness, verify that it is correct, not that it is incorrect.
Design by contract
Assertions (also called assertive programming)
Prefer exceptions to return codes
Generally speaking, it is preferable to throw exception messages that enforce part of your API contract and guide the developer, instead of returning error code values that do not point to where the exception occurred or what the program stack looked like. Better logging and exception handling will increase the robustness and security of your software, while minimizing developer stress.
== See also ==
Computer security
== References ==
== External links ==
CERT Secure Coding Standards
Generic programming is a style of computer programming in which algorithms are written in terms of data types to-be-specified-later that are then instantiated when needed for specific types provided as parameters. This approach, pioneered in the programming language ML in 1973, permits writing common functions or data types that differ only in the set of types on which they operate when used, thus reducing duplicate code.
Generic programming was introduced to the mainstream with Ada in 1977. With templates in C++, generic programming became part of the repertoire of professional library design. The techniques were further improved and parameterized types were introduced in the influential 1994 book Design Patterns.
New techniques were introduced by Andrei Alexandrescu in his 2001 book Modern C++ Design: Generic Programming and Design Patterns Applied. Subsequently, D implemented the same ideas.
Such software entities are known as generics in Ada, C#, Delphi, Eiffel, F#, Java, Nim, Python, Go, Rust, Swift, TypeScript, and Visual Basic (.NET). They are known as parametric polymorphism in ML, Scala, Julia, and Haskell. (Haskell terminology also uses the term generic for a related but somewhat different concept.)
The term generic programming was originally coined by David Musser and Alexander Stepanov in a more specific sense than the above, to describe a programming paradigm in which fundamental requirements on data types are abstracted from across concrete examples of algorithms and data structures and formalized as concepts, with generic functions implemented in terms of these concepts, typically using language genericity mechanisms as described above.
== Stepanov–Musser and other generic programming paradigms ==
Generic programming is defined in Musser & Stepanov (1989) as follows,
Generic programming centers around the idea of abstracting from concrete, efficient algorithms to obtain generic algorithms that can be combined with different data representations to produce a wide variety of useful software.
The "generic programming" paradigm is an approach to software decomposition whereby fundamental requirements on types are abstracted from across concrete examples of algorithms and data structures and formalized as concepts, analogously to the abstraction of algebraic theories in abstract algebra. Early examples of this programming approach were implemented in Scheme and Ada, although the best known example is the Standard Template Library (STL), which developed a theory of iterators that is used to decouple sequence data structures and the algorithms operating on them.
For example, given N sequence data structures, e.g. singly linked list, vector, etc., and M algorithms to operate on them, e.g. find, sort, etc., a direct approach would implement each algorithm specifically for each data structure, giving N × M combinations to implement. However, in the generic programming approach, each data structure returns a model of an iterator concept (a simple value type that can be dereferenced to retrieve the current value, or changed to point to another value in the sequence) and each algorithm is instead written generically with arguments of such iterators, e.g. a pair of iterators pointing to the beginning and end of the subsequence or range to process. Thus, only N + M data structure-algorithm combinations need be implemented. Several iterator concepts are specified in the STL, each a refinement of more restrictive concepts, e.g. forward iterators only provide movement to the next value in a sequence (e.g. suitable for a singly linked list or a stream of input data), whereas a random-access iterator also provides direct constant-time access to any element of the sequence (e.g. suitable for a vector). An important point is that a data structure will return a model of the most general concept that can be implemented efficiently—computational complexity requirements are explicitly part of the concept definition. This limits the data structures a given algorithm can be applied to, and such complexity requirements are a major determinant of data structure choice. Generic programming similarly has been applied in other domains, e.g. graph algorithms.
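As an illustration of the N + M idea (a minimal sketch, not the STL's actual implementation), a single find algorithm written against an iterator pair works for any sequence whose iterators support dereference, increment and comparison:
<syntaxhighlight lang="cpp">
#include <forward_list>
#include <iostream>
#include <vector>

// One generic algorithm: works with any iterator type that supports
// dereference, increment and inequality comparison.
template <typename Iterator, typename T>
Iterator find_value(Iterator first, Iterator last, const T& value) {
    for (; first != last; ++first)
        if (*first == value)
            return first;
    return last;
}

int main() {
    std::vector<int> v{1, 2, 3};
    std::forward_list<int> l{4, 5, 6};

    // The same implementation serves both containers (many data structures,
    // one algorithm), instead of one find per container.
    std::cout << (find_value(v.begin(), v.end(), 2) != v.end()) << '\n';  // 1
    std::cout << (find_value(l.begin(), l.end(), 9) != l.end()) << '\n';  // 0
}
</syntaxhighlight>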
Although this approach often uses language features of compile-time genericity and templates, it is independent of particular language-technical details. Generic programming pioneer Alexander Stepanov wrote,
Generic programming is about abstracting and classifying algorithms and data structures. It gets its inspiration from Knuth and not from type theory. Its goal is the incremental construction of systematic catalogs of useful, efficient and abstract algorithms and data structures. Such an undertaking is still a dream.
I believe that iterator theories are as central to Computer Science as theories of rings or Banach spaces are central to Mathematics.
Bjarne Stroustrup noted,
Following Stepanov, we can define generic programming without mentioning language features: Lift algorithms and data structures from concrete examples to their most general and abstract form.
Other programming paradigms that have been described as generic programming include Datatype generic programming as described in "Generic Programming – an Introduction". The Scrap your boilerplate approach is a lightweight generic programming approach for Haskell.
In this article we distinguish the high-level programming paradigms of generic programming, above, from the lower-level programming language genericity mechanisms used to implement them (see Programming language support for genericity). For further discussion and comparison of generic programming paradigms, see.
== Programming language support for genericity ==
Genericity facilities have existed in high-level languages since at least the 1970s in languages such as ML, CLU and Ada, and were subsequently adopted by many object-based and object-oriented languages, including BETA, C++, D, Eiffel, Java, and DEC's now defunct Trellis-Owl.
Genericity is implemented and supported differently in various programming languages; the term "generic" has also been used differently in various programming contexts. For example, in Forth the compiler can execute code while compiling and one can create new compiler keywords and new implementations for those words on the fly. It has few words that expose the compiler behaviour and therefore naturally offers genericity capacities that, however, are not referred to as such in most Forth texts. Similarly, dynamically typed languages, especially interpreted ones, usually offer genericity by default, as both passing values to functions and value assignment are type-indifferent and such behavior is often used for abstraction or code terseness; however, this is not typically labeled genericity, as it is a direct consequence of the dynamic typing system employed by the language. The term has been used in functional programming, specifically in Haskell-like languages, which use a structural type system where types are always parametric and the actual code on those types is generic. These uses still serve a similar purpose of code-saving and rendering an abstraction.
Arrays and structs can be viewed as predefined generic types. Every usage of an array or struct type instantiates a new concrete type, or reuses a previously instantiated type. Array element types and struct element types are parameterized types, which are used to instantiate the corresponding generic type. All this is usually built into the compiler, and the syntax differs from other generic constructs. Some extensible programming languages try to unify built-in and user-defined generic types.
A broad survey of genericity mechanisms in programming languages follows. For a specific survey comparing suitability of mechanisms for generic programming, see.
=== In object-oriented languages ===
When creating container classes in statically typed languages, it is inconvenient to write specific implementations for each datatype contained, especially if the code for each datatype is virtually identical. For example, in C++, this duplication of code can be circumvented by defining a class template:
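The original listing is not preserved in this copy; a minimal sketch along the same lines (with the class body and the element types Animal and Car left as placeholders) might be:
<syntaxhighlight lang="cpp">
class Animal {};
class Car {};

// A generic list class: the placeholder T is filled in when the list is used.
template <typename T>
class List {
    // Class contents (storage and operations for elements of type T).
};

int main() {
    // Each instantiation produces a distinct concrete type.
    List<Animal> list_of_animals;
    List<Car> list_of_cars;
    (void)list_of_animals;
    (void)list_of_cars;
}
</syntaxhighlight>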
Above, T is a placeholder for whatever type is specified when the list is created. These "containers-of-type-T", commonly called templates, allow a class to be reused with different datatypes as long as certain contracts such as subtypes and signature are kept. This genericity mechanism should not be confused with inclusion polymorphism, which is the algorithmic usage of exchangeable sub-classes: for instance, a list of objects of type Moving_Object containing objects of type Animal and Car. Templates can also be used for type-independent functions as in the Swap example below:
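The article's own Swap listing is not reproduced here; a sketch of such a type-independent function might be:
<syntaxhighlight lang="cpp">
#include <iostream>
#include <string>

// A type-independent swap: works for any copy-assignable type T.
template <typename T>
void Swap(T& a, T& b) {
    T temp = b;
    b = a;
    a = temp;
}

int main() {
    std::string hello = "World!", world = "Hello, ";
    Swap(world, hello);
    std::cout << hello << world << '\n';  // prints "Hello, World!"
}
</syntaxhighlight>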
The C++ template construct used above is widely cited as the genericity construct that popularized the notion among programmers and language designers and supports many generic programming idioms. The D language also offers fully generic-capable templates based on the C++ precedent but with a simplified syntax. The Java language has provided genericity facilities syntactically based on C++'s since the introduction of Java Platform, Standard Edition (J2SE) 5.0.
C# 2.0, Oxygene 1.5 (formerly Chrome) and Visual Basic (.NET) 2005 have constructs that exploit the support for generics present in Microsoft .NET Framework since version 2.0.
==== Generics in Ada ====
Ada has had generics since it was first designed in 1977–1980. The standard library uses generics to provide many services. Ada 2005 adds a comprehensive generic container library to the standard library, which was inspired by C++'s Standard Template Library.
A generic unit is a package or a subprogram that takes one or more generic formal parameters.
A generic formal parameter is a value, a variable, a constant, a type, a subprogram, or even an instance of another, designated, generic unit. For generic formal types, the syntax distinguishes between discrete, floating-point, fixed-point, access (pointer) types, etc. Some formal parameters can have default values.
To instantiate a generic unit, the programmer passes actual parameters for each formal. The generic instance then behaves just like any other unit. It is possible to instantiate generic units at run-time, for example inside a loop.
===== Example =====
The specification of a generic package:
Instantiating the generic package:
Using an instance of a generic package:
===== Advantages and limits =====
The language syntax allows precise specification of constraints on generic formal parameters. For example, it is possible to specify that a generic formal type will only accept a modular type as the actual. It is also possible to express constraints between generic formal parameters; for example:
In this example, Array_Type is constrained by both Index_Type and Element_Type. When instantiating the unit, the programmer must pass an actual array type that satisfies these constraints.
The disadvantage of this fine-grained control is a complicated syntax, but, because all generic formal parameters are completely defined in the specification, the compiler can instantiate generics without looking at the body of the generic.
Unlike C++, Ada does not allow specialised generic instances, and requires that all generics be instantiated explicitly. These rules have several consequences:
the compiler can implement shared generics: the object code for a generic unit can be shared between all instances (unless the programmer requests inlining of subprograms, of course). As further consequences:
there is no possibility of code bloat (code bloat is common in C++ and requires special care, as explained below).
it is possible to instantiate generics at run-time, and at compile time, since no new object code is required for a new instance.
actual objects corresponding to a generic formal object are always considered to be non-static inside the generic; see Generic formal objects in the Wikibook for details and consequences.
all instances of a generic being exactly the same, it is easier to review and understand programs written by others; there are no "special cases" to take into account.
all instantiations being explicit, there are no hidden instantiations that might make it difficult to understand the program.
Ada does not permit arbitrary computation at compile time, because operations on generic arguments are performed at runtime.
==== Templates in C++ ====
C++ uses templates to enable generic programming techniques. The C++ Standard Library includes the Standard Template Library or STL that provides a framework of templates for common data structures and algorithms. Templates in C++ may also be used for template metaprogramming, which is a way of pre-evaluating some of the code at compile-time rather than run-time. Using template specialization, C++ Templates are Turing complete.
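As a small illustration of pre-evaluating code at compile time (a common textbook sketch, not taken from this article), a factorial can be computed entirely by the compiler through recursive template instantiation, with a specialization terminating the recursion:
<syntaxhighlight lang="cpp">
#include <iostream>

// Recursive template instantiation evaluates the factorial at compile time.
template <unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

// A full template specialization terminates the recursion.
template <>
struct Factorial<0> {
    static const unsigned long value = 1;
};

int main() {
    // The constant 3628800 is computed during compilation, not at run time.
    std::cout << Factorial<10>::value << '\n';
}
</syntaxhighlight>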
===== Technical overview =====
There are many kinds of templates, the most common being function templates and class templates. A function template is a pattern for creating ordinary functions based upon the parameterizing types supplied when instantiated. For example, the C++ Standard Template Library contains the function template max(x, y) that creates functions that return either x or y, whichever is larger. max() could be defined like this:
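The original listing is not preserved in this copy; a definition along the lines described above might be:
<syntaxhighlight lang="cpp">
// Function template: a pattern from which the compiler generates
// an ordinary function for each type T it is used with.
template <typename T>
T max(T x, T y) {
    return x < y ? y : x;
}
</syntaxhighlight>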
Specializations of this function template, instantiations with specific types, can be called just like an ordinary function:
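(Again as a sketch, assuming the max() template defined above is visible in the same translation unit:)
<syntaxhighlight lang="cpp">
#include <iostream>
int main() {
    std::cout << max(3, 7) << '\n';      // instantiates max<int>; prints 7
    std::cout << max(3.0, 7.0) << '\n';  // instantiates max<double>; prints 7
}
</syntaxhighlight>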
The compiler examines the arguments used to call max and determines that this is a call to max(int, int). It then instantiates a version of the function where the parameterizing type T is int, making the equivalent of the following function:
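The generated function is not shown in this copy of the article; hypothetically, the instantiation for T = int is equivalent to the handwritten function:
<syntaxhighlight lang="cpp">
// Equivalent of the compiler-generated instantiation max<int>.
int max(int x, int y) {
    return x < y ? y : x;
}
</syntaxhighlight>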
This works whether the arguments x and y are integers, strings, or any other type for which the expression x < y is sensible, or more specifically, for any type for which operator< is defined. Common inheritance is not needed for the set of types that can be used, and so it is very similar to duck typing. A program defining a custom data type can use operator overloading to define the meaning of < for that type, thus allowing its use with the max() function template. While this may seem a minor benefit in this isolated example, in the context of a comprehensive library like the STL it allows the programmer to get extensive functionality for a new data type, just by defining a few operators for it. Merely defining < allows a type to be used with the standard sort(), stable_sort(), and binary_search() algorithms or to be put inside data structures such as sets, heaps, and associative arrays.
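For example (a sketch using a hypothetical Money type, not from the original article), defining operator< is enough to let standard algorithms and ordered containers work with the new type:
<syntaxhighlight lang="cpp">
#include <algorithm>
#include <iostream>
#include <set>
#include <vector>

struct Money {
    long cents;
};

// Defining operator< is all that ordering-based standard algorithms
// and containers need for this type.
bool operator<(const Money& a, const Money& b) {
    return a.cents < b.cents;
}

int main() {
    std::vector<Money> prices{{500}, {120}, {999}};
    std::sort(prices.begin(), prices.end());                       // uses operator<
    std::set<Money> unique_prices(prices.begin(), prices.end());   // ordered by operator<
    std::cout << prices.front().cents << '\n';   // 120
    std::cout << unique_prices.size() << '\n';   // 3
}
</syntaxhighlight>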
C++ templates are completely type safe at compile time. As a demonstration, the standard type complex does not define the < operator, because there is no strict order on complex numbers. Therefore, max(x, y) will fail with a compile error if x and y are complex values. Likewise, other templates that rely on < cannot be applied to complex data unless a comparison (in the form of a functor or function) is provided. E.g.: a complex cannot be used as a key for a map unless a comparison is provided. Unfortunately, compilers historically generate somewhat esoteric, long, and unhelpful error messages for this sort of error. Ensuring that a certain object adheres to a method protocol can alleviate this issue. Languages which use compare instead of < can also use complex values as keys.
Another kind of template, a class template, extends the same concept to classes. A class template specialization is a class. Class templates are often used to make generic containers. For example, the STL has a linked list container. To make a linked list of integers, one writes list<int>. A list of strings is denoted list<string>. A list has a set of standard functions associated with it, that work for any compatible parameterizing types.
===== Template specialization =====
A powerful feature of C++'s templates is template specialization. This allows alternative implementations to be provided based on certain characteristics of the parameterized type that is being instantiated. Template specialization has two purposes: to allow certain forms of optimization, and to reduce code bloat.
For example, consider a sort() template function. One of the primary activities that such a function does is to swap or exchange the values in two of the container's positions. If the values are large (in terms of the number of bytes it takes to store each of them), then it is often quicker to first build a separate list of pointers to the objects, sort those pointers, and then build the final sorted sequence. If the values are quite small however it is usually fastest to just swap the values in-place as needed. Furthermore, if the parameterized type is already of some pointer-type, then there is no need to build a separate pointer array. Template specialization allows the template creator to write different implementations and to specify the characteristics that the parameterized type(s) must have for each implementation to be used.
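As a simplified sketch of that idea (not the article's own example), a general template can be overridden by a full specialization that exploits a characteristic of one particular type:
<syntaxhighlight lang="cpp">
#include <cstddef>
#include <cstring>
#include <iostream>

// Primary template: element-by-element copy for arbitrary types.
template <typename T>
void copy_elements(T* dest, const T* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dest[i] = src[i];
}

// Full specialization for char: raw bytes can be copied with memcpy,
// an optimization chosen from a characteristic of the type.
template <>
void copy_elements<char>(char* dest, const char* src, std::size_t n) {
    std::memcpy(dest, src, n);
}

int main() {
    int a[3] = {1, 2, 3}, b[3];
    copy_elements(b, a, 3);        // primary template

    char s[6] = "hello", t[6];
    copy_elements(t, s, 6);        // char specialization
    std::cout << b[0] << ' ' << t << '\n';
}
</syntaxhighlight>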
Unlike function templates, class templates can be partially specialized. That means that an alternate version of the class template code can be provided when some of the template parameters are known, while leaving other template parameters generic. This can be used, for example, to create a default implementation (the primary specialization) that assumes that copying a parameterizing type is expensive and then create partial specializations for types that are cheap to copy, thus increasing overall efficiency. Clients of such a class template just use specializations of it without needing to know whether the compiler used the primary specialization or some partial specialization in each case. Class templates can also be fully specialized, which means that an alternate implementation can be provided when all of the parameterizing types are known.
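A minimal sketch of partial specialization (hypothetical names, not from the article): the alternate version below is selected whenever the parameter is a pointer type, while the pointee type stays generic:
<syntaxhighlight lang="cpp">
#include <iostream>

// Primary specialization: used for arbitrary types.
template <typename T>
struct Storage {
    static const char* kind() { return "general"; }
};

// Partial specialization: chosen whenever the parameter is a pointer type,
// with the pointee type T still left generic.
template <typename T>
struct Storage<T*> {
    static const char* kind() { return "pointer"; }
};

int main() {
    std::cout << Storage<int>::kind() << '\n';   // "general" (primary)
    std::cout << Storage<int*>::kind() << '\n';  // "pointer" (partial specialization)
}
</syntaxhighlight>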
===== Advantages and disadvantages =====
Some uses of templates, such as the max() function, were formerly filled by function-like preprocessor macros (a legacy of the C language). For example, here is a possible implementation of such a macro:
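(A representative macro; the article's exact listing is not preserved in this copy:)
<syntaxhighlight lang="c">
/* Function-like macro: textually expanded by the preprocessor.
   Note that one of the arguments is evaluated twice. */
#define max(a, b) ((a) < (b) ? (b) : (a))
</syntaxhighlight>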
Macros are expanded (copy-pasted) by the preprocessor before compilation; templates are real functions. Macros are always expanded inline; templates can also be inline functions when the compiler deems it appropriate.
However, templates are generally considered an improvement over macros for these purposes. Templates are type-safe. Templates avoid some of the common errors found in code that makes heavy use of function-like macros, such as evaluating parameters with side effects twice. Perhaps most importantly, templates were designed to be applicable to much larger problems than macros.
There are four primary drawbacks to the use of templates: supported features, compiler support, poor error messages (usually with pre-C++20 substitution failure is not an error (SFINAE)), and code bloat:
Templates in C++ lack many features, which often makes implementing and using them in a straightforward way impossible. Instead, programmers have to rely on complex tricks, which leads to bloated, hard-to-understand and hard-to-maintain code. Current developments in the C++ standards exacerbate this problem by making heavy use of these tricks and building many new features for templates on them or with them in mind.
Many compilers historically had poor support for templates, thus the use of templates could make code somewhat less portable. Support may also be poor when a C++ compiler is being used with a linker that is not C++-aware, or when attempting to use templates across shared library boundaries.
Compilers can produce confusing, long, and sometimes unhelpful error messages when errors are detected in code that uses SFINAE. This can make templates difficult to develop with.
Finally, the use of templates requires the compiler to generate a separate instance of the templated class or function for every combination of type parameters used with it. (This is necessary because types in C++ are not all the same size, and the sizes of data fields are important to how classes work.) So the indiscriminate use of templates can lead to code bloat, resulting in excessively large executables. However, judicious use of template specialization and derivation can dramatically reduce such code bloat in some cases:
So, can derivation be used to reduce the problem of code replicated because templates are used? This would involve deriving a template from an ordinary class. This technique proved successful in curbing code bloat in real use. People who do not use a technique like this have found that replicated code can cost megabytes of code space even in moderate-size programs.
Templated classes or functions may require an explicit specialization of the template class, which would require rewriting an entire class for the specific template parameters used by it.
The extra instantiations generated by templates can also cause some debuggers to have difficulty working gracefully with templates. For example, setting a debug breakpoint within a template from a source file may either miss setting the breakpoint in the actual instantiation desired or may set a breakpoint in every place the template is instantiated.
Also, the implementation source code for the template must be completely available (e.g. included in a header) to the translation unit (source file) using it. Templates, including much of the Standard Library, if not included in header files, cannot be compiled. (This is in contrast to non-templated code, which may be compiled to binary, providing only a declarations header file for code using it.) This may be a disadvantage by exposing the implementing code, which removes some abstractions, and could restrict its use in closed-source projects.
==== Templates in D ====
The D language supports templates based in design on C++. Most C++ template idioms work in D without alteration, but D adds some functionality:
Template parameters in D are not restricted to just types and primitive values (as was the case in C++ before C++20), but also allow arbitrary compile-time values (such as strings and struct literals), and aliases to arbitrary identifiers, including other templates or template instantiations.
Template constraints and the static if statement provide alternatives to C++'s concepts and if constexpr, respectively.
The is(...) expression allows speculative instantiation to verify an object's traits at compile time.
The auto keyword and the typeof expression allow type inference for variable declarations and function return values, which in turn allows "Voldemort types" (types that do not have a global name).
Templates in D use a different syntax than in C++: whereas in C++ template parameters are wrapped in angular brackets (Template<param1, param2>),
D uses an exclamation sign and parentheses: Template!(param1, param2).
This avoids the C++ parsing difficulties due to ambiguity with comparison operators.
If there is only one parameter, the parentheses can be omitted.
Conventionally, D combines the above features to provide compile-time polymorphism using trait-based generic programming.
For example, an input range is defined as any type that satisfies the checks performed by isInputRange, which is defined as follows:
A function that accepts only input ranges can then use the above template in a template constraint:
===== Code generation =====
In addition to template metaprogramming, D also provides several features to enable compile-time code generation:
The import expression allows reading a file from disk and using its contents as a string expression.
Compile-time reflection allows enumerating and inspecting declarations and their members during compiling.
User-defined attributes allow users to attach arbitrary identifiers to declarations, which can then be enumerated using compile-time reflection.
Compile-time function execution (CTFE) allows a subset of D (restricted to safe operations) to be interpreted during compiling.
String mixins allow evaluating and compiling the contents of a string expression as D code that becomes part of the program.
Combining the above allows generating code based on existing declarations.
For example, D serialization frameworks can enumerate a type's members and generate specialized functions for each serialized type
to perform serialization and deserialization.
User-defined attributes could further indicate serialization rules.
The import expression and compile-time function execution also allow efficiently implementing domain-specific languages.
For example, given a function that takes a string containing an HTML template and returns equivalent D source code, it is possible to use it in the following way:
==== Genericity in Eiffel ====
Generic classes have been a part of Eiffel since the original method and language design. The foundation publications of Eiffel use the term genericity to describe creating and using generic classes.
===== Basic, unconstrained genericity =====
Generic classes are declared with their class name and a list of one or more formal generic parameters. In the following code, class LIST has one formal generic parameter G
The formal generic parameters are placeholders for arbitrary class names that will be supplied when a declaration of the generic class is made, as shown in the two generic derivations below, where ACCOUNT and DEPOSIT are other class names. ACCOUNT and DEPOSIT are considered actual generic parameters as they provide real class names to substitute for G in actual use.
Within the Eiffel type system, although class LIST [G] is considered a class, it is not considered a type. However, a generic derivation of LIST [G] such as LIST [ACCOUNT] is considered a type.
===== Constrained genericity =====
For the list class shown above, an actual generic parameter substituting for G can be any other available class. To constrain the set of classes from which valid actual generic parameters can be chosen, a generic constraint can be specified. In the declaration of class SORTED_LIST below, the generic constraint dictates that any valid actual generic parameter will be a class that inherits from class COMPARABLE. The generic constraint ensures that elements of a SORTED_LIST can in fact be sorted.
==== Generics in Java ====
Support for generics, or "containers-of-type-T", was added to the Java programming language in 2004 as part of J2SE 5.0. In Java, generics are only checked at compile time for type correctness. The generic type information is then removed via a process called type erasure, to maintain compatibility with old JVM implementations, making it unavailable at runtime. For example, a List<String> is converted to the raw type List. The compiler inserts type casts to convert the elements to the String type when they are retrieved from the list, reducing performance compared to other implementations such as C++ templates.
==== Genericity in .NET [C#, VB.NET] ====
Generics were added as part of .NET Framework 2.0 in November 2005, based on a research prototype from Microsoft Research started in 1999. Although similar to generics in Java, .NET generics do not apply type erasure, but implement generics as a first-class mechanism in the runtime using reification. This design choice provides additional functionality, such as allowing reflection with preservation of generic types, and alleviating some of the limits of erasure (such as being unable to create generic arrays). This also means that there is no performance hit from runtime casts and normally expensive boxing conversions. When primitive and value types are used as generic arguments, they get specialized implementations, allowing for efficient generic collections and methods. As in C++ and Java, nested generic types such as Dictionary<string, List<int>> are valid types, however they are advised against for member signatures in code analysis design rules.
.NET allows six varieties of generic type constraints using the where keyword including restricting generic types to be value types, to be classes, to have constructors, and to implement interfaces. Below is an example with an interface constraint:
The MakeAtLeast() method allows operation on arrays, with elements of generic type T. The method's type constraint indicates that the method is applicable to any type T that implements the generic IComparable<T> interface. This ensures a compile-time error if the method is called with a type that does not support comparison. The interface provides the generic method CompareTo(T).
The above method could also be written without generic types, simply using the non-generic Array type. However, since arrays are covariant, the casting would not be type safe, and the compiler would be unable to find certain possible errors that would otherwise be caught when using generic types. In addition, the method would need to access the array items as objects instead, and would require casting to compare two elements. (For value types such as int this requires a boxing conversion, although this can be worked around using the Comparer<T> class, as is done in the standard collection classes.)
A notable behavior of static members in a generic .NET class is static member instantiation per run-time type (see example below).
==== Genericity in Pascal ====
For Pascal, generics were first implemented in 2006, in the implementation Free Pascal.
===== In Delphi =====
The Object Pascal dialect Delphi acquired generics in the 2007 Delphi 11 release by CodeGear, initially only with the .NET compiler (since discontinued) before being added to the native code in the 2009 Delphi 12 release. The semantics and abilities of Delphi generics are largely modelled on those of generics in .NET 2.0, though the implementation is by necessity quite different. Here is a more or less direct translation of the first C# example shown above:
As with C#, methods and whole types can have one or more type parameters. In the example, TArray is a generic type (defined by the language) and MakeAtLeast a generic method. The available constraints are very similar to the available constraints in C#: any value type, any class, a specific class or interface, and a class with a parameterless constructor. Multiple constraints act as an additive union.
===== In Free Pascal =====
Free Pascal implemented generics in 2006 in version 2.2.0, before Delphi and with different syntax and semantics. However, since FPC version 2.6.0, the Delphi-style syntax is available when using the language mode {$mode Delphi}. Thus, Free Pascal code supports generics in either style.
Delphi and Free Pascal example:
=== Functional languages ===
==== Genericity in Haskell ====
The type class mechanism of Haskell supports generic programming. Six of the predefined type classes in Haskell (including Eq, the types that can be compared for equality, and Show, the types whose values can be rendered as strings) have the special property of supporting derived instances. This means that a programmer defining a new type can state that this type is to be an instance of one of these special type classes, without providing implementations of the class methods as is usually necessary when declaring class instances. All the necessary methods will be "derived" – that is, constructed automatically – based on the structure of the type. For example, the following declaration of a type of binary trees states that it is to be an instance of the classes Eq and Show:
This results in an equality function (==) and a string representation function (show) being automatically defined for any type of the form BinTree T provided that T itself supports those operations.
The support for derived instances of Eq and Show makes their methods == and show generic in a qualitatively different way from parametrically polymorphic functions: these "functions" (more accurately, type-indexed families of functions) can be applied to values of various types, and although they behave differently for every argument type, little work is needed to add support for a new type. Ralf Hinze (2004) has shown that a similar effect can be achieved for user-defined type classes by certain programming techniques. Other researchers have proposed approaches to this and other kinds of genericity in the context of Haskell and extensions to Haskell (discussed below).
===== PolyP =====
PolyP was the first generic programming language extension to Haskell. In PolyP, generic functions are called polytypic. The language introduces a special construct in which such polytypic functions can be defined via structural induction over the structure of the pattern functor of a regular datatype. Regular datatypes in PolyP are a subset of Haskell datatypes. A regular datatype t must be of kind * → *, and if a is the formal type argument in the definition, then all recursive calls to t must have the form t a. These restrictions rule out higher-kinded datatypes and nested datatypes, where the recursive calls are of a different form.
The flatten function in PolyP is here provided as an example:
===== Generic Haskell =====
Generic Haskell is another extension to Haskell, developed at Utrecht University in the Netherlands. The extensions it provides are:
Type-indexed values are defined as a value indexed over the various Haskell type constructors (unit, primitive types, sums, products, and user-defined type constructors). In addition, we can also specify the behaviour of a type-indexed value for a specific constructor using constructor cases, and reuse one generic definition in another using default cases.
The resulting type-indexed value can be specialized to any type.
Kind-indexed types are types indexed over kinds, defined by giving a case for both * and k → k'. Instances are obtained by applying the kind-indexed type to a kind.
Generic definitions can be used by applying them to a type or kind. This is called generic application. The result is a type or value, depending on which sort of generic definition is applied.
Generic abstraction enables generic definitions to be defined by abstracting a type parameter (of a given kind).
Type-indexed types are types that are indexed over the type constructors. These can be used to give types to more involved generic values. The resulting type-indexed types can be specialized to any type.
As an example, the equality function in Generic Haskell:
==== Clean ====
Clean offers generic programming based on PolyP and Generic Haskell as supported by GHC ≥ 6.0. It parametrizes by kind as those do, but offers overloading.
=== Other languages ===
Languages in the ML family support generic programming through parametric polymorphism and generic modules called functors. Both Standard ML and OCaml provide functors, which are similar to class templates and to Ada's generic packages. Scheme syntactic abstractions also have a connection to genericity – these are in fact a superset of C++ templates.
A Verilog module may take one or more parameters, to which their actual values are assigned upon the instantiation of the module. One example is a generic register array where the array width is given via a parameter. Such an array, combined with a generic wire vector, can make a generic buffer or memory module with an arbitrary bit width out of a single module implementation.
VHDL, being derived from Ada, also has generic abilities.
C supports "type-generic expressions" using the _Generic keyword:
== See also ==
Concept (generic programming)
Partial evaluation
Template metaprogramming
Type polymorphism
== References ==
== Sources ==
== Further reading ==
Gabriel Dos Reis and Jaakko Järvi, What is Generic Programming?, LCSD 2005 Archived 28 August 2019 at the Wayback Machine.
Gibbons, Jeremy (2007). Backhouse, R.; Gibbons, J.; Hinze, R.; Jeuring, J. (eds.). Datatype-generic programming. Spring School on Datatype-Generic Programming 2006. Lecture Notes in Computer Science. Vol. 4719. Heidelberg: Springer. pp. 1–71. CiteSeerX 10.1.1.159.1228.
Meyer, Bertrand (1986). "Genericity versus inheritance". Conference proceedings on Object-oriented programming systems, languages and applications - OOPSLA '86. pp. 391–405. doi:10.1145/28697.28738. ISBN 0897912047. S2CID 285030.
== External links ==
generic-programming.org
Alexander A. Stepanov, Collected Papers of Alexander A. Stepanov (creator of the STL)
C++, D
Walter Bright, Templates Revisited.
David Vandevoorde, Nicolai M Josuttis, C++ Templates: The Complete Guide, 2003 Addison-Wesley. ISBN 0-201-73484-2
C#, .NET
Jason Clark, "Introducing Generics in the Microsoft CLR," September 2003, MSDN Magazine, Microsoft.
Jason Clark, "More on Generics in the Microsoft CLR," October 2003, MSDN Magazine, Microsoft.
M. Aamir Maniar, Generics.Net. An open source generics library for C#.
Delphi, Object Pascal
Nick Hodges, "Delphi 2009 Reviewers Guide," October 2008, Embarcadero Developer N
|
https://en.wikipedia.org/wiki/Generic_programming
|
wers Guide," October 2008, Embarcadero Developer Network, Embarcadero.
Craig Stuntz, "Delphi 2009 Generics and Type Constraints," October 2008
Dr. Bob, "Delphi 2009 Generics"
Free Pascal: Free Pascal Reference guide Chapter 8: Generics, Michaël Van Canneyt, 2007
Delphi for Win32: Generics with Delphi 2009 Win32, Sébastien DOERAENE, 2008
Delphi for .NET: Delphi Generics, Felix COLIBRI, 2008
Eiffel
Eiffel ISO/ECMA specification document
Haskell
Johan Jeuring, Sean Leather, José Pedro Magalhães, and Alexey Rodriguez Yakushev. Libraries for Generic Programming in Haskell. Utrecht University.
Dæv Clarke, Johan Jeuring and Andres Löh, The Generic Haskell user's guide
Ralf Hinze, "Generics for the Masses," In Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), 2004.
Simon Peyton Jones, editor, The Haskell 98 Language Report, Revised 2002.
Ralf Lämmel and Simon Peyton Jones, "Scrap Your Boilerplate: A Practical Design Pattern for Generic Programming," In Proceedings of the ACM SIGPLAN International Workshop on Types in Language Design and Implementation (TLDI'03), 2003. (Also see the website devoted to this research)
Andres Löh, Exploring Generic Haskell, PhD thesis, 2004 Utrecht University. ISBN 90-393-3765-9
Generic Haskell: a language for generic programming
Java
Gilad Bracha, Generics in the Java Programming Language, 2004.
Maurice Naftalin and Philip Wadler, Java Generics and Collections, 2006, O'Reilly Media, Inc. ISBN 0-596-52775-6
Peter Sestoft, Java Precisely, Second Edition, 2005 MIT Press. ISBN 0-262-69325-9
Generic Programming in Java, 2004 Sun Microsystems, Inc.
Angelika Langer, Java Generics FAQs
Procedural programming is a programming paradigm, classified as imperative programming, that involves implementing the behavior of a computer program as procedures (a.k.a. functions, subroutines) that call each other. The resulting program is a series of steps that forms a hierarchy of calls to its constituent procedures.
The first major procedural programming languages appeared c. 1957–1964, including Fortran, ALGOL, COBOL, PL/I and BASIC. Pascal and C were published c. 1970–1972.
Computer processors provide hardware support for procedural programming through a stack register and instructions for calling procedures and returning from them. Hardware support for other types of programming is possible, like Lisp machines or Java processors, but no attempt was commercially successful.
== Development practices ==
Certain software development practices are often employed with procedural programming in order to enhance quality and lower development and maintenance costs.
=== Modularity a