text (string, lengths 0-1.73k) | source (string, lengths 35-119) | category (2 classes)
---|---|---
(L, N, H_{in}) when "batch_first=False" or (N, L, H_{in}) when
"batch_first=True" containing the features of the input
sequence. The input can also be a packed variable length
sequence. See "torch.nn.utils.rnn.pack_padded_sequence()" or
"torch.nn.utils.rnn.pack_sequence()" for details.
* **h_0**: tensor of shape (D * \text{num\_layers}, H_{out}) for
unbatched input or (D * \text{num\_layers}, N, H_{out})
containing the initial hidden state for the input sequence
batch. Defaults to zeros if not provided.
where:
\begin{aligned} N ={} & \text{batch size} \\ L ={} &
\text{sequence length} \\ D ={} & 2 \text{ if
bidirectional=True otherwise } 1 \\ H_{in} ={} &
\text{input\_size} \\ H_{out} ={} & \text{hidden\_size}
\end{aligned}
Outputs: output, h_n
* output: tensor of shape (L, D * H_{out}) for unbatched
|
https://pytorch.org/docs/stable/generated/torch.nn.RNN.html
|
pytorch docs
|
input, (L, N, D * H_{out}) when "batch_first=False" or (N, L,
D * H_{out}) when "batch_first=True" containing the output
features (h_t) from the last layer of the RNN, for each t.
If a "torch.nn.utils.rnn.PackedSequence" has been given as the
input, the output will also be a packed sequence.
* **h_n**: tensor of shape (D * \text{num\_layers}, H_{out}) for
unbatched input or (D * \text{num\_layers}, N, H_{out})
containing the final hidden state for each element in the
batch.
Variables:
* weight_ih_l[k] -- the learnable input-hidden weights of
the k-th layer, of shape (hidden_size, input_size) for k =
0. Otherwise, the shape is (hidden_size, num_directions *
hidden_size)
* **weight_hh_l[k]** -- the learnable hidden-hidden weights of
the k-th layer, of shape *(hidden_size, hidden_size)*
* **bias_ih_l[k]** -- the learnable input-hidden bias of the
|
https://pytorch.org/docs/stable/generated/torch.nn.RNN.html
|
pytorch docs
|
k-th layer, of shape (hidden_size)
* **bias_hh_l[k]** -- the learnable hidden-hidden bias of the
k-th layer, of shape *(hidden_size)*
Note:
All the weights and biases are initialized from
\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{1}{\text{hidden\_size}}
Note:
For bidirectional RNNs, forward and backward are directions 0 and
1 respectively. Example of splitting the output layers when
"batch_first=False": "output.view(seq_len, batch, num_directions,
hidden_size)".
Note:
"batch_first" argument is ignored for unbatched inputs.
Warning:
There are known non-determinism issues for RNN functions on some
versions of cuDNN and CUDA. You can enforce deterministic
behavior by setting the following environment variables: On CUDA
10.1, set environment variable "CUDA_LAUNCH_BLOCKING=1". This may
affect performance. On CUDA 10.2 or later, set environment
variable (note the leading colon symbol)
|
https://pytorch.org/docs/stable/generated/torch.nn.RNN.html
|
pytorch docs
|
"CUBLAS_WORKSPACE_CONFIG=:16:8" or
"CUBLAS_WORKSPACE_CONFIG=:4096:2".See the cuDNN 8 Release Notes
for more information.
Note:
If the following conditions are satisfied: 1) cudnn is enabled,
2) input data is on the GPU, 3) input data has dtype
"torch.float16", 4) a V100 GPU is used, and 5) input data is not in
"PackedSequence" format, then the persistent algorithm can be
selected to improve performance.
Examples:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
|
https://pytorch.org/docs/stable/generated/torch.nn.RNN.html
|
pytorch docs
|
torch.Tensor.tanh_
Tensor.tanh_() -> Tensor
In-place version of "tanh()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.tanh_.html
|
pytorch docs
|
torch.deg2rad
torch.deg2rad(input, *, out=None) -> Tensor
Returns a new tensor with each of the elements of "input" converted
from angles in degrees to radians.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([[180.0, -180.0], [360.0, -360.0], [90.0, -90.0]])
>>> torch.deg2rad(a)
tensor([[ 3.1416, -3.1416],
[ 6.2832, -6.2832],
[ 1.5708, -1.5708]])
|
https://pytorch.org/docs/stable/generated/torch.deg2rad.html
|
pytorch docs
|
torch.rand
torch.rand(*size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor
Returns a tensor filled with random numbers from a uniform
distribution on the interval [0, 1)
The shape of the tensor is defined by the variable argument "size".
Parameters:
size (int...) -- a sequence of integers defining the
shape of the output tensor. Can be a variable number of
arguments or a collection like a list or tuple.
Keyword Arguments:
* generator ("torch.Generator", optional) -- a pseudorandom
number generator for sampling
* **out** (*Tensor**, **optional*) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
|
https://pytorch.org/docs/stable/generated/torch.rand.html
|
pytorch docs
|
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
Example:
>>> torch.rand(4)
tensor([ 0.5204, 0.2503, 0.3525, 0.5673])
>>> torch.rand(2, 3)
tensor([[ 0.8237, 0.5781, 0.6879],
[ 0.3816, 0.7249, 0.0998]])
|
https://pytorch.org/docs/stable/generated/torch.rand.html
|
pytorch docs
|
torch.Tensor.sinc
Tensor.sinc() -> Tensor
See "torch.sinc()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.sinc.html
|
pytorch docs
|
torch.autograd.profiler.load_nvprof
torch.autograd.profiler.load_nvprof(path)
Opens an nvprof trace file and parses autograd annotations.
Parameters:
path (str) -- path to nvprof trace
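A hedged sketch of the usual workflow (the trace path "trace.prof" and the script name are placeholders, and a CUDA device plus the nvprof tool are assumed): run the script under nvprof with "emit_nvtx()" active, then parse the trace.
    # Run under: nvprof --profile-from-start off -o trace.prof -- python my_script.py
    import torch

    x = torch.randn(8, 8, device="cuda", requires_grad=True)
    with torch.cuda.profiler.profile():
        with torch.autograd.profiler.emit_nvtx():
            (x * x).sum().backward()

    # Afterwards, load the autograd annotations recorded in the trace:
    events = torch.autograd.profiler.load_nvprof("trace.prof")
    print(events)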
|
https://pytorch.org/docs/stable/generated/torch.autograd.profiler.load_nvprof.html
|
pytorch docs
|
torch.Tensor.triu
Tensor.triu(diagonal=0) -> Tensor
See "torch.triu()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.triu.html
|
pytorch docs
|
torch.Tensor.ge
Tensor.ge(other) -> Tensor
See "torch.ge()".
|
https://pytorch.org/docs/stable/generated/torch.Tensor.ge.html
|
pytorch docs
|
check_sparse_tensor_invariants
class torch.sparse.check_sparse_tensor_invariants(enable=True)
A tool to control checking sparse tensor invariants.
The following options exist to manage sparse tensor invariants
checking in sparse tensor construction:
Using a context manager:
with torch.sparse.check_sparse_tensor_invariants():
run_my_model()
Using a procedural approach:
prev_checks_enabled = torch.sparse.check_sparse_tensor_invariants.is_enabled()
torch.sparse.check_sparse_tensor_invariants.enable()
run_my_model()
if not prev_checks_enabled:
torch.sparse.check_sparse_tensor_invariants.disable()
Using function decoration:
@torch.sparse.check_sparse_tensor_invariants()
def run_my_model():
...
run_my_model()
Using "check_invariants" keyword argument in sparse tensor
constructor call. For example:
|
https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html
|
pytorch docs
|
>>> torch.sparse_csr_tensor([0, 1, 3], [0, 1], [1, 2], check_invariants=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: `crow_indices[..., -1] == nnz` is not satisfied.
static disable()
Disable sparse tensor invariants checking in sparse tensor
constructors.
See "torch.sparse.check_sparse_tensor_invariants.enable()" for
more information.
static enable()
Enable sparse tensor invariants checking in sparse tensor
constructors.
Note:
By default, the sparse tensor invariants checks are disabled.
Use "torch.sparse.check_sparse_tensor_invariants.is_enabled()"
to retrieve the current state of sparse tensor invariants
checking.
Note:
The sparse tensor invariants check flag is effective to all
sparse tensor constructors, both in Python and ATen. The flag
|
https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html
|
pytorch docs
|
can be locally overridden by the "check_invariants" optional
argument of the sparse tensor constructor functions.
static is_enabled()
Returns True if the sparse tensor invariants checking is
enabled.
Note:
Use "torch.sparse.check_sparse_tensor_invariants.enable()" or
"torch.sparse.check_sparse_tensor_invariants.disable()" to
manage the state of the sparse tensor invariants checks.
|
https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html
|
pytorch docs
|
torch.sin
torch.sin(input, *, out=None) -> Tensor
Returns a new tensor with the sine of the elements of "input".
\text{out}_{i} = \sin(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.5461, 0.1347, -2.7266, -0.2746])
>>> torch.sin(a)
tensor([-0.5194, 0.1343, -0.4032, -0.2711])
|
https://pytorch.org/docs/stable/generated/torch.sin.html
|
pytorch docs
|
torch.autograd.graph.Node.register_prehook
abstract Node.register_prehook(fn)
Registers a backward pre-hook.
The hook will be called every time a gradient with respect to the
Node is computed. The hook should have the following signature:
hook(grad_outputs: Tuple[Tensor]) -> Tuple[Tensor] or None
The hook should not modify its argument, but it can optionally
return a new gradient which will be used in place of
"grad_outputs".
This function returns a handle with a method "handle.remove()" that
removes the hook from the module.
Note:
See Backward Hooks execution for more information on how when
this hook is executed, and how its execution is ordered relative
to other hooks.
Example:
>>> a = torch.tensor([0., 0., 0.], requires_grad=True)
>>> b = a.clone()
>>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)
>>> handle = b.grad_fn.register_prehook(lambda gI: (gI[0] * 2,))
|
https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_prehook.html
|
pytorch docs
|
>>> b.sum().backward(retain_graph=True)
>>> print(a.grad)
tensor([2., 2., 2.])
>>> handle.remove()
>>> a.grad = None
>>> b.sum().backward(retain_graph=True)
>>> print(a.grad)
tensor([1., 1., 1.])
Return type:
RemovableHandle
|
https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_prehook.html
|
pytorch docs
|
torch.nn.utils.rnn.pack_padded_sequence
torch.nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first=False, enforce_sorted=True)
Packs a Tensor containing padded sequences of variable length.
"input" can be of size "T x B x " where T is the length of the
longest sequence (equal to "lengths[0]"), "B" is the batch size,
and "" is any number of dimensions (including 0). If "batch_first"
is "True", "B x T x *" "input" is expected.
For unsorted sequences, use enforce_sorted = False. If
"enforce_sorted" is "True", the sequences should be sorted by
length in a decreasing order, i.e. "input[:,0]" should be the
longest sequence, and "input[:,B-1]" the shortest one.
enforce_sorted = True is only necessary for ONNX export.
Note:
This function accepts any input that has at least two dimensions.
You can apply it to pack the labels, and use the output of the
|
https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html
|
pytorch docs
|
RNN with them to compute the loss directly. A Tensor can be
retrieved from a "PackedSequence" object by accessing its ".data"
attribute.
Parameters:
* input (Tensor) -- padded batch of variable length
sequences.
* **lengths** (*Tensor** or **list**(**int**)*) -- list of
sequence lengths of each batch element (must be on the CPU if
provided as a tensor).
* **batch_first** (*bool**, **optional*) -- if "True", the input
is expected in "B x T x *" format.
* **enforce_sorted** (*bool**, **optional*) -- if "True", the
input is expected to contain sequences sorted by length in a
decreasing order. If "False", the input will get sorted
unconditionally. Default: "True".
Returns:
a "PackedSequence" object
Return type:
PackedSequence
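A short sketch (tensor contents are arbitrary) of packing a padded batch, running it through an RNN, and unpadding the result:
    import torch
    from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

    padded = torch.randn(3, 4, 5)            # batch of 3 sequences, padded to T=4, 5 features
    lengths = torch.tensor([4, 2, 1])        # true lengths, already sorted in decreasing order

    packed = pack_padded_sequence(padded, lengths, batch_first=True)
    rnn = torch.nn.RNN(input_size=5, hidden_size=8, batch_first=True)
    packed_out, h_n = rnn(packed)            # output is also a PackedSequence

    out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)   # back to (3, 4, 8)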
|
https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html
|
pytorch docs
|
torch.nn.utils.clip_grad_value_
torch.nn.utils.clip_grad_value_(parameters, clip_value, foreach=None)
Clips gradient of an iterable of parameters at specified value.
Gradients are modified in-place.
Parameters:
* parameters (Iterable[Tensor] or Tensor) -- an
iterable of Tensors or a single Tensor that will have
gradients normalized
* **clip_value** (*float*) -- maximum allowed value of the
gradients. The gradients are clipped in the range
\left[\text{-clip\_value}, \text{clip\_value}\right]
* **foreach** (*bool*) -- use the faster foreach-based
implementation If "None", use the foreach implementation for
CUDA and CPU tensors and silently fall back to the slow
implementation for other device types. Default: "None"
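A minimal sketch of where the call sits in a training step (model, data, and clip value are placeholders):
    import torch

    model = torch.nn.Linear(10, 1)
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=0.5)
    # every gradient entry now lies in [-0.5, 0.5]; typically followed by optimizer.step()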
|
https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_value_.html
|
pytorch docs
|
torch.Tensor.igammac_
Tensor.igammac_(other) -> Tensor
In-place version of "igammac()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.igammac_.html
|
pytorch docs
|
torch.autograd.functional.hessian
torch.autograd.functional.hessian(func, inputs, create_graph=False, strict=False, vectorize=False, outer_jacobian_strategy='reverse-mode')
Function that computes the Hessian of a given scalar function.
Parameters:
* func (function) -- a Python function that takes Tensor
inputs and returns a Tensor with a single element.
* **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the
function "func".
* **create_graph** (*bool**, **optional*) -- If "True", the
Hessian will be computed in a differentiable manner. Note that
when "strict" is "False", the result can not require gradients
or be disconnected from the inputs. Defaults to "False".
* **strict** (*bool**, **optional*) -- If "True", an error will
be raised when we detect that there exists an input such that
all the outputs are independent of it. If "False", we return a
|
https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html
|
pytorch docs
|
Tensor of zeros as the hessian for said inputs, which is the
expected mathematical value. Defaults to "False".
* **vectorize** (*bool**, **optional*) -- This feature is
experimental. Please consider using "torch.func.hessian()"
instead if you are looking for something less experimental and
more performant. When computing the hessian, usually we invoke
"autograd.grad" once per row of the hessian. If this flag is
"True", we use the vmap prototype feature as the backend to
vectorize calls to "autograd.grad" so we only invoke it once
instead of once per row. This should lead to performance
improvements in many use cases, however, due to this feature
being incomplete, there may be performance cliffs. Please use
*torch._C._debug_only_display_vmap_fallback_warnings(True)* to
show any performance warnings and file us issues if warnings
exist for your use case. Defaults to "False".
|
https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html
|
pytorch docs
|
* **outer_jacobian_strategy** (*str**, **optional*) -- The
Hessian is computed by computing the Jacobian of a Jacobian.
The inner Jacobian is always computed in reverse-mode AD.
Setting strategy to ""forward-mode"" or ""reverse-mode""
determines whether the outer Jacobian will be computed with
forward or reverse mode AD. Currently, computing the outer
Jacobian in ""forward-mode"" requires "vectorize=True".
Defaults to ""reverse-mode"".
Returns:
if there is a single input, this will be a single Tensor
containing the Hessian for the input. If it is a tuple, then the
Hessian will be a tuple of tuples where "Hessian[i][j]" will
contain the Hessian of the "i"th input and "j"th input with size
the sum of the size of the "i"th input plus the size of the
"j"th input. "Hessian[i][j]" will have the same dtype and device
as the corresponding "i"th input.
Return type:
|
https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html
|
pytorch docs
|
Hessian (Tensor or a tuple of tuple of Tensors)
-[ Example ]-
>>> def pow_reducer(x):
...     return x.pow(3).sum()
>>> inputs = torch.rand(2, 2)
>>> hessian(pow_reducer, inputs)
tensor([[[[5.2265, 0.0000],
[0.0000, 0.0000]],
[[0.0000, 4.8221],
[0.0000, 0.0000]]],
[[[0.0000, 0.0000],
[1.9456, 0.0000]],
[[0.0000, 0.0000],
[0.0000, 3.2550]]]])
>>> hessian(pow_reducer, inputs, create_graph=True)
tensor([[[[5.2265, 0.0000],
[0.0000, 0.0000]],
[[0.0000, 4.8221],
[0.0000, 0.0000]]],
[[[0.0000, 0.0000],
[1.9456, 0.0000]],
[[0.0000, 0.0000],
[0.0000, 3.2550]]]], grad_fn=)
>>> def pow_adder_reducer(x, y):
...     return (2 * x.pow(2) + 3 * y.pow(2)).sum()
>>> inputs = (torch.rand(2), torch.rand(2))
>>> hessian(pow_adder_reducer, inputs)
((tensor([[4., 0.],
|
https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html
|
pytorch docs
|
[0., 4.]]),
tensor([[0., 0.],
[0., 0.]])),
(tensor([[0., 0.],
[0., 0.]]),
tensor([[6., 0.],
[0., 6.]])))
|
https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html
|
pytorch docs
|
torch.hstack
torch.hstack(tensors, *, out=None) -> Tensor
Stack tensors in sequence horizontally (column wise).
This is equivalent to concatenation along the first axis for 1-D
tensors, and along the second axis for all other tensors.
Parameters:
tensors (sequence of Tensors) -- sequence of tensors to
concatenate
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([4, 5, 6])
>>> torch.hstack((a,b))
tensor([1, 2, 3, 4, 5, 6])
>>> a = torch.tensor([[1],[2],[3]])
>>> b = torch.tensor([[4],[5],[6]])
>>> torch.hstack((a,b))
tensor([[1, 4],
[2, 5],
[3, 6]])
|
https://pytorch.org/docs/stable/generated/torch.hstack.html
|
pytorch docs
|
torch.vmap
torch.vmap(func, in_dims=0, out_dims=0, randomness='error', *, chunk_size=None)
vmap is the vectorizing map; "vmap(func)" returns a new function
that maps "func" over some dimension of the inputs. Semantically,
vmap pushes the map into PyTorch operations called by "func",
effectively vectorizing those operations.
vmap is useful for handling batch dimensions: one can write a
function "func" that runs on examples and then lift it to a
function that can take batches of examples with "vmap(func)". vmap
can also be used to compute batched gradients when composed with
autograd.
Note:
"torch.vmap()" is aliased to "torch.func.vmap()" for convenience.
Use whichever one you'd like.
Parameters:
* func (function) -- A Python function that takes one or
more arguments. Must return one or more Tensors.
* **in_dims** (*int** or **nested structure*) -- Specifies which
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
dimension of the inputs should be mapped over. "in_dims"
should have a structure like the inputs. If the "in_dim" for a
particular input is None, then that indicates there is no map
dimension. Default: 0.
* **out_dims** (*int** or **Tuple**[**int**]*) -- Specifies
where the mapped dimension should appear in the outputs. If
"out_dims" is a Tuple, then it should have one element per
output. Default: 0.
* **randomness** (*str*) -- Specifies whether the randomness in
this vmap should be the same or different across batches. If
'different', the randomness for each batch will be different.
If 'same', the randomness will be the same across batches. If
'error', any calls to random functions will error. Default:
'error'. WARNING: this flag only applies to random PyTorch
operations and does not apply to Python's random module or
numpy randomness.
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
* **chunk_size** (*None** or **int*) -- If None (default), apply
a single vmap over inputs. If not None, then compute the vmap
"chunk_size" samples at a time. Note that "chunk_size=1" is
equivalent to computing the vmap with a for-loop. If you run
into memory issues computing the vmap, please try a non-None
chunk_size.
Returns:
Returns a new "batched" function. It takes the same inputs as
"func", except each input has an extra dimension at the index
specified by "in_dims". It takes returns the same outputs as
"func", except each output has an extra dimension at the index
specified by "out_dims".
Return type:
Callable
One example of using "vmap()" is to compute batched dot products.
PyTorch doesn't provide a batched "torch.dot" API; instead of
unsuccessfully rummaging through docs, use "vmap()" to construct a
new function.
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
torch.dot # [D], [D] -> []
batched_dot = torch.func.vmap(torch.dot) # [N, D], [N, D] -> [N]
x, y = torch.randn(2, 5), torch.randn(2, 5)
batched_dot(x, y)
"vmap()" can be helpful in hiding batch dimensions, leading to a
simpler model authoring experience.
batch_size, feature_size = 3, 5
weights = torch.randn(feature_size, requires_grad=True)
def model(feature_vec):
# Very simple linear model with activation
return feature_vec.dot(weights).relu()
examples = torch.randn(batch_size, feature_size)
result = torch.vmap(model)(examples)
"vmap()" can also help vectorize computations that were previously
difficult or impossible to batch. One example is higher-order
gradient computation. The PyTorch autograd engine computes vjps
(vector-Jacobian products). Computing a full Jacobian matrix for
some function f: R^N -> R^N usually requires N calls to
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
"autograd.grad", one per Jacobian row. Using "vmap()", we can
vectorize the whole computation, computing the Jacobian in a single
call to "autograd.grad".
# Setup
N = 5
f = lambda x: x ** 2
x = torch.randn(N, requires_grad=True)
y = f(x)
I_N = torch.eye(N)
# Sequential approach
jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]
for v in I_N.unbind()]
jacobian = torch.stack(jacobian_rows)
# Vectorized gradient computation
def get_vjp(v):
return torch.autograd.grad(y, x, v)
jacobian = torch.vmap(get_vjp)(I_N)
"vmap()" can also be nested, producing an output with multiple
batched dimensions
torch.dot # [D], [D] -> []
batched_dot = torch.vmap(torch.vmap(torch.dot)) # [N1, N0, D], [N1, N0, D] -> [N1, N0]
x, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5)
batched_dot(x, y) # tensor of size [2, 3]
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
If the inputs are not batched along the first dimension, "in_dims"
specifies the dimension along which each input is batched:
torch.dot # [N], [N] -> []
batched_dot = torch.vmap(torch.dot, in_dims=1) # [N, D], [N, D] -> [D]
x, y = torch.randn(2, 5), torch.randn(2, 5)
batched_dot(x, y) # output is [5] instead of [2] if batched along the 0th dimension
If there are multiple inputs each of which is batched along
different dimensions, "in_dims" must be a tuple with the batch
dimension for each input, as follows:
torch.dot # [D], [D] -> []
batched_dot = torch.vmap(torch.dot, in_dims=(0, None)) # [N, D], [D] -> [N]
x, y = torch.randn(2, 5), torch.randn(5)
batched_dot(x, y) # second arg doesn't have a batch dim because in_dim[1] was None
If the input is a Python struct, "in_dims" must be a tuple
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
containing a struct matching the shape of the input:
f = lambda dict: torch.dot(dict['x'], dict['y'])
x, y = torch.randn(2, 5), torch.randn(5)
input = {'x': x, 'y': y}
batched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},))
batched_dot(input)
By default, the output is batched along the first dimension.
However, it can be batched along any dimension by using "out_dims"
f = lambda x: x ** 2
x = torch.randn(2, 5)
batched_pow = torch.vmap(f, out_dims=1)
batched_pow(x) # [5, 2]
For any function that uses kwargs, the returned function will not
batch the kwargs but will accept kwargs
x = torch.randn([2, 5])
def fn(x, scale=4.):
return x * scale
batched_pow = torch.vmap(fn)
assert torch.allclose(batched_pow(x), x * 4)
batched_pow(x, scale=x) # scale is not batched, output has shape [2, 2, 5]
Note:
vmap does not provide general autobatching or handle variable-
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
length sequences out of the box.
|
https://pytorch.org/docs/stable/generated/torch.vmap.html
|
pytorch docs
|
torch.cuda.default_stream
torch.cuda.default_stream(device=None)
Returns the default "Stream" for a given device.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns the default "Stream" for the current device,
given by "current_device()", if "device" is "None" (default).
Return type:
Stream
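A small sketch, guarded so it only runs when CUDA is available:
    import torch

    if torch.cuda.is_available():
        s = torch.cuda.default_stream()                  # default stream of the current device
        s0 = torch.cuda.default_stream(torch.device("cuda:0"))
        print(s == torch.cuda.current_stream())          # True unless a non-default stream is active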
|
https://pytorch.org/docs/stable/generated/torch.cuda.default_stream.html
|
pytorch docs
|
torch.Tensor.numpy
Tensor.numpy(*, force=False) -> numpy.ndarray
Returns the tensor as a NumPy "ndarray".
If "force" is "False" (the default), the conversion is performed
only if the tensor is on the CPU, does not require grad, does not
have its conjugate bit set, and is a dtype and layout that NumPy
supports. The returned ndarray and the tensor will share their
storage, so changes to the tensor will be reflected in the ndarray
and vice versa.
If "force" is "True" this is equivalent to calling
"t.detach().cpu().resolve_conj().resolve_neg().numpy()". If the
tensor isn't on the CPU or the conjugate or negative bit is set,
the tensor won't share its storage with the returned ndarray.
Setting "force" to "True" can be a useful shorthand.
Parameters:
force (bool) -- if "True", the ndarray may be a copy of
the tensor instead of always sharing memory, defaults to
"False".
|
https://pytorch.org/docs/stable/generated/torch.Tensor.numpy.html
|
pytorch docs
|
torch.expm1
torch.expm1(input, *, out=None) -> Tensor
Alias for "torch.special.expm1()".
|
https://pytorch.org/docs/stable/generated/torch.expm1.html
|
pytorch docs
|
torch.nn.functional.pdist
torch.nn.functional.pdist(input, p=2) -> Tensor
Computes the p-norm distance between every pair of row vectors in
the input. This is identical to the upper triangular portion,
excluding the diagonal, of torch.norm(input[:, None] - input,
dim=2, p=p). This function will be faster if the rows are
contiguous.
If input has shape N \times M then the output will have shape
\frac{1}{2} N (N - 1).
This function is equivalent to "scipy.spatial.distance.pdist(input,
'minkowski', p=p)" if p \in (0, \infty). When p = 0 it is
equivalent to "scipy.spatial.distance.pdist(input, 'hamming') * M".
When p = \infty, the closest scipy function is
"scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x -
y).max())".
Parameters:
* input -- input tensor of shape N \times M.
* **p** -- p value for the p-norm distance to calculate between
each vector pair \in [0, \infty].
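A small sketch checking "pdist" against the brute-force pairwise norm described above:
    import torch
    import torch.nn.functional as F

    x = torch.randn(4, 3)
    d = F.pdist(x, p=2)                             # shape (4 * 3 / 2,) = (6,)
    full = torch.norm(x[:, None] - x, dim=2, p=2)   # full (4, 4) distance matrix
    iu = torch.triu_indices(4, 4, offset=1)
    print(torch.allclose(d, full[iu[0], iu[1]]))    # True up to numerical tolerance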
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.pdist.html
|
pytorch docs
|
LogSigmoid
class torch.nn.LogSigmoid
Applies the element-wise function:
\text{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 +
\exp(-x)}\right)
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.LogSigmoid()
>>> input = torch.randn(2)
>>> output = m(input)
|
https://pytorch.org/docs/stable/generated/torch.nn.LogSigmoid.html
|
pytorch docs
|
torch.Tensor.frac
Tensor.frac() -> Tensor
See "torch.frac()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.frac.html
|
pytorch docs
|
SELU
class torch.nn.SELU(inplace=False)
Applied element-wise, as:
\text{SELU}(x) = \text{scale} * (\max(0,x) + \min(0, \alpha *
(\exp(x) - 1)))
with \alpha = 1.6732632423543772848170429916717 and \text{scale} =
1.0507009873554804934193349852946.
Warning:
When using "kaiming_normal" or "kaiming_normal_" for
initialisation, "nonlinearity='linear'" should be used instead of
"nonlinearity='selu'" in order to get Self-Normalizing Neural
Networks. See "torch.nn.init.calculate_gain()" for more
information.
More details can be found in the paper Self-Normalizing Neural
Networks .
Parameters:
inplace (bool, optional) -- can optionally do the
operation in-place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.SELU()
>>> input = torch.randn(2)
>>> output = m(input)
|
https://pytorch.org/docs/stable/generated/torch.nn.SELU.html
|
pytorch docs
|
torch.nn.functional.nll_loss
torch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
The negative log likelihood loss.
See "NLLLoss" for details.
Parameters:
* input (Tensor) -- (N, C) where C = number of classes
or (N, C, H, W) in case of 2D Loss, or (N, C, d_1, d_2, ...,
d_K) where K \geq 1 in the case of K-dimensional loss. input
is expected to be log-probabilities.
* **target** (*Tensor*) -- (N) where each value is 0 \leq
\text{targets}[i] \leq C-1, or (N, d_1, d_2, ..., d_K) where K
\geq 1 for K-dimensional loss.
* **weight** (*Tensor**, **optional*) -- a manual rescaling
weight given to each class. If given, has to be a Tensor of
size *C*
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html
|
pytorch docs
|
loss element in the batch. Note that for some losses, there are
multiple elements per sample. If the field "size_average" is
set to "False", the losses are instead summed for each
minibatch. Ignored when reduce is "False". Default: "True"
* **ignore_index** (*int**, **optional*) -- Specifies a target
value that is ignored and does not contribute to the input
gradient. When "size_average" is "True", the loss is averaged
over non-ignored targets. Default: -100
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html
|
pytorch docs
|
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Return type:
Tensor
Example:
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = F.nll_loss(F.log_softmax(input, dim=1), target)
>>> output.backward()
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html
|
pytorch docs
|
torch.compile
torch.compile(model=None, *, fullgraph=False, dynamic=False, backend='inductor', mode=None, passes=None, **kwargs)
Optimizes given model/function using Dynamo and specified backend
Parameters:
* model (Callable) -- Module/function to optimize
* **fullgraph** (*bool*) -- Whether it is ok to break model into
several subgraphs
* **dynamic** (*bool*) -- Use dynamic shape tracing
* **backend** (*str** or **Callable*) -- backend to be used
* **mode** (*str*) -- Can be either "default", "reduce-overhead"
or "max-autotune"
* **passes** (*dict*) -- A dictionary of passes to the backend.
  Passes currently recognized by the inductor backend:
  static-memory, matmul-tune, matmul-padding, triton-autotune,
  triton-bmm, triton-mm, triton-convolution,
  rematerialize-threshold, rematerialize-acc-threshold
Return type:
Callable
Example:
|
https://pytorch.org/docs/stable/generated/torch.compile.html
|
pytorch docs
|
@torch.compile(passes={"matmul-padding": True}, fullgraph=True)
def foo(x):
return torch.sin(x) + torch.cos(x)
|
https://pytorch.org/docs/stable/generated/torch.compile.html
|
pytorch docs
|
torch.nn.functional.local_response_norm
torch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0)
Applies local response normalization over an input signal composed
of several input planes, where channels occupy the second
dimension. Applies normalization across channels.
See "LocalResponseNorm" for details.
Return type:
Tensor
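A minimal sketch applying the functional form to a 4-D input (sizes and hyperparameters are arbitrary):
    import torch
    import torch.nn.functional as F

    x = torch.randn(2, 8, 16, 16)                       # (N, C, H, W); normalization runs over C
    y = F.local_response_norm(x, size=4, alpha=1e-4, beta=0.75, k=1.0)
    print(y.shape)                                      # torch.Size([2, 8, 16, 16])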
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.local_response_norm.html
|
pytorch docs
|
torch.Tensor.kthvalue
Tensor.kthvalue(k, dim=None, keepdim=False)
See "torch.kthvalue()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.kthvalue.html
|
pytorch docs
|
ModuleList
class torch.nn.ModuleList(modules=None)
Holds submodules in a list.
"ModuleList" can be indexed like a regular Python list, but modules
it contains are properly registered, and will be visible by all
"Module" methods.
Parameters:
modules (iterable, optional) -- an iterable of modules
to add
Example:
class MyModule(nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])
def forward(self, x):
# ModuleList can act as an iterable, or be indexed using ints
for i, l in enumerate(self.linears):
x = self.linears[i // 2](x) + l(x)
return x
append(module)
Appends a given module to the end of the list.
Parameters:
**module** (*nn.Module*) -- module to append
Return type:
*ModuleList*
extend(modules)
|
https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html
|
pytorch docs
|
Appends modules from a Python iterable to the end of the list.
Parameters:
**modules** (*iterable*) -- iterable of modules to append
Return type:
*ModuleList*
insert(index, module)
Insert a given module before a given index in the list.
Parameters:
* **index** (*int*) -- index to insert.
* **module** (*nn.Module*) -- module to insert
|
https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html
|
pytorch docs
|
torch.nn.functional.adaptive_max_pool1d
torch.nn.functional.adaptive_max_pool1d(*args, **kwargs)
Applies a 1D adaptive max pooling over an input signal composed of
several input planes.
See "AdaptiveMaxPool1d" for details and output shape.
Parameters:
* output_size -- the target output size (single integer)
* **return_indices** -- whether to return pooling indices.
Default: "False"
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool1d.html
|
pytorch docs
|
FakeQuantize
class torch.quantization.fake_quantize.FakeQuantize(observer=, quant_min=None, quant_max=None, **observer_kwargs)
Simulate the quantize and dequantize operations in training time.
The output of this module is given by:
x_out = (
clamp(round(x/scale + zero_point), quant_min, quant_max) - zero_point
) * scale
"scale" defines the scale factor used for quantization.
"zero_point" specifies the quantized value to which 0 in floating
point maps to
"fake_quant_enabled" controls the application of fake
quantization on tensors, note that statistics can still be
updated.
"observer_enabled" controls statistics collection on tensors
"dtype" specifies the quantized dtype that is being emulated with
fake-quantization,
allowable values are torch.qint8 and torch.quint8.
Parameters:
|
https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantize.html
|
pytorch docs
|
* observer (module) -- Module for observing statistics on
input tensors and calculating scale and zero-point.
* **observer_kwargs** (*optional*) -- Arguments for the observer
module
Variables:
activation_post_process (Module) -- User provided module
that collects statistics on the input tensor and provides a
method to calculate scale and zero-point.
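A hedged sketch of standalone use; the observer choice and quantization range below are assumptions, not defaults taken from this page:
    import torch
    from torch.quantization.fake_quantize import FakeQuantize
    from torch.quantization.observer import MovingAverageMinMaxObserver

    fq = FakeQuantize(observer=MovingAverageMinMaxObserver,
                      quant_min=0, quant_max=255, dtype=torch.quint8)
    x = torch.randn(4)
    y = fq(x)                          # observer updates its statistics; output is fake-quantized
    print(fq.scale, fq.zero_point)     # qparams derived from the observed range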
|
https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantize.html
|
pytorch docs
|
torch.adjoint
torch.adjoint(Tensor) -> Tensor
Returns a view of the tensor conjugated and with the last two
dimensions transposed.
"x.adjoint()" is equivalent to "x.transpose(-2, -1).conj()" for
complex tensors and to "x.transpose(-2, -1)" for real tensors.
Example::
>>> x = torch.arange(4, dtype=torch.float)
>>> A = torch.complex(x, x).reshape(2, 2)
>>> A
tensor([[0.+0.j, 1.+1.j],
[2.+2.j, 3.+3.j]])
>>> A.adjoint()
tensor([[0.-0.j, 2.-2.j],
[1.-1.j, 3.-3.j]])
>>> (A.adjoint() == A.mH).all()
tensor(True)
|
https://pytorch.org/docs/stable/generated/torch.adjoint.html
|
pytorch docs
|
Softmin
class torch.nn.Softmin(dim=None)
Applies the Softmin function to an n-dimensional input Tensor
rescaling them so that the elements of the n-dimensional output
Tensor lie in the range [0, 1] and sum to 1.
Softmin is defined as:
\text{Softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)}
Shape:
* Input: (*), where * means any number of additional
  dimensions
* Output: (*), same shape as the input
Parameters:
dim (int) -- A dimension along which Softmin will be
computed (so every slice along dim will sum to 1).
Returns:
a Tensor of the same dimension and shape as the input, with
values in the range [0, 1]
Return type:
None
Examples:
>>> m = nn.Softmin(dim=1)
>>> input = torch.randn(2, 3)
>>> output = m(input)
|
https://pytorch.org/docs/stable/generated/torch.nn.Softmin.html
|
pytorch docs
|
torch.Tensor.masked_scatter
Tensor.masked_scatter(mask, tensor) -> Tensor
Out-of-place version of "torch.Tensor.masked_scatter_()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.masked_scatter.html
|
pytorch docs
|
torch.nn.utils.parameters_to_vector
torch.nn.utils.parameters_to_vector(parameters)
Convert parameters to one vector
Parameters:
parameters (Iterable[Tensor]) -- an iterator of
Tensors that are the parameters of a model.
Returns:
The parameters represented by a single vector
Return type:
Tensor
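A small sketch, often paired with "torch.nn.utils.vector_to_parameters()":
    import torch
    from torch.nn.utils import parameters_to_vector, vector_to_parameters

    model = torch.nn.Linear(3, 2)
    vec = parameters_to_vector(model.parameters())       # shape (3*2 + 2,) = (8,)
    vector_to_parameters(vec * 0.5, model.parameters())  # write modified values back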
|
https://pytorch.org/docs/stable/generated/torch.nn.utils.parameters_to_vector.html
|
pytorch docs
|
default_debug_qconfig
torch.quantization.qconfig.default_debug_qconfig
alias of QConfig(activation=,
weight=functools.partial(,
dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})
|
https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_debug_qconfig.html
|
pytorch docs
|
UninitializedParameter
class torch.nn.parameter.UninitializedParameter(requires_grad=True, device=None, dtype=None)
A parameter that is not initialized.
Uninitialized Parameters are a special case of
"torch.nn.Parameter" where the shape of the data is still unknown.
Unlike a "torch.nn.Parameter", uninitialized parameters hold no
data and attempting to access some properties, like their shape,
will throw a runtime error. The only operations that can be
performed on an uninitialized parameter are changing its datatype,
moving it to a different device and converting it to a regular
"torch.nn.Parameter".
The default device or dtype to use when the parameter is
materialized can be set during construction using e.g.
"device='cuda'".
cls_to_become
alias of "Parameter"
|
https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedParameter.html
|
pytorch docs
|
torch.linalg.tensorinv
torch.linalg.tensorinv(A, ind=2, *, out=None) -> Tensor
Computes the multiplicative inverse of "torch.tensordot()".
If m is the product of the first "ind" dimensions of "A" and n
is the product of the rest of the dimensions, this function expects
m and n to be equal. If this is the case, it computes a tensor
X such that tensordot("A", X, "ind") is the identity matrix
in dimension m. X will have the shape of "A" but with the first
"ind" dimensions pushed back to the end
X.shape == A.shape[ind:] + A.shape[:ind]
Supports input of float, double, cfloat and cdouble dtypes.
Note:
When "A" is a *2*-dimensional tensor and "ind"*= 1*, this
function computes the (multiplicative) inverse of "A" (see
"torch.linalg.inv()").
Note:
Consider using "torch.linalg.tensorsolve()" if possible for
multiplying a tensor on the left by the tensor inverse, as:
|
https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html
|
pytorch docs
|
linalg.tensorsolve(A, B) == torch.tensordot(linalg.tensorinv(A), B) # When B is a tensor with shape A.shape[:B.ndim]
It is always preferred to use "tensorsolve()" when possible, as
it is faster and more numerically stable than computing the
pseudoinverse explicitly.
See also:
"torch.linalg.tensorsolve()" computes
torch.tensordot(tensorinv(A), B).
Parameters:
* A (Tensor) -- tensor to invert. Its shape must satisfy
prod("A".shape[:"ind"]) == prod("A".shape["ind":]).
* **ind** (*int*) -- index at which to compute the inverse of
"torch.tensordot()". Default: *2*.
Keyword Arguments:
out (Tensor, optional) -- output tensor. Ignored if
None. Default: None.
Raises:
RuntimeError -- if the reshaped "A" is not invertible or the
product of the first "ind" dimensions is not equal to the
product of the rest.
Examples:
|
https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html
|
pytorch docs
|
>>> A = torch.eye(4 * 6).reshape((4, 6, 8, 3))
>>> Ainv = torch.linalg.tensorinv(A, ind=2)
>>> Ainv.shape
torch.Size([8, 3, 4, 6])
>>> B = torch.randn(4, 6)
>>> torch.allclose(torch.tensordot(Ainv, B), torch.linalg.tensorsolve(A, B))
True
>>> A = torch.randn(4, 4)
>>> Atensorinv = torch.linalg.tensorinv(A, ind=1)
>>> Ainv = torch.linalg.inv(A)
>>> torch.allclose(Atensorinv, Ainv)
True
|
https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html
|
pytorch docs
|
torch.Tensor.apply_
Tensor.apply_(callable) -> Tensor
Applies the function "callable" to each element in the tensor,
replacing each element with the value returned by "callable".
Note:
This function only works with CPU tensors and should not be used
in code sections that require high performance.
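A tiny sketch (the Python-level callable is applied element by element, which is why this is slow):
    import torch

    t = torch.tensor([1., 2., 3.])
    t.apply_(lambda v: v * 2 + 1)       # in place, CPU only
    print(t)                            # tensor([3., 5., 7.])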
|
https://pytorch.org/docs/stable/generated/torch.Tensor.apply_.html
|
pytorch docs
|
torch.softmax
torch.softmax(input, dim, *, dtype=None) -> Tensor
Alias for "torch.nn.functional.softmax()".
|
https://pytorch.org/docs/stable/generated/torch.softmax.html
|
pytorch docs
|
torch.randint
torch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor filled with random integers generated uniformly
between "low" (inclusive) and "high" (exclusive).
The shape of the tensor is defined by the variable argument "size".
Note:
With the global dtype default ("torch.float32"), this function
returns a tensor with dtype "torch.int64".
Parameters:
* low (int, optional) -- Lowest integer to be drawn
from the distribution. Default: 0.
* **high** (*int*) -- One above the highest integer to be drawn
from the distribution.
* **size** (*tuple*) -- a tuple defining the shape of the output
tensor.
Keyword Arguments:
* generator ("torch.Generator", optional) -- a pseudorandom
number generator for sampling
* **out** (*Tensor**, **optional*) -- the output tensor.
|
https://pytorch.org/docs/stable/generated/torch.randint.html
|
pytorch docs
|
* **dtype** ("torch.dtype", optional) -- if "None", this
  function returns a tensor with dtype "torch.int64".
* **layout** ("torch.layout", optional) -- the desired layout of
  returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
  returned tensor. Default: if "None", uses the current device
  for the default tensor type (see
  "torch.set_default_tensor_type()"). "device" will be the CPU
  for CPU tensor types and the current CUDA device for CUDA
  tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
  record operations on the returned tensor. Default: "False".
Example:
>>> torch.randint(3, 5, (3,))
tensor([4, 3, 4])
>>> torch.randint(10, (2, 2))
tensor([[0, 2],
[5, 5]])
>>> torch.randint(3, 10, (2, 2))
tensor([[4, 5],
[6, 7]])
|
https://pytorch.org/docs/stable/generated/torch.randint.html
|
pytorch docs
|
torch.Tensor.hardshrink
Tensor.hardshrink(lambd=0.5) -> Tensor
See "torch.nn.functional.hardshrink()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.hardshrink.html
|
pytorch docs
|
get_default_qconfig_mapping
class torch.ao.quantization.qconfig_mapping.get_default_qconfig_mapping(backend='x86', version=0)
Return the default QConfigMapping for post training quantization.
Parameters:
* **backend** (*str*) -- the quantization backend for the default
  qconfig mapping, should be one of ["x86" (default), "fbgemm",
  "qnnpack", "onednn"]
* **version** (*int*) -- the version for the default qconfig
  mapping
Return type:
QConfigMapping
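A hedged sketch of passing the mapping to FX graph-mode preparation; the toy model and example inputs are placeholders:
    import torch
    from torch.ao.quantization import get_default_qconfig_mapping
    from torch.ao.quantization.quantize_fx import prepare_fx

    qconfig_mapping = get_default_qconfig_mapping("x86")
    model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
    example_inputs = (torch.randn(1, 4),)
    prepared = prepare_fx(model, qconfig_mapping, example_inputs)   # insert observers, then calibrate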
|
https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.get_default_qconfig_mapping.html
|
pytorch docs
|
QuantStub
class torch.quantization.QuantStub(qconfig=None)
Quantize stub module. Before calibration, this is the same as an
observer; it will be swapped to nnq.Quantize in convert.
Parameters:
qconfig -- quantization configuration for the tensor, if
qconfig is not provided, we will get qconfig from parent modules
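A typical (hedged) eager-mode pattern, bracketing the float computation with QuantStub/DeQuantStub so convert knows where quantization starts and ends:
    import torch

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = torch.quantization.QuantStub()
            self.fc = torch.nn.Linear(4, 4)
            self.dequant = torch.quantization.DeQuantStub()

        def forward(self, x):
            x = self.quant(x)        # swapped to nnq.Quantize by convert
            x = self.fc(x)
            return self.dequant(x)   # swapped to nnq.DeQuantize by convert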
|
https://pytorch.org/docs/stable/generated/torch.quantization.QuantStub.html
|
pytorch docs
|
torch.any
torch.any(input) -> Tensor
Tests if any element in "input" evaluates to True.
Note:
This function matches the behaviour of NumPy in returning output
of dtype *bool* for all supported dtypes except *uint8*. For
*uint8* the dtype of output is *uint8* itself.
Example:
>>> a = torch.rand(1, 2).bool()
>>> a
tensor([[False, True]], dtype=torch.bool)
>>> torch.any(a)
tensor(True, dtype=torch.bool)
>>> a = torch.arange(0, 3)
>>> a
tensor([0, 1, 2])
>>> torch.any(a)
tensor(True)
torch.any(input, dim, keepdim=False, *, out=None) -> Tensor
For each row of "input" in the given dimension "dim", returns
True if any element in the row evaluate to True and False
otherwise.
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
|
https://pytorch.org/docs/stable/generated/torch.any.html
|
pytorch docs
|
the output tensor having 1 fewer dimension than "input".
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4, 2) < 0
>>> a
tensor([[ True, True],
[False, True],
[ True, True],
[False, False]])
>>> torch.any(a, 1)
tensor([ True, True, True, False])
>>> torch.any(a, 0)
tensor([True, True])
|
https://pytorch.org/docs/stable/generated/torch.any.html
|
pytorch docs
|
torch.Tensor.chunk
Tensor.chunk(chunks, dim=0) -> List of Tensors
See "torch.chunk()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.chunk.html
|
pytorch docs
|
torch.Tensor.erfinv
Tensor.erfinv() -> Tensor
See "torch.erfinv()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.erfinv.html
|
pytorch docs
|
torch.Tensor.sparse_resize_
Tensor.sparse_resize_(size, sparse_dim, dense_dim) -> Tensor
Resizes "self" sparse tensor to the desired size and the number of
sparse and dense dimensions.
Note:
If the number of specified elements in "self" is zero, then
"size", "sparse_dim", and "dense_dim" can be any size and
positive integers such that "len(size) == sparse_dim +
dense_dim".If "self" specifies one or more elements, however,
then each dimension in "size" must not be smaller than the
corresponding dimension of "self", "sparse_dim" must equal the
number of sparse dimensions in "self", and "dense_dim" must equal
the number of dense dimensions in "self".
Warning:
Throws an error if "self" is not a sparse tensor.
Parameters:
* size (torch.Size) -- the desired size. If "self" is non-
empty sparse tensor, the desired size cannot be smaller than
the original size.
|
https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_.html
|
pytorch docs
|
* **sparse_dim** (*int*) -- the number of sparse dimensions
* **dense_dim** (*int*) -- the number of dense dimensions
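A small sketch growing a sparse COO tensor in place (with specified elements present, dimensions may only grow and the sparse/dense dimension counts must match):
    import torch

    s = torch.tensor([[0., 1.], [2., 0.]]).to_sparse()   # 2 sparse dims, 0 dense dims
    s.sparse_resize_((4, 4), 2, 0)                        # size, sparse_dim, dense_dim
    print(s.shape)                                        # torch.Size([4, 4])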
|
https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_.html
|
pytorch docs
|
torch.linalg.multi_dot
torch.linalg.multi_dot(tensors, *, out=None)
Efficiently multiplies two or more matrices by reordering the
multiplications so that the fewest arithmetic operations are
performed.
Supports inputs of float, double, cfloat and cdouble dtypes. This
function does not support batched inputs.
Every tensor in "tensors" must be 2D, except for the first and last
which may be 1D. If the first tensor is a 1D vector of shape (n,)
it is treated as a row vector of shape (1, n), similarly if the
last tensor is a 1D vector of shape (n,) it is treated as a
column vector of shape (n, 1).
If the first and last tensors are matrices, the output will be a
matrix. However, if either is a 1D vector, then the output will be
a 1D vector.
Differences with numpy.linalg.multi_dot:
Unlike numpy.linalg.multi_dot, the first and last tensors must
either be 1D or 2D whereas NumPy allows them to be nD
Warning:
|
https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html
|
pytorch docs
|
This function does not broadcast.
Note:
This function is implemented by chaining "torch.mm()" calls after
computing the optimal matrix multiplication order.
Note:
The cost of multiplying two matrices with shapes *(a, b)* and
*(b, c)* is *a * b * c*. Given matrices *A*, *B*, *C* with shapes
*(10, 100)*, *(100, 5)*, *(5, 50)* respectively, we can calculate
the cost of different multiplication orders as follows:
\begin{align*} \operatorname{cost}((AB)C) &= 10 \times 100
\times 5 + 10 \times 5 \times 50 = 7500 \\
\operatorname{cost}(A(BC)) &= 10 \times 100 \times 50 + 100
\times 5 \times 50 = 75000 \end{align*}
In this case, multiplying *A* and *B* first followed by *C* is 10
times faster.
Parameters:
tensors (Sequence[Tensor]) -- two or more tensors to
multiply. The first and last tensors may be 1D or 2D. Every
other tensor must be 2D.
Keyword Arguments:
|
https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html
|
pytorch docs
|
out (Tensor, optional) -- output tensor. Ignored if
None. Default: None.
Examples:
>>> from torch.linalg import multi_dot
>>> multi_dot([torch.tensor([1, 2]), torch.tensor([2, 3])])
tensor(8)
>>> multi_dot([torch.tensor([[1, 2]]), torch.tensor([2, 3])])
tensor([8])
>>> multi_dot([torch.tensor([[1, 2]]), torch.tensor([[2], [3]])])
tensor([[8]])
>>> A = torch.arange(2 * 3).view(2, 3)
>>> B = torch.arange(3 * 2).view(3, 2)
>>> C = torch.arange(2 * 2).view(2, 2)
>>> multi_dot((A, B, C))
tensor([[ 26, 49],
[ 80, 148]])
|
https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html
|
pytorch docs
|
default_qconfig
torch.quantization.qconfig.default_qconfig
alias of QConfig(activation=functools.partial(, quant_min=0,
quant_max=127){}, weight=functools.partial(,
dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})
|
https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qconfig.html
|
pytorch docs
|
torch.fake_quantize_per_tensor_affine
torch.fake_quantize_per_tensor_affine(input, scale, zero_point, quant_min, quant_max) -> Tensor
Returns a new tensor with the data in "input" fake quantized using
"scale", "zero_point", "quant_min" and "quant_max".
\text{output} = ( min( \text{quant\_max}, max(
\text{quant\_min}, \text{std::nearby\_int}(\text{input}
/ \text{scale}) + \text{zero\_point} ) ) -
\text{zero\_point} ) \times \text{scale}
Parameters:
* input (Tensor) -- the input value(s), "torch.float32"
tensor
* **scale** (double scalar or "float32" Tensor) -- quantization
scale
* **zero_point** (int64 scalar or "int32" Tensor) --
quantization zero_point
* **quant_min** (*int64*) -- lower bound of the quantized domain
* **quant_max** (*int64*) -- upper bound of the quantized domain
Returns:
A newly fake_quantized "torch.float32" tensor
Return type:
Tensor
Example:
|
https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_tensor_affine.html
|
pytorch docs
|
>>> x = torch.randn(4)
>>> x
tensor([ 0.0552, 0.9730, 0.3973, -1.0780])
>>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)
tensor([0.1000, 1.0000, 0.4000, 0.0000])
>>> torch.fake_quantize_per_tensor_affine(x, torch.tensor(0.1), torch.tensor(0), 0, 255)
tensor([0.6000, 0.4000, 0.0000, 0.0000])
|
https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_tensor_affine.html
|
pytorch docs
|
torch.Tensor.rad2deg
Tensor.rad2deg() -> Tensor
See "torch.rad2deg()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.rad2deg.html
|
pytorch docs
|
torch.Tensor.view
Tensor.view(*shape) -> Tensor
Returns a new tensor with the same data as the "self" tensor but of
a different "shape".
The returned tensor shares the same data and must have the same
number of elements, but may have a different size. For a tensor to
be viewed, the new view size must be compatible with its original
size and stride, i.e., each new view dimension must either be a
subspace of an original dimension, or only span across original
dimensions d, d+1, \dots, d+k that satisfy the following
contiguity-like condition that \forall i = d, \dots, d+k-1,
\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]
Otherwise, it will not be possible to view "self" tensor as "shape"
without copying it (e.g., via "contiguous()"). When it is unclear
whether a "view()" can be performed, it is advisable to use
"reshape()", which returns a view if the shapes are compatible, and
|
https://pytorch.org/docs/stable/generated/torch.Tensor.view.html
|
pytorch docs
|
copies (equivalent to calling "contiguous()") otherwise.
Parameters:
shape (torch.Size or int...) -- the desired size
Example:
>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
>>> a = torch.randn(1, 2, 3, 4)
>>> a.size()
torch.Size([1, 2, 3, 4])
>>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])
>>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])
>>> torch.equal(b, c)
False
view(dtype) -> Tensor
Returns a new tensor with the same data as the "self" tensor but of
a different "dtype".
If the element size of "dtype" is different than that of
|
https://pytorch.org/docs/stable/generated/torch.Tensor.view.html
|
pytorch docs
|
"self.dtype", then the size of the last dimension of the output
will be scaled proportionally. For instance, if "dtype" element
size is twice that of "self.dtype", then each pair of elements in
the last dimension of "self" will be combined, and the size of the
last dimension of the output will be half that of "self". If
"dtype" element size is half that of "self.dtype", then each
element in the last dimension of "self" will be split in two, and
the size of the last dimension of the output will be double that of
"self". For this to be possible, the following conditions must be
true:
* "self.dim()" must be greater than 0.
* "self.stride(-1)" must be 1.
Additionally, if the element size of "dtype" is greater than that
of "self.dtype", the following conditions must be true as well:
* "self.size(-1)" must be divisible by the ratio between the
element sizes of the dtypes.
* "self.storage_offset()" must be divisible by the ratio between
|
https://pytorch.org/docs/stable/generated/torch.Tensor.view.html
|
pytorch docs
|
the element sizes of the dtypes.
* The strides of all dimensions, except the last dimension, must
be divisible by the ratio between the element sizes of the
dtypes.
If any of the above conditions are not met, an error is thrown.
Warning:
This overload is not supported by TorchScript, and using it in a
Torchscript program will cause undefined behavior.
Parameters:
dtype ("torch.dtype") -- the desired dtype
Example:
>>> x = torch.randn(4, 4)
>>> x
tensor([[ 0.9482, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.dtype
torch.float32
>>> y = x.view(torch.int32)
>>> y
tensor([[ 1064483442, -1124191867, 1069546515, -1089989247],
[-1105482831, 1061112040, 1057999968, -1084397505],
|
https://pytorch.org/docs/stable/generated/torch.Tensor.view.html
|
pytorch docs
|
[-1071760287, -1123489973, -1097310419, -1084649136],
[-1101533110, 1073668768, -1082790149, -1088634448]],
dtype=torch.int32)
>>> y[0, 0] = 1000000000
>>> x
tensor([[ 0.0047, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.view(torch.cfloat)
tensor([[ 0.0047-0.0310j, 1.4999-0.5316j],
[-0.1520+0.7472j, 0.5617-0.8649j],
[-2.4724-0.0334j, -0.2976-0.8499j],
[-0.2109+1.9913j, -0.9607-0.6123j]])
>>> x.view(torch.cfloat).size()
torch.Size([4, 2])
>>> x.view(torch.uint8)
tensor([[ 0, 202, 154, 59, 182, 243, 253, 188, 185, 252, 191, 63, 240, 22,
8, 191],
[227, 165, 27, 190, 128, 72, 63, 63, 146, 203, 15, 63, 22, 106,
93, 191],
|
https://pytorch.org/docs/stable/generated/torch.Tensor.view.html
|
pytorch docs
|
[205, 59, 30, 192, 112, 206, 8, 189, 7, 95, 152, 190, 12, 147,
89, 191],
[ 43, 246, 87, 190, 235, 226, 254, 63, 111, 240, 117, 191, 177, 191,
28, 191]], dtype=torch.uint8)
>>> x.view(torch.uint8).size()
torch.Size([4, 16])
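The size-scaling rules above can be summarized in a short sketch (illustrative only, assuming a float32 input whose element size is 4 bytes):

    import torch

    x = torch.randn(4, 4)            # float32, 4-byte elements
    x.view(torch.cfloat).size()      # torch.Size([4, 2]): 8-byte elements, last dim halved
    x.view(torch.uint8).size()       # torch.Size([4, 16]): 1-byte elements, last dim x4

    y = torch.randn(4, 3)
    # y.view(torch.cfloat) raises a RuntimeError: the last dimension (3) is
    # not divisible by the element-size ratio (8 bytes / 4 bytes = 2).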
|
https://pytorch.org/docs/stable/generated/torch.Tensor.view.html
|
pytorch docs
|
MultiStepLR
class torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=- 1, verbose=False)
Decays the learning rate of each parameter group by gamma once the
number of epoch reaches one of the milestones. Notice that such
decay can happen simultaneously with other changes to the learning
rate from outside this scheduler. When last_epoch=-1, sets initial
lr as lr.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **milestones** (*list*) -- List of epoch indices. Must be
increasing.
* **gamma** (*float*) -- Multiplicative factor of learning rate
decay. Default: 0.1.
* **last_epoch** (*int*) -- The index of last epoch. Default:
-1.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
    >>> # Assuming optimizer uses lr = 0.05 for all groups
    >>> # lr = 0.05     if epoch < 30
|
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html
|
pytorch docs
|
    >>> # lr = 0.005    if 30 <= epoch < 80
    >>> # lr = 0.0005   if epoch >= 80
    >>> scheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
    >>> for epoch in range(100):
    >>>     train(...)
    >>>     validate(...)
    >>>     scheduler.step()
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
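A short usage sketch (the model and optimizer below are placeholders, not part of the original docs) showing "get_last_lr()" together with checkpointing the scheduler via "state_dict()" / "load_state_dict()":

    import torch
    from torch.optim.lr_scheduler import MultiStepLR

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
    scheduler = MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)

    for epoch in range(5):
        optimizer.step()             # training step omitted for brevity
        scheduler.step()

    print(scheduler.get_last_lr())   # [0.05] -- no milestone reached yet

    # Save and restore the scheduler state alongside the optimizer state.
    state = scheduler.state_dict()
    restored = MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)
    restored.load_state_dict(state)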
|
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html
|
pytorch docs
|
torch.Tensor.sqrt_
Tensor.sqrt_() -> Tensor
In-place version of "sqrt()"
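A minimal usage sketch (values chosen for illustration):

    import torch

    x = torch.tensor([4.0, 9.0, 16.0])
    x.sqrt_()          # modifies x in place and returns it
    print(x)           # tensor([2., 3., 4.])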
|
https://pytorch.org/docs/stable/generated/torch.Tensor.sqrt_.html
|
pytorch docs
|
torch.autograd.function.FunctionCtx.save_for_backward
FunctionCtx.save_for_backward(*tensors)
Saves given tensors for a future call to "backward()".
"save_for_backward" should be called at most once, only from inside
the "forward()" method, and only with tensors.
All tensors intended to be used in the backward pass should be
saved with "save_for_backward" (as opposed to directly on "ctx") to
prevent incorrect gradients and memory leaks, and enable the
application of saved tensor hooks. See
"torch.autograd.graph.saved_tensors_hooks".
Note that if intermediary tensors, tensors that are neither inputs
nor outputs of "forward()", are saved for backward, your custom
Function may not support double backward. Custom Functions that do
not support double backward should decorate their "backward()"
method with "@once_differentiable" so that performing double
|
https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html
|
pytorch docs
|
backward raises an error. If you'd like to support double backward,
you can either recompute intermediaries based on the inputs during
backward or return the intermediaries as the outputs of the custom
Function. See the double backward tutorial for more details.
In "backward()", saved tensors can be accessed through the
"saved_tensors" attribute. Before returning them to the user, a
check is made to ensure they weren't used in any in-place operation
that modified their content.
Arguments can also be "None". This is a no-op.
See Extending torch.autograd for more details on how to use this
method.
Example:
>>> class Func(Function):
>>> @staticmethod
>>> def forward(ctx, x: torch.Tensor, y: torch.Tensor, z: int):
>>> w = x * z
>>> out = x * y + y * z + w * y
>>> ctx.save_for_backward(x, y, w, out)
>>> ctx.z = z # z is not a tensor
>>> return out
>>>
|
https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html
|
pytorch docs
|
>>> @staticmethod
>>> @once_differentiable
>>> def backward(ctx, grad_out):
>>> x, y, w, out = ctx.saved_tensors
>>> z = ctx.z
>>> gx = grad_out * (y + y * z)
>>> gy = grad_out * (x + z + w)
>>> gz = None
>>> return gx, gy, gz
>>>
>>> a = torch.tensor(1., requires_grad=True, dtype=torch.double)
>>> b = torch.tensor(2., requires_grad=True, dtype=torch.double)
>>> c = 4
>>> d = Func.apply(a, b, c)
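Continuing the example (this follow-up is not part of the original docstring), running the backward pass produces gradients that match the analytic derivatives of out = x*y + y*z + w*y with w = x*z:

    >>> d.backward()
    >>> # With x=1., y=2., z=4: a.grad = y + y*z = 10. and b.grad = x + z + w = 9.
    >>> a.grad, b.grad
    (tensor(10., dtype=torch.float64), tensor(9., dtype=torch.float64))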
|
https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html
|
pytorch docs
|
torch.full
torch.full(size, fill_value, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Creates a tensor of size "size" filled with "fill_value". The
tensor's dtype is inferred from "fill_value".
Parameters:
* size (int...) -- a list, tuple, or "torch.Size" of
integers defining the shape of the output tensor.
* **fill_value** (*Scalar*) -- the value to fill the output
tensor with.
Keyword Arguments:
* out (Tensor, optional) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
|
https://pytorch.org/docs/stable/generated/torch.full.html
|
pytorch docs
|
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> torch.full((2, 3), 3.141592)
tensor([[ 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416]])
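Since the dtype is inferred from "fill_value", a short sketch (behavior of recent PyTorch releases) of the inference rules:

    import torch

    torch.full((2, 3), 7).dtype        # torch.int64 -- integer fill_value
    torch.full((2, 3), 7.0).dtype      # torch.float32 -- default floating-point dtype
    torch.full((2, 3), True).dtype     # torch.bool
    torch.full((2, 3), 7, dtype=torch.float16).dtype   # explicit dtype wins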
|
https://pytorch.org/docs/stable/generated/torch.full.html
|
pytorch docs
|
torch.Tensor.digamma
Tensor.digamma() -> Tensor
See "torch.digamma()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.digamma.html
|
pytorch docs
|
default_dynamic_quant_observer
torch.quantization.observer.default_dynamic_quant_observer
alias of functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>,
dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){}
|
https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_dynamic_quant_observer.html
|
pytorch docs
|