TransformerDecoder.forward
Parameters:
    * **tgt** (*Tensor*) -- the sequence to the decoder (required).
* **memory** (*Tensor*) -- the sequence from the last layer
of the encoder (required).
* **tgt_mask** (*Optional**[**Tensor**]*) -- the mask for the
tgt sequence (optional).
* **memory_mask** (*Optional**[**Tensor**]*) -- the mask for
the memory sequence (optional).
* **tgt_key_padding_mask** (*Optional**[**Tensor**]*) -- the
mask for the tgt keys per batch (optional).
* **memory_key_padding_mask** (*Optional**[**Tensor**]*) --
the mask for the memory keys per batch (optional).
Return type:
*Tensor*
Shape:
see the docs in Transformer class.
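Example (a minimal usage sketch added for illustration, not part of
the original page; sizes are arbitrary and follow the default
sequence-first layout):
    import torch
    import torch.nn as nn

    decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
    decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
    memory = torch.rand(10, 32, 512)  # encoder output: (S, N, E)
    tgt = torch.rand(20, 32, 512)     # target sequence: (T, N, E)
    out = decoder(tgt, memory)        # all masks default to None
    print(out.shape)                  # torch.Size([20, 32, 512])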
|
https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html
|
pytorch docs
|
avg_pool3d
class torch.ao.nn.quantized.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)
Applies 3D average-pooling operation in kD \times kH \times kW
regions by step size sD \times sH \times sW steps. The number of
output features is equal to the number of input planes.
Note:
The input quantization parameters propagate to the output.
Parameters:
* input -- quantized input tensor (\text{minibatch} ,
  \text{in\_channels} , iD , iH , iW)
* **kernel_size** -- size of the pooling region. Can be a single
number or a tuple *(kD, kH, kW)*
* **stride** -- stride of the pooling operation. Can be a single
number or a tuple *(sD, sH, sW)*. Default: "kernel_size"
* **padding** -- implicit zero paddings on both sides of the
input. Can be a single number or a tuple *(padD, padH, padW)*.
Default: 0
* **ceil_mode** -- when True, will use *ceil* instead of *floor*
in the formula to compute the output shape. Default: "False"
* **count_include_pad** -- when True, will include the zero-
padding in the averaging calculation. Default: "True"
* **divisor_override** -- if specified, it will be used as
divisor, otherwise size of the pooling region will be used.
Default: None
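Example (a minimal sketch, not from the original page; the scale and
zero_point are arbitrary):
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.randn(1, 2, 4, 4, 4)  # (minibatch, channels, D, H, W)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128,
                                   dtype=torch.quint8)
    out = qF.avg_pool3d(qx, kernel_size=2, stride=2)
    print(out.shape)  # torch.Size([1, 2, 2, 2, 2])
    # Output scale/zero_point match the input's, per the note above.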
|
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool3d.html
|
pytorch docs
|
torch.Tensor.nan_to_num
Tensor.nan_to_num(nan=0.0, posinf=None, neginf=None) -> Tensor
See "torch.nan_to_num()".
|
https://pytorch.org/docs/stable/generated/torch.Tensor.nan_to_num.html
|
pytorch docs
|
torch.nn.functional.dropout
torch.nn.functional.dropout(input, p=0.5, training=True, inplace=False)
During training, randomly zeroes some of the elements of the input
tensor with probability "p" using samples from a Bernoulli
distribution.
See "Dropout" for details.
Parameters:
* p (float) -- probability of an element to be zeroed.
Default: 0.5
* **training** (*bool*) -- apply dropout if set to "True". Default:
"True"
* **inplace** (*bool*) -- If set to "True", will do this
operation in-place. Default: "False"
Return type:
Tensor
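Example (a small sketch, not from the original page; it shows the
1/(1-p) rescaling of surviving elements during training):
    import torch
    import torch.nn.functional as F

    x = torch.ones(2, 4)
    # Training mode: each element is zeroed with probability p=0.5
    # and survivors are scaled by 1/(1-p) = 2.
    print(F.dropout(x, p=0.5, training=True))
    # Eval mode: dropout is a no-op.
    print(F.dropout(x, p=0.5, training=False))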
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout.html
|
pytorch docs
|
torch.Tensor.dot
Tensor.dot(other) -> Tensor
See "torch.dot()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.dot.html
|
pytorch docs
|
torch.Tensor.fmin
Tensor.fmin(other) -> Tensor
See "torch.fmin()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.fmin.html
|
pytorch docs
|
torch.Tensor.expand
Tensor.expand(*sizes) -> Tensor
Returns a new view of the "self" tensor with singleton dimensions
expanded to a larger size.
Passing -1 as the size for a dimension means not changing the size
of that dimension.
Tensor can be also expanded to a larger number of dimensions, and
the new ones will be appended at the front. For the new dimensions,
the size cannot be set to -1.
Expanding a tensor does not allocate new memory, but only creates a
new view on the existing tensor where a dimension of size one is
expanded to a larger size by setting the "stride" to 0. Any
dimension of size 1 can be expanded to an arbitrary value without
allocating new memory.
Parameters:
sizes (torch.Size or int...*) -- the desired
expanded size
Warning:
More than one element of an expanded tensor may refer to a single
memory location. As a result, in-place operations (especially
ones that are vectorized) may result in incorrect behavior. If
you need to write to the tensors, please clone them first.
Example:
>>> x = torch.tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]])
>>> x.expand(-1, 4) # -1 means not changing the size of that dimension
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]])
|
https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html
|
pytorch docs
|
RReLU
class torch.nn.RReLU(lower=0.125, upper=0.3333333333333333, inplace=False)
Applies the randomized leaky rectified linear unit function,
element-wise, as described in the paper:
Empirical Evaluation of Rectified Activations in Convolutional
Network.
The function is defined as:
\text{RReLU}(x) = \begin{cases} x & \text{if } x \geq 0 \\
ax & \text{ otherwise } \end{cases}
where a is randomly sampled from uniform distribution
\mathcal{U}(\text{lower}, \text{upper}).
See: https://arxiv.org/pdf/1505.00853.pdf
Parameters:
* lower (float) -- lower bound of the uniform
distribution. Default: \frac{1}{8}
* **upper** (*float*) -- upper bound of the uniform
distribution. Default: \frac{1}{3}
* **inplace** (*bool*) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.RReLU(0.1, 0.3)
>>> input = torch.randn(2)
>>> output = m(input)
|
https://pytorch.org/docs/stable/generated/torch.nn.RReLU.html
|
pytorch docs
|
torch.Tensor.transpose_
Tensor.transpose_(dim0, dim1) -> Tensor
In-place version of "transpose()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.transpose_.html
|
pytorch docs
|
torch.nn.functional.max_unpool1d
torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)
Computes a partial inverse of "MaxPool1d".
See "MaxUnpool1d" for details.
Return type:
Tensor
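Example (a minimal sketch, not from the original page; note that
non-maximal positions are restored as zeros):
    import torch
    import torch.nn.functional as F

    x = torch.tensor([[[1., 2., 3., 4., 5., 6., 7., 8.]]])
    # return_indices=True records where each maximum came from,
    # which max_unpool1d needs to place values back.
    pooled, indices = F.max_pool1d(x, kernel_size=2, return_indices=True)
    restored = F.max_unpool1d(pooled, indices, kernel_size=2)
    print(restored)  # tensor([[[0., 2., 0., 4., 0., 6., 0., 8.]]])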
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool1d.html
|
pytorch docs
|
torch.nn.functional.linear
torch.nn.functional.linear(input, weight, bias=None) -> Tensor
Applies a linear transformation to the incoming data: y = xA^T + b.
This operation supports 2-D "weight" with sparse layout
Warning:
Sparse support is a beta feature and some layout(s)/dtype/device
combinations may not be supported, or may not have autograd
support. If you notice missing functionality please open a
feature request.
This operator supports TensorFloat32.
Shape:
* Input: (*, in\_features) where * means any number of
  additional dimensions, including none
* Weight: (out\_features, in\_features) or (in\_features)
* Bias: (out\_features) or ()
* Output: (*, out\_features) or (*), based on the shape of the
weight
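Example (a minimal sketch, not from the original page; shapes are
arbitrary):
    import torch
    import torch.nn.functional as F

    x = torch.randn(8, 3)  # (*, in_features)
    W = torch.randn(5, 3)  # (out_features, in_features)
    b = torch.randn(5)     # (out_features)
    y = F.linear(x, W, b)  # y = x @ W.T + b
    print(y.shape)         # torch.Size([8, 5])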
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.linear.html
|
pytorch docs
|
torch.nansum
torch.nansum(input, *, dtype=None) -> Tensor
Returns the sum of all elements, treating Not a Numbers (NaNs) as
zero.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
dtype ("torch.dtype", optional) -- the desired data type of
returned tensor. If specified, the input tensor is casted to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Example:
>>> a = torch.tensor([1., 2., float('nan'), 4.])
>>> torch.nansum(a)
tensor(7.)
torch.nansum(input, dim, keepdim=False, *, dtype=None) -> Tensor
Returns the sum of each row of the "input" tensor in the given
dimension "dim", treating Not a Numbers (NaNs) as zero. If "dim" is
a list of dimensions, reduce over all of them.
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints**, **optional*) -- the
dimension or dimensions to reduce. If "None", all dimensions
are reduced.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
dtype ("torch.dtype", optional) -- the desired data type of
returned tensor. If specified, the input tensor is casted to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Example:
>>> torch.nansum(torch.tensor([1., float("nan")]))
1.0
>>> a = torch.tensor([[1, 2], [3., float("nan")]])
>>> torch.nansum(a)
tensor(6.)
>>> torch.nansum(a, dim=0)
tensor([4., 2.])
>>> torch.nansum(a, dim=1)
tensor([3., 3.])
|
https://pytorch.org/docs/stable/generated/torch.nansum.html
|
pytorch docs
|
torch.Tensor.maximum
Tensor.maximum(other) -> Tensor
See "torch.maximum()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.maximum.html
|
pytorch docs
|
torch.t
torch.t(input) -> Tensor
Expects "input" to be <= 2-D tensor and transposes dimensions 0 and
1.
0-D and 1-D tensors are returned as is. When input is a 2-D tensor
this is equivalent to "transpose(input, 0, 1)".
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> x = torch.randn(())
>>> x
tensor(0.1995)
>>> torch.t(x)
tensor(0.1995)
>>> x = torch.randn(3)
>>> x
tensor([ 2.4320, -0.4608, 0.7702])
>>> torch.t(x)
tensor([ 2.4320, -0.4608, 0.7702])
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.4875, 0.9158, -0.5872],
[ 0.3938, -0.6929, 0.6932]])
>>> torch.t(x)
tensor([[ 0.4875, 0.3938],
[ 0.9158, -0.6929],
[-0.5872, 0.6932]])
See also "torch.transpose()".
|
https://pytorch.org/docs/stable/generated/torch.t.html
|
pytorch docs
|
torch.Tensor.lt_
Tensor.lt_(other) -> Tensor
In-place version of "lt()".
|
https://pytorch.org/docs/stable/generated/torch.Tensor.lt_.html
|
pytorch docs
|
torch.nn.functional.binary_cross_entropy
torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean')
Function that measures the Binary Cross Entropy between the target
and input probabilities.
See "BCELoss" for details.
Parameters:
* input (Tensor) -- Tensor of arbitrary shape as
probabilities.
* **target** (*Tensor*) -- Tensor of the same shape as input
with values between 0 and 1.
* **weight** (*Tensor**, **optional*) -- a manual rescaling
weight if provided it's repeated to match input tensor shape
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average" is
set to "False", the losses are instead summed for each
minibatch. Ignored when reduce is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Return type:
Tensor
Examples:
>>> input = torch.randn(3, 2, requires_grad=True)
>>> target = torch.rand(3, 2, requires_grad=False)
>>> loss = F.binary_cross_entropy(torch.sigmoid(input), target)
>>> loss.backward()
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html
|
pytorch docs
|
torch.fmin
torch.fmin(input, other, *, out=None) -> Tensor
Computes the element-wise minimum of "input" and "other".
This is like "torch.minimum()" except it handles NaNs differently:
if exactly one of the two elements being compared is a NaN then the
non-NaN element is taken as the minimum. Only if both elements are
NaN is NaN propagated.
This function is a wrapper around C++'s "std::fmin" and is similar
to NumPy's "fmin" function.
Supports broadcasting to a common shape, type promotion, and
integer and floating-point inputs.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([2.2, float('nan'), 2.1, float('nan')])
>>> b = torch.tensor([-9.3, 0.1, float('nan'), float('nan')])
>>> torch.fmin(a, b)
tensor([-9.3000, 0.1000, 2.1000, nan])
|
https://pytorch.org/docs/stable/generated/torch.fmin.html
|
pytorch docs
|
torch.min
torch.min(input) -> Tensor
Returns the minimum value of all elements in the "input" tensor.
Warning:
This function produces deterministic (sub)gradients unlike
"min(dim=0)"
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6750, 1.0857, 1.7197]])
>>> torch.min(a)
tensor(0.6750)
torch.min(input, dim, keepdim=False, *, out=None)
Returns a namedtuple "(values, indices)" where "values" is the
minimum value of each row of the "input" tensor in the given
dimension "dim". And "indices" is the index location of each
minimum value found (argmin).
If "keepdim" is "True", the output tensors are of the same size as
"input" except in the dimension "dim" where they are of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensors having 1 fewer dimension than "input".
Note:
If there are multiple minimal values in a reduced row then the
indices of the first minimal value are returned.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
out (tuple, optional) -- the tuple of two output
tensors (min, min_indices)
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.6248, 1.1334, -1.1899, -0.2803],
[-1.4644, -0.2635, -0.3651, 0.6134],
[ 0.2457, 0.0384, 1.0128, 0.7015],
[-0.1153, 2.9849, 2.1458, 0.5788]])
>>> torch.min(a, 1)
torch.return_types.min(values=tensor([-1.1899, -1.4644, 0.0384, -0.1153]), indices=tensor([2, 0, 1, 0]))
torch.min(input, other, *, out=None) -> Tensor
See "torch.minimum()".
|
https://pytorch.org/docs/stable/generated/torch.min.html
|
pytorch docs
|
torch.Tensor.remainder
Tensor.remainder(divisor) -> Tensor
See "torch.remainder()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.remainder.html
|
pytorch docs
|
torch._assert
torch._assert(condition, message)
A wrapper around Python's assert which is symbolically traceable.
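Example (a minimal sketch, not from the original page; a plain
Python assert on a traced value can fail during torch.fx symbolic
tracing, which is the case this wrapper addresses):
    import torch

    def check(x):
        torch._assert(x.dim() == 2, "expected a 2-D tensor")
        return x.sum()

    check(torch.ones(3, 3))  # passes silently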
|
https://pytorch.org/docs/stable/generated/torch._assert.html
|
pytorch docs
|
torch._foreach_log10_
torch._foreach_log10_(self: List[Tensor]) -> None
Apply "torch.log10()" to each Tensor of the input list.
|
https://pytorch.org/docs/stable/generated/torch._foreach_log10_.html
|
pytorch docs
|
torch.Tensor.argsort
Tensor.argsort(dim=-1, descending=False) -> LongTensor
See "torch.argsort()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.argsort.html
|
pytorch docs
|
CosineAnnealingWarmRestarts
class torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1, verbose=False)
Set the learning rate of each parameter group using a cosine
annealing schedule, where \eta_{max} is set to the initial lr,
T_{cur} is the number of epochs since the last restart and T_{i} is
the number of epochs between two warm restarts in SGDR:
\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} -
\eta_{min})\left(1 +
\cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)
When T_{cur}=T_{i}, set \eta_t = \eta_{min}. When T_{cur}=0 after
restart, set \eta_t=\eta_{max}.
It has been proposed in SGDR: Stochastic Gradient Descent with Warm
Restarts.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **T_0** (*int*) -- Number of iterations for the first restart.
* **T_mult** (*int**, **optional*) -- A factor increases T_{i}
after a restart. Default: 1.
* **eta_min** (*float**, **optional*) -- Minimum learning rate.
Default: 0.
* **last_epoch** (*int**, **optional*) -- The index of last
epoch. Default: -1.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
step(epoch=None)
Step could be called after every batch update
-[ Example ]-
>>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> iters = len(dataloader)
>>> for epoch in range(20):
>>> for i, sample in enumerate(dataloader):
>>> inputs, labels = sample['inputs'], sample['labels']
>>> optimizer.zero_grad()
>>> outputs = net(inputs)
>>> loss = criterion(outputs, labels)
>>> loss.backward()
>>> optimizer.step()
>>> scheduler.step(epoch + i / iters)
This function can be called in an interleaved way.
-[ Example ]-
>>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> for epoch in range(20):
>>> scheduler.step()
>>> scheduler.step(26)
>>> scheduler.step() # scheduler.step(27), instead of scheduler(20)
|
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html
|
pytorch docs
|
torch.inverse
torch.inverse(input, *, out=None) -> Tensor
Alias for "torch.linalg.inv()"
|
https://pytorch.org/docs/stable/generated/torch.inverse.html
|
pytorch docs
|
upsample_bilinear
torch.ao.nn.quantized.functional.upsample_bilinear(input, size=None, scale_factor=None)
Upsamples the input, using bilinear upsampling.
Warning:
This function is deprecated in favor of
"torch.nn.quantized.functional.interpolate()". This is equivalent
with "nn.quantized.functional.interpolate(..., mode='bilinear',
align_corners=True)".
Note:
The input quantization parameters propagate to the output.
Note:
Only 2D inputs are supported
Parameters:
* input (Tensor) -- quantized input
* **size** (*int** or **Tuple**[**int**, **int**]*) -- output
spatial size.
* **scale_factor** (*int** or **Tuple**[**int**, **int**]*) --
multiplier for spatial size
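Example (a minimal sketch, not from the original page; following the
deprecation warning above, it calls interpolate() directly with the
documented equivalent arguments):
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.randn(1, 3, 8, 8)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128,
                                   dtype=torch.quint8)
    out = qF.interpolate(qx, scale_factor=2, mode='bilinear',
                         align_corners=True)
    print(out.shape)  # torch.Size([1, 3, 16, 16])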
|
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample_bilinear.html
|
pytorch docs
|
torch.Tensor.diagonal_scatter
Tensor.diagonal_scatter(src, offset=0, dim1=0, dim2=1) -> Tensor
See "torch.diagonal_scatter()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.diagonal_scatter.html
|
pytorch docs
|
torch.unflatten
torch.unflatten(input, dim, sizes) -> Tensor
Expands a dimension of the input tensor over multiple dimensions.
See also:
"torch.flatten()" the inverse of this function. It coalesces
several dimensions into one.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- Dimension to be unflattened, specified as
an index into "input.shape".
* **sizes** (*Tuple**[**int**]*) -- New shape of the unflattened
dimension. One of its elements can be *-1* in which case the
corresponding output dimension is inferred. Otherwise, the
product of "sizes" *must* equal "input.shape[dim]".
Returns:
A View of input with the specified dimension unflattened.
Examples:
>>> torch.unflatten(torch.randn(3, 4, 1), 1, (2, 2)).shape
torch.Size([3, 2, 2, 1])
>>> torch.unflatten(torch.randn(3, 4, 1), 1, (-1, 2)).shape
torch.Size([3, 2, 2, 1])
>>> torch.unflatten(torch.randn(5, 12, 3), -1, (2, 2, 3, 1, 1)).shape
torch.Size([5, 2, 2, 3, 1, 1, 3])
|
https://pytorch.org/docs/stable/generated/torch.unflatten.html
|
pytorch docs
|
torch.nn.functional.interpolate
torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None, antialias=False)
Down/up samples the input to either the given "size" or the given
"scale_factor"
The algorithm used for interpolation is determined by "mode".
Currently temporal, spatial and volumetric sampling are supported,
i.e. expected inputs are 3-D, 4-D or 5-D in shape.
The input dimensions are interpreted in the form: mini-batch x
channels x [optional depth] x [optional height] x width.
The modes available for resizing are: nearest, linear (3D-only),
bilinear, bicubic (4D-only), trilinear (5D-only), area,
nearest-exact
Parameters:
* input (Tensor) -- the input tensor
* **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**,
**int**] or **Tuple**[**int**, **int**, **int**]*) -- output
spatial size.
* **scale_factor** (*float** or **Tuple**[**float**]*) --
multiplier for spatial size. If *scale_factor* is a tuple, its
length has to match the number of spatial dimensions;
*input.dim() - 2*.
* **mode** (*str*) -- algorithm used for upsampling: "'nearest'"
| "'linear'" | "'bilinear'" | "'bicubic'" | "'trilinear'" |
"'area'" | "'nearest-exact'". Default: "'nearest'"
* **align_corners** (*bool**, **optional*) -- Geometrically, we
consider the pixels of the input and output as squares rather
than points. If set to "True", the input and output tensors
are aligned by the center points of their corner pixels,
preserving the values at the corner pixels. If set to "False",
the input and output tensors are aligned by the corner points
of their corner pixels, and the interpolation uses edge value
padding for out-of-boundary values, making this operation
independent of input size when "scale_factor" is kept the
same. This only has an effect when "mode" is "'linear'",
"'bilinear'", "'bicubic'" or "'trilinear'". Default: "False"
* **recompute_scale_factor** (*bool**, **optional*) -- recompute
the scale_factor for use in the interpolation calculation. If
*recompute_scale_factor* is "True", then *scale_factor* must
be passed in and *scale_factor* is used to compute the output
*size*. The computed output *size* will be used to infer new
scales for the interpolation. Note that when *scale_factor* is
floating-point, it may differ from the recomputed
*scale_factor* due to rounding and precision issues. If
*recompute_scale_factor* is "False", then *size* or
*scale_factor* will be used directly for interpolation.
Default: "None".
* **antialias** (*bool**, **optional*) -- flag to apply anti-
aliasing. Default: "False". Using anti-alias option together
with "align_corners=False", interpolation result would match
Pillow result for downsampling operation. Supported modes:
"'bilinear'", "'bicubic'".
Return type:
Tensor
Note:
With "mode='bicubic'", it's possible to cause overshoot, in other
words it can produce negative values or values greater than 255
for images. Explicitly call "result.clamp(min=0, max=255)" if you
want to reduce the overshoot when displaying the image.
Note:
Mode "mode='nearest-exact'" matches Scikit-Image and PIL nearest
neighbours interpolation algorithms and fixes known issues with
"mode='nearest'". This mode is introduced to keep backward
compatibility. Mode "mode='nearest'" matches buggy OpenCV's
"INTER_NEAREST" interpolation algorithm.
Note:
This operation may produce nondeterministic gradients when given
tensors on a CUDA device. See Reproducibility for more
information.
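Example (a minimal sketch, not from the original page; antialias
requires a recent PyTorch release):
    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 8)  # 4-D input: (N, C, H, W)
    up = F.interpolate(x, scale_factor=2, mode='bilinear',
                       align_corners=False)
    down = F.interpolate(x, size=(4, 4), mode='bilinear',
                         align_corners=False, antialias=True)
    print(up.shape, down.shape)
    # torch.Size([1, 3, 16, 16]) torch.Size([1, 3, 4, 4])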
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html
|
pytorch docs
|
torch.not_equal
torch.not_equal(input, other, *, out=None) -> Tensor
Alias for "torch.ne()".
|
https://pytorch.org/docs/stable/generated/torch.not_equal.html
|
pytorch docs
|
LPPool2d
class torch.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False)
Applies a 2D power-average pooling over an input signal composed of
several input planes.
On each window, the function computed is:
f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}
At p = \infty, one gets Max Pooling
At p = 1, one gets Sum Pooling (which is proportional to average
pooling)
The parameters "kernel_size", "stride" can either be:
* a single "int" -- in which case the same value is used for the
height and width dimension
* a "tuple" of two ints -- in which case, the first *int* is
used for the height dimension, and the second *int* for the
width dimension
Note:
If the sum to the power of *p* is zero, the gradient of this
function is not defined. This implementation will set the
gradient to zero in this case.
Parameters:
* **kernel_size** (*Union**[**int**, **Tuple**[**int**,
  **int**]**]*) -- the size of the window
* **stride** (*Union**[**int**, **Tuple**[**int**, **int**]**]*)
-- the stride of the window. Default value is "kernel_size"
* **ceil_mode** (*bool*) -- when True, will use *ceil* instead
of *floor* to compute the output shape
Shape:
* Input: (N, C, H_{in}, W_{in})
* Output: (N, C, H_{out}, W_{out}), where
H_{out} = \left\lfloor\frac{H_{in} -
\text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor
W_{out} = \left\lfloor\frac{W_{in} -
\text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor
Examples:
>>> # power-2 pool of square window of size=3, stride=2
>>> m = nn.LPPool2d(2, 3, stride=2)
>>> # pool of non-square window of power 1.2
>>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
|
https://pytorch.org/docs/stable/generated/torch.nn.LPPool2d.html
|
pytorch docs
|
default_histogram_observer
torch.quantization.observer.default_histogram_observer
alias of functools.partial(<class 'torch.ao.quantization.observer.HistogramObserver'>,
quant_min=0, quant_max=127){}
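Example (a minimal sketch, not from the original page; it builds the
same kind of observer the alias configures and assumes the
HistogramObserver API described in the observer docs):
    import torch
    from torch.ao.quantization.observer import HistogramObserver

    obs = HistogramObserver(quant_min=0, quant_max=127)
    for _ in range(4):
        obs(torch.randn(16))  # record activation statistics
    scale, zero_point = obs.calculate_qparams()
    print(scale, zero_point)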
|
https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_histogram_observer.html
|
pytorch docs
|
torch.Tensor.cumsum_
Tensor.cumsum_(dim, dtype=None) -> Tensor
In-place version of "cumsum()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.cumsum_.html
|
pytorch docs
|
PixelShuffle
class torch.nn.PixelShuffle(upscale_factor)
Rearranges elements in a tensor of shape (*, C \times r^2, H, W)
to a tensor of shape (*, C, H \times r, W \times r), where r is an
upscale factor.
This is useful for implementing efficient sub-pixel convolution
with a stride of 1/r.
See the paper: Real-Time Single Image and Video Super-Resolution
Using an Efficient Sub-Pixel Convolutional Neural Network by Shi
et. al (2016) for more details.
Parameters:
upscale_factor (int) -- factor to increase spatial
resolution by
Shape:
* Input: (*, C_{in}, H_{in}, W_{in}), where * is zero or more
batch dimensions
* Output: (*, C_{out}, H_{out}, W_{out}), where
C_{out} = C_{in} \div \text{upscale\_factor}^2
H_{out} = H_{in} \times \text{upscale\_factor}
W_{out} = W_{in} \times \text{upscale\_factor}
Examples:
>>> pixel_shuffle = nn.PixelShuffle(3)
>>> input = torch.randn(1, 9, 4, 4)
>>> output = pixel_shuffle(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
|
https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html
|
pytorch docs
|
default_histogram_fake_quant
torch.quantization.fake_quantize.default_histogram_fake_quant
alias of functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>,
observer=<class 'torch.ao.quantization.observer.HistogramObserver'>, quant_min=0,
quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine,
reduce_range=True){}
|
https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_histogram_fake_quant.html
|
pytorch docs
|
torch.Tensor.min
Tensor.min(dim=None, keepdim=False)
See "torch.min()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.min.html
|
pytorch docs
|
Conv3d
class torch.ao.nn.qat.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None, device=None, dtype=None)
A Conv3d module attached with FakeQuantize modules for weight, used
for quantization aware training.
We adopt the same interface as torch.nn.Conv3d, please see
https://pytorch.org/docs/stable/nn.html?highlight=conv3d#torch.nn.Conv3d
for documentation.
Similar to torch.nn.Conv3d, with FakeQuantize modules initialized
to default.
Variables:
weight_fake_quant -- fake quant module for weight
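Example (a minimal sketch, not from the original page; a full QAT
flow would normally build this module via prepare_qat rather than
by hand):
    import torch
    from torch.ao.nn.qat import Conv3d
    from torch.ao.quantization import get_default_qat_qconfig

    qconfig = get_default_qat_qconfig('fbgemm')
    conv = Conv3d(3, 16, kernel_size=3, padding=1, qconfig=qconfig)
    x = torch.randn(1, 3, 8, 8, 8)
    y = conv(x)  # weight passes through conv.weight_fake_quant
    print(y.shape)  # torch.Size([1, 16, 8, 8, 8])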
|
https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Conv3d.html
|
pytorch docs
|
PoissonNLLLoss
class torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')
Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
\text{target} \sim \mathrm{Poisson}(\text{input})
\text{loss}(\text{input}, \text{target}) = \text{input} -
\text{target} * \log(\text{input}) +
\log(\text{target!})
The last term can be omitted or approximated with the Stirling
formula. The approximation is used for target values greater than
1. For targets less than or equal to 1, zeros are added to the
loss.
Parameters:
* log_input (bool, optional) -- if "True" the loss is
computed as \exp(\text{input}) - \text{target} * \text{input},
if "False" the loss is \text{input} -
\text{target} * \log(\text{input}+\text{eps}).
* **full** (*bool**, **optional*) --
whether to compute full loss, i.e. to add the Stirling
approximation term
\text{target}*\log(\text{target}) - \text{target} + 0.5 *
\log(2\pi\text{target}).
size_average (bool, optional) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
eps (float, optional) -- Small value to avoid
evaluation of \log(0) when "log_input = False". Default: 1e-8
reduce (bool, optional) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Examples:
>>> loss = nn.PoissonNLLLoss()
>>> log_input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> output = loss(log_input, target)
>>> output.backward()
Shape:
* Input: (*), where * means any number of dimensions.
* Target: (*), same shape as the input.
Output: scalar by default. If "reduction" is "'none'", then
(*), the same shape as the input.
|
https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html
|
pytorch docs
|
torch._foreach_acos
torch._foreach_acos(self: List[Tensor]) -> List[Tensor]
Apply "torch.acos()" to each Tensor of the input list.
|
https://pytorch.org/docs/stable/generated/torch._foreach_acos.html
|
pytorch docs
|
torch.bincount
torch.bincount(input, weights=None, minlength=0) -> Tensor
Count the frequency of each value in an array of non-negative ints.
The number of bins (size 1) is one larger than the largest value in
"input" unless "input" is empty, in which case the result is a
tensor of size 0. If "minlength" is specified, the number of bins
is at least "minlength" and if "input" is empty, then the result is
tensor of size "minlength" filled with zeros. If "n" is the value
at position "i", "out[n] += weights[i]" if "weights" is specified
else "out[n] += 1".
Note:
This operation may produce nondeterministic gradients when given
tensors on a CUDA device. See Reproducibility for more
information.
Parameters:
* input (Tensor) -- 1-d int tensor
* **weights** (*Tensor*) -- optional, weight for each value in
the input tensor. Should be of same size as input tensor.
* **minlength** (*int*) -- optional, minimum number of bins.
  Should be non-negative.
Returns:
a tensor of shape "Size([max(input) + 1])" if "input" is non-
empty, else "Size(0)"
Return type:
output (Tensor)
Example:
>>> input = torch.randint(0, 8, (5,), dtype=torch.int64)
>>> weights = torch.linspace(0, 1, steps=5)
>>> input, weights
(tensor([4, 3, 6, 3, 4]),
tensor([ 0.0000, 0.2500, 0.5000, 0.7500, 1.0000]))
>>> torch.bincount(input)
tensor([0, 0, 0, 2, 2, 0, 1])
>>> input.bincount(weights)
tensor([0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 0.0000, 0.5000])
|
https://pytorch.org/docs/stable/generated/torch.bincount.html
|
pytorch docs
|
torch.tril
torch.tril(input, diagonal=0, *, out=None) -> Tensor
Returns the lower triangular part of the matrix (2-D tensor) or
batch of matrices "input", the other elements of the result tensor
"out" are set to 0.
The lower triangular part of the matrix is defined as the elements
on and below the diagonal.
The argument "diagonal" controls which diagonal to consider. If
"diagonal" = 0, all elements on and below the main diagonal are
retained. A positive value includes just as many diagonals above
the main diagonal, and similarly a negative value excludes just as
many diagonals below the main diagonal. The main diagonal is the
set of indices \lbrace (i, i) \rbrace for i \in [0, \min{d_{1},
d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.
Parameters:
* input (Tensor) -- the input tensor.
* **diagonal** (*int**, **optional*) -- the diagonal to consider
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(3, 3)
>>> a
tensor([[-1.0813, -0.8619, 0.7105],
[ 0.0935, 0.1380, 2.2112],
[-0.3409, -0.9828, 0.0289]])
>>> torch.tril(a)
tensor([[-1.0813, 0.0000, 0.0000],
[ 0.0935, 0.1380, 0.0000],
[-0.3409, -0.9828, 0.0289]])
>>> b = torch.randn(4, 6)
>>> b
tensor([[ 1.2219, 0.5653, -0.2521, -0.2345, 1.2544, 0.3461],
[ 0.4785, -0.4477, 0.6049, 0.6368, 0.8775, 0.7145],
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.3615, 0.6864],
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0978]])
>>> torch.tril(b, diagonal=1)
tensor([[ 1.2219, 0.5653, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.4785, -0.4477, 0.6049, 0.0000, 0.0000, 0.0000],
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.0000, 0.0000],
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0000]])
>>> torch.tril(b, diagonal=-1)
tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.4785, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 1.1502, 3.2716, 0.0000, 0.0000, 0.0000, 0.0000],
[-0.0614, -0.7344, -1.3164, 0.0000, 0.0000, 0.0000]])
|
https://pytorch.org/docs/stable/generated/torch.tril.html
|
pytorch docs
|
torch._foreach_expm1
torch._foreach_expm1(self: List[Tensor]) -> List[Tensor]
Apply "torch.expm1()" to each Tensor of the input list.
|
https://pytorch.org/docs/stable/generated/torch._foreach_expm1.html
|
pytorch docs
|
torch.cuda.max_memory_allocated
torch.cuda.max_memory_allocated(device=None)
Returns the maximum GPU memory occupied by tensors in bytes for a
given device.
By default, this returns the peak allocated memory since the
beginning of this program. "reset_peak_memory_stats()" can be used
to reset the starting point in tracking this metric. For example,
these two functions can measure the peak allocated memory usage of
each iteration in a training loop.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
int
Note:
See Memory management for more details about GPU memory
management.
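Example (a minimal sketch of the loop pattern described above, not
from the original page; it needs a CUDA device):
    import torch

    if torch.cuda.is_available():
        for step in range(3):
            torch.cuda.reset_peak_memory_stats()
            x = torch.randn(1024, 1024, device='cuda')
            y = x @ x
            torch.cuda.synchronize()
            peak = torch.cuda.max_memory_allocated()
            print(f"step {step}: peak {peak / 1024**2:.1f} MiB")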
|
https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html
|
pytorch docs
|
torch.cuda.caching_allocator_delete
torch.cuda.caching_allocator_delete(mem_ptr)
Deletes memory allocated using the CUDA memory allocator.
Memory allocated with "caching_allocator_alloc()" is freed here.
The associated device and stream are tracked inside the allocator.
Parameters:
mem_ptr (int) -- memory address to be freed by the
allocator.
Note:
See Memory management for more details about GPU memory
management.
|
https://pytorch.org/docs/stable/generated/torch.cuda.caching_allocator_delete.html
|
pytorch docs
|
torch.Tensor.multinomial
Tensor.multinomial(num_samples, replacement=False, *, generator=None) -> Tensor
See "torch.multinomial()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.multinomial.html
|
pytorch docs
|
torch._foreach_zero_
torch._foreach_zero_(self: List[Tensor]) -> None
Apply "torch.zero()" to each Tensor of the input list.
|
https://pytorch.org/docs/stable/generated/torch._foreach_zero_.html
|
pytorch docs
|
torch.Tensor.round_
Tensor.round_(decimals=0) -> Tensor
In-place version of "round()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.round_.html
|
pytorch docs
|
torch.Tensor.msort
Tensor.msort() -> Tensor
See "torch.msort()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.msort.html
|
pytorch docs
|
torch.Tensor.resolve_conj
Tensor.resolve_conj() -> Tensor
See "torch.resolve_conj()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.resolve_conj.html
|
pytorch docs
|
LazyConvTranspose1d
class torch.nn.LazyConvTranspose1d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
A "torch.nn.ConvTranspose1d" module with lazy initialization of the
"in_channels" argument of the "ConvTranspose1d" that is inferred
from the "input.size(1)". The attributes that will be lazily
initialized are weight and bias.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* out_channels (int) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- "dilation *
(kernel_size - 1) - padding" zero-padding will be added to
both sides of the input. Default: 0
* **output_padding** (*int** or **tuple**, **optional*) --
Additional size added to one side of the output shape.
Default: 0
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
See also:
"torch.nn.ConvTranspose1d" and
"torch.nn.modules.lazy.LazyModuleMixin"
cls_to_become
alias of "ConvTranspose1d"
|
https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose1d.html
|
pytorch docs
|
torch.Tensor.erfinv_
Tensor.erfinv_() -> Tensor
In-place version of "erfinv()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.erfinv_.html
|
pytorch docs
|
torch.Tensor.rot90
Tensor.rot90(k, dims) -> Tensor
See "torch.rot90()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.rot90.html
|
pytorch docs
|
torch.Tensor.tril_
Tensor.tril_(diagonal=0) -> Tensor
In-place version of "tril()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.tril_.html
|
pytorch docs
|
torch.Tensor.float
Tensor.float(memory_format=torch.preserve_format) -> Tensor
"self.float()" is equivalent to "self.to(torch.float32)". See
"to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
|
https://pytorch.org/docs/stable/generated/torch.Tensor.float.html
|
pytorch docs
|
Embedding
class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, _freeze=False, device=None, dtype=None)
A simple lookup table that stores embeddings of a fixed dictionary
and size.
This module is often used to store word embeddings and retrieve
them using indices. The input to the module is a list of indices,
and the output is the corresponding word embeddings.
Parameters:
* num_embeddings (int) -- size of the dictionary of
embeddings
* **embedding_dim** (*int*) -- the size of each embedding vector
* **padding_idx** (*int**, **optional*) -- If specified, the
entries at "padding_idx" do not contribute to the gradient;
therefore, the embedding vector at "padding_idx" is not
updated during training, i.e. it remains as a fixed "pad". For
a newly constructed Embedding, the embedding vector at
"padding_idx" will default to all zeros, but can be updated to
another value to be used as the padding vector.
* **max_norm** (*float**, **optional*) -- If given, each
embedding vector with norm larger than "max_norm" is
renormalized to have norm "max_norm".
* **norm_type** (*float**, **optional*) -- The p of the p-norm
to compute for the "max_norm" option. Default "2".
* **scale_grad_by_freq** (*bool**, **optional*) -- If given,
this will scale gradients by the inverse of frequency of the
words in the mini-batch. Default "False".
* **sparse** (*bool**, **optional*) -- If "True", gradient
w.r.t. "weight" matrix will be a sparse tensor. See Notes for
more details regarding sparse gradients.
Variables:
weight (Tensor) -- the learnable weights of the module of
shape (num_embeddings, embedding_dim) initialized from
\mathcal{N}(0, 1)
Shape:
* Input: (*), IntTensor or LongTensor of arbitrary shape
containing the indices to extract
* Output: (*, H), where * is the input shape and
H=\text{embedding\_dim}
Note:
Keep in mind that only a limited number of optimizers support
sparse gradients: currently it's "optim.SGD" (*CUDA* and *CPU*),
"optim.SparseAdam" (*CUDA* and *CPU*) and "optim.Adagrad" (*CPU*)
Note:
When "max_norm" is not "None", "Embedding"'s forward method will
modify the "weight" tensor in-place. Since tensors needed for
gradient computations cannot be modified in-place, performing a
differentiable operation on "Embedding.weight" before calling
"Embedding"'s forward method requires cloning "Embedding.weight"
when "max_norm" is not "None". For example:
n, d, m = 3, 5, 7
embedding = nn.Embedding(n, d, max_norm=True)
W = torch.randn((m, d), requires_grad=True)
idx = torch.tensor([1, 2])
a = embedding.weight.clone() @ W.t() # weight must be cloned for this to be differentiable
b = embedding(idx) @ W.t() # modifies weight in-place
out = (a.unsqueeze(0) + b.unsqueeze(1))
loss = out.sigmoid().prod()
loss.backward()
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902, 0.7172],
[-0.6431, 0.0748, 0.6969],
[ 1.4970, 1.3448, -0.9685],
[-0.3677, -2.7265, -0.1685]],
[[ 1.4970, 1.3448, -0.9685],
[ 0.4362, -0.4004, 0.9400],
[-0.6431, 0.0748, 0.6969],
[ 0.9124, -2.3616, 1.1151]]])
>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = torch.LongTensor([[0, 2, 0, 5]])
>>> embedding(input)
tensor([[[ 0.0000, 0.0000, 0.0000],
[ 0.1535, -2.0309, 0.9315],
[ 0.0000, 0.0000, 0.0000],
[-0.1655, 0.9897, 0.0635]]])
>>> # example of changing `pad` vector
>>> padding_idx = 0
>>> embedding = nn.Embedding(3, 3, padding_idx=padding_idx)
>>> embedding.weight
Parameter containing:
tensor([[ 0.0000, 0.0000, 0.0000],
[-0.7895, -0.7089, -0.0364],
[ 0.6778, 0.5803, 0.2678]], requires_grad=True)
>>> with torch.no_grad():
... embedding.weight[padding_idx] = torch.ones(3)
>>> embedding.weight
Parameter containing:
tensor([[ 1.0000, 1.0000, 1.0000],
[-0.7895, -0.7089, -0.0364],
[ 0.6778, 0.5803, 0.2678]], requires_grad=True)
classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)
Creates Embedding instance from given 2-dimensional FloatTensor.
Parameters:
* **embeddings** (*Tensor*) -- FloatTensor containing weights
for the Embedding. First dimension is being passed to
Embedding as "num_embeddings", second as "embedding_dim".
* **freeze** (*bool**, **optional*) -- If "True", the tensor
does not get updated in the learning process. Equivalent to
"embedding.weight.requires_grad = False". Default: "True"
* **padding_idx** (*int**, **optional*) -- If specified, the
entries at "padding_idx" do not contribute to the gradient;
therefore, the embedding vector at "padding_idx" is not
updated during training, i.e. it remains as a fixed "pad".
* **max_norm** (*float**, **optional*) -- See module
initialization documentation.
* **norm_type** (*float**, **optional*) -- See module
initialization documentation. Default "2".
* **scale_grad_by_freq** (*bool**, **optional*) -- See module
initialization documentation. Default "False".
* **sparse** (*bool**, **optional*) -- See module
initialization documentation.
Examples:
>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embedding = nn.Embedding.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([1])
>>> embedding(input)
tensor([[ 4.0000, 5.1000, 6.3000]])
|
https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html
|
pytorch docs
|
torch.Tensor.amax
Tensor.amax(dim=None, keepdim=False) -> Tensor
See "torch.amax()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.amax.html
|
pytorch docs
|
torch.sparse_csc_tensor
torch.sparse_csc_tensor(ccol_indices, row_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor
Constructs a sparse tensor in CSC (Compressed Sparse Column) with
specified values at the given "ccol_indices" and "row_indices".
Sparse matrix multiplication operations in CSC format are typically
faster than those for sparse tensors in COO format. Make sure you
have a look at the note on the data type of the indices.
Note:
If the "device" argument is not specified the device of the given
"values" and indices tensor(s) must match. If, however, the
argument is specified the input Tensors will be converted to the
given device and in turn determine the device of the constructed
sparse tensor.
Parameters:
* ccol_indices (array_like) -- (B+1)-dimensional array of
size "(*batchsize, ncols + 1)". The last element of each
batch is the number of non-zeros. This tensor encodes the
index in values and row_indices depending on where the given
column starts. Each successive number in the tensor subtracted
by the number before it denotes the number of elements in a
given column.
* **row_indices** (*array_like*) -- Row co-ordinates of each
element in values. (B+1)-dimensional tensor with the same
length as values.
* **values** (*array_list*) -- Initial values for the tensor.
Can be a list, tuple, NumPy "ndarray", scalar, and other types
that represents a (1+K)-dimensional tensor where "K" is the
number of dense dimensions.
* **size** (list, tuple, "torch.Size", optional) -- Size of the
sparse tensor: "(*batchsize, nrows, ncols, *densesize)". If
not provided, the size will be inferred as the minimum size
big enough to hold all non-zero elements.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if None, infers data type from
"values".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **check_invariants** (*bool**, **optional*) -- If sparse
tensor invariants are checked. Default: as returned by
"torch.sparse.check_sparse_tensor_invariants.is_enabled()",
initially False.
Example::
>>> ccol_indices = [0, 2, 4]
>>> row_indices = [0, 1, 0, 1]
>>> values = [1, 2, 3, 4]
>>> torch.sparse_csc_tensor(torch.tensor(ccol_indices, dtype=torch.int64),
... torch.tensor(row_indices, dtype=torch.int64),
... torch.tensor(values), dtype=torch.double)
tensor(ccol_indices=tensor([0, 2, 4]),
row_indices=tensor([0, 1, 0, 1]),
values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,
dtype=torch.float64, layout=torch.sparse_csc)
|
https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html
|
pytorch docs
|
torch.Tensor.trace
Tensor.trace() -> Tensor
See "torch.trace()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.trace.html
|
pytorch docs
|
torch.cuda.memory_summary
torch.cuda.memory_summary(device=None, abbreviated=False)
Returns a human-readable printout of the current memory allocator
statistics for a given device.
This can be useful to display periodically during training, or when
handling out-of-memory exceptions.
Parameters:
* device (torch.device or int, optional) --
selected device. Returns printout for the current device,
given by "current_device()", if "device" is "None" (default).
* **abbreviated** (*bool**, **optional*) -- whether to return an
abbreviated summary (default: False).
Return type:
str
Note:
See Memory management for more details about GPU memory
management.
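Example (a minimal sketch, not from the original page; it needs a
CUDA device):
    import torch

    if torch.cuda.is_available():
        # Useful periodically during training or in an OOM handler.
        print(torch.cuda.memory_summary(abbreviated=True))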
|
https://pytorch.org/docs/stable/generated/torch.cuda.memory_summary.html
|
pytorch docs
|
torch.diff
torch.diff(input, n=1, dim=-1, prepend=None, append=None) -> Tensor
Computes the n-th forward difference along the given dimension.
The first-order differences are given by out[i] = input[i + 1] -
input[i]. Higher-order differences are calculated by using
"torch.diff()" recursively.
Parameters:
* input (Tensor) -- the tensor to compute the differences
on
* **n** (*int**, **optional*) -- the number of times to
recursively compute the difference
* **dim** (*int**, **optional*) -- the dimension to compute the
difference along. Default is the last dimension.
* **prepend** (*Tensor**, **optional*) -- values to prepend or
append to "input" along "dim" before computing the difference.
Their dimensions must be equivalent to that of input, and
their shapes must match input's shape except on "dim".
* **append** (*Tensor**, **optional*) -- values to prepend or
append to "input" along "dim" before computing the difference.
Their dimensions must be equivalent to that of input, and
their shapes must match input's shape except on "dim".
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([1, 3, 2])
>>> torch.diff(a)
tensor([ 2, -1])
>>> b = torch.tensor([4, 5])
>>> torch.diff(a, append=b)
tensor([ 2, -1, 2, 1])
>>> c = torch.tensor([[1, 2, 3], [3, 4, 5]])
>>> torch.diff(c, dim=0)
tensor([[2, 2, 2]])
>>> torch.diff(c, dim=1)
tensor([[1, 1],
[1, 1]])
|
https://pytorch.org/docs/stable/generated/torch.diff.html
|
pytorch docs
|
torch.Tensor.eq_
Tensor.eq_(other) -> Tensor
In-place version of "eq()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.eq_.html
|
pytorch docs
|
torch.addbmm
torch.addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) -> Tensor
Performs a batch matrix-matrix product of matrices stored in
"batch1" and "batch2", with a reduced add step (all matrix
multiplications get accumulated along the first dimension). "input"
is added to the final result.
"batch1" and "batch2" must be 3-D tensors each containing the same
number of matrices.
If "batch1" is a (b \times n \times m) tensor, "batch2" is a (b
\times m \times p) tensor, "input" must be broadcastable with a (n
\times p) tensor and "out" will be a (n \times p) tensor.
out = \beta\ \text{input} + \alpha\ (\sum_{i=0}^{b-1}
\text{batch1}_i \mathbin{@} \text{batch2}_i)
If "beta" is 0, then "input" will be ignored, and nan and inf
in it will not be propagated.
For inputs of type FloatTensor or DoubleTensor, arguments
"beta" and "alpha" must be real numbers, otherwise they should be
integers.
This operator supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Parameters:
* batch1 (Tensor) -- the first batch of matrices to be
multiplied
* **batch2** (*Tensor*) -- the second batch of matrices to be
multiplied
Keyword Arguments:
* beta (Number, optional) -- multiplier for "input"
(\beta)
* **input** (*Tensor*) -- matrix to be added
* **alpha** (*Number**, **optional*) -- multiplier for *batch1 @
batch2* (\alpha)
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> M = torch.randn(3, 5)
>>> batch1 = torch.randn(10, 3, 4)
>>> batch2 = torch.randn(10, 4, 5)
>>> torch.addbmm(M, batch1, batch2)
tensor([[ 6.6311, 0.0503, 6.9768, -12.0362, -2.1653],
[ -4.8185, -1.4255, -6.6760, 8.9453, 2.5743],
[ -3.8202, 4.3691, 1.0943, -1.1109, 5.4730]])
|
https://pytorch.org/docs/stable/generated/torch.addbmm.html
|
pytorch docs
|
ScriptModule
class torch.jit.ScriptModule
A wrapper around C++ "torch::jit::Module". "ScriptModule"s contain
methods, attributes, parameters, and constants. These can be
accessed the same way as on a normal "nn.Module".
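Example (a minimal sketch, not from the original page; scripting a
small module yields a ScriptModule):
    import torch
    import torch.nn as nn

    class MyModule(nn.Module):
        def forward(self, x):
            return x.relu() + 1

    scripted = torch.jit.script(MyModule())
    print(isinstance(scripted, torch.jit.ScriptModule))  # True
    print(scripted(torch.tensor([-1.0, 2.0])))  # tensor([1., 3.])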
add_module(name, module)
Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
Parameters:
* **name** (*str*) -- name of the child module. The child
module can be accessed from this module using the given
name
* **module** (*Module*) -- child module to be added to the
module.
apply(fn)
Applies "fn" recursively to every submodule (as returned by
".children()") as well as self. Typical use includes
initializing the parameters of a model (see also torch.nn.init).
Parameters:
**fn** ("Module" -> None) -- function to be applied to each
submodule
Returns:
self
Return type:
Module
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
[1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
[1., 1.]], requires_grad=True)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
bfloat16()
Casts all floating point parameters and buffers to "bfloat16"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
buffers(recurse=True)
Returns an iterator over module buffers.
Parameters:
**recurse** (*bool*) -- if True, then yields buffers of this
module and all submodules. Otherwise, yields only buffers
that are direct members of this module.
Yields:
*torch.Tensor* -- module buffer
Return type:
*Iterator*[*Tensor*]
Example:
>>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children()
Returns an iterator over immediate children modules.
Yields:
*Module* -- a child module
Return type:
*Iterator*[*Module*]
property code
Returns a pretty-printed representation (as valid Python syntax)
of the internal graph for the "forward" method.
|
https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html
|
pytorch docs
|