property code
Returns a pretty-printed representation (as valid Python syntax)
of the internal graph for the "forward" method. See Inspecting
Code for details.
property code_with_constants
Returns a tuple of:
[0] a pretty-printed representation (as valid Python syntax) of
the internal graph for the "forward" method. See *code*. [1] a
ConstMap following the CONSTANT.cN format of the output in [0].
The indices in the [0] output are keys to the underlying
constant's values.
See Inspecting Code for details.
cpu()
Moves all model parameters and buffers to the CPU.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
cuda(device=None)
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different
objects, so it should be called before constructing the optimizer
if the module will live on the GPU while being optimized.
Note:
This method modifies the module in-place.
Parameters:
**device** (*int**, **optional*) -- if specified, all
parameters will be copied to that device
Returns:
self
Return type:
Module
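For example, a minimal sketch (the model and learning rate are
illustrative) of moving a module to the GPU before building the
optimizer:
>>> model = nn.Linear(4, 2).cuda(0)   # parameters become CUDA tensors in-place
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)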
double()
Casts all floating point parameters and buffers to "double"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
eval()
Sets the module in evaluation mode.
This has an effect only on certain modules. See the documentation
of particular modules for details of their behavior in
training/evaluation mode, if they are affected, e.g. "Dropout",
"BatchNorm", etc.
This is equivalent to "self.train(False)".
See Locally disabling gradient computation for a comparison
between *.eval()* and several similar mechanisms that may be
confused with it.
Returns:
self
Return type:
Module
extra_repr()
Set the extra representation of the module
To print customized extra information, you should re-implement
this method in your own modules. Both single-line and multi-line
strings are acceptable.
Return type:
str
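A minimal sketch of overriding "extra_repr" (the "MyLayer" module
and its "n" field are made up for illustration):
>>> class MyLayer(nn.Module):
...     def __init__(self, n):
...         super().__init__()
...         self.n = n
...     def extra_repr(self):
...         return f'n={self.n}'
>>> MyLayer(3)
MyLayer(n=3)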
float()
Casts all floating point parameters and buffers to "float"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
get_buffer(target)
Returns the buffer given by "target" if it exists, otherwise
throws an error.
See the docstring for "get_submodule" for a more detailed
explanation of this method's functionality as well as how to
correctly specify "target".
Parameters:
**target** (*str*) -- The fully-qualified string name of the
buffer to look for. (See "get_submodule" for how to specify a
fully-qualified string.)
Returns:
The buffer referenced by "target"
Return type:
torch.Tensor
Raises:
**AttributeError** -- If the target string references an
invalid path or resolves to something that is not a
buffer
get_extra_state()
Returns any extra state to include in the module's state_dict.
Implement this and a corresponding "set_extra_state()" for your
module if you need to store extra state. This function is called
when building the module's *state_dict()*.
Note that extra state should be picklable to ensure working
serialization of the state_dict. We only provide backwards
compatibility guarantees for serializing Tensors; other objects
may break backwards compatibility if their serialized pickled
form changes.
Returns:
Any extra state to store in the module's state_dict
Return type:
object
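A sketch of a matching "get_extra_state()"/"set_extra_state()"
pair (the "version" field is hypothetical):
>>> class MyModule(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.version = 1              # extra, non-tensor state
...     def get_extra_state(self):
...         return {'version': self.version}
...     def set_extra_state(self, state):
...         self.version = state['version']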
get_parameter(target)
Returns the parameter given by "target" if it exists, otherwise
throws an error.
See the docstring for "get_submodule" for a more detailed
explanation of this method's functionality as well as how to
correctly specify "target".
Parameters:
**target** (*str*) -- The fully-qualified string name of the
Parameter to look for. (See "get_submodule" for how to
specify a fully-qualified string.)
Returns:
The Parameter referenced by "target"
Return type:
torch.nn.Parameter
Raises:
**AttributeError** -- If the target string references an
invalid path or resolves to something that is not an
"nn.Parameter"
get_submodule(target)
Returns the submodule given by "target" if it exists, otherwise
throws an error.
For example, let's say you have an "nn.Module" "A" that looks
like this:
A(
(net_b): Module(
(net_c): Module(
(conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
)
(linear): Linear(in_features=100, out_features=200, bias=True)
)
)
(The diagram shows an "nn.Module" "A". "A" has a nested
submodule "net_b", which itself has two submodules "net_c" and
"linear". "net_c" then has a submodule "conv".)
To check whether or not we have the "linear" submodule, we would
call "get_submodule("net_b.linear")". To check whether we have
the "conv" submodule, we would call
"get_submodule("net_b.net_c.conv")".
The runtime of "get_submodule" is bounded by the degree of
module nesting in "target". A query against "named_modules"
achieves the same result, but it is O(N) in the number of
transitive modules. So, for a simple check to see if some
submodule exists, "get_submodule" should always be used.
Parameters:
**target** (*str*) -- The fully-qualified string name of the
submodule to look for. (See above example for how to specify
a fully-qualified string.)
Returns:
The submodule referenced by "target"
Return type:
torch.nn.Module
Raises:
**AttributeError** -- If the target string references an
invalid path or resolves to something that is not an
"nn.Module"
property graph
Returns a string representation of the internal graph for the
"forward" method. See Interpreting Graphs for details.
half()
Casts all floating point parameters and buffers to "half"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
property inlined_graph
Returns a string representation of the internal graph for the
"forward" method. This graph will be preprocessed to inline all
function and method calls. See Interpreting Graphs for details.
ipu(device=None)
Moves all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different
objects, so it should be called before constructing the optimizer
if the module will live on the IPU while being optimized.
Note:
This method modifies the module in-place.
Parameters:
**device** (*int**, **optional*) -- if specified, all
parameters will be copied to that device
Returns:
self
Return type:
Module
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from "state_dict" into this module
and its descendants. If "strict" is "True", then the keys of
"state_dict" must exactly match the keys returned by this
module's "state_dict()" function.
Parameters:
* state_dict (dict) -- a dict containing parameters and
persistent buffers.
* **strict** (*bool**, **optional*) -- whether to strictly
enforce that the keys in "state_dict" match the keys
returned by this module's "state_dict()" function. Default:
"True"
Returns:
* **missing_keys** is a list of str containing the missing
keys
* **unexpected_keys** is a list of str containing the
unexpected keys
Return type:
"NamedTuple" with "missing_keys" and "unexpected_keys" fields
Note:
If a parameter or buffer is registered as "None" and its
corresponding key exists in "state_dict", "load_state_dict()"
will raise a "RuntimeError".
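For example, a sketch of loading a partially matching checkpoint
with "strict=False" ("checkpoint.pt" is a placeholder path):
>>> state = torch.load('checkpoint.pt')
>>> missing, unexpected = model.load_state_dict(state, strict=False)
>>> print(missing, unexpected)   # keys that did not line up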
modules()
Returns an iterator over all modules in the network.
Yields:
*Module* -- a module in the network
Return type:
*Iterator*[*Module*]
Note:
Duplicate modules are returned only once. In the following
example, "l" will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
... print(idx, '->', m)
0 -> Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix='', recurse=True, remove_duplicate=True)
Returns an iterator over module buffers, yielding both the name
of the buffer as well as the buffer itself.
Parameters:
* **prefix** (*str*) -- prefix to prepend to all buffer
names.
* **recurse** (*bool**, **optional*) -- if True, then yields
buffers of this module and all submodules. Otherwise,
yields only buffers that are direct members of this module.
Defaults to True.
* **remove_duplicate** (*bool**, **optional*) -- whether to
remove the duplicated buffers in the result. Defaults to
True.
Yields:
*(str, torch.Tensor)* -- Tuple containing the name and buffer
Return type:
*Iterator*[*Tuple*[str, *Tensor*]]
Example:
>>> for name, buf in self.named_buffers():
>>> if name in ['running_var']:
>>> print(buf.size())
named_children()
Returns an iterator over immediate children modules, yielding
both the name of the module as well as the module itself.
Yields:
*(str, Module)* -- Tuple containing a name and child module
Return type:
*Iterator*[*Tuple*[str, *Module*]]
Example:
>>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> print(module)
named_modules(memo=None, prefix='', remove_duplicate=True)
Returns an iterator over all modules in the network, yielding
both the name of the module as well as the module itself.
Parameters:
* **memo** (*Optional**[**Set**[**Module**]**]*) -- a memo to
store the set of modules already added to the result
* **prefix** (*str*) -- a prefix that will be added to the
name of the module
* **remove_duplicate** (*bool*) -- whether to remove the
duplicated module instances in the result or not
Yields:
*(str, Module)* -- Tuple of name and module
Note:
Duplicate modules are returned only once. In the following
example, "l" will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
... print(idx, '->', m)
0 -> ('', Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix='', recurse=True, remove_duplicate=True)
Returns an iterator over module parameters, yielding both the
name of the parameter as well as the parameter itself.
Parameters:
* **prefix** (*str*) -- prefix to prepend to all parameter
names.
* **recurse** (*bool*) -- if True, then yields parameters of
this module and all submodules. Otherwise, yields only
parameters that are direct members of this module.
* **remove_duplicate** (*bool**, **optional*) -- whether to
remove the duplicated parameters in the result. Defaults to
True.
Yields:
*(str, Parameter)* -- Tuple containing the name and parameter
Return type:
*Iterator*[*Tuple*[str, *Parameter*]]
Example:
>>> for name, param in self.named_parameters():
>>> if name in ['bias']:
>>> print(param.size())
parameters(recurse=True)
Returns an iterator over module parameters.
This is typically passed to an optimizer.
Parameters:
**recurse** (*bool*) -- if True, then yields parameters of
this module and all submodules. Otherwise, yields only
parameters that are direct members of this module.
Yields:
*Parameter* -- module parameter
Return type:
*Iterator*[*Parameter*]
Example:
>>> for param in model.parameters():
>>> print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook)
Registers a backward hook on the module.
This function is deprecated in favor of
"register_full_backward_hook()" and the behavior of this
function will change in future versions.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module.
This is typically used to register a buffer that should not to
be considered a model parameter. For example, BatchNorm's
"running_mean" is not a parameter, but is part of the module's
state. Buffers, by default, are persistent and will be saved
alongside parameters. This behavior can be changed by setting
"persistent" to "False". The only difference between a
persistent buffer and a non-persistent buffer is that the latter
will not be a part of this module's "state_dict".
Buffers can be accessed as attributes using given names.
Parameters:
* **name** (*str*) -- name of the buffer. The buffer can be
accessed from this module using the given name
* **tensor** (*Tensor** or **None*) -- buffer to be
registered. If "None", then operations that run on buffers,
such as "cuda", are ignored. If "None", the buffer is
**not** included in the module's "state_dict".
* **persistent** (*bool*) -- whether the buffer is part of
this module's "state_dict".
Example:
>>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook, *, prepend=False, with_kwargs=False)
Registers a forward hook on the module.
The hook will be called every time after "forward()" has
computed an output.
If "with_kwargs" is "False" or not specified, the input contains
only the positional arguments given to the module. Keyword
arguments won't be passed to the hooks and only to the
"forward". The hook can modify the output. It can modify the
input inplace but it will not have effect on forward since this
is called after "forward()" is called. The hook should have the
following signature:
hook(module, args, output) -> None or modified output
If "with_kwargs" is "True", the forward hook will be passed the
"kwargs" given to the forward function and be expected to return
the output possibly modified. The hook should have the following
signature:
hook(module, args, kwargs, output) -> None or modified output
Parameters:
* **hook** (*Callable*) -- The user defined hook to be
registered.
* **prepend** (*bool*) -- If "True", the provided "hook" will
be fired before all existing "forward" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "forward" hooks on this
"torch.nn.modules.Module". Note that global "forward" hooks
registered with "register_module_forward_hook()" will fire
before all hooks registered by this method. Default:
"False"
* **with_kwargs** (*bool*) -- If "True", the "hook" will be
passed the kwargs given to the forward function. Default:
"False"
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_forward_pre_hook(hook, *, prepend=False, with_kwargs=False)
Registers a forward pre-hook on the module.
The hook will be called every time before "forward()" is
invoked.
If "with_kwargs" is false or not specified, the input contains
only the positional arguments given to the module. Keyword
arguments won't be passed to the hooks and only to the
"forward". The hook can modify the input. User can either return
a tuple or a single modified value in the hook. We will wrap the
value into a tuple if a single value is returned (unless that
value is already a tuple). The hook should have the following
signature:
hook(module, args) -> None or modified input
If "with_kwargs" is true, the forward pre-hook will be passed
the kwargs given to the forward function. And if the hook
modifies the input, both the args and kwargs should be returned.
The hook should have the following signature:
hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
Parameters:
* **hook** (*Callable*) -- The user defined hook to be
registered.
* **prepend** (*bool*) -- If true, the provided "hook" will
be fired before all existing "forward_pre" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "forward_pre" hooks on
this "torch.nn.modules.Module". Note that global
"forward_pre" hooks registered with
"register_module_forward_pre_hook()" will fire before all
hooks registered by this method. Default: "False"
* **with_kwargs** (*bool*) -- If true, the "hook" will be
passed the kwargs given to the forward function. Default:
"False"
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_full_backward_hook(hook, prepend=False)
Registers a backward hook on the module.
The hook will be called every time the gradients with respect to
a module are computed, i.e. the hook will execute if and only if
the gradients with respect to module outputs are computed. The
hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The "grad_input" and "grad_output" are tuples that contain the
gradients with respect to the inputs and outputs respectively.
The hook should not modify its arguments, but it can optionally
return a new gradient with respect to the input that will be
used in place of "grad_input" in subsequent computations.
"grad_input" will only correspond to the inputs given as
positional arguments and all kwarg arguments are ignored.
Entries in "grad_input" and "grad_output" will be "None" for all
non-Tensor arguments.
For technical reasons, when this hook is applied to a Module,
its forward function will receive a view of each Tensor passed
to the Module. Similarly the caller will receive a view of each
Tensor returned by the Module's forward function.
Warning:
Modifying inputs or outputs inplace is not allowed when using
backward hooks and will raise an error.
Parameters:
* **hook** (*Callable*) -- The user-defined hook to be
registered.
* **prepend** (*bool*) -- If true, the provided "hook" will
be fired before all existing "backward" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "backward" hooks on this
"torch.nn.modules.Module". Note that global "backward"
hooks registered with
"register_module_full_backward_hook()" will fire before all
hooks registered by this method.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_full_backward_pre_hook(hook, prepend=False)
Registers a backward pre-hook on the module.
The hook will be called every time the gradients for the module
are computed. The hook should have the following signature:
hook(module, grad_output) -> Tensor or None
The "grad_output" is a tuple. The hook should not modify its
arguments, but it can optionally return a new gradient with
respect to the output that will be used in place of
"grad_output" in subsequent computations. Entries in
"grad_output" will be "None" for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module,
its forward function will receive a view of each Tensor passed
to the Module. Similarly the caller will receive a view of each
Tensor returned by the Module's forward function.
Warning:
Modifying inputs inplace is not allowed when using backward
hooks and will raise an error.
Parameters:
* **hook** (*Callable*) -- The user-defined hook to be
registered.
* **prepend** (*bool*) -- If true, the provided "hook" will
be fired before all existing "backward_pre" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "backward_pre" hooks on
this "torch.nn.modules.Module". Note that global
"backward_pre" hooks registered with
"register_module_full_backward_pre_hook()" will fire before
all hooks registered by this method.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_load_state_dict_post_hook(hook)
Registers a post hook to be run after module's "load_state_dict"
is called.
It should have the following signature::
hook(module, incompatible_keys) -> None
The "module" argument is the current module that this hook is
registered on, and the "incompatible_keys" argument is a
"NamedTuple" consisting of attributes "missing_keys" and
"unexpected_keys". "missing_keys" is a "list" of "str"
containing the missing keys and "unexpected_keys" is a "list" of
"str" containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling "load_state_dict()"
with "strict=True" are affected by modifications the hook makes
to "missing_keys" or "unexpected_keys", as expected. Additions
to either set of keys will result in an error being thrown when
"strict=True", and clearing out both missing and unexpected keys
will avoid an error.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_module(name, module)
Alias for "add_module()".
register_parameter(name, param)
Adds a parameter to the module.
The parameter can be accessed as an attribute using given name.
Parameters:
* **name** (*str*) -- name of the parameter. The parameter
can be accessed from this module using the given name
* **param** (*Parameter** or **None*) -- parameter to be
added to the module. If "None", then operations that run on
parameters, such as "cuda", are ignored. If "None", the
parameter is **not** included in the module's "state_dict".
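A sketch mirroring the "register_buffer" example above (the name
"scale" is arbitrary):
>>> self.register_parameter('scale', nn.Parameter(torch.ones(1)))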
register_state_dict_pre_hook(hook)
These hooks will be called with arguments: "self", "prefix", and
"keep_vars" before calling "state_dict" on "self". The
registered hooks can be used to perform pre-processing before
the "state_dict" call is made.
requires_grad_(requires_grad=True)
Change if autograd should record operations on parameters in
this module.
This method sets the parameters' "requires_grad" attributes in-
place.
This method is helpful for freezing part of the module for
finetuning or training parts of a model individually (e.g., GAN
training).
See Locally disabling gradient computation for a comparison
between *.requires_grad_()* and several similar mechanisms that
may be confused with it.
Parameters:
**requires_grad** (*bool*) -- whether autograd should record
operations on parameters in this module. Default: "True".
Returns:
self
Return type:
Module
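For example, a sketch of freezing an assumed "backbone" submodule
while keeping the rest trainable:
>>> model.backbone.requires_grad_(False)
>>> trainable = [p for p in model.parameters() if p.requires_grad]
>>> optimizer = torch.optim.SGD(trainable, lr=0.01)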
save(f, _extra_files={})
See "torch.jit.save" for details.
set_extra_state(state)
This function is called from "load_state_dict()" to handle any
extra state found within the *state_dict*. Implement this
function and a corresponding "get_extra_state()" for your module
if you need to store extra state within its *state_dict*.
Parameters:
state (dict) -- Extra state from the state_dict
share_memory()
See "torch.Tensor.share_memory_()"
Return type:
*T*
state_dict(*args, destination=None, prefix='', keep_vars=False)
Returns a dictionary containing references to the whole state of
the module.
Both parameters and persistent buffers (e.g. running averages)
are included. Keys are corresponding parameter and buffer names.
Parameters and buffers set to "None" are not included.
Note:
The returned object is a shallow copy. It contains references
to the module's parameters and buffers.
Warning:
Currently "state_dict()" also accepts positional arguments for
"destination", "prefix" and "keep_vars" in order. However,
this is being deprecated and keyword arguments will be
enforced in future releases.
Warning:
Please avoid the use of argument "destination" as it is not
designed for end-users.
Parameters:
* **destination** (*dict**, **optional*) -- If provided, the
state of module will be updated into the dict and the same
object is returned. Otherwise, an "OrderedDict" will be
created and returned. Default: "None".
* **prefix** (*str**, **optional*) -- a prefix added to
parameter and buffer names to compose the keys in
state_dict. Default: "''".
* **keep_vars** (*bool**, **optional*) -- by default the
"Tensor" s returned in the state dict are detached from
autograd. If it's set to "True", detaching will not be
performed. Default: "False".
Returns:
a dictionary containing a whole state of the module
Return type:
dict
Example:
>>> module.state_dict().keys()
['bias', 'weight']
to(*args, **kwargs)
Moves and/or casts the parameters and buffers.
This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to "torch.Tensor.to()", but only
accepts floating point or complex "dtype"s. In addition, this
method will only cast the floating point or complex parameters
and buffers to "dtype" (if given). The integral parameters and
buffers will be moved to "device", if that is given, but with
dtypes unchanged. When "non_blocking" is set, it tries to
convert/move asynchronously with respect to the host if
possible, e.g., moving CPU Tensors with pinned memory to CUDA
devices.
See below for examples.
Note:
This method modifies the module in-place.
Parameters:
* **device** ("torch.device") -- the desired device of the
parameters and buffers in this module
* **dtype** ("torch.dtype") -- the desired floating point or
complex dtype of the parameters and buffers in this module
* **tensor** (*torch.Tensor*) -- Tensor whose dtype and
device are the desired dtype and device for all parameters
and buffers in this module
* **memory_format** ("torch.memory_format") -- the desired
memory format for 4D parameters and buffers in this module
(keyword only argument)
Returns:
self
Return type:
Module
Examples:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j, 0.2382+0.j],
[ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
to_empty(*, device)
Moves the parameters and buffers to the specified device without
copying storage.
Parameters:
**device** ("torch.device") -- The desired device of the
parameters and buffers in this module.
Returns:
self
Return type:
Module
train(mode=True)
Sets the module in training mode.
This has an effect only on certain modules. See the documentation
of particular modules for details of their behavior in
training/evaluation mode, if they are affected, e.g. "Dropout",
"BatchNorm", etc.
Parameters:
**mode** (*bool*) -- whether to set training mode ("True") or
evaluation mode ("False"). Default: "True".
Returns:
self
Return type:
Module
type(dst_type)
Casts all parameters and buffers to "dst_type".
Note:
This method modifies the module in-place.
Parameters:
**dst_type** (*type** or **string*) -- the desired type
Returns:
self
Return type:
Module
xpu(device=None)
Moves all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different
objects, so it should be called before constructing the optimizer
if the module will live on the XPU while being optimized.
Note:
This method modifies the module in-place.
Parameters:
**device** (*int**, **optional*) -- if specified, all
parameters will be copied to that device
Returns:
self
Return type:
Module
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar
function under "torch.optim.Optimizer" for more context.
Parameters:
set_to_none (bool) -- instead of setting to zero, set
the grads to None. See "torch.optim.Optimizer.zero_grad()"
for details.
torch.nn.functional.avg_pool2d
torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) -> Tensor
Applies a 2D average-pooling operation in kH \times kW regions by
step size sH \times sW steps. The number of output features is
equal to the number of input planes.
See "AvgPool2d" for details and output shape.
Parameters:
* input -- input tensor (\text{minibatch} ,
\text{in_channels} , iH , iW)
* **kernel_size** -- size of the pooling region. Can be a single
number or a tuple *(kH, kW)*
* **stride** -- stride of the pooling operation. Can be a single
number or a tuple *(sH, sW)*. Default: "kernel_size"
* **padding** -- implicit zero paddings on both sides of the
input. Can be a single number or a tuple *(padH, padW)*.
Default: 0
* **ceil_mode** -- when True, will use *ceil* instead of *floor*
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool2d.html
|
pytorch docs
|
in the formula to compute the output shape. Default: "False"
* **count_include_pad** -- when True, will include the zero-
padding in the averaging calculation. Default: "True"
* **divisor_override** -- if specified, it will be used as
divisor, otherwise size of the pooling region will be used.
Default: None
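A small example analogous to the "avg_pool1d" one later in these
docs (assuming "F" is "torch.nn.functional"):
>>> input = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
>>> F.avg_pool2d(input, kernel_size=2, stride=2)
tensor([[[[ 2.5000,  4.5000],
          [10.5000, 12.5000]]]])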
torch.Tensor.values
Tensor.values() -> Tensor
Return the values tensor of a sparse COO tensor.
Warning:
Throws an error if "self" is not a sparse COO tensor.
See also "Tensor.indices()".
Note:
This method can only be called on a coalesced sparse tensor. See
"Tensor.coalesce()" for details.
torch.Tensor.to_sparse
Tensor.to_sparse(sparseDims) -> Tensor
Returns a sparse copy of the tensor. PyTorch supports sparse
tensors in coordinate format.
Parameters:
sparseDims (int, optional) -- the number of sparse
dimensions to include in the new sparse tensor
Example:
>>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])
>>> d
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]])
>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
[0, 2]]),
values=tensor([ 9, 10]),
size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
values=tensor([[ 9, 0, 10]]),
size=(3, 3), nnz=1, layout=torch.sparse_coo)
to_sparse(*, layout=None, blocksize=None, dense_dim=None) -> Tensor
Returns a sparse tensor with the specified layout and blocksize.
If the "self" is strided, the number of dense dimensions could be
specified, and a hybrid sparse tensor will be created, with
dense_dim dense dimensions and self.dim() - 2 - dense_dim batch
dimension.
Note:
If the "self" layout and blocksize parameters match with the
specified layout and blocksize, return "self". Otherwise, return
a sparse tensor copy of "self".
Parameters:
* layout ("torch.layout", optional) -- The desired sparse
layout. One of "torch.sparse_coo", "torch.sparse_csr",
"torch.sparse_csc", "torch.sparse_bsr", or "torch.sparse_bsc".
Default: if "None", "torch.sparse_coo".
* **blocksize** (list, tuple, "torch.Size", optional) -- Block
size of the resulting BSR or BSC tensor. For other layouts,
specifying the block size that is not "None" will result in a
RuntimeError exception. A block size must be a tuple of
length two such that its items evenly divide the two sparse
dimensions.
* **dense_dim** (*int**, **optional*) -- Number of dense
dimensions of the resulting CSR, CSC, BSR or BSC tensor. This
argument should be used only if "self" is a strided tensor,
and must be a value between 0 and dimension of "self" tensor
minus two.
Example:
>>> x = torch.tensor([[1, 0], [0, 0], [2, 3]])
>>> x.to_sparse(layout=torch.sparse_coo)
tensor(indices=tensor([[0, 2, 2],
[0, 0, 1]]),
values=tensor([1, 2, 3]),
size=(3, 2), nnz=3, layout=torch.sparse_coo)
>>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(1, 2))
tensor(crow_indices=tensor([0, 1, 1, 2]),
col_indices=tensor([0, 0]),
values=tensor([[[1, 0]],
[[2, 3]]]), size=(3, 2), nnz=2, layout=torch.sparse_bsr)
>>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(2, 1))
RuntimeError: Tensor size(-2) 3 needs to be divisible by blocksize[0] 2
>>> x.to_sparse(layout=torch.sparse_csr, blocksize=(3, 1))
RuntimeError: to_sparse for Strided to SparseCsr conversion does not use specified blocksize
>>> x = torch.tensor([[[1], [0]], [[0], [0]], [[2], [3]]])
>>> x.to_sparse(layout=torch.sparse_csr, dense_dim=1)
tensor(crow_indices=tensor([0, 1, 1, 3]),
col_indices=tensor([0, 0, 1]),
values=tensor([[1],
[2],
[3]]), size=(3, 2, 1), nnz=3, layout=torch.sparse_csr)
torch.Tensor.ccol_indices
Tensor.ccol_indices()
torch.Tensor.select
Tensor.select(dim, index) -> Tensor
See "torch.select()"
Adamax
class torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, foreach=None, *, maximize=False, differentiable=False)
Implements Adamax algorithm (a variant of Adam based on infinity
norm).
\begin{aligned}
    &\rule{110mm}{0.4pt} \\
    &\textbf{input} : \gamma \text{ (lr)}, \beta_1, \beta_2 \text{ (betas)},
        \theta_0 \text{ (params)}, f(\theta) \text{ (objective)},
        \: \lambda \text{ (weight decay)}, \\
    &\hspace{13mm} \epsilon \text{ (epsilon)} \\
    &\textbf{initialize} : m_0 \leftarrow 0 \text{ (first moment)},
        u_0 \leftarrow 0 \text{ (infinity norm)} \\[-1.ex]
    &\rule{110mm}{0.4pt} \\
    &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\
    &\hspace{5mm} g_t \leftarrow \nabla_{\theta} f_t (\theta_{t-1}) \\
    &\hspace{5mm} \textbf{if} \: \lambda \neq 0 \\
    &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
    &\hspace{5mm} m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
    &\hspace{5mm} u_t \leftarrow \mathrm{max}(\beta_2 u_{t-1}, |g_{t}| + \epsilon) \\
    &\hspace{5mm} \theta_t \leftarrow \theta_{t-1} - \frac{\gamma m_t}{(1-\beta^t_1) u_t} \\
    &\rule{110mm}{0.4pt} \\[-1.ex]
    &\textbf{return} \: \theta_t \\[-1.ex]
    &\rule{110mm}{0.4pt} \\[-1.ex]
\end{aligned}
For further details regarding the algorithm we refer to Adam: A
Method for Stochastic Optimization.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
2e-3)
* **betas** (*Tuple**[**float**, **float**]**, **optional*) --
coefficients used for computing running averages of gradient
and its square
* **eps** (*float**, **optional*) -- term added to the
denominator to improve numerical stability (default: 1e-8)
* **weight_decay** (*float**, **optional*) -- weight decay (L2
penalty) (default: 0)
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used. If unspecified by the
user (so foreach is None), we will try to use foreach over the
for-loop implementation on CUDA, since it is usually
significantly more performant. (default: None)
* **maximize** (*bool**, **optional*) -- maximize the params
based on the objective, instead of minimizing (default: False)
* **differentiable** (*bool**, **optional*) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
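A minimal usage sketch (the "model", "loss_fn", "input" and
"target" names are placeholders):
>>> optimizer = torch.optim.Adamax(model.parameters(), lr=2e-3)
>>> optimizer.zero_grad()
>>> loss = loss_fn(model(input), target)
>>> loss.backward()
>>> optimizer.step()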
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups where
each parameter group is a dict
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether).
torch._foreach_cos_
torch._foreach_cos_(self: List[Tensor]) -> None
Apply "torch.cos()" to each Tensor of the input list.
ModuleDict
class torch.nn.ModuleDict(modules=None)
Holds submodules in a dictionary.
"ModuleDict" can be indexed like a regular Python dictionary, but
modules it contains are properly registered, and will be visible by
all "Module" methods.
"ModuleDict" is an ordered dictionary that respects
the order of insertion, and
in "update()", the order of the merged "OrderedDict", "dict"
(started from Python 3.6) or another "ModuleDict" (the argument
to "update()").
Note that "update()" with other unordered mapping types (e.g.,
Python's plain "dict" before Python version 3.6) does not preserve
the order of the merged mapping.
Parameters:
modules (iterable, optional) -- a mapping (dictionary)
of (string: module) or an iterable of key-value pairs of type
(string, module)
Example:
class MyModule(nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.choices = nn.ModuleDict({
'conv': nn.Conv2d(10, 10, 3),
'pool': nn.MaxPool2d(3)
})
self.activations = nn.ModuleDict([
['lrelu', nn.LeakyReLU()],
['prelu', nn.PReLU()]
])
def forward(self, x, choice, act):
x = self.choices[choice](x)
x = self.activations[act](x)
return x
clear()
Remove all items from the ModuleDict.
items()
Return an iterable of the ModuleDict key/value pairs.
Return type:
*Iterable*[*Tuple*[str, *Module*]]
keys()
Return an iterable of the ModuleDict keys.
Return type:
*Iterable*[str]
pop(key)
Remove key from the ModuleDict and return its module.
Parameters:
**key** (*str*) -- key to pop from the ModuleDict
Return type:
*Module*
update(modules)
Update the "ModuleDict" with the key-value pairs from a mapping
or an iterable, overwriting existing keys.
Note:
If "modules" is an "OrderedDict", a "ModuleDict", or an
iterable of key-value pairs, the order of new elements in it
is preserved.
Parameters:
**modules** (*iterable*) -- a mapping (dictionary) from
string to "Module", or an iterable of key-value pairs of type
(string, "Module")
values()
Return an iterable of the ModuleDict values.
Return type:
*Iterable*[*Module*]
torch.Tensor.tensor_split
Tensor.tensor_split(indices_or_sections, dim=0) -> List of Tensors
See "torch.tensor_split()"
OneCycleLR
class torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=- 1, verbose=False)
Sets the learning rate of each parameter group according to the
1cycle learning rate policy. The 1cycle policy anneals the learning
rate from an initial learning rate to some maximum learning rate
and then from that maximum learning rate to some minimum learning
rate much lower than the initial learning rate. This policy was
initially described in the paper Super-Convergence: Very Fast
Training of Neural Networks Using Large Learning Rates.
The 1cycle learning rate policy changes the learning rate after
every batch. step should be called after a batch has been used
for training.
This scheduler is not chainable.
Note also that the total number of steps in the cycle can be
determined in one of two ways (listed in order of precedence):
1. A value for total_steps is explicitly provided.
2. A number of epochs (epochs) and a number of steps per epoch
(steps_per_epoch) are provided. In this case, the number of
total steps is inferred by total_steps = epochs *
steps_per_epoch
You must either provide a value for total_steps or provide a value
for both epochs and steps_per_epoch.
The default behaviour of this scheduler follows the fastai
implementation of 1cycle, which claims that "unpublished work has
shown even better results by using only two phases". To mimic the
behaviour of the original paper instead, set "three_phase=True".
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **max_lr** (*float** or **list*) -- Upper learning rate
boundaries in the cycle for each parameter group.
* **total_steps** (*int*) -- The total number of steps in the
cycle. Note that if a value is not provided here, then it must
be inferred by providing a value for epochs and
steps_per_epoch. Default: None
* **epochs** (*int*) -- The number of epochs to train for. This
is used along with steps_per_epoch in order to infer the total
number of steps in the cycle if a value for total_steps is not
provided. Default: None
* **steps_per_epoch** (*int*) -- The number of steps per epoch
to train for. This is used along with epochs in order to infer
the total number of steps in the cycle if a value for
total_steps is not provided. Default: None
* **pct_start** (*float*) -- The percentage of the cycle (in
number of steps) spent increasing the learning rate. Default:
0.3
* **anneal_strategy** (*str*) -- {'cos', 'linear'} Specifies the
annealing strategy: "cos" for cosine annealing, "linear" for
linear annealing. Default: 'cos'
* **cycle_momentum** (*bool*) -- If "True", momentum is cycled
inversely to learning rate between 'base_momentum' and
'max_momentum'. Default: True
* **base_momentum** (*float** or **list*) -- Lower momentum
boundaries in the cycle for each parameter group. Note that
momentum is cycled inversely to learning rate; at the peak of
a cycle, momentum is 'base_momentum' and learning rate is
'max_lr'. Default: 0.85
* **max_momentum** (*float** or **list*) -- Upper momentum
boundaries in the cycle for each parameter group.
Functionally, it defines the cycle amplitude (max_momentum -
base_momentum). Note that momentum is cycled inversely to
learning rate; at the start of a cycle, momentum is
'max_momentum' and learning rate is 'base_lr' Default: 0.95
* **div_factor** (*float*) -- Determines the initial learning
rate via initial_lr = max_lr/div_factor Default: 25
* **final_div_factor** (*float*) -- Determines the minimum
learning rate via min_lr = initial_lr/final_div_factor
Default: 1e4
* **three_phase** (*bool*) -- If "True", use a third phase of
the schedule to annihilate the learning rate according to
'final_div_factor' instead of modifying the second phase (the
first two phases will be symmetrical about the step indicated
by 'pct_start').
* **last_epoch** (*int*) -- The index of the last batch. This
parameter is used when resuming a training job. Since *step()*
should be invoked after each batch instead of after each
epoch, this number represents the total number of *batches*
computed, not the total number of epochs computed. When
last_epoch=-1, the schedule is started from the beginning.
Default: -1
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
>>> data_loader = torch.utils.data.DataLoader(...)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10)
>>> for epoch in range(10):
>>>     for batch in data_loader:
>>>         train_batch(...)
>>>         scheduler.step()
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
torch.Tensor.untyped_storage
Tensor.untyped_storage() -> torch.UntypedStorage
Returns the underlying "UntypedStorage".
InstanceNorm3d
class torch.ao.nn.quantized.InstanceNorm3d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)
This is the quantized version of "InstanceNorm3d".
Additional args:
* scale - quantization scale of the output, type: double.
* **zero_point** - quantization zero point of the output, type:
long.
torch.nn.functional.avg_pool1d
torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) -> Tensor
Applies a 1D average pooling over an input signal composed of
several input planes.
See "AvgPool1d" for details and output shape.
Parameters:
* input -- input tensor of shape (\text{minibatch} ,
\text{in_channels} , iW)
* **kernel_size** -- the size of the window. Can be a single
number or a tuple *(kW,)*
* **stride** -- the stride of the window. Can be a single number
or a tuple *(sW,)*. Default: "kernel_size"
* **padding** -- implicit zero paddings on both sides of the
input. Can be a single number or a tuple *(padW,)*. Default: 0
* **ceil_mode** -- when True, will use *ceil* instead of *floor*
to compute the output shape. Default: "False"
* **count_include_pad** -- when True, will include the zero-
padding in the averaging calculation. Default: "True"
Examples:
>>> # pool of square window of size=3, stride=2
>>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32)
>>> F.avg_pool1d(input, kernel_size=3, stride=2)
tensor([[[ 2., 4., 6.]]])
torch.Tensor.bitwise_not_
Tensor.bitwise_not_() -> Tensor
In-place version of "bitwise_not()"
torch.Tensor.sparse_dim
Tensor.sparse_dim() -> int
Return the number of sparse dimensions in a sparse tensor "self".
Note:
Returns "0" if "self" is not a sparse tensor.
See also "Tensor.dense_dim()" and hybrid tensors.
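A small sketch:
>>> x = torch.tensor([[1, 0], [0, 2]]).to_sparse()
>>> x.sparse_dim()
2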
torch.Tensor.hypot
Tensor.hypot(other) -> Tensor
See "torch.hypot()"
torch.scatter
torch.scatter(input, dim, index, src) -> Tensor
Out-of-place version of "torch.Tensor.scatter_()"
torch.swapdims
torch.swapdims(input, dim0, dim1) -> Tensor
Alias for "torch.transpose()".
This function is equivalent to NumPy's swapaxes function.
Examples:
>>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.swapdims(x, 0, 1)
tensor([[[0, 1],
[4, 5]],
[[2, 3],
[6, 7]]])
>>> torch.swapdims(x, 0, 2)
tensor([[[0, 4],
[2, 6]],
[[1, 5],
[3, 7]]])
torch.Tensor.true_divide_
Tensor.true_divide_(value) -> Tensor
In-place version of "true_divide_()"
torch.Tensor.fmax
Tensor.fmax(other) -> Tensor
See "torch.fmax()"
torch.is_storage
torch.is_storage(obj)
Returns True if obj is a PyTorch storage object.
Parameters:
obj (Object) -- Object to test
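A small sketch:
>>> torch.is_storage(torch.empty(3).untyped_storage())
True
>>> torch.is_storage(torch.empty(3))
False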
Generator
class torch.Generator(device='cpu')
Creates and returns a generator object that manages the state of
the algorithm which produces pseudo random numbers. Used as a
keyword argument in many In-place random sampling functions.
Parameters:
device ("torch.device", optional) -- the desired device for
the generator.
Returns:
A torch.Generator object.
Return type:
Generator
Example:
>>> g_cpu = torch.Generator()
>>> g_cuda = torch.Generator(device='cuda')
device
Generator.device -> device
Gets the current device of the generator.
Example:
>>> g_cpu = torch.Generator()
>>> g_cpu.device
device(type='cpu')
get_state() -> Tensor
Returns the Generator state as a "torch.ByteTensor".
Returns:
A "torch.ByteTensor" which contains all the necessary bits to
restore a Generator to a specific point in time.
Return type:
Tensor
Example:
>>> g_cpu = torch.Generator()
>>> g_cpu.get_state()
initial_seed() -> int
Returns the initial seed for generating random numbers.
Example:
>>> g_cpu = torch.Generator()
>>> g_cpu.initial_seed()
2147483647
manual_seed(seed) -> Generator
Sets the seed for generating random numbers. Returns a
*torch.Generator* object. It is recommended to set a large seed,
i.e. a number that has a good balance of 0 and 1 bits. Avoid
having many 0 bits in the seed.
Parameters:
**seed** (*int*) -- The desired seed. Value must be within
the inclusive range *[-0x8000_0000_0000_0000,
0xffff_ffff_ffff_ffff]*. Otherwise, a RuntimeError is raised.
Negative inputs are remapped to positive values with the
formula *0xffff_ffff_ffff_ffff + seed*.
Returns:
A torch.Generator object.
Return type:
Generator
Example:
>>> g_cpu = torch.Generator()
>>> g_cpu.manual_seed(2147483647)
seed() -> int
Gets a non-deterministic random number from std::random_device
or the current time and uses it to seed a Generator.
Example:
>>> g_cpu = torch.Generator()
>>> g_cpu.seed()
1516516984916
set_state(new_state) -> void
Sets the Generator state.
Parameters:
**new_state** (*torch.ByteTensor*) -- The desired state.
Example:
>>> g_cpu = torch.Generator()
>>> g_cpu_other = torch.Generator()
>>> g_cpu.set_state(g_cpu_other.get_state())
|
https://pytorch.org/docs/stable/generated/torch.Generator.html
|
pytorch docs
|
torch.Tensor.ge_
Tensor.ge_(other) -> Tensor
In-place version of "ge()".
|
https://pytorch.org/docs/stable/generated/torch.Tensor.ge_.html
|
pytorch docs
|
torch.Tensor.pin_memory
Tensor.pin_memory() -> Tensor
Copies the tensor to pinned memory, if it's not already pinned.
|
https://pytorch.org/docs/stable/generated/torch.Tensor.pin_memory.html
|
pytorch docs
|
torch.Tensor.gt
Tensor.gt(other) -> Tensor
See "torch.gt()".
|
https://pytorch.org/docs/stable/generated/torch.Tensor.gt.html
|
pytorch docs
|
torch.cummax
torch.cummax(input, dim, *, out=None)
Returns a namedtuple "(values, indices)" where "values" is the
cumulative maximum of elements of "input" in the dimension "dim",
and "indices" is the index location of each maximum value found in
the dimension "dim".
y_i = max(x_1, x_2, x_3, \dots, x_i)
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to do the operation over
Keyword Arguments:
out (tuple, optional) -- the result tuple of two
output tensors (values, indices)
Example:
>>> a = torch.randn(10)
>>> a
tensor([-0.3449, -1.5447, 0.0685, -1.5104, -1.1706, 0.2259, 1.4696, -1.3284,
1.9946, -0.8209])
>>> torch.cummax(a, dim=0)
torch.return_types.cummax(
values=tensor([-0.3449, -0.3449, 0.0685, 0.0685, 0.0685, 0.2259, 1.4696, 1.4696,
1.9946, 1.9946]),
indices=tensor([0, 0, 2, 2, 2, 5, 6, 6, 8, 8]))
|
https://pytorch.org/docs/stable/generated/torch.cummax.html
|
pytorch docs
|
torch.nn.functional.upsample_nearest
torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None)
Upsamples the input, using nearest neighbours' pixel values.
Warning:
This function is deprecated in favor of
"torch.nn.functional.interpolate()". This is equivalent with
"nn.functional.interpolate(..., mode='nearest')".
Currently spatial and volumetric upsampling are supported (i.e.
expected inputs are 4 or 5 dimensional).
Parameters:
* input (Tensor) -- input
* **size** (*int** or **Tuple**[**int**, **int**] or
**Tuple**[**int**, **int**, **int**]*) -- output spatial size.
* **scale_factor** (*int*) -- multiplier for spatial size. Has
to be an integer.
Note:
This operation may produce nondeterministic gradients when given
tensors on a CUDA device. See Reproducibility for more
information.
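An illustrative sketch (not from the original page) of the deprecated call next to the recommended "interpolate()" replacement; the input shape is arbitrary:
    >>> import torch.nn.functional as F
    >>> x = torch.arange(4.0).reshape(1, 1, 2, 2)    # N x C x H x W
    >>> F.upsample_nearest(x, scale_factor=2).shape  # deprecated spelling
    torch.Size([1, 1, 4, 4])
    >>> F.interpolate(x, scale_factor=2, mode='nearest').shape
    torch.Size([1, 1, 4, 4])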
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample_nearest.html
|
pytorch docs
|
torch.Tensor.index_put_
Tensor.index_put_(indices, values, accumulate=False) -> Tensor
Puts values from the tensor "values" into the tensor "self" using
the indices specified in "indices" (which is a tuple of Tensors).
The expression "tensor.index_put_(indices, values)" is equivalent
to "tensor[indices] = values". Returns "self".
If "accumulate" is "True", the elements in "values" are added to
"self". If accumulate is "False", the behavior is undefined if
indices contain duplicate elements.
Parameters:
* indices (tuple of LongTensor) -- tensors used to index
into self.
* **values** (*Tensor*) -- tensor of same dtype as *self*.
* **accumulate** (*bool*) -- whether to accumulate into self
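A small illustrative example (not from the original page); the indexed positions are distinct, so the non-accumulating call is well defined:
    >>> t = torch.zeros(3, 3)
    >>> rows, cols = torch.tensor([0, 2]), torch.tensor([1, 1])
    >>> t.index_put_((rows, cols), torch.tensor([1., 2.]))
    tensor([[0., 1., 0.],
            [0., 0., 0.],
            [0., 2., 0.]])
    >>> # with accumulate=True the values are added to the existing entries
    >>> t.index_put_((rows, cols), torch.tensor([10., 10.]), accumulate=True)
    tensor([[ 0., 11.,  0.],
            [ 0.,  0.,  0.],
            [ 0., 12.,  0.]])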
|
https://pytorch.org/docs/stable/generated/torch.Tensor.index_put_.html
|
pytorch docs
|
FloatFunctional
class torch.ao.nn.quantized.FloatFunctional
State collector class for float operations.
The instance of this class can be used instead of the "torch."
prefix for some operations. See example usage below.
Note:
This class does not provide a "forward" hook. Instead, you must
use one of the underlying functions (e.g. "add").
Examples:
>>> f_add = FloatFunctional()
>>> a = torch.tensor(3.0)
>>> b = torch.tensor(4.0)
>>> f_add.add(a, b) # Equivalent to ``torch.add(a, b)``
Valid operation names:
* add
* cat
* mul
* add_relu
* add_scalar
* mul_scalar
|
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.FloatFunctional.html
|
pytorch docs
|
torch.Tensor.hypot_
Tensor.hypot_(other) -> Tensor
In-place version of "hypot()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.hypot_.html
|
pytorch docs
|
torch.Tensor.mm
Tensor.mm(mat2) -> Tensor
See "torch.mm()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.mm.html
|
pytorch docs
|
ELU
class torch.nn.ELU(alpha=1.0, inplace=False)
Applies the Exponential Linear Unit (ELU) function, element-wise,
as described in the paper: Fast and Accurate Deep Network Learning
by Exponential Linear Units (ELUs).
ELU is defined as:
\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha *
(\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}
Parameters:
* alpha (float) -- the \alpha value for the ELU
formulation. Default: 1.0
* **inplace** (*bool*) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.ELU()
>>> input = torch.randn(2)
>>> output = m(input)
|
https://pytorch.org/docs/stable/generated/torch.nn.ELU.html
|
pytorch docs
|
torch.Tensor.swapdims
Tensor.swapdims(dim0, dim1) -> Tensor
See "torch.swapdims()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.swapdims.html
|
pytorch docs
|
torch.Tensor.atan
Tensor.atan() -> Tensor
See "torch.atan()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.atan.html
|
pytorch docs
|
torch.optim.Optimizer.step
Optimizer.step(closure)
Performs a single optimization step (parameter update).
Parameters:
closure (Callable) -- A closure that reevaluates the model
and returns the loss. Optional for most optimizers.
Note:
Unless otherwise specified, this function should not modify the
".grad" field of the parameters.
|
https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.step.html
|
pytorch docs
|
torch.Tensor.is_quantized
Tensor.is_quantized
Is "True" if the Tensor is quantized, "False" otherwise.
|
https://pytorch.org/docs/stable/generated/torch.Tensor.is_quantized.html
|
pytorch docs
|
torch.Tensor.arcsinh
Tensor.arcsinh() -> Tensor
See "torch.arcsinh()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.arcsinh.html
|
pytorch docs
|
torch.baddbmm
torch.baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) -> Tensor
Performs a batch matrix-matrix product of matrices in "batch1" and
"batch2". "input" is added to the final result.
"batch1" and "batch2" must be 3-D tensors each containing the same
number of matrices.
If "batch1" is a (b \times n \times m) tensor, "batch2" is a (b
\times m \times p) tensor, then "input" must be broadcastable with
a (b \times n \times p) tensor and "out" will be a (b \times n
\times p) tensor. Both "alpha" and "beta" mean the same as the
scaling factors used in "torch.addbmm()".
\text{out}_i = \beta\ \text{input}_i + \alpha\ (\text{batch1}_i
\mathbin{@} \text{batch2}_i)
If "beta" is 0, then "input" will be ignored, and nan and inf
in it will not be propagated.
For inputs of type FloatTensor or DoubleTensor, arguments
"beta" and "alpha" must be real numbers, otherwise they should be
integers.
|
https://pytorch.org/docs/stable/generated/torch.baddbmm.html
|
pytorch docs
|
This operator supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Parameters:
* input (Tensor) -- the tensor to be added
* **batch1** (*Tensor*) -- the first batch of matrices to be
multiplied
* **batch2** (*Tensor*) -- the second batch of matrices to be
multiplied
Keyword Arguments:
* beta (Number, optional) -- multiplier for "input"
(\beta)
* **alpha** (*Number**, **optional*) -- multiplier for
\text{batch1} \mathbin{@} \text{batch2} (\alpha)
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> M = torch.randn(10, 3, 5)
>>> batch1 = torch.randn(10, 3, 4)
>>> batch2 = torch.randn(10, 4, 5)
>>> torch.baddbmm(M, batch1, batch2).size()
torch.Size([10, 3, 5])
|
https://pytorch.org/docs/stable/generated/torch.baddbmm.html
|
pytorch docs
|
HistogramObserver
class torch.quantization.observer.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)
The module records the running histogram of tensor values along
with min/max values. "calculate_qparams" will calculate scale and
zero_point.
Parameters:
* bins (int) -- Number of bins to use for the histogram
* **upsample_rate** (*int*) -- Factor by which the histograms
are upsampled, this is used to interpolate histograms with
varying ranges across observations
* **dtype** (*dtype*) -- dtype argument to the *quantize* node
needed to implement the reference model spec
* **qscheme** -- Quantization scheme to be used
* **reduce_range** -- Reduces the range of the quantized data
type by 1 bit
|
https://pytorch.org/docs/stable/generated/torch.quantization.observer.HistogramObserver.html
|
pytorch docs
|
* **eps** (*Tensor*) -- Epsilon value for float32. Defaults to
  *torch.finfo(torch.float32).eps*.
The scale and zero point are computed as follows:
1. Create the histogram of the incoming inputs.
      The histogram is computed continuously, and the ranges per
      bin change with every new tensor observed.
2. Search the distribution in the histogram for optimal min/max
   values.
      The search for the min/max values ensures the minimization
      of the quantization error with respect to the floating
      point model.
3. Compute the scale and zero point the same way as in the
   "MinMaxObserver".
|
https://pytorch.org/docs/stable/generated/torch.quantization.observer.HistogramObserver.html
|
pytorch docs
|
torch.promote_types
torch.promote_types(type1, type2) -> dtype
Returns the "torch.dtype" with the smallest size and scalar kind
that is not smaller nor of lower kind than either type1 or
type2. See type promotion documentation for more information on
the type promotion logic.
Parameters:
* type1 ("torch.dtype") --
* **type2** ("torch.dtype") --
Example:
>>> torch.promote_types(torch.int32, torch.float32)
torch.float32
>>> torch.promote_types(torch.uint8, torch.long)
torch.long
|
https://pytorch.org/docs/stable/generated/torch.promote_types.html
|
pytorch docs
|
torch.Tensor.resolve_neg
Tensor.resolve_neg() -> Tensor
See "torch.resolve_neg()"
|
https://pytorch.org/docs/stable/generated/torch.Tensor.resolve_neg.html
|
pytorch docs
|
torch.nn.functional.threshold_
torch.nn.functional.threshold_(input, threshold, value) -> Tensor
In-place version of "threshold()".
|
https://pytorch.org/docs/stable/generated/torch.nn.functional.threshold_.html
|
pytorch docs
|
torch.linalg.lstsq
torch.linalg.lstsq(A, B, rcond=None, *, driver=None)
Computes a solution to the least squares problem of a system of
linear equations.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the least squares
problem for a linear system AX = B with A \in \mathbb{K}^{m
\times n}, B \in \mathbb{K}^{m \times k} is defined as
\min_{X \in \mathbb{K}^{n \times k}} \|AX - B\|_F
where \|-\|_F denotes the Frobenius norm.
Supports inputs of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if the inputs are batches of
matrices then the output has the same batch dimensions.
"driver" chooses the backend function that will be used. For CPU
inputs the valid values are 'gels', 'gelsy', 'gelsd',
'gelss'. To choose the best driver on CPU consider:
If "A" is well-conditioned (its condition number is not too
large), or you do not mind some precision loss.
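An illustrative call (not from the original page) on a small over-determined system, using the default driver:
    >>> A = torch.randn(4, 3)
    >>> B = torch.randn(4, 2)
    >>> X = torch.linalg.lstsq(A, B).solution
    >>> X.shape
    torch.Size([3, 2])
    >>> residual = torch.linalg.norm(A @ X - B)   # minimized ||AX - B||_F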
|
https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html
|
pytorch docs
|