Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---
5,001 | 83,238 |
The type of parameter 'p' in torch.nn.TripletMarginLoss is wrong
|
module: nn, triaged
|
### 📚 The doc issue
The documentation lists parameter 'p' of torch.nn.TripletMarginLoss as an int. In the source code, however, p's type is float, and running the code below confirms that 'p' accepts float values.
```
import torch
results={}
arg_class = torch.nn.TripletMarginLoss(margin=1.0,p=24.5)
arg_3_0 = torch.rand([100, 128], dtype=torch.float32)
arg_3_1 = torch.rand([100, 128], dtype=torch.float32)
arg_3_2 = torch.rand([100, 128], dtype=torch.float32)
arg_3 = [arg_3_0,arg_3_1,arg_3_2,]
results['res'] = arg_class(*arg_3)
```
The above code runs without error.
### Suggest a potential alternative/fix
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,002 | 83,234 |
torch.nn.ReplicationPad{1|2}d supports more input dimensions than the documentation states
|
module: docs, module: nn, triaged
|
### 📚 The doc issue
According to the documentation, torch.nn.ReplicationPad1d accepts (C, W) or (N, C, W) input and needs an int or 2-tuple padding; that is, it only supports 2D or 3D tensors.
torch.nn.ReplicationPad2d accepts (C, H, W) or (N, C, H, W) input and needs an int or 4-tuple padding; that is, it only supports 3D or 4D tensors.
After running the code below, I find that torch.nn.ReplicationPad1d also accepts a 4D tensor with a 4-tuple padding, and torch.nn.ReplicationPad2d also accepts a 5D tensor with a 6-tuple padding.
```
import torch
results={}
arg_1 = [3,0,2,1]
arg_class = torch.nn.ReplicationPad1d(arg_1,)
arg_2 = torch.rand([25, 2, 46, 1], dtype=torch.float32)
results['res'] = arg_class(arg_2)
```
```
import torch
results={}
arg_1 = [2,2,2,2,2,2]
arg_class = torch.nn.ReplicationPad2d(arg_1,)
arg_2 = torch.rand([4, 1, 1, 3, 3], dtype=torch.float32)
results['res'] = arg_class(arg_2)
```
Both snippets above run without error.
### Suggest a potential alternative/fix
_No response_
cc @svekars @holly1238 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,003 | 93,619 |
Enable pyre
|
triaged, oncall: pt2
|
I had an unused variable in a PR today; pyre (https://github.com/facebook/pyre-check) would have caught it.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
5,004 | 83,232 |
torch.nn.PixelShuffle error message is wrong
|
module: nn, triaged
|
### 🐛 Describe the bug
torch.nn.PixelShuffle's error message reports a wrong value: the squared upscale_factor appears to overflow.
```
import torch
results={}
arg_class = torch.nn.PixelShuffle(10000000000)
arg_2 = torch.rand([16, 256, 72, 72], dtype=torch.float32)
results['res'] = arg_class(arg_2)
```
The above code outputs: RuntimeError: pixel_shuffle expects its input's 'channel' dimension to be divisible by the square of upscale_factor, but input.size(-3)=256 is not divisible by 7766279631452241920.
The error message should instead say that input.size(-3)=256 is not divisible by 100000000000000000000: 10000000000**2 = 10**20, and 7766279631452241920 is 10**20 modulo 2**64, so the squared upscale_factor overflows a 64-bit integer.
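The overflow can be checked with plain Python arithmetic:
```
upscale_factor = 10_000_000_000
print(upscale_factor ** 2)            # 100000000000000000000 (10**20)
print(upscale_factor ** 2 % 2 ** 64)  # 7766279631452241920, the value in the error
```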
### Versions
pytorch: 1.8.1
python: 3.8.3
os: win11
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,005 | 83,229 |
torch.nn.MaxUnpool2d produces a tensor with negative sizes
|
module: nn, triaged
|
### 🐛 Describe the bug
When kernel_size is negative, torch.nn.MaxUnpool2d outputs a tensor with negative sizes.
For torch.nn.MaxUnpool3d, if kernel_size is <= 0 the program hangs: Ctrl+C cannot interrupt it and the process has to be killed.
```
import torch
results={}
arg_1 = -100
arg_2 = False
arg_class = torch.nn.MaxUnpool2d(arg_1,stride=arg_2,)
arg_3_0 = torch.rand([1, 1, 2, 2], dtype=torch.float32)
arg_3_1 = torch.randint(-1,64,[1, 1, 2, 2], dtype=torch.int64)
arg_3 = [arg_3_0,arg_3_1,]
results['res'] = arg_class(*arg_3)
print(results['res'].shape)
#torch.Size([1, 1, -100, -100])
```
```
import torch
pool = torch.nn.MaxPool3d(3, stride=2, return_indices=True)
unpool = torch.nn.MaxUnpool3d(-3, stride=2)
output, indices = pool(torch.randn(20, 16, 51, 33, 15))
unpooled_output = unpool(output, indices)
print(unpooled_output.size())
#program die
```
### Versions
pytorch: 1.8.1
python: 3.8.3
os: win11
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,006 | 83,221 |
torch.nn.InstanceNorm{1|2|3}d doesn't verify the value or type of parameter num_features
|
module: nn, triaged
|
### 🐛 Describe the bug
Parameter 'num_features' is supposed to be the number of features or channels C of the input. However, I found that num_features can be set to a negative integer, a string, a list, or values of other types. torch.nn.InstanceNorm1d also doesn't verify that num_features matches the number of input channels.
```
import torch
results={}
arg_1 = 'max'
arg_2 = False
arg_class = torch.nn.InstanceNorm1d(arg_1,affine=arg_2,)
arg_3 = torch.rand([20, 100, 40], dtype=torch.float32)
results['res'] = arg_class(arg_3)
```
The above code runs without error.
### Versions
pytorch: 1.8.1
python: 3.8.3
os: win11
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 2 |
5,007 | 83,214 |
torchgen.model.FunctionSchema.parse fails on the following ops' schemas
|
triaged, module: codegen
|
### 🐛 Describe the bug
Is this expected? Do we want to handle the following cases? (A minimal repro sketch follows the log.)
```
aten::to.prim_Device(Tensor(a) self, Device? device, int? dtype=None, bool non_blocking=False, bool copy=False) -> Tensor(b|a)
unrecognized alias annotation b|a
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1223, in parse
returns = parse_returns(return_decl)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2578, in parse_returns
return tuple(Return.parse(arg) for arg in return_decl.split(", "))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2578, in <genexpr>
return tuple(Return.parse(arg) for arg in return_decl.split(", "))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1900, in parse
annotation = Annotation.parse(match.group(1))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1588, in parse
assert m is not None, f"unrecognized alias annotation {ann}"
AssertionError: unrecognized alias annotation b|a
prims::as_strided(Tensor(a!) a, int[] size, int[] stride, int storage_offset) -> Tensor(a!)
If you have a schema with mutable positional args, we expect them to not be returned. schema: prims::as_strided(Tensor(a!) a, int[] size, int[] stride, int storage_offset) -> Tensor(a!)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1224, in parse
r = FunctionSchema(name=name, arguments=arguments, returns=returns)
File "<string>", line 6, in __init__
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1245, in __post_init__
assert not any(
AssertionError: If you have a schema with mutable positional args, we expect them to not be returned. schema: prims::as_strided(Tensor(a!) a, int[] size, int[] stride, int storage_offset) -> Tensor(a!)
prims::copy_to(Tensor(a!) a, Tensor b) -> Tensor(a!)
If you have a schema with mutable positional args, we expect them to not be returned. schema: prims::copy_to(Tensor(a!) a, Tensor b) -> Tensor(a!)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1224, in parse
r = FunctionSchema(name=name, arguments=arguments, returns=returns)
File "<string>", line 6, in __init__
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1245, in __post_init__
assert not any(
AssertionError: If you have a schema with mutable positional args, we expect them to not be returned. schema: prims::copy_to(Tensor(a!) a, Tensor b) -> Tensor(a!)
prims::resize(Tensor(a!) a, int[] shape) -> Tensor(a!)
If you have a schema with mutable positional args, we expect them to not be returned. schema: prims::resize(Tensor(a!) a, int[] shape) -> Tensor(a!)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1224, in parse
r = FunctionSchema(name=name, arguments=arguments, returns=returns)
File "<string>", line 6, in __init__
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1245, in __post_init__
assert not any(
AssertionError: If you have a schema with mutable positional args, we expect them to not be returned. schema: prims::resize(Tensor(a!) a, int[] shape) -> Tensor(a!)
aten::add.t(t[] a, t[] b) -> t[]
unrecognized type t
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1636, in _parse
return BaseType(BaseTy[t])
File "/fsx/users/bahuang/conda/envs/pt_dev/lib/python3.9/enum.py", line 432, in __getitem__
return cls._member_map_[name]
KeyError: 't'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1845, in parse
type = Type.parse(type_s)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1629, in _parse
return ListType(elem=Type.parse(m.group(1)), size=size)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1638, in _parse
raise RuntimeError(f"unrecognized type {t}")
RuntimeError: unrecognized type t
aten::eq.enum(AnyEnumType a, AnyEnumType b) -> bool
unrecognized type AnyEnumType
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1636, in _parse
return BaseType(BaseTy[t])
File "/fsx/users/bahuang/conda/envs/pt_dev/lib/python3.9/enum.py", line 432, in __getitem__
return cls._member_map_[name]
KeyError: 'AnyEnumType'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1845, in parse
type = Type.parse(type_s)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1638, in _parse
raise RuntimeError(f"unrecognized type {t}")
RuntimeError: unrecognized type AnyEnumType
aten::mul.left_t(t[] l, int n) -> t[]
unrecognized type t
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1636, in _parse
return BaseType(BaseTy[t])
File "/fsx/users/bahuang/conda/envs/pt_dev/lib/python3.9/enum.py", line 432, in __getitem__
return cls._member_map_[name]
KeyError: 't'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1845, in parse
type = Type.parse(type_s)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1629, in _parse
return ListType(elem=Type.parse(m.group(1)), size=size)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1638, in _parse
raise RuntimeError(f"unrecognized type {t}")
RuntimeError: unrecognized type t
aten::mul.right_(int n, t[] l) -> t[]
unrecognized type t
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1636, in _parse
return BaseType(BaseTy[t])
File "/fsx/users/bahuang/conda/envs/pt_dev/lib/python3.9/enum.py", line 432, in __getitem__
return cls._member_map_[name]
KeyError: 't'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1845, in parse
type = Type.parse(type_s)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1629, in _parse
return ListType(elem=Type.parse(m.group(1)), size=size)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1638, in _parse
raise RuntimeError(f"unrecognized type {t}")
RuntimeError: unrecognized type t
aten::ne.enum(AnyEnumType a, AnyEnumType b) -> bool
unrecognized type AnyEnumType
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1636, in _parse
return BaseType(BaseTy[t])
File "/fsx/users/bahuang/conda/envs/pt_dev/lib/python3.9/enum.py", line 432, in __getitem__
return cls._member_map_[name]
KeyError: 'AnyEnumType'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1845, in parse
type = Type.parse(type_s)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1617, in parse
r = Type._parse(t)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1638, in _parse
raise RuntimeError(f"unrecognized type {t}")
RuntimeError: unrecognized type AnyEnumType
aten::rot90(Tensor self, int k=1, int[] dims=[0, 1]) -> Tensor
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_fft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> Tensor
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_fft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_ifft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> Tensor
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_ifft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_rfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> Tensor
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_rfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_irfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> Tensor
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_irfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_hfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> Tensor
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_hfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_ihfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> Tensor
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
aten::fft_ihfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "/fsx/users/bahuang/repos/pytorch_fsx/torch/_ops.py", line 39, in __init__
self._parsed_schema: FunctionSchema = FunctionSchema.parse(str(schema))
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1222, in parse
arguments = Arguments.parse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2183, in parse
positional, kwarg_only, out = Arguments._preparse(args)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 2154, in _preparse
parg = Argument.parse(arg)
File "/fsx/users/bahuang/repos/pytorch_fsx/torchgen/model.py", line 1824, in parse
type_and_annot, name_and_default = arg.rsplit(" ", 1)
ValueError: not enough values to unpack (expected 2, got 1)
```
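A minimal repro sketch, assuming a PyTorch source checkout where `torchgen` is importable (the schema string is one of those listed above):
```
from torchgen.model import FunctionSchema

schema = "aten::add.t(t[] a, t[] b) -> t[]"
try:
    FunctionSchema.parse(schema)
except RuntimeError as e:
    print(f"parse failed: {e}")  # "unrecognized type t"
```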
### Versions
master
cc @ezyang @bhosmer @bdhirsh
| 2 |
5,008 | 93,807 |
[VDD] unique_by_key from _embedding_bag_dense_backward isn't blocklisted by CUDA graphs
|
triaged, module: cuda graphs, oncall: pt2, module: dynamo
|
repro:
```
jf get --update D38599092
CUDA_LAUNCH_BLOCKING=1 buck2 run @mode/opt -c python.package_style=inplace //hpc/torchrec/models/feed/benchmark:vdd_benchmark -- --iters 31 --compile True --cudagraphs True --pad_seq_embs=true --dynamo True
```
Fails with
```
RuntimeError: unique_by_key: failed to synchronize: cudaErrorStreamCaptureUnsupported: operation not permitted when stream is capturing
```
Relevant backtrace:
```
Traceback (most recent call last):
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/b01f384851ab2430/hpc/torchrec/models/feed/benchmark/__vdd_benchmark__/vdd_benchmark#link-tree/torchinductor/compile_fx.py", line 195, in cudagraphify
static_outputs = model(*static_inputs)
File "/tmp/torchinductor_ezyang/gj/cgjht4biqqcp6eqw6noomuiivkokp3rk5czqlfqt3ttsy27p2xlj.py", line 3408, in call
buf304 = torch.ops.aten._embedding_bag_dense_backward.default(buf302, buf303, getitem_80, getitem_81, getitem_82, 100, False, 0, None)
```
cc @mcarilli @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 0 |
5,009 | 83,204 |
Enable freezing parts of the model in Fully Sharded Data Parallel
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
Due to how `FlattenParamsWrapper` is used in the current FSDP implementation, there doesn't seem to be a straightforward way to shard a parameter that doesn't need to be optimized.
The particular use case I'm aiming at is sharding the whole model's parameters to save memory while only computing gradients for a small subset of them, as sketched below.
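A minimal sketch of the desired usage, assuming a hypothetical two-part model and an already-initialized process group (today this pattern runs into the `FlattenParamsWrapper` limitation described above):
```
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Hypothetical model: a large frozen backbone plus a small trainable head.
model = nn.Sequential(
    nn.Linear(4096, 4096),  # backbone: shard it to save memory, never optimize it
    nn.Linear(4096, 10),    # head: the only part that needs gradients
)
model[0].requires_grad_(False)  # freeze the backbone

# Desired behavior: both parts are sharded, but the optimizer only sees the head.
fsdp_model = FSDP(model)  # assumes torch.distributed is already initialized
optimizer = torch.optim.SGD(
    (p for p in fsdp_model.parameters() if p.requires_grad), lr=1e-3
)
```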
### Alternatives
_No response_
### Additional context
Related to https://github.com/pytorch/pytorch/issues/76501
@awgu
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @ezyang
| 13 |
5,010 | 83,197 |
Check support of FSDP + set_materialize_grads(False)
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
We should investigate whether FSDP works correctly, or whether any of its assumptions break, with custom autograd functions that set `ctx.set_materialize_grads(False)` and therefore yield undefined / None gradients: https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.set_materialize_grads.html#torch.autograd.function.FunctionCtx.set_materialize_grads. A sketch of such a function follows.
In particular, some assumptions around here might break: https://github.com/pytorch/pytorch/blob/f534b2c627da65bbee7ccc8f7e054da0ba48eb79/torch/distributed/fsdp/fully_sharded_data_parallel.py#L2884
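A minimal sketch of the kind of custom function in question (illustrative only, not code from the issue):
```
import torch

class PassThrough(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, b):
        # Don't materialize undefined gradients as zero tensors; backward()
        # may then receive None for outputs that never contributed to the loss.
        ctx.set_materialize_grads(False)
        return a.clone(), b.clone()

    @staticmethod
    def backward(ctx, grad_a, grad_b):
        # grad_b is None here when only the first output was used.
        return grad_a, grad_b

x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)
out_a, out_b = PassThrough.apply(x, y)
out_a.sum().backward()
print(x.grad, y.grad)  # y.grad stays None
```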
### Versions
main
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @ezyang
| 0 |
5,011 | 83,193 |
module 'torch.distributed' has no attribute 'pipeline' - macOS, PyTorch 1.12.1
|
oncall: distributed, triaged, pipeline parallelism, release notes: distributed (pipeline)
|
### 🐛 Describe the bug
On macOS 12.5, I installed PyTorch 1.12.1 using Miniconda.
The following code, which refers to the class `Pipe`, raised `AttributeError: module 'torch.distributed' has no attribute 'pipeline'`.
```
import torch
model = torch.distributed.pipeline.sync.Pipe(model, chunks=8)
```
However, the following works
```
from torch.distributed.pipeline.sync import Pipe
model = Pipe(model, chunks=8)
```
And the following works too.
```
from torch.distributed.pipeline.sync import Pipe
model = torch.distributed.pipeline.sync.Pipe(model, chunks=8)
```
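This matches standard Python subpackage behavior: `torch.distributed.pipeline` only becomes an attribute of `torch.distributed` once the subpackage has been imported, which the `from ... import Pipe` line does as a side effect. A small sketch of that behavior (an observation, not a confirmed fix from the issue):
```
import torch

print(hasattr(torch.distributed, "pipeline"))  # False on the setup above

import torch.distributed.pipeline.sync         # importing binds the attribute

print(hasattr(torch.distributed, "pipeline"))  # True
print(torch.distributed.pipeline.sync.Pipe)    # the class is now reachable
```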
### Versions
macOS 12.5
Python 3.10.5 | packaged by conda-forge
PyTorch 1.12.1 installed using Miniconda
```
>>> torch.__version__
'1.12.1'
```
It seems that PyTorch Distributed is enabled.
```
>>> import torch
>>> torch.distributed.is_available()
True
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,012 | 83,175 |
torch.nn.GRU runs for a long time when num_layers is large
|
module: nn, triaged, module: edge cases
|
### 🐛 Describe the bug
When num_layers is 100000, constructing torch.nn.GRU takes more than 5 minutes; the program hangs and never returns a result.
```
import torch
results={}
arg_1 = 10
arg_2 = 20
arg_3 = 100000
arg_class = torch.nn.GRU(arg_1,arg_2,arg_3,)
```
### Versions
pytorch: 1.8.1
python: 3.8.3
os: win11
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,013 | 83,169 |
torch.nn.functional.softplus / torch.nn.Softplus parameter beta can be set to zero
|
module: nn, triaged
|
### 🐛 Describe the bug
According to the documentation, Softplus(x) = (1/β) * log(1 + exp(β * x)), so β appears in the denominator. However, I found that beta can be set to zero. Is this reasonable?
```
import torch
results={}
arg_1 = torch.rand([], dtype=torch.float32)
results['res'] = torch.nn.functional.softplus(arg_1,beta=0,threshold=20)
```
```
import torch
results={}
arg_class = torch.nn.Softplus(beta=0)
arg_1 = torch.rand([2], dtype=torch.float32)
results['res'] = arg_class(arg_1)
```
Both snippets above run without error.
### Versions
pytorch: 1.8.1
python: 3.8.3
os: win11
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,014 | 83,168 |
deepcopy of LazyLinear fails
|
module: nn, triaged, actionable
|
### 🐛 Describe the bug
When running
```
from copy import deepcopy
from torch.nn import LazyLinear

l_linear = LazyLinear(10)
deepcopy(l_linear)
```
the following error is triggered
```
Exception has occurred: TypeError
empty() received an invalid combination of arguments - got (int, dtype=NoneType, device=bool), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of SymInts size, *, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
File "/Users/ndufour/Documents/pytorch/rl/debug_deepcopy.py", line 9, in <module>
deepcopy(l_linear)
```
This prevents making deep copies of a model before running a forward pass on one of them.
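For context, a workaround sketch (an assumption based on how lazy modules materialize, not a fix from the issue): running a dummy forward first materializes the parameters and lets deepcopy succeed, which defeats the purpose of copying before any forward pass.
```
from copy import deepcopy

import torch
from torch.nn import LazyLinear

l_linear = LazyLinear(10)
l_linear(torch.randn(2, 4))  # dummy forward materializes the parameters...
l_copy = deepcopy(l_linear)  # ...after which deepcopy succeeds
```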
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220725
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:34:44) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.3.0a0+516a8cd
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220725
[pip3] torchrl==0.0.1a0+008bebd
[pip3] torchvision==0.14.0.dev20220725
[conda] functorch 0.3.0a0+516a8cd pypi_0 pypi
[conda] numpy 1.23.1 py39h42add53_0
[conda] numpy-base 1.23.1 py39hadd41eb_0
[conda] pytorch 1.13.0.dev20220725 py3.9_0 pytorch-nightly
[conda] torchrl 0.0.1a0+008bebd dev_0 <develop>
[conda] torchvision 0.14.0.dev20220725 py39_cpu pytorch-nightly
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,015 | 83,163 |
torch.nn.functional.log_softmax parameter '_stacklevel' undocumented
|
module: nn, triaged, actionable
|
### 📚 The doc issue
The documentation of the very popular torch.nn.functional.log_softmax doesn't explain what `_stacklevel` is, why it defaults to 3, or what to do with it.
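For context, a typical call uses only the documented parameters and leaves `_stacklevel` alone:
```
import torch
import torch.nn.functional as F

x = torch.randn(2, 5)
log_probs = F.log_softmax(x, dim=-1)  # `_stacklevel` left at its default of 3
print(log_probs.exp().sum(dim=-1))    # each row sums to 1
```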
### Suggest a potential alternative/fix
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 7 |
5,016 | 83,161 |
Optimize for mobile metal model
|
oncall: mobile
|
### 🐛 Describe the bug
Traceback (most recent call last):
File "metal_model.py", line 15, in <module>
script_model_metal = optimize_for_mobile(script_model, backend='metal')
File "/anaconda3/envs/metal/lib/python3.8/site-packages/torch/utils/mobile_optimizer.py", line 69, in optimize_for_mobile
optimized_cpp_module = torch._C._jit_pass_metal_optimize_for_mobile(script_module._c, preserved_methods_str)
RuntimeError: 0INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1623448216815/work/torch/csrc/jit/ir/alias_analysis.cpp":532, please report a bug to PyTorch. We don't have an op for metal_prepack::conv2d_prepack but it isn't a special case. Argument types: Tensor, Tensor?, int[], int[], int[], int, NoneType, NoneType,
### Versions
Collecting environment information...
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.0
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 6GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.10.1
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 anaconda
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640 anaconda
[conda] mkl-service 2.4.0 py38h7f8727e_0 anaconda
[conda] mkl_fft 1.3.1 py38hd3c417c_0 anaconda
[conda] mkl_random 1.2.2 py38h51133e4_0 anaconda
[conda] numpy 1.22.3 py38he7a7128_0 anaconda
[conda] numpy-base 1.22.3 py38hf524024_0 anaconda
[conda] pytorch 1.10.1 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.10.1 py38_cu113 pytorch
[conda] torchvision 0.11.2 py38_cu113 pytorch
| 0 |
5,017 | 83,159 |
Expand Learning rate scheduling to any optimization hyperparameter
|
feature, module: optimizer, triaged, needs design, module: LrScheduler
|
### 🚀 The feature, motivation and pitch
I'm developing a new optimizer using the PyTorch optimizer framework. It does not depend on a learning rate but on a KL divergence, and I would like to schedule this and other optimization hyperparameters (given in the optimizer config) with the PyTorch schedulers. Right now, the key `lr` is hardcoded inside the schedulers.
I would like an option to specify the key/name of the scheduled variable when creating a scheduler, as sketched below. This would also unify other schedulers (for example weight decay).
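A minimal sketch of what such a generic scheduler could look like (the class is not an existing PyTorch API; `weight_decay` stands in for any non-`lr` hyperparameter such as the `kl_div` mentioned above):
```
import torch

class HyperparamScheduler:
    """Multiplicatively decays an arbitrary key in each optimizer param group."""

    def __init__(self, optimizer, key, gamma=0.9):
        self.optimizer = optimizer
        self.key = key      # e.g. "weight_decay" or "kl_div" instead of the hardcoded "lr"
        self.gamma = gamma

    def step(self):
        for group in self.optimizer.param_groups:
            group[self.key] *= self.gamma

# Usage sketch with a stock optimizer: schedule weight_decay instead of lr.
params = [torch.nn.Parameter(torch.randn(3))]
opt = torch.optim.SGD(params, lr=1e-2, weight_decay=1e-4)
sched = HyperparamScheduler(opt, key="weight_decay")
sched.step()
print(opt.param_groups[0]["weight_decay"])  # 9e-05
```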
### Alternatives
I considered renaming my optimization hyperparameter `kl_div` to `lr` and using the learning rate scheduler. However, since it is not a learning rate, this is misleading and may confuse users.
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD
| 0 |
5,018 | 83,157 |
Failed to install torch from source
|
module: build, triaged
|
### 🐛 Describe the bug
Hi, I'm trying to install PyTorch from source, but I hit some errors.
Here is the environment:
PyTorch 1.11.0 + CUDA 10.2 + cuDNN 7.6.5 + GCC 6.5 + CMake 3.22.1 + Ubuntu 14.04
Note: I set `CC` and `CXX` to `gcc-6` and `g++-6`. It seems `collect_env.py` doesn't report the right gcc version.
I followed the instructions from the README, that is:
```
conda install astunparse numpy ninja pyyaml setuptools cmake cffi typing_extensions future six requests dataclasses
conda install mkl mkl-include
conda install -c pytorch magma-cuda102
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install
```
Log summary:
```
[1/1783] Linking CXX executable bin/c10_Device_test
FAILED: bin/c10_Device_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_Device_test.dir/core/Device_test.cpp.o -o bin/c10_Device_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[2/1783] Linking CXX executable bin/c10_TypeList_test
[3/1783] Linking CXX executable bin/c10_Half_test
[4/1783] Linking CXX executable bin/c10_Array_test
[5/1783] Linking CXX executable bin/c10_ConstexprCrc_test
[6/1783] Linking CXX executable bin/c10_TypeIndex_test
[7/1783] Linking CXX executable bin/c10_Bitset_test
[8/1783] Linking CXX executable bin/c10_InlineStreamGuard_test
FAILED: bin/c10_InlineStreamGuard_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_InlineStreamGuard_test.dir/core/impl/InlineStreamGuard_test.cpp.o -o bin/c10_InlineStreamGuard_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[9/1783] Linking CXX executable bin/c10_C++17_test
[10/1783] Linking CXX executable bin/c10_LeftRight_test
FAILED: bin/c10_LeftRight_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o -o bin/c10_LeftRight_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_readsCanBeConcurrent_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x5f): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_readsCanBeConcurrent_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x9f): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_writesCanBeConcurrentWithReads_readThenWrite_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0xdf): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_writesCanBeConcurrentWithReads_readThenWrite_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x11f): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_writesCanBeConcurrentWithReads_writeThenRead_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x15f): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:LeftRight_test.cpp:(.text+0x19f): more undefined references to `std::thread::_State::~_State()' follow
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `LeftRightTest_readsCanBeConcurrent_Test::TestBody()':
LeftRight_test.cpp:(.text+0x888): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
LeftRight_test.cpp:(.text+0x8d7): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `LeftRightTest_writesCanBeConcurrentWithReads_readThenWrite_Test::TestBody()':
LeftRight_test.cpp:(.text+0xab2): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
LeftRight_test.cpp:(.text+0xb05): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `LeftRightTest_writesCanBeConcurrentWithReads_writeThenRead_Test::TestBody()':
LeftRight_test.cpp:(.text+0xce2): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:LeftRight_test.cpp:(.text+0xd35): more undefined references to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())' follow
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_readsCanBeConcurrent_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x4b): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_readsCanBeConcurrent_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x8b): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_writesCanBeConcurrentWithReads_readThenWrite_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0xcb): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_writesCanBeConcurrentWithReads_readThenWrite_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x10b): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<LeftRightTest_writesCanBeConcurrentWithReads_writeThenRead_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
LeftRight_test.cpp:(.text+0x14b): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:LeftRight_test.cpp:(.text+0x18b): more undefined references to `std::thread::_State::~_State()' follow
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:(.data.rel.ro+0x28): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:(.data.rel.ro+0x40): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:(.data.rel.ro+0x58): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:(.data.rel.ro+0x70): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:(.data.rel.ro+0x88): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o:(.data.rel.ro+0xa0): more undefined references to `typeinfo for std::thread::_State' follow
collect2: error: ld returned 1 exit status
[11/1783] Linking CXX executable bin/c10_DeviceGuard_test
FAILED: bin/c10_DeviceGuard_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_DeviceGuard_test.dir/core/DeviceGuard_test.cpp.o -o bin/c10_DeviceGuard_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[12/1783] Linking CXX executable bin/c10_accumulate_test
FAILED: bin/c10_accumulate_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_accumulate_test.dir/util/accumulate_test.cpp.o -o bin/c10_accumulate_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[13/1783] Linking CXX executable bin/c10_DispatchKeySet_test
FAILED: bin/c10_DispatchKeySet_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_DispatchKeySet_test.dir/core/DispatchKeySet_test.cpp.o -o bin/c10_DispatchKeySet_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[14/1783] Linking CXX executable bin/c10_ordered_preserving_dict_test
FAILED: bin/c10_ordered_preserving_dict_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_ordered_preserving_dict_test.dir/util/ordered_preserving_dict_test.cpp.o -o bin/c10_ordered_preserving_dict_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[15/1783] Linking CXX executable bin/c10_flags_test
FAILED: bin/c10_flags_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_flags_test.dir/util/flags_test.cpp.o -o bin/c10_flags_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[16/1783] Linking CXX executable bin/c10_complex_math_test
[17/1783] Linking CXX executable bin/c10_bfloat16_test
[18/1783] Linking CXX executable bin/c10_SizesAndStrides_test
FAILED: bin/c10_SizesAndStrides_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_SizesAndStrides_test.dir/core/impl/SizesAndStrides_test.cpp.o -o bin/c10_SizesAndStrides_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[19/1783] Linking CXX executable bin/c10_TypeTraits_test
[20/1783] Linking CXX executable bin/c10_complex_test
[21/1783] Linking CXX executable bin/c10_irange_test
[22/1783] Linking CXX executable bin/c10_Metaprogramming_test
[23/1783] Linking CXX static library lib/libfbgemm.a
[24/1783] Linking CXX executable bin/c10_either_test
[25/1783] Linking CXX executable bin/c10_intrusive_ptr_test
[26/1783] Linking CXX executable bin/c10_exception_test
FAILED: bin/c10_exception_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_exception_test.dir/util/exception_test.cpp.o -o bin/c10_exception_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[27/1783] Linking CXX executable bin/c10_SmallVectorTest
FAILED: bin/c10_SmallVectorTest
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_SmallVectorTest.dir/util/SmallVectorTest.cpp.o -o bin/c10_SmallVectorTest -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[28/1783] Linking CXX executable bin/c10_logging_test
FAILED: bin/c10_logging_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_logging_test.dir/util/logging_test.cpp.o -o bin/c10_logging_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[29/1783] Linking CXX executable bin/c10_InlineDeviceGuard_test
FAILED: bin/c10_InlineDeviceGuard_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_InlineDeviceGuard_test.dir/core/impl/InlineDeviceGuard_test.cpp.o -o bin/c10_InlineDeviceGuard_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
lib/libc10.so: undefined reference to `typeinfo for std::thread::_State'
lib/libc10.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
lib/libc10.so: undefined reference to `std::thread::_State::~_State()'
collect2: error: ld returned 1 exit status
[30/1783] Linking CXX executable bin/c10_ThreadLocal_test
FAILED: bin/c10_ThreadLocal_test
: && /usr/bin/g++-6 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/openmpi/lib -pthread c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o -o bin/c10_ThreadLocal_test -Wl,-rpath,/home/duanjiangfei/pytorch/build/lib: lib/libc10.so lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a -pthread && :
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestThreadWithLocalScopeVar_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x85f): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestThreadWithGlobalScopeVar_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x89f): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestObjectsAreReleased_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x8df): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestObjectsAreReleasedByNonstaticThreadLocal_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x91f): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `(anonymous namespace)::ThreadLocalTest_TestThreadWithLocalScopeVar_Test::TestBody()':
ThreadLocal_test.cpp:(.text+0x2aae): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `(anonymous namespace)::ThreadLocalTest_TestThreadWithGlobalScopeVar_Test::TestBody()':
ThreadLocal_test.cpp:(.text+0x2e7c): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `(anonymous namespace)::ThreadLocalTest_TestObjectsAreReleased_Test::TestBody()':
ThreadLocal_test.cpp:(.text+0x4d41): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `(anonymous namespace)::ThreadLocalTest_TestObjectsAreReleasedByNonstaticThreadLocal_Test::TestBody()':
ThreadLocal_test.cpp:(.text+0x5202): undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestThreadWithLocalScopeVar_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x84b): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestThreadWithGlobalScopeVar_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x88b): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestObjectsAreReleased_Test::TestBody()::{lambda()#2} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x8cb): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o: In function `std::thread::_State_impl<std::_Bind_simple<(anonymous namespace)::ThreadLocalTest_TestObjectsAreReleasedByNonstaticThreadLocal_Test::TestBody()::{lambda()#1} ()> >::~_State_impl()':
ThreadLocal_test.cpp:(.text+0x90b): undefined reference to `std::thread::_State::~_State()'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o:(.data.rel.ro+0x250): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o:(.data.rel.ro+0x268): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o:(.data.rel.ro+0x280): undefined reference to `typeinfo for std::thread::_State'
c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o:(.data.rel.ro+0x298): undefined reference to `typeinfo for std::thread::_State'
collect2: error: ld returned 1 exit status
[31/1783] Linking CXX static library lib/libdnnl.a
[32/1783] Building NVCC (Device) object third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda_private.cu.o
[33/1783] Building NVCC (Device) object third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda.cu.o
[34/1783] Building NVCC (Device) object third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o
ninja: build stopped: subcommand failed.
Building wheel torch-1.11.0a0+gitbc2c6ed
-- Building version 1.11.0a0+gitbc2c6ed
cmake --build . --target install --config Release
```
Here is the detailed error log:
https://gist.github.com/JF-D/dc2507af41343e78478261ee10c68aa8
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 14.04.6 LTS (x86_64)
GCC version: (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.19
Python version: 3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-041500-generic-x86_64-with-glibc2.10
Is CUDA available: N/A
CUDA runtime version: 10.2.89
GPU models and configuration:
GPU 0: GeForce GTX TITAN X
GPU 1: GeForce GTX TITAN X
GPU 2: GeForce GTX TITAN X
GPU 3: GeForce GTX TITAN X
GPU 4: GeForce GTX TITAN X
GPU 5: GeForce GTX TITAN X
GPU 6: GeForce GTX TITAN X
GPU 7: GeForce GTX TITAN X
Nvidia driver version: 440.44
cuDNN version: /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[conda] blas 1.0 mkl
[conda] magma-cuda102 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.22.3 pypi_0 pypi
```
cc @malfet @seemethere
| 0 |
5,019 | 83,153 |
torch.nn.Hardtanh allows min_val > max_val
|
module: nn, triaged
|
### π Describe the bug
torch.nn.Hardtanh allows min_val to be greater than max_val. It does not throw an exception, and the documentation has no note covering the case min_val > max_val.
```
import torch
results={}
arg_1 = torch.rand([80, 192, 9, 9], dtype=torch.float32)
arg_2 = 6.0
arg_3 = 0.0
arg_4 = True
results['res'] = torch.nn.functional.hardtanh(arg_1,arg_2,arg_3,arg_4,)
```
Above code works.
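For reference, here is a smaller hedged sketch (my own illustration, not from the original report) that simply prints what `hardtanh` returns with the inverted range next to the documented valid range, so the silent acceptance is easy to inspect:
```
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])

# min_val > max_val: no exception is raised, so print the result to inspect it
print(F.hardtanh(x, min_val=6.0, max_val=0.0))

# documented usage with a valid range, for comparison
print(F.hardtanh(x, min_val=0.0, max_val=6.0))
```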
### Versions
pytorch: 1.8.1
python: 3.8.3
os: win11
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,020 | 83,152 |
When padding is big int, torch.nn.functional.fold runs too long and can't return result
|
module: nn, triaged
|
### π Describe the bug
When I run the code, no error information is reported. After 5 minutes of running there is still no response, and I cannot stop it from the command prompt; I have to kill the process.
```
import torch
results={}
arg_1_tensor = torch.rand([1, 12, 12], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
arg_2 = [4,5,]
arg_3 = [2,2,]
arg_4 = 1
arg_5 = 36028797018963968
arg_6 = 1
results['res'] = torch.nn.functional.fold(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,)
```
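A hedged back-of-the-envelope sketch (my own addition, assuming the sliding-block count documented for `torch.nn.Fold`/`Unfold` is what the kernel iterates over) suggests why this looks like a hang rather than an immediate error: with this padding the implied number of block positions is astronomically large.
```
import math

output_size = [4, 5]
kernel_size = [2, 2]
dilation, padding, stride = 1, 36028797018963968, 1

# L = prod_d floor((output_size[d] + 2*padding - dilation*(kernel_size[d]-1) - 1)/stride + 1)
L = math.prod(
    (o + 2 * padding - dilation * (k - 1) - 1) // stride + 1
    for o, k in zip(output_size, kernel_size)
)
print(L)  # on the order of 5e33 block positions
```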
### Versions
pytorch: 1.8.1
python: 3.8.3
os: win11
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,021 | 83,151 |
Make FSDP easier to debug when erroring in backward pass
|
high priority, triage review, oncall: distributed, triaged, module: fsdp
|
### π The feature, motivation and pitch
Recently a lot of FSDP enablement efforts have been hard to debug when there's an error in the backward pass, because we just get the error "autograd returned null without setting an error" without much additional detail.
To fix this, we should comb through the code that runs in the backward pass and ensure we at least use `p_assert` everywhere, and maybe consider further solutions such as wrapping all code that can throw with try/except, adding additional detail in the except block and raising a better error.
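A minimal sketch of the try/except idea (names and attributes here are hypothetical, not the actual FSDP internals): wrap functions that run inside backward hooks so any exception is re-raised with extra context instead of surfacing as the opaque autograd failure.
```
import functools
import traceback

def annotate_backward_errors(fn):
    """Hypothetical decorator for code that runs in FSDP's backward hooks."""
    @functools.wraps(fn)
    def wrapper(self, *args, **kwargs):
        try:
            return fn(self, *args, **kwargs)
        except Exception as exc:
            # Attach which hook and which wrapped module failed, plus the
            # original traceback, so the error is actionable.
            msg = (
                f"FSDP backward hook {fn.__name__} failed on "
                f"{getattr(self, 'module_name', '<unknown module>')}: {exc}\n"
                f"{traceback.format_exc()}"
            )
            raise RuntimeError(msg) from exc
    return wrapper
```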
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,022 | 83,149 |
bf16 strided tensor wrong calculation
|
high priority, triaged, module: bfloat16, module: correctness (silent), module: reductions, module: intel
|
### π Describe the bug
When a transpose of a bfloat16 tensor is passed to sum, the output is not correct. An example is shown below.
If you run the same code in float32, it works fine.
```
import torch
x = torch.ones([10, 13, 3, 3], dtype=torch.bfloat16)
x_trans = x.transpose(2, 3)
x_sum = torch.sum(x_trans, (0, 1, 2))
print(x_sum)
```
```
output: tensor([432., 432., 432.], dtype=torch.bfloat16)
but the expected output is tensor([390., 390., 390.])
```
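Two cross-checks that may be useful when triaging this (my own workaround sketch, not from the reporter): materialize the transpose with `.contiguous()` before reducing, or accumulate in float32 and compare.
```
import torch

x = torch.ones([10, 13, 3, 3], dtype=torch.bfloat16)
x_trans = x.transpose(2, 3)

print(torch.sum(x_trans.contiguous(), (0, 1, 2)))  # same reduction on a contiguous copy
print(torch.sum(x_trans.float(), (0, 1, 2)))       # float32 accumulation as a reference
```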
### Versions
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==1.6.5
[pip3] torch==1.12.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.0
[conda] mkl 2022.0.2 pypi_0 pypi
[conda] mkl-include 2022.0.2 pypi_0 pypi
[conda] numpy 1.22.3 pypi_0 pypi
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] torch 1.12.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.13.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @VitalyFedyunin @frank-wei
| 11 |
5,023 | 83,148 |
Cannot call CUDAGeneratorImpl::current_seed during CUDA graph capture
|
module: cuda, triaged
|
### π Describe the bug
When attempting to use
```
model = torch.cuda.make_graphed_callables(model, (rand_data,))
```
and our model contains checkpoints or sequential checkpoints like this:
```
x = checkpoint(self.layer1, x, use_reentrant=False)
```
I got this error
```
Traceback (most recent call last):
File "test_resnet.py", line 11, in <module>
model = torch.cuda.make_graphed_callables(model, (rand_data,))
File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/cuda/graphs.py", line 279, in make_graphed_callables
outputs = func(*args)
File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/lustre/chendingyu1/resnet.py", line 285, in forward
return self._forward_impl(x)
File "/mnt/lustre/chendingyu1/resnet.py", line 273, in _forward_impl
x = checkpoint(self.layer1, x, use_reentrant=False)
File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/utils/checkpoint.py", line 240, in checkpoint
*args
File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/utils/checkpoint.py", line 333, in _checkpoint_without_reentrant
fwd_gpu_devices, fwd_gpu_states = get_device_states(*args)
File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/utils/checkpoint.py", line 44, in get_device_states
fwd_gpu_states.append(torch.cuda.get_rng_state())
File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/cuda/random.py", line 31, in get_rng_state
return default_generator.get_state()
RuntimeError: Cannot call CUDAGeneratorImpl::current_seed during CUDA graph capture. If you need this call to be captured, please file an issue. Current cudaStreamCaptureStatus: cudaStreamCaptureStatusActive
```
Setting `preserve_rng_state=False` seems to get around this problem, but it will behave differently when recomputing activations.
If I use `use_reentrant=True` in the checkpoint function, I get the following error:
```RuntimeError: Checkpointing is not compatible with .grad() or when an `inputs` parameter is passed to .backward(). Please use .backward() and do not pass its `inputs` argument.```
Can I use checkpoint inside a CUDA graph without setting `preserve_rng_state=False` or `use_reentrant=False`? As the traceback says, "If you need this call to be captured, please file an issue".
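For reference, a hedged sketch of the workaround mentioned above (the module and shapes are made up for illustration): passing `preserve_rng_state=False` avoids the RNG-state query during capture, at the cost of the recomputation seeing different random numbers.
```
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(64, 64)

    def forward(self, x):
        # Skips saving/restoring the CUDA RNG state, so get_rng_state() is
        # never called while the graph is being captured.
        return checkpoint(self.layer1, x, use_reentrant=False,
                          preserve_rng_state=False)

model = Block().cuda()
data = torch.randn(8, 64, device="cuda", requires_grad=True)
graphed = torch.cuda.make_graphed_callables(model, (data,))
```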
### Versions
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.6.1810-Core
Is CUDA available: True
CUDA runtime version: 9.0.176
Nvidia driver version: 460.32.03
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] spring==0.7.2+cu112.torch1110.mvapich2.nartgpu.develop.805601a8
[pip3] torch==1.11.0+cu113
[pip3] torchvision==0.12.0+cu113
[conda] numpy 1.21.5 pypi_0 pypi
[conda] spring 0.7.0+cu112.torch1110.mvapich2.pmi2.nartgpu pypi_0 pypi
[conda] torch 1.11.0+cu113 pypi_0 pypi
[conda] torchvision 0.12.0+cu113 pypi_0 pypi
cc @ngimel
| 2 |
5,024 | 83,144 |
[MPS] Bug on training CNN+LSTM
|
triaged, module: mps
|
### π Describe the bug
Following training on M1MAX GPU
when I training a CNN+LSTM model on Pytorch v1.12.1, it goes with this error
loc("total derivative last state"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<1x82x64xf32>' and 'tensor<1x32x64xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
This did not happen on the previous PyTorch v11.2.0; I guess something is wrong with the new LSTM result matrix transformation?
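Since the model itself is not included in the report, a hypothetical minimal CNN+LSTM forward/backward on the `mps` device along these lines may help narrow it down (layer sizes are made up; only the sequence length 82 is taken from the error message):
```
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(8, 64, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):               # x: (batch, channels, time)
        feats = self.conv(x)            # (batch, 64, time)
        feats = feats.permute(0, 2, 1)  # (batch, time, 64) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])

device = torch.device("mps")
model = CNNLSTM().to(device)
x = torch.randn(4, 8, 82, device=device)
model(x).sum().backward()   # the reported failure happens during training
```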
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.5 | packaged by conda-forge | (main, Jun 14 2022, 07:07:06) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchinfo==1.7.0
[pip3] torchvision==0.13.1
[conda] numpy 1.23.1 py310h220015d_0
[conda] numpy-base 1.23.1 py310h742c864_0
[conda] pytorch 1.12.1 py3.10_0 pytorch
[conda] torchaudio 0.12.1 py310_cpu pytorch
[conda] torchinfo 1.7.0 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.13.1 py310_cpu pytorch
cc @kulinseth @albanD
| 10 |
5,025 | 83,143 |
Bug in building pytorch deploy from source in macos USE_DEPLOY=1
|
oncall: package/deploy, imported
|
### π Describe the bug
I'm trying to use the `torch::deploy` feature and followed the documentation on [this website](https://pytorch.org/docs/stable/deploy.html) to build PyTorch from source. First, I succeeded in building it with `USE_DEPLOY=0`. Then I started fresh (cleaning per the instructions on [this website](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#tips-and-debugging)) and rebuilt with `USE_DEPLOY=1`, but it fails. This is on a macOS system; the same issue also happens on a Linux system (see this [issue](https://github.com/pytorch/pytorch/issues/82382)).
Could you help me check this or show me a working example of installing torch::deploy from source?
* The command:
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
export USE_DEPLOY=1
DEBUG=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 USE_CUDA=0 BUILD_TEST=0 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py develop
```
* The error message:
```
Building wheel torch-1.13.0a0+git8a6c104
-- Building version 1.13.0a0+git8a6c104
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=False -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_INSTALL_PREFIX=/Users/fenkexin/Desktop/forked/pytorch/torch -DCMAKE_PREFIX_PATH=/Users/fenkexin/opt/anaconda3/lib/python3.9/site-packages;/Users/fenkexin/opt/anaconda3 -DJAVA_HOME=/Users/fenkexin/Library/Java/JavaVirtualMachines/corretto-11.0.14.1/Contents/Home -DNUMPY_INCLUDE_DIR=/Users/fenkexin/opt/anaconda3/lib/python3.9/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/Users/fenkexin/opt/anaconda3/bin/python -DPYTHON_INCLUDE_DIR=/Users/fenkexin/opt/anaconda3/include/python3.9 -DPYTHON_LIBRARY=/Users/fenkexin/opt/anaconda3/lib/libpython3.9.a -DTORCH_BUILD_VERSION=1.13.0a0+git8a6c104 -DUSE_CUDA=0 -DUSE_DEPLOY=1 -DUSE_DISTRIBUTED=0 -DUSE_FBGEMM=0 -DUSE_MKLDNN=0 -DUSE_NNPACK=0 -DUSE_NUMPY=True -DUSE_QNNPACK=0 -DUSE_XNNPACK=0 /Users/fenkexin/Desktop/forked/pytorch
-- The CXX compiler identification is AppleClang 13.1.6.13160021
-- The C compiler identification is AppleClang 13.1.6.13160021
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/clang - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
-- CLANG_VERSION_STRING: Apple clang version 13.1.6 (clang-1316.0.21.2.5)
Target: x86_64-apple-darwin21.6.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
-- sdk version: 12.3, mps supported: ON
-- MPSGraph framework found
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Success
-- Performing Test SUPPORT_GLIBCXX_USE_C99
-- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
-- Turning off deprecation warning due to glog.
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Failed
-- Performing Test C_HAS_AVX_2
-- Performing Test C_HAS_AVX_2 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Failed
-- Performing Test C_HAS_AVX2_2
-- Performing Test C_HAS_AVX2_2 - Success
-- Performing Test C_HAS_AVX512_1
-- Performing Test C_HAS_AVX512_1 - Failed
-- Performing Test C_HAS_AVX512_2
-- Performing Test C_HAS_AVX512_2 - Failed
-- Performing Test C_HAS_AVX512_3
-- Performing Test C_HAS_AVX512_3 - Failed
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Failed
-- Performing Test CXX_HAS_AVX_2
-- Performing Test CXX_HAS_AVX_2 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Failed
-- Performing Test CXX_HAS_AVX2_2
-- Performing Test CXX_HAS_AVX2_2 - Success
-- Performing Test CXX_HAS_AVX512_1
-- Performing Test CXX_HAS_AVX512_1 - Failed
-- Performing Test CXX_HAS_AVX512_2
-- Performing Test CXX_HAS_AVX512_2 - Failed
-- Performing Test CXX_HAS_AVX512_3
-- Performing Test CXX_HAS_AVX512_3 - Failed
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
--
-- 3.13.0.0
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Performing Test protobuf_HAVE_BUILTIN_ATOMICS
-- Performing Test protobuf_HAVE_BUILTIN_ATOMICS - Success
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/Users/fenkexin/Desktop/forked/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL libraries: /Users/fenkexin/opt/anaconda3/lib/libmkl_intel_lp64.dylib;/Users/fenkexin/opt/anaconda3/lib/libmkl_intel_thread.dylib;/Users/fenkexin/opt/anaconda3/lib/libmkl_core.dylib;/Users/fenkexin/opt/anaconda3/lib/libiomp5.dylib;/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/lib/libpthread.tbd;/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/lib/libm.tbd
-- MKL include directory: /Users/fenkexin/opt/anaconda3/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: /Users/fenkexin/opt/anaconda3/lib/libiomp5.dylib
-- The ASM compiler identification is Clang
-- Found assembler: /Library/Developer/CommandLineTools/usr/bin/clang
CMake Warning at cmake/Dependencies.cmake:844 (message):
Turning USE_FAKELOWP off as it depends on USE_FBGEMM.
Call Stack (most recent call first):
CMakeLists.txt:708 (include)
-- Using third party subdirectory Eigen.
-- Found PythonInterp: /Users/fenkexin/opt/anaconda3/bin/python (found suitable version "3.9.12", minimum required is "3.0")
-- Found PythonLibs: /Users/fenkexin/opt/anaconda3/lib/libpython3.9.a (found suitable version "3.9.12", minimum required is "3.0")
-- Using third_party/pybind11.
-- pybind11 include dirs: /Users/fenkexin/Desktop/forked/pytorch/cmake/../third_party/pybind11/include
CMake Warning (dev) at /Users/fenkexin/opt/anaconda3/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
cmake/Dependencies.cmake:1222 (find_package)
CMakeLists.txt:708 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /Users/fenkexin/opt/anaconda3/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
cmake/Dependencies.cmake:1222 (find_package)
CMakeLists.txt:708 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Adding OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include
-- Will link against OpenMP libraries: /Users/fenkexin/opt/anaconda3/lib/libiomp5.dylib
CMake Warning at cmake/Dependencies.cmake:1513 (message):
Metal is only used in ios builds.
Call Stack (most recent call first):
CMakeLists.txt:708 (include)
-- Found PythonInterp: /Users/fenkexin/opt/anaconda3/bin/python (found version "3.9.12")
-- Found PythonLibs: /Users/fenkexin/opt/anaconda3/lib/libpython3.9.a (found version "3.9.12")
Generated: /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
--
-- ******** Summary ********
-- CMake version : 3.19.6
-- CMake command : /Users/fenkexin/opt/anaconda3/bin/cmake
-- System : Darwin
-- C++ compiler : /Library/Developer/CommandLineTools/usr/bin/clang++
-- C++ compiler version : 13.1.6.13160021
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include -Wnon-virtual-dtor
-- Build type : Debug
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH : /Users/fenkexin/opt/anaconda3/lib/python3.9/site-packages;/Users/fenkexin/opt/anaconda3
-- CMAKE_INSTALL_PREFIX : /Users/fenkexin/Desktop/forked/pytorch/torch
-- CMAKE_MODULE_PATH : /Users/fenkexin/Desktop/forked/pytorch/cmake/Modules
--
-- ONNX version : 1.12.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
--
-- ******** Summary ********
-- CMake version : 3.19.6
-- CMake command : /Users/fenkexin/opt/anaconda3/bin/cmake
-- System : Darwin
-- C++ compiler : /Library/Developer/CommandLineTools/usr/bin/clang++
-- C++ compiler version : 13.1.6.13160021
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include -Wnon-virtual-dtor
-- Build type : Debug
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
-- CMAKE_PREFIX_PATH : /Users/fenkexin/opt/anaconda3/lib/python3.9/site-packages;/Users/fenkexin/opt/anaconda3
-- CMAKE_INSTALL_PREFIX : /Users/fenkexin/Desktop/forked/pytorch/torch
-- CMAKE_MODULE_PATH : /Users/fenkexin/Desktop/forked/pytorch/cmake/Modules
--
-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor
-- Removing -DNDEBUG from compile flags
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False
CMake Warning at cmake/Dependencies.cmake:1713 (message):
Not compiling with MAGMA. Suppress this warning with -DUSE_MAGMA=OFF.
Call Stack (most recent call first):
CMakeLists.txt:708 (include)
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Found a library with LAPACK API (mkl).
disabling CUDA because NOT USE_CUDA is set
-- USE_CUDNN is set to 0. Compiling without cuDNN support
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
disabling MKLDNN because USE_MKLDNN is not set
-- Looking for mmap
-- Looking for mmap - found
-- Looking for shm_open
-- Looking for shm_open - found
-- Looking for shm_unlink
-- Looking for shm_unlink - found
-- Looking for malloc_usable_size
-- Looking for malloc_usable_size - not found
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- Version: 7.0.3
-- Build type: Debug
-- CXX_STANDARD: 14
-- Performing Test has_std_14_flag
-- Performing Test has_std_14_flag - Success
-- Performing Test has_std_1y_flag
-- Performing Test has_std_1y_flag - Success
-- Performing Test SUPPORTS_USER_DEFINED_LITERALS
-- Performing Test SUPPORTS_USER_DEFINED_LITERALS - Success
-- Performing Test FMT_HAS_VARIANT
-- Performing Test FMT_HAS_VARIANT - Success
-- Required features: cxx_variadic_templates
-- Performing Test HAS_NULLPTR_WARNING
-- Performing Test HAS_NULLPTR_WARNING - Success
-- Looking for strtod_l
-- Looking for strtod_l - found
-- Using CPU-only version of Kineto
-- Configuring Kineto dependency:
-- KINETO_SOURCE_DIR = /Users/fenkexin/Desktop/forked/pytorch/third_party/kineto/libkineto
-- KINETO_BUILD_TESTS = OFF
-- KINETO_LIBRARY_TYPE = static
INFO CUDA_SOURCE_DIR =
INFO ROCM_SOURCE_DIR =
INFO CUPTI unavailable or disabled - not building GPU profilers
-- Kineto: FMT_SOURCE_DIR = /Users/fenkexin/Desktop/forked/pytorch/third_party/fmt
-- Kineto: FMT_INCLUDE_DIR = /Users/fenkexin/Desktop/forked/pytorch/third_party/fmt/include
INFO CUPTI_INCLUDE_DIR = /extras/CUPTI/include
INFO ROCTRACER_INCLUDE_DIR = /include/roctracer
-- Configured Kineto (CPU)
-- Performing Test HAS_WERROR_FORMAT
-- Performing Test HAS_WERROR_FORMAT - Success
-- Performing Test HAS_WERROR_CAST_FUNCTION_TYPE
-- Performing Test HAS_WERROR_CAST_FUNCTION_TYPE - Success
-- Performing Test HAS_WERROR_SIGN_COMPARE
-- Performing Test HAS_WERROR_SIGN_COMPARE - Success
-- Looking for backtrace
-- Looking for backtrace - found
-- backtrace facility detected in default set of libraries
-- Found Backtrace: /Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/include
-- don't use NUMA
-- headers outputs:
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
disabling CUDA because USE_CUDA is set false
CMake Deprecation Warning at third_party/sleef/CMakeLists.txt:91 (cmake_policy):
The OLD behavior for policy CMP0066 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
-- Found OpenSSL: /Users/fenkexin/opt/anaconda3/lib/libcrypto.dylib (found version "1.1.1q")
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Success
-- Found OpenMP_C: -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include (found version "5.0")
-- Found OpenMP_CXX: -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include (found version "5.0")
-- Found OpenMP: TRUE (found version "5.0")
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed
-- Configuring build for SLEEF-v3.6.0
Target system: Darwin-21.6.0
Target processor: x86_64
Host system: Darwin-21.6.0
Host processor: x86_64
Detected C compiler: AppleClang @ /Library/Developer/CommandLineTools/usr/bin/clang
CMake: 3.19.6
Make program: /Users/fenkexin/opt/anaconda3/bin/ninja
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- Building static test bins: OFF
-- MPFR : /usr/local/lib/libmpfr.dylib
-- MPFR header file in /usr/local/include
-- GMP : /usr/local/lib/libgmp.dylib
-- RT :
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL : 1.1.1q
-- SDE : SDE_COMMAND-NOTFOUND
-- RUNNING_ON_TRAVIS :
-- COMPILER_SUPPORTS_OPENMP :
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: /Users/fenkexin/Desktop/forked/pytorch/build/aten/src/ATen/core/TensorBody.h
core header install: /Users/fenkexin/Desktop/forked/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: /Users/fenkexin/Desktop/forked/pytorch/build/aten/src/ATen/core/enum_tag.h
CMake Warning (dev) at torch/CMakeLists.txt:467:
Syntax Warning in cmake code at column 107
Argument not separated from preceding token by whitespace.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at torch/CMakeLists.txt:467:
Syntax Warning in cmake code at column 115
Argument not separated from preceding token by whitespace.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /Users/fenkexin/opt/anaconda3/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
caffe2/CMakeLists.txt:1288 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /Users/fenkexin/opt/anaconda3/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
caffe2/CMakeLists.txt:1288 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- pytorch is compiling with OpenMP.
OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include.
OpenMP libraries: /Users/fenkexin/opt/anaconda3/lib/libiomp5.dylib.
-- Caffe2 is compiling with OpenMP.
OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include.
OpenMP libraries: /Users/fenkexin/opt/anaconda3/lib/libiomp5.dylib.
-- Using lib/python3.9/site-packages as python relative installation path
CMake Warning at CMakeLists.txt:1073 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.19.6
-- CMake command : /Users/fenkexin/opt/anaconda3/bin/cmake
-- System : Darwin
-- C++ compiler : /Library/Developer/CommandLineTools/usr/bin/clang++
-- C++ compiler id : AppleClang
-- C++ compiler version : 13.1.6.13160021
-- Using ccache if found : ON
-- Found ccache : /Users/fenkexin/opt/anaconda3/bin/ccache
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-range-loop-analysis -Wno-pass-failed -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wconstant-conversion -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -DUSE_MPS -fno-objc-arc -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const
-- Build type : Debug
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-- CMAKE_PREFIX_PATH : /Users/fenkexin/opt/anaconda3/lib/python3.9/site-packages;/Users/fenkexin/opt/anaconda3
-- CMAKE_INSTALL_PREFIX : /Users/fenkexin/Desktop/forked/pytorch/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 1.13.0
-- CAFFE2_VERSION : 1.13.0
-- BUILD_CAFFE2 : OFF
-- BUILD_CAFFE2_OPS : OFF
-- BUILD_CAFFE2_MOBILE : OFF
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_TENSOREXPR_BENCHMARK: OFF
-- BUILD_NVFUSER_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.9.12
-- Python executable : /Users/fenkexin/opt/anaconda3/bin/python
-- Pythonlibs version : 3.9.12
-- Python library : /Users/fenkexin/opt/anaconda3/lib/libpython3.9.a
-- Python includes : /Users/fenkexin/opt/anaconda3/include/python3.9
-- Python site-packages: lib/python3.9/site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : False
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- CROSS_COMPILING_MACOSX :
-- INTERN_BUILD_MOBILE :
-- USE_BLAS : 1
-- BLAS : mkl
-- BLAS_HAS_SBGEMM :
-- USE_LAPACK : 1
-- LAPACK : mkl
-- USE_ASAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : 0
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : OFF
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : ON
-- USE_FFTW : OFF
-- USE_MKL : ON
-- USE_MKLDNN : 0
-- USE_UCC : OFF
-- USE_ITT : ON
-- USE_NCCL : OFF
-- USE_NNPACK : 0
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : 0
-- USE_PYTORCH_QNNPACK : ON
-- USE_XNNPACK : 0
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : 0
-- USE_DEPLOY : 1
-- Public Dependencies : caffe2::Threads;caffe2::mkl
-- Private Dependencies : pthreadpool;cpuinfo;pytorch_qnnpack;ittnotify;fp16;foxi_loader;fmt::fmt-header-only;kineto
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : ON
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
JAVA_HOME
-- Build files have been written to: /Users/fenkexin/Desktop/forked/pytorch/build
cmake --build . --target install --config Debug
[3/4] Generating ATen headers
[229/1806] Linking CXX static library lib/libprotobuf-lited.a
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobuf-lited.a(io_win32.cc.o) has no symbols
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobuf-lited.a(io_win32.cc.o) has no symbols
[289/1806] Linking CXX static library lib/libprotobufd.a
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobufd.a(io_win32.cc.o) has no symbols
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobufd.a(gzip_stream.cc.o) has no symbols
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobufd.a(error_listener.cc.o) has no symbols
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobufd.a(io_win32.cc.o) has no symbols
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobufd.a(gzip_stream.cc.o) has no symbols
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: lib/libprotobufd.a(error_listener.cc.o) has no symbols
[327/1806] Running gen_proto.py on onnx/onnx.in.proto
Processing /Users/fenkexin/Desktop/forked/pytorch/third_party/onnx/onnx/onnx.in.proto
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto3
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-ml.pb.h
generating /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx_pb.py
[341/1806] Running gen_proto.py on onnx/onnx-operators.in.proto
Processing /Users/fenkexin/Desktop/forked/pytorch/third_party/onnx/onnx/onnx-operators.in.proto
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto3
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-operators-ml.pb.h
generating /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx_operators_pb.py
[342/1806] Running gen_proto.py on onnx/onnx-data.in.proto
Processing /Users/fenkexin/Desktop/forked/pytorch/third_party/onnx/onnx/onnx-data.in.proto
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto3
Writing /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx-data.pb.h
generating /Users/fenkexin/Desktop/forked/pytorch/build/third_party/onnx/onnx/onnx_data_pb.py
[424/1806] Linking C shared library lib/libtorch_global_deps.dylib
ld: warning: dylib (/Users/fenkexin/opt/anaconda3/lib/libmkl_intel_lp64.dylib) was built for newer macOS version (10.12) than being linked (10.9)
ld: warning: dylib (/Users/fenkexin/opt/anaconda3/lib/libmkl_intel_thread.dylib) was built for newer macOS version (10.12) than being linked (10.9)
ld: warning: dylib (/Users/fenkexin/opt/anaconda3/lib/libmkl_core.dylib) was built for newer macOS version (10.12) than being linked (10.9)
[443/1806] Generating include/renameavx2128.h
Generating renameavx2128.h: mkrename finz_ 2 4 avx2128
[444/1806] Generating include/renameavx512fnofma.h
Generating renameavx512fnofma.h: mkrename cinz_ 8 16 avx512fnofma
[445/1806] Generating include/renamesse2.h
Generating renamesse2.h: mkrename cinz_ 2 4 sse2
[447/1806] Generating include/renamepurecfma_scalar.h
Generating renamepurecfma_scalar.h: mkrename finz_ 1 1 purecfma
[448/1806] Generating include/renamepurec_scalar.h
Generating renamepurec_scalar.h: mkrename cinz_ 1 1 purec
[449/1806] Generating include/renamesse4.h
Generating renamesse4.h: mkrename cinz_ 2 4 sse4
[450/1806] Generating include/renameavx.h
Generating renameavx.h: mkrename cinz_ 4 8 avx
[451/1806] Generating include/renamefma4.h
Generating renamefma4.h: mkrename finz_ 4 8 fma4
[452/1806] Generating include/renameavx2.h
Generating renameavx2.h: mkrename finz_ 4 8 avx2
[453/1806] Generating include/renameavx512f.h
Generating renameavx512f.h: mkrename finz_ 8 16 avx512f
[454/1806] Generating include/renamecuda.h
Generating renamecuda.h: mkrename finz_ 1 1 cuda
[460/1806] Generating ../../../include/sleef.h
Generating sleef.h: mkrename cinz_ 2 4 __m128d __m128 __m128i __m128i __SSE2__
Generating sleef.h: mkrename cinz_ 2 4 __m128d __m128 __m128i __m128i __SSE2__ sse2
Generating sleef.h: mkrename cinz_ 2 4 __m128d __m128 __m128i __m128i __SSE2__ sse4
Generating sleef.h: mkrename cinz_ 4 8 __m256d __m256 __m128i struct\ {\ __m128i\ x,\ y;\ } __AVX__
Generating sleef.h: mkrename cinz_ 4 8 __m256d __m256 __m128i struct\ {\ __m128i\ x,\ y;\ } __AVX__ avx
Generating sleef.h: mkrename finz_ 4 8 __m256d __m256 __m128i struct\ {\ __m128i\ x,\ y;\ } __AVX__ fma4
Generating sleef.h: mkrename finz_ 4 8 __m256d __m256 __m128i __m256i __AVX__ avx2
Generating sleef.h: mkrename finz_ 2 4 __m128d __m128 __m128i __m128i __SSE2__ avx2128
Generating sleef.h: mkrename finz_ 8 16 __m512d __m512 __m256i __m512i __AVX512F__
Generating sleef.h: mkrename finz_ 8 16 __m512d __m512 __m256i __m512i __AVX512F__ avx512f
Generating sleef.h: mkrename cinz_ 8 16 __m512d __m512 __m256i __m512i __AVX512F__ avx512fnofma
Generating sleef.h: mkrename cinz_ 1 1 double float int32_t int32_t __STDC__ purec
Generating sleef.h: mkrename finz_ 1 1 double float int32_t int32_t FP_FAST_FMA purecfma
[510/1806] Generating ../../../torch/utils/data/datapipes/datapipe.pyi
Generating Python interface file 'datapipe.pyi'...
[522/1806] Building CXX object torch/c...e_dt_needed.dir/remove_dt_needed.cpp.o
FAILED: torch/csrc/deploy/CMakeFiles/remove_dt_needed.dir/remove_dt_needed.cpp.o
ccache /Library/Developer/CommandLineTools/usr/bin/clang++ -DFMT_HEADER_ONLY=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Iaten/src -I../aten/src -I. -I../ -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../third_party/fmt/include -isystem ../third_party/protobuf/src -isystem /Users/fenkexin/opt/anaconda3/include -isystem ../third_party/ittapi/include -isystem ../cmake/../third_party/eigen -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/Users/fenkexin/opt/anaconda3/include -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-range-loop-analysis -Wno-pass-failed -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wconstant-conversion -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -DUSE_MPS -fno-objc-arc -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const -g -fno-omit-frame-pointer -O0 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk -mmacosx-version-min=10.9 -fPIE -DTH_HAVE_THREAD -std=gnu++14 -MD -MT torch/csrc/deploy/CMakeFiles/remove_dt_needed.dir/remove_dt_needed.cpp.o -MF torch/csrc/deploy/CMakeFiles/remove_dt_needed.dir/remove_dt_needed.cpp.o.d -o torch/csrc/deploy/CMakeFiles/remove_dt_needed.dir/remove_dt_needed.cpp.o -c ../torch/csrc/deploy/remove_dt_needed.cpp
../torch/csrc/deploy/remove_dt_needed.cpp:1:10: fatal error: 'elf.h' file not found
#include <elf.h>
^~~~~~~
1 error generated.
[536/1806] Generating ../../../torch/version.py
fatal: no tag exactly matches '8a6c104ce9398815989317f208eae80ea2fe6ac1'
[539/1806] Performing download step (git clone) for 'cpython'
Cloning into 'cpython'...
Note: switching to 'v3.8.6'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at db455296be Python 3.8.6
ninja: build stopped: subcommand failed.
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.19.6
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:36:29) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch-lightning==1.6.5
[pip3] pytorch-metric-learning==1.3.2
[pip3] torch==1.12.1
[pip3] torchmetrics==0.7.3
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 h0a44026_0 pytorch
[conda] mkl 2019.4 233
[conda] mkl-include 2022.0.0 hecd8cb5_105
[conda] mkl-service 2.3.0 py39h9ed2024_0
[conda] mkl_fft 1.3.0 py39ha059aab_0
[conda] mkl_random 1.0.2 py39h16bde0e_0
[conda] numpy 1.19.2 py39he57783f_0
[conda] numpy-base 1.19.2 py39hde55871_0
[conda] pytorch 1.12.1 py3.9_0 pytorch
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] pytorch-metric-learning 1.3.2 pypi_0 pypi
[conda] torch 1.13.0a0+git8a6c104 dev_0 <develop>
[conda] torchmetrics 0.7.3 pypi_0 pypi
[conda] torchtext 0.13.0 pypi_0 pypi
[conda] torchvision 0.13.1 py39_cpu pytorch
| 0 |
5,026 | 83,135 |
torch.nn.functional.avg_pool{1|2|3}d error message does not match what is described in the documentation
|
module: docs, module: nn, triaged
|
### π The doc issue
According to the documentation, the parameters 'kernel_size' and 'stride' of torch.nn.functional.avg_pool{1|2|3}d can be a single number or a tuple. However, the error message only mentions a tuple of ints, which implies that 'kernel_size' and 'stride' may only be an int or a tuple of ints (float values are rejected).
```
import torch
results={}
arg_1 = torch.rand([1, 1, 7], dtype=torch.float32)
arg_2 = 8.0
arg_3 = 2
arg_4 = 0
arg_5 = True
arg_6 = True
results['res'] = torch.nn.functional.avg_pool1d(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,)
#TypeError: avg_pool1d(): argument 'kernel_size' (position 2) must be tuple of ints, not float
```
```
import torch
results={}
arg_1 = torch.rand([16, 528, 16, 16], dtype=torch.float32)
arg_2 = 32.0
arg_3 = 13.0
arg_4 = 0
arg_5 = False
arg_6 = True
arg_7 = None
results['res'] = torch.nn.functional.avg_pool2d(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,arg_7,)
#TypeError: avg_pool2d(): argument 'stride' (position 3) must be tuple of ints, not float
```
```
import torch
results={}
arg_1 = torch.rand([20, 16, 50, 44, 31], dtype=torch.float32)
arg_2_0 = 3.0
arg_2_1 = 2
arg_2_2 = 2
arg_2 = [3.0,2,2]
arg_3_0 = 2
arg_3_1 = 1
arg_3_2 = 2
arg_3 = [2,1,2]
arg_4 = 0
arg_5 = False
arg_6 = True
arg_7 = None
results['res'] = torch.nn.functional.avg_pool3d(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,arg_7,)
#TypeError: avg_pool3d(): argument 'kernel_size' must be tuple of ints, but found element of type float at pos 1
```
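For contrast, a minimal sketch (same input shape as the first example) showing that the calls succeed once the values are ints or tuples of ints, so it is only the float values that are rejected:
```
import torch

x = torch.rand([1, 1, 7])
out = torch.nn.functional.avg_pool1d(x, 2)          # int kernel_size: works
out = torch.nn.functional.avg_pool1d(x, (2,), 2)    # tuple-of-ints kernel_size, int stride: works
```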
### Suggest a potential alternative/fix
It would be great if the doc could be written as follows:
kernel_size – size of the pooling region. Can be an int or a tuple (kT, kH, kW).
stride – stride of the pooling operation. Can be an int or a tuple (sT, sH, sW).
Or modify the error message so that it matches the document description.
cc @svekars @holly1238 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,027 | 83,112 |
One dlpack to rule them all
|
triaged, better-engineering, module: dlpack
|
### π Describe the bug
One here https://github.com/pytorch/pytorch/blob/master/caffe2/python/dlpack.h
And another there https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/dlpack.h
Should we make one of them a reference to the other?
### Versions
1.12/CI
| 0 |
5,028 | 83,111 |
[FSDP] `test_summon_single_param()` is misleading
|
triaged, module: fsdp
|
Code pointer:
https://github.com/pytorch/pytorch/blob/6c60a656b02fbd09661b282bec53940b184db3ca/test/distributed/fsdp/test_fsdp_summon_full_params.py#L269-L288
---
WLOG suppose we have a world size of 2.
- The original parameter (i.e. the linear's weight) is like `[[X]]` where `X` is some initial value and the shape is `[1, 1]`.
- The unpadded unsharded flattened parameter is like `[X]` where the shape is `[1]`.
- The padded unsharded flattened parameter is like `[X, 0]` where the shape is `[2]`. (For larger world sizes, we pad more like `[X, 0, 0, 0]` for world size of 4.)
- After we run on both ranks
```
with torch.no_grad():
    p[0] = self.rank + 2
```
rank 0's local shard is `[2]`, and rank 1's local shard is `[3]`.
- When we run on both ranks
```
with model.summon_full_params(model, writeback=True):
    with torch.no_grad():
        p.copy_(torch.zeros_like(p))
```
`p` is actually the unpadded unsharded flattened parameter on both rank 0 and rank 1. ([code0](https://github.com/pytorch/pytorch/blob/6c60a656b02fbd09661b282bec53940b184db3ca/torch/distributed/fsdp/fully_sharded_data_parallel.py#L2548), [code1](https://github.com/pytorch/pytorch/blob/6c60a656b02fbd09661b282bec53940b184db3ca/torch/distributed/fsdp/fully_sharded_data_parallel.py#L3348)) In other words, if you print `p` in the `summon_full_params()` context, you see `[2]` for both rank 0 and rank 1. This means that the `copy_()` writes to the `0`th element for both ranks. There is not an attempt to zero the `1`st element of the padded unsharded flattened parameter.
We see that this test actually tests whether changes to the padding made before `summon_full_params()` persist after `summon_full_params()`. We could change the `p.copy_(torch.zeros_like(p))` to only run on rank 0, and the test would work just the same.
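To make the flatten/pad/shard arithmetic above concrete, here is a minimal standalone sketch (plain tensor ops only, not FSDP internals) for a world size of 2:
```
import torch

world_size = 2
param = torch.full((1, 1), 5.0)                  # original parameter [[X]] with X = 5
flat = param.flatten()                           # unpadded unsharded flattened parameter: [5.]
pad = (-flat.numel()) % world_size               # 1 element of padding
padded = torch.cat([flat, flat.new_zeros(pad)])  # padded unsharded flattened parameter: [5., 0.]
shard0, shard1 = padded.chunk(world_size)        # rank 0 holds [5.], rank 1 holds only padding [0.]
```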
Personally, I am not sure if FSDP should make any guarantees on persisting writes to the padding. It does not seem like a real use case.
cc @zhaojuanmao @mrshenli @rohan-varma @ezyang
| 1 |
5,029 | 83,107 |
FSDP crash if no parameters are used in fwd pass
|
high priority, triage review, oncall: distributed, triaged, module: fsdp
|
### π Describe the bug
The following test raises an issue where an assert is triggered if FSDP has params that require grad, but they are not used in an iteration:
```
    @skip_if_lt_x_gpu(2)
    def test_fsdp_namedtuple(self):
        class MyModule(nn.Module):
            def __init__(self):
                super().__init__()
                self.lin = nn.Linear(1, 1)

            def forward(self, x):
                return x

        m = MyModule().cuda()
        m = FSDP(m)
        t = torch.ones(1, device="cuda", requires_grad=True)
        MyOutputType = namedtuple(
            "MyOutputType",
            ["a", "b", "c", "d"],
            defaults=(t, t, t, t)
        )
        inp = MyOutputType()
        out = m(inp)
        print(out)
        res = torch.cat([e for e in out]).sum()
        res.backward()
```
This triggers the assert here: https://github.com/pytorch/pytorch/blob/cd5efc6f082c81fd40712127638931b9e2e5ee69/torch/distributed/fsdp/fully_sharded_data_parallel.py#L3084, presumably because the FSDP managed param requires grad, but does not get its gradient computed, so it never entered post backward.
In practice, this came up when wrapping FLAVA encoders separately, but encoders take turns being used across iterations.
### Versions
main
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,030 | 93,804 |
Direct use of torchdynamo.optimizations.analysis fails if you pass in None as an input
|
triaged, enhancement, oncall: pt2, module: dynamo
|
Example
```
diff --git a/fbcode/pytorch/torchdynamo/torchdynamo/optimizations/analysis.py b/fbcode/pytorch/torchdynamo/torchdynamo/optimizations/analysis.py
--- a/fbcode/pytorch/torchdynamo/torchdynamo/optimizations/analysis.py
+++ b/fbcode/pytorch/torchdynamo/torchdynamo/optimizations/analysis.py
@@ -38,8 +38,8 @@
def placeholder(self, target, args, kwargs):
value = super().placeholder(target, args, kwargs)
- assert isinstance(value, torch.Tensor)
- self.input_alias_groups.add(self.tensor_alias_group(value))
+ if isinstance(value, torch.Tensor):
+ self.input_alias_groups.add(self.tensor_alias_group(value))
return value
def run_node(self, n: torch.fx.Node):
diff --git a/fbcode/torchrec/sparse/jagged_tensor.py b/fbcode/torchrec/sparse/jagged_tensor.py
```
Dynamo will ensure that a None input can never occur but this is not the case for direct use of `compile_fx` api in inductor, for example
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 4 |
5,031 | 83,098 |
Redirect the old metrics.pytorch.org url to the new page
|
module: ci, triaged
|
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,032 | 83,082 |
[CI] Create periodic fuzzy testing for PyTorch build flags
|
module: ci, triaged
|
We should have CI to test flag compatibility for PyTorch.
Proposal: add a periodic job that randomly picks flags to enable and tries building.
The key blocker here is that we would need to make sure we have an owner to forward fix when the flags are incompatible.
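A minimal sketch of what such a periodic job could do (illustrative only; the flag names below are just a sample of real build env vars, and the exact job mechanics are open):
```python
import os
import random
import subprocess

FLAGS = ["USE_CUDA", "USE_DISTRIBUTED", "USE_MKLDNN", "USE_FBGEMM", "BUILD_CAFFE2"]
chosen = {flag: random.choice(["0", "1"]) for flag in FLAGS}
print("Building with:", chosen)

# A failure here points at an incompatible flag combination that needs a forward fix.
subprocess.run(["python", "setup.py", "develop"], env={**os.environ, **chosen}, check=True)
```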
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,033 | 83,081 |
[CI] Split up periodic.yml into forward-fixable.yml and periodic.yml
|
module: ci, triaged
|
We want to move toward a future where we do not revert people based on periodic failures (and instead opt for forward fixing). To get there, we should:
1. Split up periodic.yml into two parts. Both should be periodic, but we should start creating a distinction between forward-fixable periodic tests and ones that we will revert devs for. Eventually, we want to move the second portion of tests to trunk/land validation.
2. forward-fixable should thus neither be included in our reliability stats nor block viable/strict upgrades
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,034 | 83,074 |
DDP training incompatibility with checkpoint and detach
|
oncall: distributed, triaged, module: ddp
|
### π Describe the bug
I am using pytorch ddp to train my model. Turns out if I use ddp, then I can not use checkpoint or detach gradient. The incompatibility is a big problem, because these techniques are important for my use.
My model consists of two part roughly, a language model for generate representation, where weights are detached, another part of the model is trained with gradients.
the code of the language model:
```python
if exists(config.msa_bert.msa_bert_config.model_weight) and not config.msa_bert.skip_load_msa_bert:
    self.bert_model = load_pretrain(self.bert_model, config.msa_bert.msa_bert_config.model_weight)

if config.msa_bert.msa_bert_config.freeze:
    print(' frezze pretrained msa transformer')
    for param in self.bert_model.parameters():
        param.detach_()
    self.bert_model.eval()
```
Note in the other part of my model, there are recycles with detach.
```python
for i in range(n_recycle):
    msa_fea, pair_fea = self.feat_extractor(msa_fea, pair_fea)
    msa_fea, pair_fea = msa_fea.detach_(), pair_fea.detach_()
```
When using DDP, I have to turn on `find_unused_parameters=True`, otherwise an error is raised: `RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one.`
It seems that if you have a model with detached params, you have to turn this on.
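For reference, a minimal sketch of how I wrap the model (names are placeholders, not my exact code):
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

model = MyModel().to(rank)  # placeholder for the actual model and local rank
model = DDP(
    model,
    device_ids=[rank],
    find_unused_parameters=True,  # required here, otherwise the reducer error above is raised
)
```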
Here comes the problem: if I keep `find_unused_parameters=True` and enable checkpointing, an error is raised because a variable is marked ready twice.
I conjecture that during forward, those detached parameters are marked as ready because of `find_unused_parameters=True`, and somehow they get marked ready again, which causes this error.
I am wondering: in what cases would a param be marked as ready again?
And what does it mean for a param to be marked as ready? I think it has something to do with autograd and the gradient compute map.
I accidentally found a workaround: turning off the recycling (i.e., turning off detach) and checkpointing while keeping `find_unused_parameters=True` makes the DDP training work.
However, the problem is that I cannot turn them off, as they are important for efficiency. Without checkpointing, the GPU memory would explode.
### Versions
python3.8
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @ezyang
| 2 |
5,035 | 83,070 |
make_fx + aot_autograd segfaults
|
module: crash, triaged, module: fx, fx, module: functorch
|
### π Describe the bug
An example is taken from the `aot_function` docstring and I tried using `make_fx` on the callable returned by `aot_function`:
```py
import torch
from functorch.compile import aot_function
from torch.fx.experimental.proxy_tensor import make_fx
def print_compile_fn(fx_module, args):
    return fx_module

fn = lambda x : x.sin().cos()
aot_fn = aot_function(fn, print_compile_fn)

x = torch.randn(4, 5, requires_grad=True)
print(aot_fn(x))

try:
    gm = make_fx(aot_fn)(x)
    gm.graph.print_tabular()
except Exception as e:
    print(e)
    raise e
```
### Versions
Latest master.
cc @ezyang @SherlockNoMad @zou3519 @Chillee @samdow
| 1 |
5,036 | 83,064 |
Updating the LTS version of torch (1.8.2 -> 1.10.2/1.11.2?)
|
oncall: binaries, triaged
|
### π The feature, motivation and pitch
The current LTS version of torch is already more than 4 versions behind the latest release.
To maintain enterprise projects, we want to update the torch version, or at least understand when an update is planned, in order to plan our roadmaps.
### Alternatives
When is this planned? Knowing the schedule would help us plan our roadmaps.
https://discuss.pytorch.org/t/pytorch-lts-release-schedule/153282
### Additional context
A recently released version of PyTorch Lightning has dropped support for torch 1.8.2:
https://github.com/Lightning-AI/lightning/issues/14086
cc @ezyang @seemethere @malfet
| 1 |
5,037 | 83,060 |
torch.empty_strided arguments 'size' and 'stride' documentation wrong
|
module: docs, triaged
|
### π The doc issue
According to the documentation, the type of the arguments 'size' and 'stride' is a tuple. However, I found that this API also works when 'size' and 'stride' are lists.
```
import torch
results={}
arg_1 = [2,2]
arg_2 = [4,2]
arg_3 = "cpu"
results['res'] = torch.empty_strided(arg_1,arg_2,device=arg_3,)
```
### Suggest a potential alternative/fix
It would be better if the documentation could be written as follows:
size (tuple/list of python:ints) – the shape of the output tensor
stride (tuple/list of python:ints) – the strides of the output tensor
cc @svekars @holly1238
| 0 |
5,038 | 83,052 |
FSDP init can crash with shared parameters
|
high priority, triage review, oncall: distributed, triaged, module: fsdp
|
### π Describe the bug
FSDP initialization can crash when modules with shared params are wrapped separately. For example, if we wrap the linear (decoder) at https://github.com/facebookresearch/multimodal/blob/679f3596e4c44b483c68d4023b24e3c7f77292b3/torchmultimodal/modules/losses/flava.py#L138 separately from the main module and then wrap the main module with the `device_id` argument, this raises an error due to the `bias` param being shared. The `bias` param would have already been moved to GPU by the FSDP unit wrapping the linear, but the higher-level wrapper would still expect it to be on CPU, resulting in this error: https://github.com/pytorch/pytorch/blob/9e65e93c39238ec05aa7913693d7c3e4523bf257/torch/distributed/fsdp/fully_sharded_data_parallel.py#L814
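A rough sketch of the wrapping pattern that hits this (illustrative only; `build_model()` stands in for the FLAVA module whose decoder Linear shares its `bias`):
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

model = build_model()                # placeholder: module with a decoder Linear whose bias is shared
model.decoder = FSDP(model.decoder)  # inner wrap moves the shared bias to GPU
model = FSDP(model, device_id=torch.cuda.current_device())
# the outer wrap still expects the (already moved) shared bias on CPU -> error above
```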
### Versions
main
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 2 |
5,039 | 83,045 |
[JIT] Scripting modules fails for modules that contain nested NamedTuples
|
oncall: jit
|
### π Describe the bug
When scripting a module that contains a nested NamedTuple instance variable, scripting fails.
Repro:
```python
import torch
from typing import NamedTuple, List
class AA(NamedTuple):
    a: torch.Tensor

class BB(NamedTuple):
    a: AA

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.x = BB(AA(torch.rand(2, 2)))

    def forward(self, input: BB) -> torch.Tensor:
        return self.x.a.a

torch.jit.script(MyModule())
'''
Traceback (most recent call last):
File "/data/users/dberard/scripts/oncall/jackie.py", line 19, in <module>
torch.jit.script(MyModule())
File "/data/users/dberard/pytorch/torch/jit/_script.py", line 1286, in script
return torch.jit._recursive.create_script_module(
File "/data/users/dberard/pytorch/torch/jit/_recursive.py", line 476, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/data/users/dberard/pytorch/torch/jit/_recursive.py", line 542, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/data/users/dberard/pytorch/torch/jit/_recursive.py", line 393, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
'Tuple[Tuple[Tensor]]' object has no attribute or method 'a'.:
File "/path/to/repro.py", line 17
def forward(self, input: BB) -> torch.Tensor:
return self.x.a.a
~~~~~~~~ <--- HERE
'''
```
### Versions
master branch, `e3dd4242657232d4b404465f2df848050cd7f088`
| 2 |
5,040 | 83,032 |
Support for CSR Tensor with NN layers
|
module: sparse, module: nn, triaged
|
### π Describe the bug
When I try to pass a CSR tensor to a forward pass for a NN it outputs NaN.
Here is the NN:
```
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv  # assumption: GCNConv comes from PyTorch Geometric

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = GCNConv(292, 8)
        self.conv2 = GCNConv(8, 7)

    def forward(self, data):
        x, edge_index = data.x.to_dense(), data.edge_index
        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)
```
Are there any specific reasons why CSR tensors are not supported yet?
### Versions
PyTorch 1.10
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 5 |
5,041 | 83,024 |
New PR template suggests a pattern that does not close PR
|
triaged
|
### π Describe the bug
https://github.com/pytorch/pytorch/pull/81991 introduced a new template, which makes it harder for new contributors to mark PRs as fixing a particular issue.
Also, it is more verbose when ignored; perhaps we should come up with a strategy to skip it if it is not filled in, otherwise committed PR descriptions look as follows (from https://github.com/pytorch/pytorch/commit/8d1ff9fc5dc70bdc65a83748c01cddf187728452):
```
### Description
<!-- What did you change and why was it needed? -->
### Issue
<!-- Link to Issue ticket or RFP -->
### Testing
<!-- How did you test your change? -->
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82505
Approved by: https://github.com/razarmehr, https://github.com/albanD
```
### Versions
CI
| 4 |
5,042 | 83,020 |
'Wav2Vec2ForCTC' object has no attribute 'conv'
|
oncall: quantization, triaged
|
### π Describe the bug
Hi there.
I run my code on Colab.
I want to statically quantize my Wav2Vec model.
Before that I tried dynamic quantization, but it was not useful: it did not speed up inference time and was unfortunately even slower than the regular model.
But I got this error:
`'Wav2Vec2ForCTC' object has no attribute 'conv'`
here is my code:
```
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
input_values = tokenizer(audio, return_tensors = "pt").input_values
```
Quantize snippet:
```
model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fp32_fused = torch.quantization.fuse_modules(model, [['conv', 'relu']],inplace=True)
model_fp32_prepared = torch.quantization.prepare(model_fp32_fused)
model_fp32_prepared(input_values)
model_int8 = torch.quantization.convert(model_fp32_prepared)
res = model_int8(input_values)
```
and stacktrace:
```
return modules[name]
1207 raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1208 type(self).__name__, name))
1209
1210 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'Wav2Vec2ForCTC' object has no attribute 'conv'
```
### Versions
```
PyTorch version: 1.12.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.0+cu113
[pip3] torchaudio==0.12.0+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0+cu113
[conda] Could not collect
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 1 |
5,043 | 83,019 |
TestCommon.test_dtypes error message is confusing
|
triaged, module: testing
|
### π The feature, motivation and pitch
Here's what I think test_dtypes is doing:
- for each dtype, check if `op(*args, **kwargs)` works
- if it works, then add the dtype to a list of "acceptable" dtypes
- if it fails, then don't add the dtype to a list of acceptable dtypes.
test_dtypes suppresses the error messages of those failures. This is confusing, because the error message suggests that the supported dtypes are wrong, but something else (e.g. a change to torch.testing) could be responsible:
"AssertionError: The supported dtypes for _refs.broadcast_shapes on device type cpu are incorrect!
The following dtypes did not work in forward but are listed by the OpInfo: {torch.float32}." Furthermore, it makes it more difficult to debug because there is no exception to do a backtrace from.
I don't know if it would make the UX worse, but could we include the error messages of failures in the error message?
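Something along these lines, perhaps (purely illustrative sketch, not the real test_dtypes code; `dtypes_to_try`, `make_sample`, and `claimed_dtypes` are made-up names):
```python
errors = {}
supported = set()
for dtype in dtypes_to_try:
    try:
        op(*make_sample(dtype))
        supported.add(dtype)
    except Exception as e:
        errors[dtype] = f"{type(e).__name__}: {e}"

details = "\n".join(f"  {dt}: {msg}" for dt, msg in errors.items())
assert supported == claimed_dtypes, (
    "The supported dtypes for this op are incorrect!\n"
    f"Failures in forward:\n{details}"
)
```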
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,044 | 83,015 |
Incorrect tensor conversion to m1 MPS.
|
triaged, module: mps
|
### π Describe the bug
When converting float64 tensors to float32 tensors on the M1 GPU (MPS), unpredictable errors occur.
As discussed in thread #82707 with @philipturner and @kulinseth
Here is the code to replicate:
```python
import numpy as np
from torch import tensor
import torch
print('numpy', np.__version__)
print('pytorch', torch.__version__)
device = torch.device("mps")
list1 = np.array([[0.0201, 0.0185, 0.0181, 0.0185, 0.0196, 0.0215, 0.0246, 0.0273, 0.0274,
0.0252, 0.0212, 0.0179, 0.0167, 0.0164, 0.0168, 0.0188, 0.0216, 0.0237,
0.0260, 0.0284, 0.0331, 0.0389, 0.0445, 0.0494, 0.0508, 0.0449, 0.0341,
0.0282, 0.0299, 0.0373, 0.0462, 0.0552, 0.0621, 0.0649, 0.0649, 0.0652,
0.0692, 0.0742, 0.0725, 0.0671, 0.0590, 0.0530, 0.0503, 0.0543, 0.0609,
0.0615, 0.0509, 0.0394, 0.0312, 0.0279, 0.0240, 0.0248, 0.0276, 0.0312,
0.0341, 0.0359, 0.0379, 0.0391, 0.0411, 0.0441, 0.0473, 0.0492, 0.0480,
0.0465],
[0.1648, 0.1620, 0.1533, 0.1466, 0.1445, 0.1462, 0.1505, 0.1573, 0.1576,
0.1514, 0.1417, 0.1325, 0.1296, 0.1290, 0.1285, 0.1242, 0.1220, 0.1227,
0.1244, 0.1254, 0.1266, 0.1319, 0.1366, 0.1380, 0.1338, 0.1263, 0.1234,
0.1246, 0.1262, 0.1224, 0.1117, 0.0965, 0.0872, 0.0852, 0.0914, 0.0982,
0.1021, 0.1045, 0.1106, 0.1168, 0.1230, 0.1246, 0.1247, 0.1238, 0.1233,
0.1240, 0.1258, 0.1252, 0.1241, 0.1235, 0.1229, 0.1225, 0.1224, 0.1241,
0.1342, 0.1427, 0.1462, 0.1418, 0.1322, 0.1239, 0.1132, 0.1103, 0.1116,
0.1172]])
list2 = np.array([[0.0523, 0.0481, 0.0444, 0.0415, 0.0392, 0.0378, 0.0370, 0.0368, 0.0387,
0.0430, 0.0493, 0.0561, 0.0612, 0.0639, 0.0645, 0.0637],
[0.1189, 0.1251, 0.1285, 0.1287, 0.1257, 0.1213, 0.1181, 0.1152, 0.1141,
0.1135, 0.1130, 0.1105, 0.1073, 0.1035, 0.0985, 0.0967]])
list3 = np.array([[-1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1.,
-1., -1.],
[-1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1.,
-1., -1.]])
input1 = tensor(list1)
input2 = tensor(list2)
input3 = tensor(list3)
gpu_list = []
for i in range(100):
    gpu1 = input1.float().to(device=device, non_blocking=True)
    gpu2 = input2.float().to(device=device, non_blocking=True)
    gpu3 = input3.float().to(device=device, non_blocking=True)
    if len(gpu_list) > 0:
        print(gpu1 == gpu_list[0][0])
        print(gpu2 == gpu_list[0][1])
        print(gpu3 == gpu_list[0][2])
    gpu_list.append([gpu1, gpu2, gpu3])
    print(gpu1)
    print(gpu2)
    print(gpu3)
```
The beginning of output showing the problem:
```
numpy 1.22.3
pytorch 1.12.1
/opt/anaconda3/envs/multi-modal-m1/lib/python3.8/site-packages/torch/_tensor_str.py:103: UserWarning: The operator 'aten::bitwise_and.Tensor_out' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1659484780698/work/aten/src/ATen/mps/MPSFallback.mm:11.)
nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
tensor([[ 1.1755e-38, 0.0000e+00, 2.8026e-45, 0.0000e+00, 7.0242e-38,
0.0000e+00, 0.0000e+00, 0.0000e+00, 7.1746e-43, 2.9427e-44,
5.1088e-03, 1.4013e-45, 0.0000e+00, 0.0000e+00, 1.6630e+13,
1.4013e-45, 0.0000e+00, 0.0000e+00, 1.3245e-37, 3.4438e-41,
8.7460e-36, 1.4013e-45, 7.1746e-43, 0.0000e+00, 7.6231e-43,
0.0000e+00, 2.8026e-45, 0.0000e+00, 0.0000e+00, 0.0000e+00,
9.6122e-41, 0.0000e+00, 2.8026e-45, 0.0000e+00, 0.0000e+00,
0.0000e+00, 9.9773e-37, 3.4438e-41, 3.8582e-32, 1.4013e-45,
7.1746e-43, 0.0000e+00, 3.5873e-43, 0.0000e+00, 1.4013e-45,
0.0000e+00, 0.0000e+00, 0.0000e+00, 2.0781e-32, 1.4013e-45,
1.0331e-22, 1.4013e-45, 0.0000e+00, 0.0000e+00, 5.1088e-03,
1.4013e-45, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 3.5873e-43, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 7.1067e-29, -2.0000e+00,
2.7697e-28, 0.0000e+00, 1.2891e-01, 1.2900e-01, 1.2850e-01,
1.2420e-01, 1.2200e-01, 1.2270e-01, 1.2440e-01, 1.2540e-01,
1.2660e-01, 1.3190e-01, 1.3660e-01, 1.3800e-01, 1.3380e-01,
1.2630e-01, 1.2340e-01, 1.2460e-01, 1.2620e-01, 1.2240e-01,
1.1170e-01, 9.6500e-02, 8.7200e-02, 8.5200e-02, 9.1400e-02,
9.8200e-02, 1.0210e-01, 1.0450e-01, 1.1060e-01, 1.1680e-01,
1.2300e-01, 1.2460e-01, 1.2470e-01, 1.2380e-01, 1.2330e-01,
1.2400e-01, 1.2580e-01, 1.2520e-01, 1.2410e-01, 1.2350e-01,
1.2290e-01, 1.2250e-01, 1.2240e-01, 1.2410e-01, 1.3420e-01,
1.4270e-01, 1.4620e-01, 1.4180e-01, 1.3220e-01, 1.2390e-01,
1.1320e-01, 1.1030e-01, 1.1160e-01, 1.2880e-39]], device='mps:0')
tensor([[0.0522, 0.0481, 0.0444, 0.0415, 0.0392, 0.0378, 0.0370, 0.0368, 0.0387,
0.0430, 0.0493, 0.0561, 0.0612, 0.0639, 0.0645, 0.0637],
[0.1189, 0.1251, 0.1285, 0.1287, 0.1257, 0.1213, 0.1181, 0.1152, 0.1141,
0.1135, 0.1130, 0.1105, 0.1073, 0.1035, 0.0985, 0.0967]],
device='mps:0')
tensor([[-1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1.,
-1., -1.],
[-1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1.,
-1., -1.]], device='mps:0')
tensor([[False, True, True, True, False, True, True, True, False, True,
False, True, False, True, False, True, True, True, False, True,
False, True, False, True, False, True, False, True, True, True,
True, True, True, True, True, True, False, True, False, True,
True, True, True, True, True, True, True, True, False, True,
False, True, False, True, False, True, True, True, False, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, False, False,
False, True, False, False, False, False, False, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, False]], device='mps:0')
tensor([[False, False, False, False, False, True, True, True, True, True,
True, True, True, True, True, True],
[ True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True]], device='mps:0')
tensor([[True, True, True, True, True, True, True, True, True, True, True, True,
True, True, True, True],
[True, True, True, True, True, True, True, True, True, True, True, True,
True, True, True, True]], device='mps:0')
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.3)
CMake version: version 3.23.3
Libc version: N/A
Python version: 3.8.13 (default, Mar 28 2022, 06:13:39) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] numpy 1.22.3 py38h25ab29e_0
[conda] numpy-base 1.22.3 py38h974a1f5_0
[conda] pytorch 1.12.1 py3.8_0 pytorch
[conda] torchaudio 0.12.1 py38_cpu pytorch
[conda] torchvision 0.13.1 py38_cpu pytorch
cc @kulinseth @albanD
| 12 |
5,045 | 82,997 |
Implement refs.var as a real reference
|
triaged, open source, cla signed, module: primTorch, no-stale
|
### Description
This PR removes the use of `prims.var` in the implementation of the `var` reference because there's no need for `var` to be a primitive.
### Testing
No new tests are needed.
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha @peterbell10
| 12 |
5,046 | 82,960 |
torch.bitwise_xor argument 'other' documentation wrong
|
module: docs, triaged
|
### π The doc issue
According to the documentation, the type of the arguments ('input' and 'other') is an integral or boolean tensor. That is to say, both 'input' and 'other' must be tensors. However, I found that this API also works when the parameter 'other' is a bool or a number.
```
import torch

# 'other' as a Python bool
arg_1 = torch.randint(0, 2, [3], dtype=torch.bool)
arg_2 = True
res = torch.bitwise_xor(arg_1, arg_2)

# 'other' as a Python int
arg_1 = torch.randint(-512, 1024, [3], dtype=torch.int64)
arg_2 = 10
res = torch.bitwise_xor(arg_1, arg_2)
```
In the code above, the parameter 'other' works fine with bool and int values.
### Suggest a potential alternative/fix
It would be better if the documentation could be written as: other (Tensor, bool, or int) – the second input.
cc @svekars @holly1238
| 0 |
5,047 | 82,951 |
torch.profiler's FLOPs measure only counts operations involving '+' and '*' .
|
oncall: profiler
|
### π Describe the bug
(1) c = a - b
(2) c = a + (-b)
Two operations shown above are mathematically identical.
However, torch.profiler does not count the FLOPs of operation (1).
```python
import torch
from torch.profiler import profile
def flops(a, b, op):
    with profile(
            activities = [torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
            with_flops = True) as subtraction:
        if op == '-':
            c = a - b
        elif op == '+':
            c = a + (-b)
        else:
            raise NotImplementedError
    subtraction_events = subtraction.events()
    subtraction_flops = sum([int(evt.flops) for evt in subtraction_events])
    print(subtraction_flops)
```
Now test with two tensors with six elements each.
```python
a = torch.rand((2, 3), device='cuda')
b = torch.rand((2, 3), device='cuda')
flops(a, b, '+')
flops(a, b, '-')
```
```
6
0
```
You can easily find out that the results are different.
This also happens with other operations: **, /, and library functions such as torch.pow, torch.std_mean, etc. are not counted either.
I understand that torch.profiler gives an 'estimated' value, but I believe this goes well beyond estimation error.
### Versions
Collecting environment information...
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:58:50) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-184-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.2.152
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.57.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.10.1
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge
[conda] mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge
[conda] mkl_random 1.2.2 py39hde0f152_0 conda-forge
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.10.1 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.10.1 py39_cu113 pytorch
[conda] torchvision 0.11.2 py39_cu113 pytorch
cc @robieta @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb
| 0 |
5,048 | 93,798 |
torchinductor fallback cannot deal with an op that returns a tuple of lists of tensors
|
triaged, oncall: pt2
|
```
Traceback (most recent call last):
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/d5479947763d4841/hpc/torchrec/models/feed/benchmark/__vdd_benchmark__/vdd_benchmark#link-tree/torchinductor/graph.py", line 196, in call_function
return lowerings[target](*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/d5479947763d4841/hpc/torchrec/models/feed/benchmark/__vdd_benchmark__/vdd_benchmark#link-tree/torchinductor/lowering.py", line 139, in wrapped
return decomp_fn(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/d5479947763d4841/hpc/torchrec/models/feed/benchmark/__vdd_benchmark__/vdd_benchmark#link-tree/torchinductor/lowering.py", line 583, in handler
result = ir.FallbackKernel.create(kernel, *args)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/d5479947763d4841/hpc/torchrec/models/feed/benchmark/__vdd_benchmark__/vdd_benchmark#link-tree/torchinductor/ir.py", line 2264, in create
return [
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/d5479947763d4841/hpc/torchrec/models/feed/benchmark/__vdd_benchmark__/vdd_benchmark#link-tree/torchinductor/ir.py", line 2268, in <listcomp>
example_output[i].device,
AttributeError: 'list' object has no attribute 'device'
```
This is triggered by fbgemm.jagged_dense_dense_elementwise_add_jagged_output.default.
To repro in fbcode, check out V2 of https://www.internalfb.com/diff/D38488051
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 4 |
5,049 | 82,926 |
Slice operation on "ragged" dimension in NestedTensor
|
triaged, enhancement, module: nestedtensor
|
### π The feature, motivation and pitch
## Motivation
In preproc we often want to operate over variable-width lists, such as token ids in the text domain or sparse features in the recommendation domain; one common operation is to slice each list (e.g. keep only the first k elements). One way is to use Arrow's List type:
```python
>>> import torcharrow as ta
>>> id_list = ta.column([[0, 1, 2, 3], [4, 5, 6, 7, 8], [9, 10]])
>>> id_list
0 [0, 1, 2, 3]
1 [4, 5, 6, 7, 8]
2 [9, 10]
dtype: List(int64), length: 3, null_count: 0
>>> id_list.list.slice(stop=3)
0 [0, 1, 2]
1 [4, 5, 6]
2 [9, 10]
dtype: List(Int64(nullable=True)), length: 3, null_count: 0
```
I was thinking NestedTensor may also work well for this use case (especially when doing preproc after Tensor collate). But it looks like slice is not yet supported on the ragged dimension?
```python
>>> import torch
>>> a, b, c = torch.arange(4), torch.arange(5) + 4, torch.arange(2) + 9
>>> id_list = torch.nested_tensor([a, b, c])
>>> id_list
nested_tensor([
tensor([0, 1, 2, 3]),
tensor([4, 5, 6, 7, 8]),
tensor([9, 10])
])
>>> id_list[:, :3]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NotImplementedError: Could not run 'aten::slice.Tensor' with arguments from the 'NestedTensorCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::slice.Tensor' is only available for these backends: [CPU, CUDA, HIP, XLA, MPS, IPU, XPU, HPU, VE, Lazy, Meta, PrivateUse1, PrivateUse2, PrivateUse3, FPGA, ORT, Vulkan, Metal, QuantizedCPU, QuantizedCUDA, QuantizedHIP, QuantizedXLA, QuantizedMPS, QuantizedIPU, QuantizedXPU, QuantizedHPU, QuantizedVE, QuantizedLazy, QuantizedMeta, QuantizedPrivateUse1, QuantizedPrivateUse2, QuantizedPrivateUse3, CustomRNGKeyId, MkldnnCPU, SparseCPU, SparseCUDA, SparseHIP, SparseXLA, SparseMPS, SparseIPU, SparseXPU, SparseHPU, SparseVE, SparseLazy, SparseMeta, SparsePrivateUse1, SparsePrivateUse2, SparsePrivateUse3, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, Batched, VmapMode, PythonTLSSnapshot].
......
```
Wondering if there is any plan to support this? Thanks!
### Alternatives
_No response_
### Additional context
Variable width data is often modelled as the flattened value and the offset tensor. For the above (simplified 1D) case, one way is to model it as the following internal representation (which is the Arrow Layout, other layout variations exist, such as use the `lengths`):
```python
values=tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
offsets=tensor([ 0, 4, 9, 11]),
# Logically, represent the following variable-width data:
#
# 0 [0, 1, 2, 3]
# 1 [4, 5, 6, 7, 8]
# 2 [9, 10]
# dtype: List(int64), length: 3
```
So we kind of want to do a "batched slice" over `values` over the ranges `(0, 3), (4, 7), (9, 11)`. The ranges are roughly `offsets, offsets + 3` (which needs to be capped by the end of each list).
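A minimal sketch of that batched slice on the 1-D layout above (plain tensor ops, illustrative only, not a proposed implementation):
```python
import torch

values = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
offsets = torch.tensor([0, 4, 9, 11])
stop = 3

starts = offsets[:-1]
ends = torch.minimum(starts + stop, offsets[1:])  # cap at the end of each list
sliced = [values[s:e] for s, e in zip(starts.tolist(), ends.tolist())]
# [tensor([0, 1, 2]), tensor([4, 5, 6]), tensor([9, 10])]
```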
General n-D Tensor slice support is more complicated, but the similar idea may still work?
The request was originally posted in the NestedTensor repo: https://github.com/pytorch/nestedtensor/issues/473 . But I now realize that new feature requests about NestedTensor should be posted in the PyTorch repo.
Thanks!
cc @cpuhrsch @jbschlosser @bhosmer
| 1 |
5,050 | 82,919 |
Adding a warning of non-compatibility with forward hooks for the fast path of TransformerEncoderLayer
|
triaged, oncall: transformer/mha
|
### π The doc issue
In [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html), it would be helpful if the documentation explicitly pointed out that naive hooks tend not to work under the `fast path`, see [this discussion](https://discuss.pytorch.org/t/register-forward-hook-doesnt-work-for-nestedtensor/158374/5). The reason for such a notification is that **attention maps** are generally plotted when using Transformers, and hooking is debatably the most direct way to get attention weights.
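For context, this is the kind of naive hook I mean (minimal sketch; under the fast path the Python `forward` of `self_attn` may be bypassed, so the hook may simply never fire):
```python
import torch

layer = torch.nn.TransformerEncoderLayer(d_model=16, nhead=2, batch_first=True)
calls = []
handle = layer.self_attn.register_forward_hook(lambda mod, inp, out: calls.append(out))

layer.eval()
with torch.inference_mode():
    layer(torch.randn(2, 4, 16))
print(len(calls))  # may print 0 when the fused fast path is taken
handle.remove()
```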
### Suggest a potential alternative/fix
Add a warning notifying users that forward hooks might not be compatible with the fast path of `nn.TransformerEncoderLayer`.
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 0 |
5,051 | 82,915 |
DISABLED test_tensorboard_trace_handler (__main__.TestProfiler)
|
module: flaky-tests, skipped, oncall: profiler
|
Platforms: mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tensorboard_trace_handler&suite=TestProfiler&file=test_profiler.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7695478091).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green.
cc @robieta @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb
| 14 |
5,052 | 82,902 |
functorch slow tests not being run in slow CI
|
module: ci, module: tests, triaged, module: functorch
|
### π Describe the bug
See title
### Versions
trunk
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @zou3519 @Chillee @samdow
| 0 |
5,053 | 82,894 |
linalg and lu tests fail when run in parallel on linux cuda
|
high priority, module: cuda, module: ci, triaged, module: linear algebra
|
### π Describe the bug
When I am sshed into a CI runner for the linux-bionic-cuda11.6-py3.10-gcc7 default test config tests, running some tests in parallel in different processes (like running `python test_ops_jit.py -v -k test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 --repeat 10` in two terminals) causes the test to fail.
An incomplete list of tests that this happens on is:
- TestJitCUDA.test_variant_consistency_jit_linalg_ldl_solve_cuda_float32
- TestJitCUDA.test_variant_consistency_jit_linalg_ldl_solve_cuda_complex64
- TestGradientsCUDA.test_fn_fwgrad_bwgrad_linalg_lu_cuda_float64
- TestGradientsCUDA.test_fn_fwgrad_bwgrad_linalg_lu_factor_ex_cuda_float64
- TestGradientsCUDA.test_fn_fwgrad_bwgrad_lu_cuda_float64
- TestGradientsCUDA.test_fn_fwgrad_bwgrad_linalg_lu_factor_cuda_float64
- TestCommonCUDA.test_out_linalg_ldl_solve_cuda_float32
- TestCommonCUDA.test_out_linalg_lu_factor_cuda_float32
- TestCommonCUDA.test_dtypes_linalg_lu_cuda
- TestCommonCUDA.test_noncontiguous_samples_lu_cuda_float32
- TestCommonCUDA.test_noncontiguous_samples_linalg_lu_factor_ex_cuda_float32
- TestCommonCUDA.test_noncontiguous_samples_linalg_ldl_solve_cuda_float32
To the best of my knowledge, this is not related to memory, as running two processes of `TestJitCUDA.test_variant_consistency_jit_linalg_ldl_solve_cuda_float32` results in about 1000/7000 MB used according to nvidia-smi.
Running with CUDA_LAUNCH_BLOCKING or cuda-memcheck causes the test to pass.
As far as I know, this does not happen on cpu, windows cuda, or linux rocm.
An example of the stacktrace I get is:
```
jenkins@9894d1040e4e:~/workspace/test$ CI='' PYTORCH_TESTING_DEVICE_ONLY_FOR="cuda" /opt/conda/bin/python -bb test_ops_jit.py -v --import-slow-tests --import-disabled-tests -k TestJitCUDA.test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 --repeat 10
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 (__main__.TestJitCUDA) ... TEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
ERROR
TEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 errored - num_retries_left: 3
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1073, in assert_equal
pair.compare()
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 620, in compare
self._compare_values(actual, expected)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 721, in _compare_values
compare_fn(actual, expected, rtol=self.rtol, atol=self.atol, equal_nan=self.equal_nan)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 854, in _compare_regular_values_close
if torch.all(matches):
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1909, in wrapper
method(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1909, in wrapper
method(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 390, in instantiated_test
raise rte
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 852, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 852, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 852, in dep_fn
return fn(slf, *args, **kwargs)
[Previous line repeated 1 more time]
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 814, in test_wrapper
return test(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_ops_jit.py", line 117, in test_variant_consistency_jit
check_against_reference(self,
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_jit.py", line 92, in check_against_reference
self.assertEqual(outputs, outputs_test)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2361, in assertEqual
assert_equal(
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1080, in assert_equal
f"Comparing\n\n"
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 329, in __repr__
body = [
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 330, in <listcomp>
f" {name}={value!s},"
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 423, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor_str.py", line 591, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor_str.py", line 554, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor_str.py", line 319, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor_str.py", line 98, in __init__
tensor_view = tensor.reshape(-1)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
expected failure
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 (__main__.TestJitCUDA) ... ERROR
TEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 errored - num_retries_left: 2
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2007, in setUp
set_rng_seed(SEED)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1270, in set_rng_seed
torch.manual_seed(seed)
File "/opt/conda/lib/python3.10/site-packages/torch/random.py", line 40, in manual_seed
torch.cuda.manual_seed_all(seed)
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/random.py", line 113, in manual_seed_all
_lazy_call(cb, seed_all=True)
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 156, in _lazy_call
callable()
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/random.py", line 111, in cb
default_generator.manual_seed(seed)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
expected failure
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 (__main__.TestJitCUDA) ... ERROR
TEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 errored - num_retries_left: 1
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2007, in setUp
set_rng_seed(SEED)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1270, in set_rng_seed
torch.manual_seed(seed)
File "/opt/conda/lib/python3.10/site-packages/torch/random.py", line 40, in manual_seed
torch.cuda.manual_seed_all(seed)
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/random.py", line 113, in manual_seed_all
_lazy_call(cb, seed_all=True)
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 156, in _lazy_call
callable()
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/random.py", line 111, in cb
default_generator.manual_seed(seed)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
expected failure
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 (__main__.TestJitCUDA) ... ERROR
TEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 errored - num_retries_left: 0
======================================================================
ERROR: test_variant_consistency_jit_linalg_ldl_solve_cuda_float32 (__main__.TestJitCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2007, in setUp
set_rng_seed(SEED)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1270, in set_rng_seed
torch.manual_seed(seed)
File "/opt/conda/lib/python3.10/site-packages/torch/random.py", line 40, in manual_seed
torch.cuda.manual_seed_all(seed)
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/random.py", line 113, in manual_seed_all
_lazy_call(cb, seed_all=True)
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 156, in _lazy_call
callable()
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/random.py", line 111, in cb
default_generator.manual_seed(seed)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
----------------------------------------------------------------------
Ran 4 tests in 1.186s
FAILED (errors=1, expected failures=3)
jenkins@9894d1040e4e:~/workspace/test$
```
cc @ezyang @gchanan @zou3519 @ngimel @seemethere @malfet @pytorch/pytorch-dev-infra @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
### Versions
```
jenkins@649a39c6b611:~/workspace/torch/utils$ python collect_env.py
Collecting environment information...
PyTorch version: 1.13.0a0+gita22ba1e
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.27
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.14.252-195.483.amzn2.x86_64-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: Tesla M60
Nvidia driver version: 510.60.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.13.0a0+gita22ba1e
[pip3] torchdynamo==1.13.0.dev0
[pip3] torchvision==0.14.0a0+1a1d509
[conda] magma-cuda116 2.6.1 0 pytorch
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.21.2 py310hd8d4704_0
[conda] numpy-base 1.21.2 py310h2b8c604_0
[conda] torch 1.13.0a0+gita22ba1e pypi_0 pypi
jenkins@649a39c6b611:~/workspace/torch/utils$
```
| 14 |
5,054 | 82,886 |
CUDA graph capturing fails for nn.Embedding and large batch sizes
|
module: cuda, triaged, module: embedding, module: cuda graphs
|
### π Describe the bug
Capturing CUDA graphs fails with a somewhat unspecific error when using `nn.Embedding` (and back-propagating through it) with batch sizes larger than 3072. I assume that this is because of an internal optimization in the respective CUDA kernel, which [performs sorting if more than 3072 inputs are used](https://github.com/pytorch/pytorch/blob/1cafb1027f223f2174f842945dd337cfa0fc120e/aten/src/ATen/native/cuda/Embedding.cu#L262). The (truncated) backtrace when this error is encountered with `CUDA_LAUNCH_BLOCKING=1` looks like this:
```
[...]
File "[...]/test_graph.py", line 785, in test
loss.backward()
File "[...]/lib/python3.10/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "[...]/lib/python3.10/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: unique_by_key: failed to synchronize: cudaErrorStreamCaptureUnsupported: operation not permitted when stream is capturing
```
For reference, here's a testing script with which I determined the threshold:
```py
import torch as th
from torch import nn, optim
from bisect import bisect_left
model = nn.Embedding(5, 30).cuda()
opt = optim.Adam(model.parameters())
inp = (th.arange(0, 10000) % 5).cuda()
def test(N):
    opt.zero_grad(set_to_none=True)
    out = model(inp[:N])
    loss = out.mean()
    loss.backward()
    return None
def capture(N):
    th.cuda.synchronize()
    s = th.cuda.Stream()
    s.wait_stream(th.cuda.current_stream())
    with th.cuda.stream(s):
        for _ in range(3):
            test(N)
    th.cuda.current_stream().wait_stream(s)
    graph = th.cuda.CUDAGraph()
    with th.cuda.graph(graph):
        res = test(N)
def try_capture(N):
    print(f'capture {N}')
    try:
        capture(N)
    except:
        print(f'failed {N}')
        return 2
    print(f'ok {N}')
    return 0
thres = bisect_left(list(range(inp.shape[0])), 1, key=lambda x: try_capture(x))
print(f'>> threshold {thres}')
```
As far as I understand, the sorting optimization creates dynamically-sized tensors which are indeed [not supported in CUDA graphs](https://pytorch.org/docs/1.11/notes/cuda.html#constraints). I would see several possibilities to address this:
- The optimization could be disabled with an additional argument to `nn.Embedding()` and `F.embedding()`
- An exception could be raised if a CUDA graph capture is underway and the threshold for sorting inputs is reached (a user-side sketch of such a check is shown below).
- As a minimum, refer to this (and similar?) optimizations in the constraints section for the CUDA graph docs.
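As a user-side mitigation in the spirit of the second possibility, a capture-time guard could be called before recording the graph. This is only a sketch: it assumes `torch.cuda.is_current_stream_capturing()` is available in the installed build and that 3072 is the relevant threshold from `Embedding.cu`; the helper name is made up for illustration.
```py
import torch

SORT_THRESHOLD = 3072  # assumed threshold from the Embedding.cu sort path

def check_embedding_capture_safe(num_indices):
    # Hypothetical guard: fail early with a clear message instead of hitting the
    # dynamically-sized sort inside an active CUDA graph capture.
    if torch.cuda.is_current_stream_capturing() and num_indices > SORT_THRESHOLD:
        raise RuntimeError(
            f"embedding backward with {num_indices} indices would use a "
            "dynamically-sized sort, which is unsupported under CUDA graph capture"
        )
```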
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: Quadro GP100
GPU 1: Quadro GP100
Nvidia driver version: 470.57.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.961
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.0
[pip3] pytorch3d==0.6.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchfile==0.1.0
[pip3] torchvision==0.12.0
[pip3] torchviz==0.0.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-fft 1.3.1 pypi_0 pypi
[conda] mkl-random 1.2.2 pypi_0 pypi
[conda] mkl-service 2.4.0 pypi_0 pypi
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.0 pypi_0 pypi
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.11.0 py3.10_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.6.2 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
```
cc @ngimel @mcarilli @ezyang
| 6 |
5,055 | 82,879 |
`torch.tensor` and `torch.as_tensor` keyword argument `device` documentation wrong
|
module: docs, triaged, module: tensor creation
|
### π The doc issue
> device - the device of the constructed tensor. If None and data is a tensor then the device of data is used. If None and data is not a tensor then the result tensor is constructed on the CPU.
However, if `device` is None and data is not a tensor, the result tensor is actually constructed on the current device for the default tensor type, like `torch.empty`, `torch.zeros` and `torch.ones`.
### Suggest a potential alternative/fix
the device of the constructed tensor. Default: If `None` and data is a tensor, uses the device of data. If `None` and data is not a tensor, uses the current device for the default tensor type (see [torch.set_default_tensor_type()](https://pytorch.org/docs/stable/generated/torch.set_default_tensor_type.html#torch.set_default_tensor_type)). [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
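For reference, a small snippet illustrating the behavior the proposed wording describes (a sketch, assuming a CUDA-enabled build):
```
import torch

# With a CUDA default tensor type, non-tensor data lands on the current CUDA
# device rather than the CPU, matching the suggested doc text.
torch.set_default_tensor_type(torch.cuda.FloatTensor)
t = torch.tensor([1.0, 2.0])      # device=None and data is not a tensor
print(t.device)                   # cuda:0, not cpu
torch.set_default_tensor_type(torch.FloatTensor)  # restore the default
```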
cc @svekars @holly1238 @gchanan @mruberry
| 0 |
5,056 | 82,872 |
Unknown builtin op: torchvision::deform_conv2d
|
oncall: jit
|
### π Describe the bug
I have a model that uses torchvision.ops.DeformConv2d.
I traced this model without any errors.
However, when I try to load the traced model in C++ LibTorch with
"torch::jit::load();"
I get an error about Unknown builtin op: torchvision::deform_conv2d.
my Version:
python3.8 : torch 1.11.0-cpu
torchvision 0.12.0-cpu
c++ : libtorch 1.11.0-cpu
"
Unknown builtin op: torchvision::deform_conv2d.
Could not find any similar ops to torchvision::deform_conv2d. This op may not exist or may not be currently supported in TorchScript.
.......
Serialized File "code/__torch__/torchvision/ops/deform_conv.py", line 14
bias = self.bias
weight = self.weight
input = ops.torchvision.deform_conv2d(argument_1, weight, offset, mask, bias, 1, 1, 1, 1, 1, 1, 1, 1, True)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return input
"
### Versions
my Version:
python3.8 : torch 1.11.0-cpu
torchvision 0.12.0-cpu
c++ : libtorch 1.11.0-cpu
| 3 |
5,057 | 82,871 |
GPU arch 8.6 is not covered by the `TORCH_CUDA_ARCH_LIST = All` option
|
module: build, module: cuda, triaged
|
### π Describe the bug
Since `TORCH_CUDA_ARCH_LIST = Common` covers 8.6, it's probably a bug that 8.6 is not included in `TORCH_CUDA_ARCH_LIST = All`.
`TORCH_CUDA_ARCH_LIST = All` will use `CUDA_KNOWN_GPU_ARCHITECTURES`,
https://github.com/pytorch/pytorch/blob/bfebf254dd92f3ed35154597166e7e71fb04f31b/cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake#L192-L193
whose latest arch is `"Ampere"`,
https://github.com/pytorch/pytorch/blob/bfebf254dd92f3ed35154597166e7e71fb04f31b/cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake#L86
and `"Ampere"` adds 80 to bin/ptx only.
https://github.com/pytorch/pytorch/blob/bfebf254dd92f3ed35154597166e7e71fb04f31b/cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake#L237-L239
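A quick runtime sanity check of what a given build actually targets (a sketch; assumes a CUDA-enabled build running on an sm_86 GPU):
```
import torch

# On a TORCH_CUDA_ARCH_LIST=All build, 'sm_86' is expected to be missing from
# this list, while a TORCH_CUDA_ARCH_LIST=Common build should include it.
print(torch.cuda.get_arch_list())          # e.g. ['sm_37', ..., 'sm_80']
print(torch.cuda.get_device_capability())  # (8, 6) on an sm_86 (Ampere) GPU
```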
### Versions
TOT
cc @malfet @seemethere @ngimel
| 1 |
5,058 | 82,843 |
Tensor operation hangs when used with multiprocessing
|
module: multiprocessing, triaged, module: determinism, shadow review
|
### π Describe the bug
The bug is basically some strange interaction between Tensors and python's multiprocessing.
Minimum code:
```python
import multiprocessing as mp
import torch
def f(c): return c[None]-c[:,None]
p = mp.Pool()
print(p.apply_async(f, [torch.randn(105, 3)]).get(2).shape)
a = torch.tensor(torch.randn(476, 3).numpy().tolist())
print(a) # ------------------------ if comment out this line, then it doesn't time out, and everything works fine
p = mp.Pool()
print(p.apply_async(f, [torch.randn(104, 3)]).get(2).shape) # works
print(p.apply_async(f, [torch.randn(105, 3)]).get(2).shape) # times out
```
Output:
```
torch.Size([105, 105, 3])
tensor([[-0.5029, -0.4826, 0.7539],
[-0.3531, -1.0151, 1.6901],
[ 0.4097, -1.2270, 0.4938],
...,
[ 1.0566, 0.1112, -1.1541],
[-0.4986, -0.9533, 0.0470],
[-0.3708, 0.8196, -0.7386]])
torch.Size([104, 104, 3])
---------------------------------------------------------------------------
TimeoutError Traceback (most recent call last)
Input In [1], in <cell line: 10>()
8 p = mp.Pool()
9 print(p.apply_async(f, [torch.randn(104, 3)]).get(2).shape)
---> 10 print(p.apply_async(f, [torch.randn(105, 3)]).get(2).shape)
File ~/anaconda3/envs/torch/lib/python3.8/multiprocessing/pool.py:767, in ApplyResult.get(self, timeout)
765 self.wait(timeout)
766 if not self.ready():
--> 767 raise TimeoutError
768 if self._success:
769 return self._value
TimeoutError:
```
So, for some reason when I print out the tensor `a`, multiprocessing hangs when I do an operation with a 105x3 tensor, but does not when I do the same operation with a 104x3 tensor. However, when I don't print out the tensor `a`, multiprocessing does not hang for both 104x3 and 105x3 tensors.
This was originally observed on a JupyterLab environment, but I have tested the code on the vanilla python interpreter. Still same issue.
### Versions
Collecting environment information...
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Linux Mint 20.3 (x86_64)
GCC version: (Ubuntu 8.4.0-3ubuntu2) 8.4.0
Clang version: 8.0.1-9 (tags/RELEASE_801/final)
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.129.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] numpy-stl==2.17.1
[pip3] pytorch-sphinx-theme==0.0.19
[pip3] torch==1.10.0
[pip3] torchaudio==0.10.0
[pip3] torchvision==0.11.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.21.2 py38h20f2e39_0
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] numpy-stl 2.17.1 pypi_0 pypi
[conda] pytorch 1.10.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-sphinx-theme 0.0.19 pypi_0 pypi
[conda] torch 1.10.0 pypi_0 pypi
[conda] torchaudio 0.10.0 py38_cu113 pytorch
[conda] torchvision 0.10.0 pypi_0 pypi
cc @VitalyFedyunin @mruberry @kurtamohler @ezyang
| 5 |
5,059 | 82,831 |
Error building Pytorch 13.1 from Source on OS X 12.5
|
module: build, module: protobuf, triaged
|
### π Describe the bug
Same error with different versions of protoc:
./src/protoc --version
libprotoc 3.19.4
./src/protoc --version
libprotoc 3.21.4
```
% export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
Building wheel torch-1.13.0a0+gitec67c6a
-- Building version 1.13.0a0+gitec67c6a
cmake --build . --target install --config Release
[0/1] Re-running CMake...
-- CLANG_VERSION_STRING: Apple clang version 13.1.6 (clang-1316.0.21.2.5)
Target: x86_64-apple-darwin21.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
-- sdk version: 12.3, mps supported: ON
-- MPSGraph framework found
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/CMakeDependentOption.cmake:89 (message):
Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
CMakeLists.txt:259 (cmake_dependent_option)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/CMakeDependentOption.cmake:89 (message):
Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
CMakeLists.txt:290 (cmake_dependent_option)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could not find ccache. Consider installing ccache to speed up compilation.
-- std::exception_ptr is supported.
-- Turning off deprecation warning due to glog.
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Caffe2: Found protobuf with new-style protobuf targets.
-- Caffe2 protobuf include directory: /Users/davidlaxer/protobuf/src
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- MKL libraries: /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libmkl_intel_lp64.dylib;/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libmkl_intel_thread.dylib;/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libmkl_core.dylib;/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libiomp5.dylib;/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk/usr/lib/libpthread.tbd;/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk/usr/lib/libm.tbd
-- MKL include directory: /Users/davidlaxer/anaconda3/envs/AI-Feynman/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libiomp5.dylib
-- Brace yourself, we are building NNPACK
-- NNPACK backend is x86-64
-- Failed to find LLVM FileCheck
-- git version: v1.6.1 normalized to 1.6.1
-- Version: 1.6.1
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES -- failed to compile
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK -- success
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:61 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -Xpreprocessor -fopenmp -I/usr/local/include
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:61 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_CXX: -Xpreprocessor -fopenmp -I/usr/local/include
-- Found OpenMP: TRUE
CMake Warning at third_party/fbgemm/CMakeLists.txt:63 (message):
OpenMP found! OpenMP_C_INCLUDE_DIRS =
CMake Warning at third_party/fbgemm/CMakeLists.txt:162 (message):
==========
CMake Warning at third_party/fbgemm/CMakeLists.txt:163 (message):
CMAKE_BUILD_TYPE = Release
CMake Warning at third_party/fbgemm/CMakeLists.txt:164 (message):
CMAKE_CXX_FLAGS_DEBUG is -g
CMake Warning at third_party/fbgemm/CMakeLists.txt:165 (message):
CMAKE_CXX_FLAGS_RELEASE is -O3 -DNDEBUG
CMake Warning at third_party/fbgemm/CMakeLists.txt:166 (message):
==========
** AsmJit Summary **
ASMJIT_DIR=/Users/davidlaxer/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_TEST=FALSE
ASMJIT_TARGET_TYPE=STATIC
ASMJIT_DEPS=pthread
ASMJIT_LIBS=asmjit;pthread
ASMJIT_CFLAGS=-DASMJIT_STATIC
ASMJIT_PRIVATE_CFLAGS=-Wall;-Wextra;-Wconversion;-fno-math-errno;-fno-threadsafe-statics;-DASMJIT_STATIC
ASMJIT_PRIVATE_CFLAGS_DBG=
ASMJIT_PRIVATE_CFLAGS_REL=-O2;-fmerge-all-constants
-- Using third party subdirectory Eigen.
-- Found PythonInterp: /Users/davidlaxer/anaconda3/envs/AI-Feynman/bin/python (found suitable version "3.9.12", minimum required is "3.0")
-- Found PythonLibs: /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libpython3.9.a (found suitable version "3.9.12", minimum required is "3.0")
-- Using third_party/pybind11.
-- pybind11 include dirs: /Users/davidlaxer/pytorch/cmake/../third_party/pybind11/include
-- Adding OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/usr/local/include
-- No OpenMP library needs to be linked against
-- Found PythonInterp: /Users/davidlaxer/anaconda3/envs/AI-Feynman/bin/python (found version "3.9.12")
-- Found PythonLibs: /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libpython3.9.a (found version "3.9.12")
Generated: /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
--
-- ******** Summary ********
-- CMake version : 3.23.3
-- CMake command : /opt/local/bin/cmake
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-- C++ compiler version : 13.1.6.13160021
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/usr/local/include -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH : /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.9/site-packages;/Users/davidlaxer/anaconda3/envs/AI-Feynman
-- CMAKE_INSTALL_PREFIX : /Users/davidlaxer/pytorch/torch
-- CMAKE_MODULE_PATH : /Users/davidlaxer/pytorch/cmake/Modules
--
-- ONNX version : 1.12.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler : /Users/davidlaxer/protobuf/src/protoc
-- Protobuf includes : /Users/davidlaxer/protobuf/src
-- Protobuf libraries : /Users/davidlaxer/protobuf/src/.libs/libprotobuf.dylib
-- BUILD_ONNX_PYTHON : OFF
--
-- ******** Summary ********
-- CMake version : 3.23.3
-- CMake command : /opt/local/bin/cmake
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-- C++ compiler version : 13.1.6.13160021
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/usr/local/include -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
-- CMAKE_PREFIX_PATH : /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.9/site-packages;/Users/davidlaxer/anaconda3/envs/AI-Feynman
-- CMAKE_INSTALL_PREFIX : /Users/davidlaxer/pytorch/torch
-- CMAKE_MODULE_PATH : /Users/davidlaxer/pytorch/cmake/Modules
--
-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
--
-- Protobuf compiler : /Users/davidlaxer/protobuf/src/protoc
-- Protobuf includes : /Users/davidlaxer/protobuf/src
-- Protobuf libraries : /Users/davidlaxer/protobuf/src/.libs/libprotobuf.dylib
-- BUILD_ONNX_PYTHON : OFF
-- Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor
-- Adding -DNDEBUG to compile flags
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Found a library with LAPACK API (mkl).
disabling CUDA because NOT USE_CUDA is set
-- USE_CUDNN is set to 0. Compiling without cuDNN support
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- MKLDNN_CPU_RUNTIME = OMP
-- DNNL_TARGET_ARCH: X64
-- DNNL_LIBRARY_NAME: dnnl
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could NOT find Doxyrest (missing: DOXYREST_EXECUTABLE)
-- Found PythonInterp: /Users/davidlaxer/anaconda3/envs/AI-Feynman/bin/python (found suitable version "3.9.12", minimum required is "2.7")
-- Could NOT find Sphinx (missing: SPHINX_EXECUTABLE)
-- Enabled workload: TRAINING
-- Enabled primitives: ALL
-- Enabled primitive CPU ISA: ALL
-- Enabled primitive GPU ISA: ALL
-- Primitive cache is enabled
-- Found MKL-DNN: TRUE
-- Version: 7.0.3
-- Build type: Release
-- CXX_STANDARD: 14
-- Required features: cxx_variadic_templates
-- Using CPU-only version of Kineto
-- Configuring Kineto dependency:
-- KINETO_SOURCE_DIR = /Users/davidlaxer/pytorch/third_party/kineto/libkineto
-- KINETO_BUILD_TESTS = OFF
-- KINETO_LIBRARY_TYPE = static
-- Found PythonInterp: /Users/davidlaxer/anaconda3/envs/AI-Feynman/bin/python (found version "3.9.12")
INFO CUDA_SOURCE_DIR =
INFO ROCM_SOURCE_DIR =
INFO CUPTI unavailable or disabled - not building GPU profilers
-- Kineto: FMT_SOURCE_DIR = /Users/davidlaxer/pytorch/third_party/fmt
-- Kineto: FMT_INCLUDE_DIR = /Users/davidlaxer/pytorch/third_party/fmt/include
INFO CUPTI_INCLUDE_DIR = /extras/CUPTI/include
INFO ROCTRACER_INCLUDE_DIR = /include/roctracer
-- Configured Kineto (CPU)
-- don't use NUMA
-- headers outputs:
-- sources outputs:
-- declarations_yaml outputs:
-- Using ATen parallel backend: OMP
disabling CUDA because USE_CUDA is set false
CMake Deprecation Warning at third_party/sleef/CMakeLists.txt:91 (cmake_policy):
The OLD behavior for policy CMP0066 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
-- Found OpenMP_C: -Xpreprocessor -fopenmp -I/usr/local/include (found version "5.0")
-- Found OpenMP_CXX: -Xpreprocessor -fopenmp -I/usr/local/include (found version "5.0")
-- Found OpenMP: TRUE (found version "5.0")
-- Configuring build for SLEEF-v3.6.0
Target system: Darwin-21.6.0
Target processor: x86_64
Host system: Darwin-21.6.0
Host processor: x86_64
Detected C compiler: AppleClang @ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
CMake: 3.23.3
Make program: /opt/local/bin/ninja
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- Building static test bins: OFF
-- MPFR : /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libmpfr.dylib
-- MPFR header file in /Users/davidlaxer/anaconda3/envs/AI-Feynman/include
-- GMP : /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libgmp.dylib
-- RT :
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL : 1.1.1o
-- SDE : SDE_COMMAND-NOTFOUND
-- RUNNING_ON_TRAVIS :
-- COMPILER_SUPPORTS_OPENMP :
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: /Users/davidlaxer/pytorch/build/aten/src/ATen/core/TensorBody.h
core header install: /Users/davidlaxer/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: /Users/davidlaxer/pytorch/build/aten/src/ATen/core/enum_tag.h
CMake Warning (dev) at torch/CMakeLists.txt:467:
Syntax Warning in cmake code at column 107
Argument not separated from preceding token by whitespace.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at torch/CMakeLists.txt:467:
Syntax Warning in cmake code at column 115
Argument not separated from preceding token by whitespace.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
caffe2/CMakeLists.txt:1288 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /opt/local/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
caffe2/CMakeLists.txt:1288 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- pytorch is compiling with OpenMP.
OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/usr/local/include.
OpenMP libraries: /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libiomp5.dylib.
-- Caffe2 is compiling with OpenMP.
OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/usr/local/include.
OpenMP libraries: /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libiomp5.dylib.
-- Using lib/python3.9/site-packages as python relative installation path
CMake Warning at CMakeLists.txt:1073 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.23.3
-- CMake command : /opt/local/bin/cmake
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-- C++ compiler id : AppleClang
-- C++ compiler version : 13.1.6.13160021
-- Using ccache if found : ON
-- Found ccache : CCACHE_PROGRAM-NOTFOUND
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/usr/local/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-range-loop-analysis -Wno-pass-failed -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wconstant-conversion -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -DUSE_MPS -fno-objc-arc -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;IDEEP_USE_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-- CMAKE_PREFIX_PATH : /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.9/site-packages;/Users/davidlaxer/anaconda3/envs/AI-Feynman
-- CMAKE_INSTALL_PREFIX : /Users/davidlaxer/pytorch/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 1.13.0
-- CAFFE2_VERSION : 1.13.0
-- BUILD_CAFFE2 : OFF
-- BUILD_CAFFE2_OPS : OFF
-- BUILD_CAFFE2_MOBILE : OFF
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_TENSOREXPR_BENCHMARK: OFF
-- BUILD_NVFUSER_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : OFF
-- Protobuf compiler : /Users/davidlaxer/protobuf/src/protoc
-- Protobuf includes : /Users/davidlaxer/protobuf/src
-- Protobuf libraries : /Users/davidlaxer/protobuf/src/.libs/libprotobuf.dylib
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.9.12
-- Python executable : /Users/davidlaxer/anaconda3/envs/AI-Feynman/bin/python
-- Pythonlibs version : 3.9.12
-- Python library : /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/libpython3.9.a
-- Python includes : /Users/davidlaxer/anaconda3/envs/AI-Feynman/include/python3.9
-- Python site-packages: lib/python3.9/site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : True
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- CROSS_COMPILING_MACOSX :
-- INTERN_BUILD_MOBILE :
-- USE_BLAS : 1
-- BLAS : mkl
-- BLAS_HAS_SBGEMM :
-- USE_LAPACK : 1
-- LAPACK : mkl
-- USE_ASAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : ON
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : ON
-- USE_FFTW : OFF
-- USE_MKL : ON
-- USE_MKLDNN : ON
-- USE_MKLDNN_ACL : OFF
-- USE_MKLDNN_CBLAS : OFF
-- USE_UCC : OFF
-- USE_ITT : ON
-- USE_NCCL : OFF
-- USE_NNPACK : ON
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : ON
-- USE_PYTORCH_QNNPACK : ON
-- USE_XNNPACK : ON
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : OFF
-- USE_DEPLOY : OFF
-- Public Dependencies : caffe2::Threads;caffe2::mkl
-- Private Dependencies : pthreadpool;cpuinfo;qnnpack;pytorch_qnnpack;nnpack;XNNPACK;fbgemm;ittnotify;fp16;foxi_loader;fmt::fmt-header-only;kineto
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : ON
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/davidlaxer/pytorch/build
[1/2025] Linking CXX static library lib/libfbgemm.a
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: lib/libfbgemm.a(ExecuteKernel.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: lib/libfbgemm.a(ExecuteKernel.cc.o) has no symbols
[2/2025] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx_torch-ml.pb.cc.o
FAILED: third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx_torch-ml.pb.cc.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -DONNXIFI_ENABLE_EXT=1 -DONNX_API="__attribute__((__visibility__(\"default\")))" -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -D__STDC_FORMAT_MACROS -I/Users/davidlaxer/pytorch/cmake/../third_party/benchmark/include -I/Users/davidlaxer/pytorch/build/third_party/onnx -isystem /Users/davidlaxer/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /Users/davidlaxer/pytorch/cmake/../third_party/googletest/googletest/include -isystem /Users/davidlaxer/protobuf/src -isystem /Users/davidlaxer/anaconda3/envs/AI-Feynman/include -isystem /Users/davidlaxer/pytorch/third_party/gemmlowp -isystem /Users/davidlaxer/pytorch/third_party/neon2sse -isystem /Users/davidlaxer/pytorch/third_party/XNNPACK/include -isystem /Users/davidlaxer/pytorch/third_party/ittapi/include -isystem /Users/davidlaxer/pytorch/cmake/../third_party/eigen -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/usr/local/include -Wnon-virtual-dtor -O3 -DNDEBUG -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=10.9 -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -std=gnu++11 -MD -MT third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx_torch-ml.pb.cc.o -MF third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx_torch-ml.pb.cc.o.d -o third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx_torch-ml.pb.cc.o -c /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.cc
In file included from /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.cc:4:
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.h:17:2: error: This file was generated by an older version of protoc which is
#error This file was generated by an older version of protoc which is
^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.h:18:2: error: incompatible with your Protocol Buffer headers. Please
#error incompatible with your Protocol Buffer headers. Please
^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.h:19:2: error: regenerate this file with a newer version of protoc.
#error regenerate this file with a newer version of protoc.
^
In file included from /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.cc:4:
In file included from /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.h:26:
/usr/local/include/google/protobuf/generated_message_table_driven.h:299:25: error: no template named 'MapEntryHelper'
bool operator()(const MapEntryHelper<T>& a,
^
/usr/local/include/google/protobuf/generated_message_table_driven.h:300:25: error: no template named 'MapEntryHelper'
const MapEntryHelper<T>& b) const {
^
/usr/local/include/google/protobuf/generated_message_table_driven.h:312:11: error: no template named 'MapEntryHelper'
typedef MapEntryHelper<typename MapFieldType::EntryTypeTrait> Entry;
^
/usr/local/include/google/protobuf/generated_message_table_driven.h:325:38: error: member reference base type 'Entry' (aka 'int') is not a structure or union
output->WriteVarint32(map_entry._cached_size_);
~~~~~~~~~^~~~~~~~~~~~~~
/usr/local/include/google/protobuf/generated_message_table_driven.h:338:33: error: member reference base type 'std::__vector_base<int, std::allocator<int>>::value_type' (aka 'int') is not a structure or union
output->WriteVarint32(v[i]._cached_size_);
~~~~^~~~~~~~~~~~~~
In file included from /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.cc:4:
In file included from /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.h:34:
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:17:2: error: This file was generated by an older version of protoc which is
#error This file was generated by an older version of protoc which is
^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:18:2: error: incompatible with your Protocol Buffer headers. Please
#error incompatible with your Protocol Buffer headers. Please
^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:19:2: error: regenerate this file with a newer version of protoc.
#error regenerate this file with a newer version of protoc.
^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5811:63: error: no member named 'EmptyDefault' in 'google::protobuf::internal::ArenaStringPtr'
name_.Set(::PROTOBUF_NAMESPACE_ID::internal::ArenaStringPtr::EmptyDefault{}, static_cast<ArgT0 &&>(arg0), args..., GetArenaForAllocation());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5824:64: error: no member named 'EmptyDefault' in 'google::protobuf::internal::ArenaStringPtr'
name_.Set(::PROTOBUF_NAMESPACE_ID::internal::ArenaStringPtr::EmptyDefault{}, value, GetArenaForAllocation());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5828:75: error: no member named 'EmptyDefault' in 'google::protobuf::internal::ArenaStringPtr'
return name_.Mutable(::PROTOBUF_NAMESPACE_ID::internal::ArenaStringPtr::EmptyDefault{}, GetArenaForAllocation());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5836:19: error: no member named 'ReleaseNonDefault' in 'google::protobuf::internal::ArenaStringPtr'
auto* p = name_.ReleaseNonDefault(&::PROTOBUF_NAMESPACE_ID::internal::GetEmptyStringAlreadyInited(), GetArenaForAllocation());
~~~~~ ^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5851:7: error: too many arguments to function call, expected 2, have 3
GetArenaForAllocation());
^~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/google/protobuf/arenastring.h:316:8: note: 'SetAllocated' declared here
void SetAllocated(std::string* value, Arena* arena);
^
In file included from /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.cc:4:
In file included from /Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.pb.h:34:
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5880:72: error: no member named 'EmptyDefault' in 'google::protobuf::internal::ArenaStringPtr'
ref_attr_name_.Set(::PROTOBUF_NAMESPACE_ID::internal::ArenaStringPtr::EmptyDefault{}, static_cast<ArgT0 &&>(arg0), args..., GetArenaForAllocation());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5893:73: error: no member named 'EmptyDefault' in 'google::protobuf::internal::ArenaStringPtr'
ref_attr_name_.Set(::PROTOBUF_NAMESPACE_ID::internal::ArenaStringPtr::EmptyDefault{}, value, GetArenaForAllocation());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
/Users/davidlaxer/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:5897:84: error: no member named 'EmptyDefault' in 'google::protobuf::internal::ArenaStringPtr'
return ref_attr_name_.Mutable(::PROTOBUF_NAMESPACE_ID::internal::ArenaStringPtr::EmptyDefault{}, GetArenaForAllocation());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
```
### Versions
% python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.23.3
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:36:29) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.13.0a0+gitf4ee374
[pip3] torchvision==0.14.0a0+e75a333
[conda] blas 1.0 mkl anaconda
[conda] mkl 2021.4.0 hecd8cb5_637 anaconda
[conda] mkl-include 2022.0.0 hecd8cb5_105 anaconda
[conda] mkl-service 2.4.0 py39h9ed2024_0 anaconda
[conda] mkl_fft 1.3.1 py39h4ab4a9b_0 anaconda
[conda] mkl_random 1.2.2 py39hb2f4e1b_0 anaconda
[conda] numpy 1.22.3 py39h2e5f0a9_0 anaconda
[conda] numpy-base 1.22.3 py39h3b1a694_0 anaconda
[conda] pytorch 1.12.0 py3.9_0 pytorch
[conda] torch 1.13.0a0+gitf4ee374 pypi_0 pypi
[conda] torchvision 0.14.0a0+e75a333 pypi_0 pypi
(AI-Feynman) davidlaxer@x86_64-apple-darwin13 pytorch %
Protobuf configuration settings:
<img width="563" alt="Screen Shot 2022-08-04 at 11 05 22 AM" src="https://user-images.githubusercontent.com/3105499/182920511-4d3f9358-88b3-478d-a8a4-4a6b2e289148.png">
<img width="562" alt="Screen Shot 2022-08-04 at 12 04 35 PM" src="https://user-images.githubusercontent.com/3105499/182932957-37901b38-7ef3-43f8-bb33-c0880d6a9b3b.png">
```
% ls -l /Users/davidlaxer/protobuf/src/.libs/libprotobuf.dylib
lrwxr-xr-x 1 davidlaxer staff 20 Aug 3 20:18 /Users/davidlaxer/protobuf/src/.libs/libprotobuf.dylib -> libprotobuf.30.dylib
```
cc @malfet @seemethere
| 5 |
5,060 | 82,823 |
getDLContext in DLConvertor.h cannot be found
|
triaged, module: dlpack
|
https://github.com/pytorch/pytorch/blob/67ece03c8cd632cce9523cd96efde6f2d1cc8121/aten/src/ATen/DLConvertor.h#L17
is not consistent with the definition in DLConvertor.cpp
https://github.com/pytorch/pytorch/blob/67ece03c8cd632cce9523cd96efde6f2d1cc8121/aten/src/ATen/DLConvertor.cpp#L71
| 3 |
5,061 | 82,813 |
functionalize and make_fx are not composable resulting in segfault and cuda error
|
module: crash, triaged, module: fx, fx, module: functorch
|
### π Describe the bug
This snippet segfaults with `device="cpu"` and gives a CUDA error with cuda device input.
```py
import torch
from functorch import make_fx
from functorch.experimental import functionalize
a = torch.randn(3, 3, device="cuda")
def fn(a):
    result = torch.empty_like(a)
    result.copy_(a)
    return result
try:
    functionalize(make_fx(fn))(a)
except Exception as e:
    print(e)
    print("functionalize failed")
```
```py
CUDA error: invalid argument
functionalize failed
```
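For comparison, composing in the other order (tracing the functionalized callable rather than functionalizing an already-traced module) is shown below as a sketch; that this order avoids the failure is an assumption based on the usual usage pattern, not something verified here:
```py
import torch
from functorch import make_fx
from functorch.experimental import functionalize

a = torch.randn(3, 3)

def fn(a):
    result = torch.empty_like(a)
    result.copy_(a)
    return result

# Reverse composition: functionalize first, then trace.
gm = make_fx(functionalize(fn))(a)
print(gm.code)
```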
### Versions
Latest master.
cc @ezyang @SherlockNoMad @zou3519 @Chillee @samdow @jjsjann123
| 4 |
5,062 | 82,802 |
[ROCm] build instruction is haphazard missing information unclear, build does not work
|
module: docs, module: rocm, triaged
|
### π Describe the bug
STARTED FROM https://pytorch.org/get-started/locally/ --> this page has a "building from source" link, but ironically the link brings you to "windows-from-source". I am not sure what that is supposed to mean; whoever set up that URL made it confusing. Nevertheless, I decided to go there:
https://pytorch.org/get-started/locally/#windows-from-source
This page does not contain much and then points to another one:
https://github.com/pytorch/pytorch#from-source
At this point it is unclear to me why I have to go through three different pages to get the build instructions:
This main page appears to hold the main instructions for building PyTorch, but it is riddled with errors and haphazardly arranged information covering several different ways of building, and none of them work.
https://github.com/pytorch/pytorch#from-source
After satisfying all requirements, including Anaconda, an upgraded Python and a bunch of libraries (some of them specified on the page and some not; the latter I had to find through gruelling hours of struggle), the build still fails.
Anaconda3-2021.11-Linux-x86_64.sh (installed this anaconda)
Python 3.9.10
The following steps run OK for a Radeon MI100:
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive --jobs 0
python tools/amd_build/build_amd.py
But the following never works:
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install
There is an error about a missing Makefile.
POOR DOCUMENTATION!!
I decided to go manual with "cd pytorch ; mkdir build ; cd build ; cmake .. ; make -j16"; it gets to about 88% and then fails.
Try these steps on a freshly installed CentOS 8 Stream and I guarantee you will never be able to build it with what is in there!
I just don't know how even the basic build steps keep failing.
### Versions
release/1.10
cc @svekars @holly1238 @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
| 5 |
5,063 | 82,793 |
Profiling results on CPU is not reliable
|
module: performance, triaged
|
### π Describe the bug
Same as https://github.com/pytorch/kineto/issues/325: I used the torch profiler to profile the inference of DL models, but the profiling result is far from the measurement obtained by running the model directly.
For example:
effiennet_b5 on A100:
Batch size: 1 (real time scenario), multi-instance (7 instances on 7 mig, each instance on one mig)
results of single instance:
```
benchmark time: 17.839 ms.
profiler time: Self CPU time total 30.498ms, Self CUDA time total: 17.447ms, ProfilerStep* self cpu 6.493ms
```

Self CPU time total (30.498 ms) minus ProfilerStep* self CPU (6.493 ms) is still much larger than the benchmark time of 17.839 ms.
I found the exact overhead number before: 4 us per op on CUDA, and the profiler overhead on CPU seems to be very large.
This makes performance analysis difficult.
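One way to quantify the profiler's CPU-side overhead is to profile with a wait/warmup/active schedule and compare the reported totals against a plain wall-clock timing of the same steps. A minimal sketch follows; the linear model and input are stand-ins, not the EfficientNet-B5 multi-instance setup above:
```
import torch
from torch.profiler import profile, schedule, ProfilerActivity

model = torch.nn.Linear(128, 128).cuda().eval()   # placeholder model
inp = torch.randn(1, 128, device="cuda")          # placeholder input

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=3, active=5),
) as prof:
    with torch.no_grad():
        for _ in range(9):
            model(inp)
            prof.step()

print(prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=10))
```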
### Versions
pytorch 1.12
python 3.8.5
cc @VitalyFedyunin @ngimel
| 6 |
5,064 | 82,789 |
[LibTorch] the C++ api needs detailed error reports like pytorch
|
module: logging, triaged, enhancement
|
### π The feature, motivation and pitch
Basically, I am currently working with LibTorch (the C++ version of PyTorch) and have encountered a problem.
When any operation fails within the LibTorch engine, it raises a debug error as intended. However, unlike PyTorch, which reports the error in detail through log messages, the LibTorch debug window only leads up to the point of execution termination in the disassembly view of my application.
<img width="797" alt="15e9bafce8b1bd35a56abf36dd30dbd" src="https://user-images.githubusercontent.com/61119095/182743523-963e9486-3626-496f-8cd2-5389cec0ddb1.png">
Although the stack trace shows the function that caused the error in LLDB, and the annotations within the disassembled binaries indicate that LibTorch went through an error checking and reporting stage, no reason or error message is given before the application terminates.
This is extremely troublesome while debugging an application, since many functions can fail in multiple ways and there is no way to learn the reason for the failure directly from LibTorch.
Therefore, I would like PyTorch-style error reporting to also be available in LibTorch, which would be a great help for me and other C++ developers working with this engine.
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,065 | 82,785 |
UnaryUfuncInfo Sample Generation Ignores sample_kwarg function
|
high priority, triaged, module: correctness (silent), module: testing
|
### π Describe the bug
`op.sample_inputs`, as is the common pattern throughout [test_ops](https://github.com/pytorch/pytorch/blob/master/test/test_ops.py) etc., ignores the `sample_kwargs` that are passed into the `UnaryUfuncInfo` OpInfo.
As a result, no kwargs are passed in or tested.
To reproduce, run `python test/test_ops.py -k test_fake_nan_to_num_cpu_float32` and print the kwargs.
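A more direct way to observe this, sketched against the internal OpInfo database (internal APIs, so paths and names may change):
```
import torch
from torch.testing._internal.common_methods_invocations import op_db

# Print the kwargs attached to the generated samples for nan_to_num; per this
# report they come back empty even though sample_kwargs is set on the OpInfo.
op = next(o for o in op_db if o.name == "nan_to_num")
for sample in op.sample_inputs(torch.device("cpu"), torch.float32):
    print(sample.kwargs)
```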
### Versions
master
cc @ezyang @gchanan @zou3519
| 1 |
5,066 | 82,764 |
Subclass of Tensor doesn't support __format__
|
triaged, tensor subclass
|
### π Describe the bug
```__format__``` on a 0-dimension tensor works if the class is ```torch.Tensor```, but fails with a subclass. (Tripped over this when using fastai, which subclasses Tensor.)
```
from torch import Tensor, tensor
x = Tensor(tensor(4.8801))
print(x.__format__('.4f'))
class TensorSubclass(Tensor):
    pass
y = TensorSubclass(tensor(4.8801))
print(y.__format__('.4f'))
```
```
4.8801
Traceback (most recent call last):
File "main.py", line 10, in <module>
print(y.__format__('.4f'))
File ".../pytorch/torch/_tensor.py", line 842, in __format__
return handle_torch_function(Tensor.__format__, (self,), self, format_spec)
File ".../pytorch/torch/overrides.py", line 1530, in handle_torch_function
result = torch_func_method(public_api, types, args, kwargs)
File ".../pytorch/torch/_tensor.py", line 1263, in __torch_function__
ret = func(*args, **kwargs)
File ".../pytorch/torch/_tensor.py", line 845, in __format__
return object.__format__(self, format_spec)
TypeError: unsupported format string passed to TensorSubclass.__format__
```
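Two workarounds that appear to sidestep the failure (shown as a sketch, not a fix for the underlying dispatch issue): format the Python scalar, or format a plain `Tensor` view of the value:
```
import torch
from torch import Tensor, tensor

class TensorSubclass(Tensor):
    pass

y = TensorSubclass(tensor(4.8801))
print(format(y.item(), '.4f'))    # format the Python scalar
print(format(Tensor(y), '.4f'))   # format a plain Tensor instead of the subclass
```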
### Versions
PyTorch version: 1.13.0a0+gitf4ee374
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.23.3
Libc version: N/A
Python version: 3.9.12 (main, Jul 29 2022, 10:53:31) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-12.5-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.13.0a0+gitf4ee374
[pip3] torchaudio==0.13.0.dev20220801
[pip3] torchvision==0.14.0.dev20220801
[conda] Could not collect
cc @ezyang
| 0 |
5,067 | 82,762 |
Fill in a bool Tensor not supported in jit
|
oncall: jit
|
### π Describe the bug
Fill in a bool Tensor not supported in jit
```python
import torch
class Dumbo(torch.nn.Module):
def forward(self, x):
x_mask = torch.zeros(x.shape, dtype=torch.bool)
x_mask[:,:] = True
return x_mask + x
x = torch.rand(2, 3)
import io
torch.onnx.export(Dumbo(), (x, ), io.BytesIO())
```
Error message is as follows
```
RuntimeError: 0INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646755903507/work/torch/csrc/jit/ir/alias_analysis.cpp":607, please report a bug to PyTorch. We don't have an op for aten::fill_ but it isn't a special case. Argument types: Tensor, bool,
Candidates:
aten::fill_.Scalar(Tensor(a!) self, Scalar value) -> (Tensor(a!))
aten::fill_.Tensor(Tensor(a!) self, Tensor value) -> (Tensor(a!))
```
Current workaround: replace
```
x_mask = torch.zeros(x.shape, dtype=torch.bool)
x_mask[:,:] = True
```
with
```
x_mask = torch.zeros(x.shape)
x_mask[:,:] = 1
```
### Versions
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1014-azure-x86_64-with-glibc2.17
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.950
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] torch==1.13.0a0+gitf1aeea2
[pip3] torchvision==0.13.0a0+8e5844f
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.13.0a0+git6997ac7 pypi_0 pypi
[conda] torchvision 0.13.0a0+8e5844f dev_0
| 0 |
5,068 | 82,761 |
torch.Tensor.bag() should automatically implement bagging
|
triaged, enhancement
|
### π The feature, motivation and pitch
Bootstrap aggregation, or bagging, is a common sampling technique in training ML models. For a dataset D of size N, bagging will randomly sample with replacement N data points from D. These bagged datasets can then be used to train an ensemble of models which will often outperform a single model trained on unbagged data (see Breiman, 1996).
### Documentation
Tensor.bag(dim=0, n=1) should return a tensor of data points sampled randomly with replacement from the original tensor, and of the same size as that tensor.
Dim refers to the dimension along which to randomly sample.
N refers to the number of bagged datasets to return; if n>1, the tensor returned should include n independent bagged datasets of the same size as the original tensor.
Out refers to what should be returned. The default is to return the bagged values. 'idx' should return the indices that would have been bagged, and 'both' should return a named tuple of both values and indices.
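For reference, a minimal sketch of how the proposed semantics could be emulated today with existing ops; `bag` here is a free function covering only the values case (no `out='idx'`/`'both'` handling), not the proposed method:
```
import torch

def bag(t, dim=0, n=1):
    # Sample with replacement along `dim`; returns n bagged datasets, each the
    # same size as `t` along that dimension (deep copies via index_select).
    size = t.size(dim)
    idx = torch.randint(size, (n, size), device=t.device)
    bags = torch.stack([t.index_select(dim, i) for i in idx])
    return bags.squeeze(0) if n == 1 else bags

x = torch.arange(10.0)
print(bag(x))             # one bagged dataset of shape (10,)
print(bag(x, n=3).shape)  # torch.Size([3, 10])
```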
### Questions
* Should the returned Tensor be a deep copy or simply a view of the original data? Views are sufficient if the data will not be transformed in the training process. But processing after bagging (e.g. normalizing the bagged dataset) would require a deep copy. Perhaps this should be an option, or two separate methods.
* Is 'out' the correct way to handle this? What should be the default behavior, and what should the options be named?
### Alternatives
PyTorch could also implement the equivalent of numpy's np.random.choice(), which has an outstanding pull request as described in this issue: https://github.com/pytorch/pytorch/issues/16897
### Additional context
_No response_
| 0 |
5,069 | 82,756 |
Met bugs ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0
|
oncall: distributed, oncall: r2p
|
### π Describe the bug
I hit this error when running distributed training.

Package Version
------------------ ----------------
certifi 2022.6.15
charset-normalizer 2.1.0
idna 3.3
numpy 1.23.1
Pillow 9.2.0
pip 22.1.2
PyYAML 6.0
requests 2.28.1
scipy 1.9.0
setuptools 61.2.0
timm 0.4.5
tlt 0.1.0
torch 1.12.0+rocm5.1.1
torchaudio 0.12.0+rocm5.1.1
torchvision 0.13.0+rocm5.1.1
typing_extensions 4.3.0
urllib3 1.26.11
wheel 0.37.1
### Versions
Collecting environment information...
PyTorch version: 1.12.0+rocm5.1.1
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.1.20531-cacfa990
OS: Red Hat Enterprise Linux release 8.4 (Ootpa) (x86_64)
GCC version: (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
Clang version: Could not collect
CMake version: version 3.18.2
Libc version: glibc-2.28
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.18.0-305.28.1.el8_4.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.1.20531
MIOpen runtime version: 2.16.0
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.12.0+rocm5.1.1
[pip3] torchaudio==0.12.0+rocm5.1.1
[pip3] torchvision==0.13.0+rocm5.1.1
[conda] numpy 1.23.1 pypi_0 pypi
[conda] torch 1.12.0+rocm5.1.1 pypi_0 pypi
[conda] torchaudio 0.12.0+rocm5.1.1 pypi_0 pypi
[conda] torchvision 0.13.0+rocm5.1.1 pypi_0 pypi
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,070 | 82,751 |
Refactor how errors decide whether to append C++ stacktrace
|
triaged, better-engineering
|
### π The feature, motivation and pitch
Per @zdevito's comment in https://github.com/pytorch/pytorch/pull/82665/files#r936022305, we should refactor the way C++ stacktrace is appended to errors.
Currently, in https://github.com/pytorch/pytorch/blob/752579a3735ce711ccaddd8d9acff8bd6260efe0/torch/csrc/Exceptions.h, each error goes through a try/catch and the C++ stacktrace is conditioned on whether cpp stacktraces are enabled or not.
Instead, specific exceptions can have a flag that determines whether cpp stacktrace is added or not. Most errors would set this in their constructor based on the env variable, but for certain types of errors which always report cpp stacktrace, this can just be set to true and this field can be checked when reporting errors.
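A rough sketch of the shape this refactor could take (the class and function names here are illustrative assumptions, not the actual c10 error types):
```cpp
#include <cstdlib>
#include <stdexcept>
#include <string>

struct BaseError : std::runtime_error {
  bool append_cpp_stacktrace;
  explicit BaseError(const std::string& msg)
      : std::runtime_error(msg),
        // Most errors decide in their constructor, based on the env variable.
        append_cpp_stacktrace(std::getenv("TORCH_SHOW_CPP_STACKTRACES") != nullptr) {}
};

struct AlwaysTracedError : BaseError {
  explicit AlwaysTracedError(const std::string& msg) : BaseError(msg) {
    append_cpp_stacktrace = true;  // certain error types always report the C++ stacktrace
  }
};

// The reporting path only checks the flag instead of special-casing error types.
std::string format_error(const BaseError& e, const std::string& cpp_backtrace) {
  std::string out = e.what();
  if (e.append_cpp_stacktrace) {
    out += "\n" + cpp_backtrace;
  }
  return out;
}
```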
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,071 | 82,727 |
DecompositionInterpreter creates invalid graphs for FX graph modules created with torch.fx.symbolic_trace
|
triaged, module: fx, fx
|
### π Describe the bug
```py
import torch
from torch.fx.experimental.proxy_tensor import DecompositionInterpreter
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.bn = torch.nn.BatchNorm2d(3)
self.relu = torch.nn.ReLU()
def forward(self, inp):
o = self.bn(inp)
o = self.relu(o)
return o
input = torch.randn(2, 3, 4, 5)
m = Model()
gm = torch.fx.symbolic_trace(m)
graph = torch.fx.Graph()
DecompositionInterpreter(gm, graph).run(input)
print(graph)
```
```py
graph():
%inp : [#users=1] = placeholder[target=inp]
%native_batch_norm_default : [#users=3] = call_function[target=torch.ops.aten.native_batch_norm.default](args = (%inp, Parameter containing:
tensor([1., 1., 1.], requires_grad=True), Parameter containing:
tensor([0., 0., 0.], requires_grad=True), tensor([0.0085, 0.0026, 0.0262]), tensor([1.0102, 1.0085, 0.9952]), True, 0.1, 1e-05), kwargs = {})
%getitem : [#users=1] = call_function[target=operator.getitem](args = (%native_batch_norm_default, 0), kwargs = {})
%getitem_1 : [#users=0] = call_function[target=operator.getitem](args = (%native_batch_norm_default, 1), kwargs = {})
%getitem_2 : [#users=0] = call_function[target=operator.getitem](args = (%native_batch_norm_default, 2), kwargs = {})
%relu_default : [#users=2] = call_function[target=torch.ops.aten.relu.default](args = (%getitem,), kwargs = {})
%detach_default : [#users=1] = call_function[target=torch.ops.aten.detach.default](args = (%relu_default,), kwargs = {})
%detach_default_1 : [#users=0] = call_function[target=torch.ops.aten.detach.default](args = (%detach_default,), kwargs = {})
return relu_default
```
Arguments to the `native_batch_norm_default` call contain repr of parameters instead of being referenced as constants.
The source of this problem is that `DecompositionInterpreter` uses `torch.fx.proxy.GraphAppendingTracer` instead of `torch.fx.experimental.proxy_tensor.PythonKeyTracer`.
### Versions
Latest master
cc @ezyang @SherlockNoMad @davidberard98
| 0 |
5,072 | 93,797 |
torchdynamo backend failure suppression is insufficient when backend fails at runtime
|
triaged, oncall: pt2
|
see https://github.com/pytorch/torchdynamo/pull/703 specifically https://app.circleci.com/pipelines/github/pytorch/torchdynamo/623/workflows/f8662ac5-4b8f-4a18-bd44-bd3b4808e581/jobs/635
it's possible that torchbenchmark.py is toggling some other config that is preventing the failures from being suppressed, but I removed the obvious command-line flags that were unsuppressing failures.
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 7 |
5,073 | 82,725 |
Automating release process - Binary validation, Automatically generating get started page
|
module: ci, triaged
|
### Problems details
Validation takes too much time and is not automated
Binary builds presence and installation instructions are not tested against release
Get Started page generation is not automated
Lack of validation for artifacts based on the release matrix. We publish the following [release matrix] (TO DO MOVE MATRIX TO OSS), and we build, test and publish the binaries to multiple repositories, such as Conda, Wheels, PyPI, iPhone CocoaPods, and Android Maven. We publish for multiple OS versions (Linux, Windows, macOS) and multiple package versions (CUDA, Python). However, we have the following gaps, for PyTorch core and each of the domain libraries:
Smoke tests on clean environments are not implemented for all the binaries we produce (including domain libraries). To make sure a binary can be loaded properly, we should validate these binaries in an environment that does not have any dependencies preinstalled.
Existing smoke tests run in an environment that has all dependencies preinstalled, hence we may miss issues that only appear when dependencies are missing.
Binary presence and installation instructions validation for each binary. We already have a subset of this implemented when validating the [get started page](https://pytorch.org/get-started/locally/). We also implemented a script to validate conda binaries' presence. However, we don't validate all the binaries this way.
This situation leads to the following issues:
Not releasing some binaries that are targeted for release
Releasing the binaries that could not be installed
Binaries fail to install due to a repository regression
Issues: [82428](https://github.com/pytorch/pytorch/issues/82428)
Releasing the binaries that could be installed but fail to load
Issues: [74087](https://github.com/pytorch/pytorch/issues/74087) [78490](https://github.com/pytorch/pytorch/issues/78490)
### Proposal
1. Ensure each release binary can be properly installed, and executed on target environment.
Linux:
- [x] #83519
- [x] #84421
- [x] #82969
- [x] #82971
- [x] #82973
Windows:
- [x] #82977
- [x] #82978
- [x] #82980
Mac:
- [x] #83013
- [x] #83021
1.5 CUDA Older driver compatibility test:
- [x] #82913
2. Ensure we produce binary for every configuration declared in release matrix:
- [x] #82991
3. Generate a get started page automatically based on the release matrix.
- [x] #82996
4. Surface results on HUD and alerts for release failures
- [ ] #84422
5. Extend the validation tests to cover more use cases:
- [x] #85085
6. Automate release only changes that needs to happen in order to build the release
- [x] #86491
7. Fix Official Docker build for release
- [x] #87489
8. Synchronize domain builds to be executed after core build have completed
- [ ] #87501
9. Consolidate Pytorch core and Validation system framework matrixes and smoke tests
- [ ] #88686
cc @seemethere @malfet
| 4 |
5,074 | 82,724 |
cur_dim == dimINTERNAL ASSERT FAILED at
|
module: onnx, triaged, onnx-triaged
https://github.com/pytorch/pytorch/blob/8da2b204e111ad0ea42d0b029eb6851f5fd2a95f/torch/csrc/jit/passes/onnx/pattern_conversion/pattern_conversion.cpp#L133
When there are repeated aten::slice ops in the sub-block, the dim_offset is not incremented, which leads to the assertion error.
The sub-block's TorchScript IR is as follows:
```
%3865 : Float(450, 450, strides=[450, 1], requires_grad=0, device=cuda:0) = aten::select(%attention_mask, %4175, %4176)
%3866 : Float(61, 450, strides=[450, 1], requires_grad=0, device=cuda:0) = aten::slice(%3865, %4179, %4180, %1741, %4181)
%3867 : Float(61, 450, strides=[450, 1], requires_grad=0, device=cuda:0) = aten::slice(%3866, %4184, %4185, %1749, %4186)
```
| 3 |
5,075 | 82,718 |
tensor.unfold doesn't check the parameter size value, which may be less than 0.
|
module: error checking, triaged, module: edge cases
|
### π Describe the bug
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorShape.cpp#L3144
The unfold op may create a tensor with a negative dimension and fail later. A check on the size parameter should be added to this op (see the sketch after the example below).
>>> a = torch.randn((3,4,5,4))
>>> b = a.unfold(3,-1,1)
>>> b
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/projs/framework/shangang/venv/pytorch_19/lib/python3.6/site-packages/torch/_tensor.py", line 212, in __repr__
return torch._tensor_str._str(self)
File "/projs/framework/shangang/venv/pytorch_19/lib/python3.6/site-packages/torch/_tensor_str.py", line 407, in _str
return _str_intern(self)
File "/projs/framework/shangang/venv/pytorch_19/lib/python3.6/site-packages/torch/_tensor_str.py", line 382, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/projs/framework/shangang/venv/pytorch_19/lib/python3.6/site-packages/torch/_tensor_str.py", line 242, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/projs/framework/shangang/venv/pytorch_19/lib/python3.6/site-packages/torch/_tensor_str.py", line 82, in __init__
tensor_view = tensor.reshape(-1)
RuntimeError: Trying to create tensor with negative dimension -1: [3, 4, 5, 6, -1]
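A minimal sketch of the kind of check suggested above (the exact message and placement inside TensorShape.cpp are assumptions):
```cpp
// Alongside the existing dimension checks at the top of unfold:
TORCH_CHECK(size >= 0, "unfold: size must be non-negative, but got ", size);
TORCH_CHECK(step > 0, "unfold: step must be positive, but got ", step);
```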
### Versions
master
| 0 |
5,076 | 82,712 |
Tensorboard py-profiler shows no device info in Operator view
|
oncall: profiler
|
### π Describe the bug
Following the official tutorial, I use the following code to profile my model. However, no device info is collected/shown in the Operator view.
```python
with profile(
activities=[ProfilerActivity.CUDA, ProfilerActivity.CPU],
schedule=torch.profiler.schedule(
skip_first=8,
warmup=0,
wait=0,
active=2,
),
on_trace_ready=torch.profiler.tensorboard_trace_handler('./pyprofile/tf_event/'),
with_flops=True,
with_modules=True,
with_stack=True,
record_shapes=True,
) as p:
for i in range(self.config['max_iter']):
```
Here is my TensorBoard snapshot, where device duration and Tensor Core usage are all zeros and no device bar chart is shown.
<img width="1068" alt="Screenshot 2022-08-03 at 7.14.24 PM" src="https://user-images.githubusercontent.com/53320182/182594867-31e5baf4-89c4-4d4c-a6bd-d738ecf1ef31.png">
### Versions
PyTorch version: 1.12.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.28
Python version: 3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: 9.0.176
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.1
[pip3] spring==0.7.2+cu112.torch1120.mvapich2.pmi2.nartgpu.develop.b35cd03e
[pip3] torch==1.12.0+cu113
[pip3] torch-tb-profiler==0.4.0
[pip3] torchvision==0.11.3+cu113
[conda] Could not collect
cc @robieta @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb
| 1 |
5,077 | 82,710 |
build fails when using LTO with gcc
|
module: build, triaged
|
### π Describe the bug
The bug is reported to gentoo at https://bugs.gentoo.org/862672
During the build I get
/var/tmp/portage/sci-libs/caffe2-1.12.0/work/pytorch-1.12.0/aten/src/ATen/native/cpu/moments_utils.h:76: error: type of 'c_vecs' does not match original declaration [-Werror=lto-type-mismatch]
76 | static std::array<Vec, kChunkSize> c_vecs = ([]() {
|
/usr/lib/gcc/x86_64-pc-linux-gnu/11.3.0/include/g++-v11/array:95: note: type name 'std::array<at::vec::DEFAULT::Vectorized<c10::BFloat16>, 16ul>' should match type name 'std::array<at::vec::AVX2::Vectorized<c10::BFloat16>, 16ul>'
95 | struct array
|
/var/tmp/portage/sci-libs/caffe2-1.12.0/work/pytorch-1.12.0/aten/src/ATen/native/cpu/moments_utils.h:76: error: 'c_vecs' violates the C++ One Definition Rule [-Werror=odr]
76 | static std::array<Vec, kChunkSize> c_vecs = ([]() {
|
/usr/lib/gcc/x86_64-pc-linux-gnu/11.3.0/include/g++-v11/array:95: note: type name 'std::array<at::vec::DEFAULT::Vectorized<c10::BFloat16>, 16ul>' should match type name 'std::array<at::vec::AVX512::Vectorized<c10::BFloat16>, 16ul>'
95 | struct array
|
/var/tmp/portage/sci-libs/caffe2-1.12.0/work/pytorch-1.12.0/aten/src/ATen/native/cpu/moments_utils.h:76: note: 'c_vecs' was previously declared here
76 | static std::array<Vec, kChunkSize> c_vecs = ([]() {
|
/var/tmp/portage/sci-libs/caffe2-1.12.0/work/pytorch-1.12.0/aten/src/ATen/native/cpu/moments_utils.h:76: note: code may be misoptimized unless '-fno-strict-aliasing' is used
My guess is that the compiler has a problem (in LTO mode) distinguishing the various static variables c_vecs created in the different instances of the template.
### Versions
version is 1.12.0
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Gentoo Base System release 2.8 (x86_64)
GCC version: (Gentoo 11.3.0 p4) 11.3.0
Clang version: 14.0.4
CMake version: version 3.22.4
Libc version: glibc-2.34
Python version: 3.10.5 (main, Jun 29 2022, 11:04:31) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.32-gentoo-r1-x86_64-x86_64-Intel-R-_Core-TM-_i5-4570_CPU_@_3.20GHz-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[conda] Could not collect
cc @malfet @seemethere
| 0 |
5,078 | 82,687 |
Move nested-tensor tutorial from prototype
|
triaged, module: nestedtensor
|
### π The doc issue
The nested-tensor tutorial involves ops that are nightly-only for now, so it is placed under prototype/ and has the _tutorial suffix removed.
### Suggest a potential alternative/fix
Once the new PyTorch release comes out, we should move it back to beginner/ and append the _tutorial suffix.
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @mikaylagawarecki
| 1 |
5,079 | 82,684 |
SequentialLR does not work correctly with multiple ConstantLR
|
triaged, module: LrScheduler
|
In combination with multiple ConstantLR schedulers, SequentialLR should use one at a time (depending on the current epoch and the milestones), but it instead applies several at the same time. For the example below:
```
import torch.optim
from torch import nn
from torch.optim.lr_scheduler import ConstantLR
from torch.optim.lr_scheduler import SequentialLR
model = nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1.0)
scheduler1 = ConstantLR(optimizer, factor=0.5, total_iters=3)
scheduler2 = ConstantLR(optimizer, factor=0.3, total_iters=4)
scheduler = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2], milestones=[3])
for step in range(6):
scheduler.step()
print(step, scheduler.get_last_lr())
```
The output is:
0 [0.15]
1 [0.15]
2 [0.3]
3 [0.3]
4 [0.3]
5 [0.3]
While the correct output should be:
0 [0.5]
1 [0.5]
2 [0.3]
3 [0.3]
4 [0.3]
5 [0.3]
### Versions
1.12
| 0 |
5,080 | 82,677 |
RReLU doc doesn't specify the eval mode behaving just like LeakyReLU
|
module: docs, module: nn, triaged, actionable, topic: docs
|
### π The doc issue
[RReLU](https://pytorch.org/docs/stable/generated/torch.nn.RReLU.html) behaves the same as LeakyReLU when it's in eval mode, but the documentation doesn't seem to provide such information.
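A quick check illustrating the behavior described above (that the eval-mode slope is fixed at `(lower + upper) / 2` is my reading of the implementation, so treat it as an assumption):
```python
import torch

rrelu = torch.nn.RReLU(lower=0.125, upper=0.375).eval()
leaky = torch.nn.LeakyReLU(negative_slope=0.25)  # (lower + upper) / 2

x = torch.randn(5)
print(torch.allclose(rrelu(x), leaky(x)))  # True: eval-mode RReLU is deterministic
```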
### Suggest a potential alternative/fix
Adding a link or a note specifying this would be better.
cc @svekars @holly1238 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,081 | 82,669 |
unittest.subTest and way to selectively mark subTests as expected failures
|
triaged, better-engineering, module: testing
|
### π The feature, motivation and pitch
When testing if something like vmap works on an operator, we test the following things:
1. if it errors out (it shouldn't)
2. if the output of vmap matches the output of a for loop (it should)
3. if there is a batching rule implemented for the operation. We do this by running the vmap and checking if it raises any "batching rule not implemented" warnings.
We have two separate tests, test_vmap_exhaustive and test_op_has_batch_rule. The former does (1) and (2), and the latter does (1), (2), and (3) (because (1) and (2) are almost required for (3)). We could cut down the test time if we had one test and used something like unittest.subTest so that each test gets three subtests.
Furthermore, if a vmap test fails for e.g. torch.searchsorted, we have an expected failure for it. It would be nice to be able to distinguish whether it failed because of (1) or because of (2); (2) is a silent correctness issue and much higher priority to fix.
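A rough sketch of what a combined test could look like with `unittest.subTest`, using `torch.vmap` (functorch's `vmap` at the time of this issue) and a trivial op as stand-ins for the OpInfo machinery; the per-check labels are what would let expected failures target (1), (2), or (3) separately:
```python
import unittest
import warnings
import torch

class TestVmapSketch(unittest.TestCase):
    def test_sum(self):
        x = torch.randn(3, 4)
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter("always")
            with self.subTest(check="no_error"):          # (1) vmap should not error out
                out = torch.vmap(torch.sum)(x)
        with self.subTest(check="matches_for_loop"):      # (2) output matches a for loop
            expected = torch.stack([torch.sum(row) for row in x])
            torch.testing.assert_close(out, expected)
        with self.subTest(check="has_batch_rule"):        # (3) no fallback warning was raised
            self.assertFalse(any("batching rule" in str(w.message) for w in caught))

if __name__ == "__main__":
    unittest.main()
```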
### Alternatives
n/a
### Additional context
cc @jbschlosser who has worked on many testing improvements in the past
| 8 |
5,082 | 82,668 |
Schema information for torch.* operations
|
triaged, module: __torch_function__, module: testing
|
### π The feature, motivation and pitch
My main motivation for this is for in-place vmap testing. In order to do vmap in-place testing, we must, given an OpInfo, construct some sample inputs, some of which are batched or not. For example, given `Tensor.add_(x, y)`, we would need to generate x and y.
There is a problem if we generate x to be a regular Tensor and y to be the Tensor being vmapped over, because that errors out. If we knew that the first argument to `Tensor.add_` is the argument that gets mutated, then we can avoid generating the aforementioned case.
I don't know if anything else would find this useful, but just putting it out there.
### Alternatives
- Assume that the first argument of an in-place operation gets mutated :) (this is not always true)
- Try to "guess" what the aten operator that corresponds to a python torch in-place operation is. e.g. Tensor.add_ -> at::add_. This does not always work.
### Additional context
_No response_
cc @hameerabbasi @rgommers @peterbell10 @ezyang
| 2 |
5,083 | 82,660 |
in-place variants should get their own OpInfos
|
triaged, better-engineering, module: testing
|
### π The feature, motivation and pitch
Today, to write an OpInfo test to check that a subsystem (let's say vmap) works on all operations, we write something like the following:
```
@ops(op_db)
def test_vmap(self, device, dtype, op):
test_op(op)
if op.inplace_variant:
test_op(op.inplace_variant)
```
Let's say that we have correct vmap support for torch.add, but incorrect support for `Tensor.add_`. Then there is no way to specify "I want to skip this test for Tensor.add_ but have the test run for torch.add" without adding additional infrastructure.
If we made in-place variants their own OpInfos, we could rewrite the above test as the following. And, since the OpInfo for "add" would be separate from "add_", then we would be able to skip them separately from each other.
```
@ops(op_db)
def test_vmap(self, device, dtype, op):
test_op(op)
```
### Alternatives
Split test_vmap into the following:
```
@ops(op_db)
def test_vmap(self, device, dtype, op):
test_op(op)
@ops(op_db)
def test_vmap_inplace(self, device, dtype, op):
if op.inplace_variant:
test_op(op.inplace_variant)
```
but keep in mind that everyone who uses OpInfos and cares about out-of-place and in-place operations now needs to write two tests.
### Additional context
cc @mruberry @ngimel what are your thoughts?
| 6 |
5,084 | 82,635 |
[Torchscript] torch.min returns wrong gradient when inputs are equal
|
oncall: jit
|
### π Describe the bug
The same issue applies to torch.max()
Steps To Reproduce:
```py
import torch
# input
x = torch.ones([10]).requires_grad_()
y = torch.ones([10]).requires_grad_()
grad_output = torch.ones_like(x)
def minimum(x, y):
return torch.minimum(x, y) * x
def min(x, y):
return torch.min(x, y) * x
def test(func, func_script):
# we need a few iterations to trigger the fused kernel
for i in range(5):
# forward
result = func(x, y)
result_script = func_script(x, y)
# derivative
(result_grad,) = torch.autograd.grad(result, x, grad_output, create_graph=True)
(result_script_grad,) = torch.autograd.grad(result_script, x, grad_output, create_graph=True)
# check result
assert torch.allclose(result, result_script), f"results do not match:\n a: {result}\n b: {result_script}"
assert torch.allclose(result_grad, result_script_grad), f"grads do not match:\n a: {result_grad}\n b: {result_script_grad}"
minimum_script = torch.jit.script(minimum)
min_script = torch.jit.script(min)
test(minimum, minimum_script)
print("minimum pass")
test(min, min_script)
print("min pass")
```
output:
```
output:
minimum pass
Traceback (most recent call last):
File "torch_min_torchscript_grad_bug.py", line 36, in <module>
test(min, min_script)
File "torch_min_torchscript_grad_bug.py", line 28, in test
assert torch.allclose(result_grad, result_script_grad), f"grads do not match:\n a: {result_grad}\n b: {result_script_grad}"
AssertionError: grads do not match:
a: tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000,
1.5000], grad_fn=<AddBackward0>)
b: tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<AddBackward0>)
```
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 1.13.0a0+340c412
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.23.2
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 510.73.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.0a0+340c412
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.13.0a0
[pip3] torchvision==0.13.0a0
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2020.4 h726a3e6_304 conda-forge
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.13.0a0+340c412 pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.13.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
```
| 0 |
5,085 | 82,634 |
[Torchscript] some activations backward are not fused when used with linear
|
oncall: jit
|
### π Describe the bug
## Description
The backward of some activation functions (e.g. `SiLU`, `erf`) contains many kernels, and there will be more kernels if the user requires higher-order derivatives (e.g. Physics Informed Neural Networks in Modulus).
For the activation functions that have a symbolic gradient defined in [symbolic_script.cpp](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/runtime/symbolic_script.cpp), their backward is not fused when used with linear.
For example:
The backward of the following standalone `torch.erf()` could be fused.
```py
class Erf(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return torch.erf(x)
```
But when used with linear, the backward of these activation functions is not fused anymore. And the performance of the scripted module is even worse than the eager mode.
```py
linear_erf = torch.nn.Sequential(
torch.nn.Linear(512, 512),
Erf(),
torch.nn.Linear(512, 512),
Erf(),
torch.nn.Linear(512, 512),
Erf(),
torch.nn.Linear(512, 512),
)
```
After some debugging, we found this happens when the DifferentiableGraph outputs contain an alias of the inputs.
And [unmergeOutputsAlisingInputs](https://github.com/pytorch/pytorch/blob/1bbea3c3a2ed0843de1bfdd360b999ee21cee635/torch/csrc/jit/passes/utils/subgraph_utils.cpp#L435) might be the reason that torchscript decided to unfuse the DifferentiableGraph.
For more detail:
The unfused DifferentiableGraph contains the calculation of: linear_grad_input + erf_backward + linear_grad_input + erf_backward + linear_grad_input ...
This graph returns an alias of the grad_output, because it is needed to calculate the linear_grad_weight.
## Steps To Reproduce:
benchmark script: https://gist.github.com/yueyericardo/0d89a3a74c874c68a5a8729891a459a8#file-test_linear_erf-py
Sample outputs
benchmark:
```
$ python test_linear_erf.py
erf : 1.903 ms/step
erf_scripted : 0.939 ms/step
linear_erf : 16.079 ms/step
linear_erf_scripted : 16.801 ms/step # not fused
```
profile the kernels, logs: https://gist.github.com/yueyericardo/0d89a3a74c874c68a5a8729891a459a8#file-linear_erf_profile-log
```
$ python test_linear_erf.py -p
```
The command I used for JIT graph debugging:
```
PYTORCH_JIT_LOG_LEVEL=">>>profiling_graph_executor_impl:>>>create_autodiff_subgraphs:>>>subgraph_utils" python test_linear_erf.py
```
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 1.13.0a0+340c412
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.23.2
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 510.73.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.0a0+340c412
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.13.0a0
[pip3] torchvision==0.13.0a0
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2020.4 h726a3e6_304 conda-forge
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.13.0a0+340c412 pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.13.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
```
| 0 |
5,086 | 82,627 |
PyTorch crashes when running with OpenACC
|
module: crash, triaged, module: openmp, module: third_party
|
### π Describe the bug
I'm binding OpenACC code with ctypes, and it is working fine. However, just importing the torch package crashes the application.
module_c.cpp
```
#include "module_c.h"
int addvector_cab(void)
{
int i;
float a[50];
float b[50];
float c[50];
int n=50;
for( i=0; i<n; i++)
{
a[i] = 1;
b[i] = 1;
c[i] = 0;
}
printf("ENTERED C FUNCTION!\n");
if( n == 0 ){
printf("DUMMY ERROR!\n");
printf("EXITING C FUNCTION!\n");
return(1);
}
#pragma acc parallel loop present_or_copyin(a,b) present_or_copyout(c)
for(i = 0; i < n; i++){
c[i] = a[i] + b[i];
}
printf("EXITING C FUNCTION!\n");
return(0);
}
```
module_c.h :
```
#pragma once
#ifndef __MODULE_C_H_INCLUDED__
#define __MODULE_C_H_INCLUDED__
#include <iostream>
#include <string>
#include "openacc.h"
#include "stdlib.h"
extern "C" {
int addvector_cab(void);
}
#endif
```
Compiling lines:
```
nvc++ -c -std=c++11 -acc -ta=multicore -fPIC -o module_c.o module_c.cpp
nvc++ -shared -Minfo=acc -std=c++11 -mp -acc:gpu -gpu=pinned -o mylib.so module_c.o
```
bind.py :
```
import ctypes
#import torch
so_file = "./mylib.so"
my_functions = ctypes.CDLL(so_file)
my_functions.addvector_cab.restype = ctypes.c_int
if( my_functions.addvector_cab() == 0):
print("Returned OKAY!")
```
## Expected Outputs
One should expect:
```
ENTERED C FUNCTION!
EXITING C FUNCTION!
Returned OKAY!
```
However, importing PyTorch in bind.py (uncommeting line 2, nothing else changed) and running again, it returns:
```
ENTERED C FUNCTION!
libgomp: TODO
```
Not sure if it is related, but I tried a similar approach with libtorch in C++, and whenever I tried to run code with OpenACC and libtorch, the same thing happened... it just crashed and printed 'libgomp: TODO'.
What I'm trying to do behind all this is to allocate a tensor via torch, share it with CuPy via the CUDA Array Interface, and then use it in OpenACC (I'm already doing this last part without errors if I allocate memory via CuPy). But the error I'm getting is way more basic than that... just importing torch crashes.
Any help/hint/axes are appreciated. =]
EDIT: Due to space constraints, I've simplified some parts.... better documentation and example can be found here: https://github.com/estojoverde/Torch_OpenACC/blob/pytorch_openacc
### Versions
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:18) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: Tesla V100-PCIE-32GB
GPU 1: Tesla V100-PCIE-32GB
GPU 2: Tesla V100-PCIE-32GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0
[pip3] torchaudio==0.12.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl anaconda
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640 anaconda
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.22.3 py38he7a7128_0 anaconda
[conda] numpy-base 1.22.3 py38hf524024_0 anaconda
[conda] pytorch 1.12.0 py3.8_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.12.0 py38_cu116 pytorch
[conda] torchvision 0.13.0 py38_cu116 pytorch
| 6 |
5,087 | 82,616 |
FakeTensor Support For Pickling
|
triaged, module: fakeTensor
|
### π Describe the bug
See: https://github.com/pytorch/PiPPy/issues/298#issuecomment-1201790838
Needed for distributed use.
### Versions
master
| 2 |
5,088 | 82,610 |
contiguous() not work for rank 1 length 1 tensor.
|
triaged, module: dlpack
|
### π Describe the bug
When I try `torch.tensor([1.+2.j, 3.+4.j]).real.contiguous().stride()`, it returns `(1,)` as expected.
But if the length is only 1, the stride is not 1 but 2; that is to say, `torch.tensor([1.+2.j]).real.contiguous().stride()` gives `(2,)`.
Although the stride of a length-1 tensor has no effect on indexing, when I try to use mpi4py to bcast/allreduce the tensor, it throws an error because of the wrong stride.
### Versions
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 14.0.6
CMake version: version 3.23.3
Libc version: glibc-2.35
Python version: 3.10.5 (main, Jun 6 2022, 18:49:26) [GCC 12.1.0] (64-bit runtime)
Python platform: Linux-5.18.12-arch1-1-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.961
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.0
[conda] Could not collect
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 10 |
5,089 | 82,598 |
Deep copy models with `create_feature_extractor` produces different parameters
|
triage review, triaged, module: vision, oncall: fx
|
### π Describe the bug
When I define a model using the new `create_feature_extractor` feature and then deep copy the model, the parameters of the original model and the copied model are different.
Here is an example:
```python
from torchvision.models.feature_extraction import create_feature_extractor
import torch.nn as nn
import torchvision
import copy
class cusResNet18(nn.Module):
def __init__(self, n_classes, pretrained = True):
super(cusResNet18, self).__init__()
self.resnet = torchvision.models.resnet18(pretrained=pretrained)
self.resnet.fc = nn.Linear(512, n_classes)
self.avgpool = self.resnet.avgpool
self.returnkey_avg = 'avgpool'
self.returnkey_fc = 'fc'
self.body = create_feature_extractor(
self.resnet, return_nodes={'avgpool': self.returnkey_avg, 'fc': self.returnkey_fc})
def forward(self, x):
outputs = self.body(x)
return outputs[self.returnkey_fc], outputs[self.returnkey_avg].squeeze()
model = cusResNet18(n_classes=1)
copied_model = copy.deepcopy(model)
print(len(list(model.parameters())), len(list(copied_model.parameters())))
```
The output is ```62 124```.
If I print out the named_parameters, the extra entries come from the defined self.body.
```
print(dict(model.named_parameters()).keys(), dict(copied_model.named_parameters()).keys())
```
I'm wondering why there is a difference after using deepcopy, and how I can deepcopy a model that uses create_feature_extractor inside.
### Versions
torch==1.10.2
torchvision==0.11.3
cc @fmassa @vfdev-5 @pmeier @ezyang @SherlockNoMad
| 6 |
5,090 | 82,583 |
DataLoader parameter pin_memory_device should accept torch.device type
|
module: dataloader, triaged
|
### π The feature, motivation and pitch
Currently, the DataLoader class parameter pin_memory_device only accepts a device in string format. It should be possible to pass a torch.device instead of a string.
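A small illustration of the requested change (the second call is the desired behavior, not something that works today):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(8, 3))
DataLoader(ds, pin_memory=True, pin_memory_device="cuda:0")                 # accepted today
DataLoader(ds, pin_memory=True, pin_memory_device=torch.device("cuda:0"))   # requested
```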
### Alternatives
_No response_
### Additional context
_No response_
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 1 |
5,091 | 82,577 |
RFC: Add flag for RNN decomposition to all RNN modules
|
feature, module: rnn, triaged
|
**tl;dr** The basic proposal here is to add a flag to RNN (and subclasses like GRU or LSTM) where instead of running the RNN kernel, it will run the linear, dropout, etc. calls that create an equivalent decomposition. Without this, the monolithic rnn functions and buffers returned from the _cudnn_rnn function make it difficult to extend RNNs in the cases of extending RNNs, computing per sample gradients, and AOTAutograd. The proposed API adds a flag to the RNN class that determines whether or not to use the decomposed version, which will be defaulted to False in order to not incur perf penalties
## Problem
The basic problem with the RNN kernel in particular is that the cuda versions pass around buffers that are used during the backward computation. This is particularly problematic when someone wants to use a custom derivative for the RNN since CUDA doesn't have any stability guarantees for what is being passed back in the buffers. Therefore, a developer cannot even try to recompute the intermediate values and pass those to CUDA's RNN backwards function and hope to produce correct results.
## Use Cases
### RNN Experimentation
For a long time (even since [issue 1932](https://github.com/pytorch/pytorch/issues/1932)), people have been asking for ways to adapt RNNs. Some of the asks include [using layer norm](https://github.com/pytorch/pytorch/issues/7032) as the activation to [having different hidden sizes per layer](https://github.com/pytorch/pytorch/issues/55910). Although RNNs have somewhat fallen out of style with the rise of transformers, new research on them is [still hitting ICML](https://icml.cc/virtual/2021/poster/10541). Right now, everything exists in monolithic kernels (like rnn_tanh and rnn_relu) that are performant but make it difficult to understand what's happening. Although a user could write the same decomposition we plan to in Python, there's so many flags to an RNN that make it difficult to know if you've implemented the decomposition correctly. Having a deomposed Python version that we know works correctly will let users experiment with these new versions easily
### Expanded Weights and per sample gradients
These kernels are also problematic for Expanded Weights, our new system for computing per sample gradients. The mechanism behind this uses torch function and autograd.Function since we need to change the autograd behavior. In doing this, we also need to recompute the batched gradient with respect to the input. So, we would need to decide which backward to use and then pass the correct buffers if we're using _cudnn_rnn_backward. As mentioned, this won't work because NVIDIA doesn't guarantee that the values in the buffers will be consistent between versions.
To work around this, libraries that want to support RNNs while computing per sample gradients, like Opacus, have hacky solutions that we shouldn't copy upstream. Specifically, they implement RNNs as two linear layers, which gets them the correct behavior. However, in order to make it exportable, they reset the names so that it looks like it's an RNN module. More concretely, a vanilla PyTorch RNN may have a weight named "weight_hh_l0". An Opacus version of this would be decomposed into multiple Linear layers where the equivalent parameter would have the name "l0.hh.weight". In order to make their models saveable and loadable, they patch it to have the same name as the vanilla PyTorch RNN. However, we should not copy this hack upstream since it breaks mechanisms like make_stateless that assume the names of the weights follow the structure of the nn.Module.
### AOTAutograd
AOTAutograd has mentioned that they've noticed these functions show up in traces. Although they are able to support the current mechanism, having a decomposition can help support backends that don't have an RNN kernel and allow for custom optimizations for different backends.
This would also fix https://github.com/pytorch/functorch/issues/586, which is an issue that stems from LSTMs not properly forwarding the `requires_grad_`-ness of its weights through to the `_cudnn_rnn` kernel
## Proposed API
Our proposal is to add a flag to the RNN module that determines whether to use the decomposed version or the RNN kernel as before. By keeping this flag off by default, users should not see any changes from the original behavior. Users should be able to set this while building the RNN and also toggle the flag without rebuilding their RNN, similar to the training flag. Unlike the training flag, we should be able to set this on a layer-by-layer basis instead of only at a whole-model level.
`net = RNN(input_size, hidden_size, num_layers=4, use_decomposition=True)`
`net.set_decomposition_(False)`
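For concreteness, a minimal sketch of the kind of decomposition the flag would switch to, for a single-layer, unidirectional tanh RNN (multiple layers, bidirectionality, and dropout omitted; this is illustrative, not the proposed implementation):
```python
import torch
import torch.nn.functional as F

def rnn_tanh_decomposed(x, h0, w_ih, w_hh, b_ih, b_hh):
    # x: (seq_len, batch, input_size); h0: (batch, hidden_size)
    h, outputs = h0, []
    for t in range(x.size(0)):
        # The classic cell: two linear maps followed by the pointwise activation.
        h = torch.tanh(F.linear(x[t], w_ih, b_ih) + F.linear(h, w_hh, b_hh))
        outputs.append(h)
    return torch.stack(outputs), h

rnn = torch.nn.RNN(input_size=4, hidden_size=8)
x, h0 = torch.randn(5, 2, 4), torch.zeros(2, 8)
out_ref, _ = rnn(x, h0.unsqueeze(0))
out, _ = rnn_tanh_decomposed(x, h0, rnn.weight_ih_l0, rnn.weight_hh_l0,
                             rnn.bias_ih_l0, rnn.bias_hh_l0)
print(torch.allclose(out_ref, out, atol=1e-6))  # True
```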
## Perf Concerns
The decompositions written in Python will have worse perf than the custom C++ kernels. First, these decompositions will be necessary for backends that don't have a custom RNN kernel, as noted in the AOTAutograd section. Additionally, systems like Opacus that require this decomposition for their per sample gradient computation already pay this cost. So, we will not be worsening their perf metrics.
Finally, with incoming systems like torchdynamo, we can hope to recover some of this performance for backends that have custom implemented kernels. Until then, by leaving the flag as the default of `use_decomposition=False`, we should not see any performance hits for users that do not use this flag.
## Alternatives
### `__torch_dispatch__` Decomposition
One alternate would be to implement the decomposition at the torch dispatch level, like the decompositions that AOTAutograd use. Since users will still be able to use the undecomposed version, AOTAutograd will still need a torch dispatch decomposition and we will probably want to even use the same decomposition. The only issue is if we only have a torch dispatch decomposition. Since Expanded Weights needs to be at the torch function level in order to extend autograd, it won't work to have the decomposition only exist at the torch dispatch level
### Add a `__torch_function__` for RNN
Currently, RNN doesn't have a functional version nor a torch function intercept. One argument may be to add these and then have a user intercept the RNN at that level and decompose if necessary. Given how monolithic the RNN kernels are, we won't be able to decompose it much between the module's forward call and this call. So if we just added this extension point, we end up with the same issue where a user could decompose the function themselves but runs a lot of risk of implementing it incorrectly. Additionally, this is BC breaking since we're intercepting torch function calls where we weren't before.
### Additional context
_No response_
cc @zou3519
| 4 |
5,092 | 82,565 |
PyTorch for quantum mechanics
|
feature, triaged, function request, module: scientific computing
|
### π The feature, motivation and pitch
As I begin working with PyTorch to simulate and optimize quantum systems, I propose to open this issue to list features that would be helpful.
Note that the goal is **not** to do quantum machine learning (e.g. https://www.tensorflow.org/quantum), but rather to simulate quantum system time evolution (using SchrΓΆdinger equation to begin with) and to perform gradient-based optimal control.
### Alternatives
_No response_
### Additional context
Related to #71446, but it is not active and specific to MIT's library.
_Don't hesitate to close this issue if you find it inappropriate, and prefer that I post individual feature requests._
| 4 |
5,093 | 82,550 |
`torch.cat` can break `torch.jit.ScriptModule` when in inference mode
|
oncall: jit
|
### π Describe the bug
Given a `ScriptModule` that concatenates two tensors and uses the output in another op, it breaks under `inference_mode`.
Code to reproduce:
```python
import torch
class Model(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.x = torch.nn.Parameter(data=torch.tensor(0.0))
def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
return self.x * torch.cat([a, b], dim=1)
a = torch.zeros(1, 1)
b = torch.zeros(1, 1)
model = torch.jit.script(Model())
model(a, b) # succeeds
with torch.inference_mode():
model(a, b) # fails
```
Traceback:
```
self = Model(
(linear): RecursiveScriptModule(original_name=Linear)
), input = (tensor([[0.]]), tensor([[0.]])), kwargs = {}, forward_call = <torch.ScriptMethod object at 0x114de8130>
def _call_impl(self, *input, **kwargs):
forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.forward)
# If we don't have any hooks, we want to skip the rest of the logic in
# this function, and just call forward.
if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
or _global_forward_hooks or _global_forward_pre_hooks):
> return forward_call(*input, **kwargs)
E RuntimeError: The following operation failed in the TorchScript interpreter.
E Traceback of TorchScript (most recent call last):
E RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1130: RuntimeError
```
Observations:
1. Running the model without inference mode succeeds
2. Running the model in inference mode without first running it *outside* of inference mode succeeds
3. Only after running the model outside of inference mode first, does a subsequent run from within inference mode fail
4. This behavior happens when the output of `torch.cat` is used in an op with `torch.nn.Parameter`s or trainable layers (e.g. `torch.nn.Linear`), but not with constants.
5. This behavior occurs whether converting the module to a `ScriptModule` after instantiation via `torch.jit.script` or if directly subclassing `torch.jit.ScriptModule`.
6. The model fails in the same way if first traced with `torch.jit.trace`.
### Versions
```
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.23.2
Libc version: N/A
Python version: 3.8.13 (default, Apr 13 2022, 19:33:23) [Clang 13.1.6 (clang-1316.0.21.2.3)] (64-bit runtime)
Python platform: macOS-12.4-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy==0.971
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] pytorch-ranger==0.1.1
[pip3] torch==1.12.0
[pip3] torch-optimizer==0.1.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.0
[conda] Could not collect
```
| 5 |
5,094 | 82,547 |
make_fx is broken for all tracing modes
|
high priority, module: crash, triaged, module: fx, fx
|
### π Describe the bug
Running the script from https://gist.github.com/zou3519/3869d460f8bcb12799967e08a5998d9c raises an error on the `make_fx` call for `symbolic` and `real` tracing modes and segfaults for `tracing_mode="fake"`.
`tracing_mode="real"` :
```py
File ~/dev/pytorch/master/torch/fx/experimental/proxy_tensor.py:134, in proxy_call(func_overload, args, kwargs)
132 if t.constant is not None:
133 with maybe_disable_fake_tensor_mode():
--> 134 return t.constant.item()
135 raise RuntimeError("It appears that you're trying to get value out of a tracing tensor - erroring out! "
136 "It's likely that this is caused by data-dependent control flow or similar."
137 "Try torch.fx.experimental.proxy_tensor.enable_strict(False) to disable this check")
139 def unwrap_proxy(e):
AttributeError: 'list' object has no attribute 'item'
```
using `tracing_mode="symbolic"` raises another error:
```py
File ~/dev/pytorch/master/functorch/functorch/_src/vmap.py:484, in _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs)
483 def _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs):
--> 484 vmap_level = _vmap_increment_nesting(batch_size, randomness)
485 try:
486 batched_inputs = _create_batched_inputs(flat_in_dims, flat_args, vmap_level, args_spec)
TypeError: _vmap_increment_nesting(): incompatible function arguments. The following argument types are supported:
1. (arg0: int, arg1: str) -> int
Invoked with: <torch.SymIntNode object at 0x7f6275eec3f0>, 'error'
```
and `tracing_mode="fake"` segfaults.
### Versions
Latest master branch.
cc @ezyang @gchanan @zou3519 @SherlockNoMad
| 6 |
5,095 | 82,546 |
Libtorch C++ torch::stack error
|
needs reproduction, module: cpp, triaged
|
#include <iostream>
#include <torch/torch.h>
int main()
{
torch::Tensor a = torch::rand({ 1, 4 });
torch::Tensor b = torch::rand({ 1, 4 });
std::cout << " a = " << a << std::endl;
std::cout << " b = " << b << std::endl;
torch::Tensor c = torch::stack({ a, b }, 0);
std::cout << " c = " << c << std::endl;
return 0;
}

cc @jbschlosser
| 1 |
5,096 | 82,545 |
Incorrect CPU implementation of CTCLoss backward step
|
module: autograd, module: loss, triaged
|
### π Describe the bug
The ATen CTCLoss backward step seems to produce incorrect gradients. The following code snippet reproduces this issue: it computes finite differences of the output with respect to every entry of `input` and compares them with the backpropagated gradients. The results should be approximately equal.
``` python
#!/usr/bin/env python3
import torch
torch.manual_seed(0)
torch.set_printoptions(precision=10)
T, C, S = 5, 3, 4
input = torch.randn(T, C).log_softmax(1).detach().requires_grad_()
target = torch.randint(low=1, high=C, size=(S,), dtype=torch.long)
loss = torch.nn.functional.ctc_loss(
input, target, torch.tensor(T), torch.tensor(S))
loss.backward()
for i in range(T):
for j in range(C):
new_input = input.clone().detach()
new_input[i][j] += 0.01
new_loss = torch.nn.functional.ctc_loss(
new_input, target, torch.tensor(T), torch.tensor(S))
print((new_loss - loss).detach(), input.grad[i][j] * 0.01) # expected to be approximately equal
```
The actual output is shown below, and the two columns differ significantly.
```
tensor(0.) tensor(0.0021115856)
tensor(0.) tensor(0.0003372314)
tensor(-0.0024995804) tensor(-0.0024488156)
tensor(0.) tensor(0.0018777853)
tensor(-0.0024995804) tensor(-0.0021404340)
tensor(0.) tensor(0.0002626498)
tensor(0.) tensor(0.0008711107)
tensor(0.) tensor(0.0013454026)
tensor(-0.0024995804) tensor(-0.0022165121)
tensor(-0.0025000572) tensor(-0.0018093735)
tensor(0.) tensor(0.0005692409)
tensor(0.) tensor(0.0012401351)
tensor(0.) tensor(0.0002813108)
tensor(0.) tensor(0.0019916897)
tensor(-0.0024998188) tensor(-0.0022729994)
```
It seems that
https://github.com/pytorch/pytorch/blob/4bb7e148c46167cb2b0beedf4332eb6eae5b03cc/aten/src/ATen/native/LossCTC.cpp#L330
shall be corrected to
``` c++
res = -std::exp(res + nll - lp) * gr;
```
### Versions
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.18.2
Libc version: N/A
Python version: 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.12.0
[conda] Could not collect
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 8 |
5,097 | 82,542 |
Is there a doc that explains how to call an extension op in another extension's implementation?
|
module: docs, module: cpp, triaged
|
### π The doc issue
For example, there is an extension op which is installed from a public repo via `pip install torch-scatter`, and in Python code it's easy to use this extension:
```py
import torch
output = torch.ops.torch_scatter.scatter_max(x, index)
```
However, I'm writing a C++ extension and want to call this op there as well, but I cannot find any doc that explains how to do this, or whether PyTorch C++ extensions even support it. Briefly, this is what I'd like to do in the extension function:
```cpp
torch::Tensor my_op(torch::Tensor x, torch::Tensor y, torch::Tensor z) {
auto temp = torch::ops::torch_scatter::scatter_max(z, y.view(-1)); // not working
..
return temp;
}
```
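One approach that should work is to look the op up through the dispatcher by its registered name, as sketched below. The exact schema of `torch_scatter::scatter_max` is an assumption here — check the extension's registration code for the real signature — and the extension's shared library must be linked or loaded so the op is actually registered:
```cpp
#include <ATen/core/dispatch/Dispatcher.h>
#include <torch/torch.h>

torch::Tensor my_op(torch::Tensor x, torch::Tensor y, torch::Tensor z) {
  // Resolve the op by its registered name; the typed<> signature must match the
  // schema the extension registered (assumed below, verify before relying on it).
  static auto op = c10::Dispatcher::singleton()
      .findSchemaOrThrow("torch_scatter::scatter_max", /*overload_name=*/"")
      .typed<std::tuple<torch::Tensor, torch::Tensor>(
          torch::Tensor, torch::Tensor, int64_t,
          c10::optional<torch::Tensor>, c10::optional<int64_t>)>();
  auto result = op.call(z, y.view(-1), /*dim=*/-1, c10::nullopt, c10::nullopt);
  return std::get<0>(result);  // scatter_max returns (values, argmax)
}
```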
### Suggest a potential alternative/fix
_No response_
cc @svekars @holly1238 @jbschlosser
| 3 |
5,098 | 82,534 |
Use NestedTensor in RNN models
|
triaged, enhancement, module: nestedtensor
|
### π The feature, motivation and pitch
Now that NestedTensor is in core, I believe a high-impact application would be to replace many RNN utils, such as:
```
torch.nn.utils.rnn.pad_sequence
torch.nn.utils.rnn.pad_packed_sequence
torch.nn.utils.rnn.pack_sequence
```
with nested tensors. LSTM/GRUs are still heavily used in RL, where long episodes can result in a batch of sequences from 1 to 10,000 timesteps. The entire batch must be zero-padded to the length of the longest episode, which uses a ton of memory (or perhaps the RNN utils do something smarter). The RNN utils also pass around list of indices and sort the outputs, which the user must unsort. Furthermore, the above methods are not flexible -- they do not allow a sequence of zero length, for example.
It would be fantastic to stack a bunch of episodes/rollouts into a ragged NestedTensor and pass this straight into an LSTM. I believe all that's needed are sigmoid, tanh, and linear operations which I believe NestedTensor already supports.
I don't think you would have to worry about backwards compatibility. You can check `isinstance(lstm_input, torch.NestedTensor)` and branch from there.
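A small sketch of what this could look like from the user side (the `torch.nested` spelling follows recent nightlies, and the last line is the requested feature, not something that works today):
```python
import torch
from torch.nn.utils.rnn import pad_sequence

episodes = [torch.randn(t, 8) for t in (1, 50, 12)]   # variable-length rollouts
padded = pad_sequence(episodes)                       # today: zero-pad to the longest episode
nt = torch.nested.nested_tensor(episodes)             # ragged batch, no padding

lstm = torch.nn.LSTM(input_size=8, hidden_size=16)
# requested: out, (h, c) = lstm(nt)   # accept the NestedTensor directly
```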
### Alternatives
_No response_
### Additional context
cc @cpuhrsch @jbschlosser @bhosmer @drisspg
| 7 |
5,099 | 82,532 |
[ONNX] Memory leak when exporting a jit model to onnx
|
needs reproduction, oncall: jit, module: onnx
|
## Reproduction
The following code, which repeatedly exports a `torch.jit.script` model with `torch.onnx.export`, has a memory leak. During the export, every tensor parameter in `network` is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the _underlying buffer_ that's cloned, it's the lightweight `torch.Tensor` wrapper object itself. Still, for long-running processes that often export networks in this manner this is an unbounded memory leak that eventually results in OOM errors.
I've reproduced this issue on both Linux and Windows, with pytorch versions `1.10.0` and `1.12.0` respectively.
```python
import gc
import os
import torch
import torch.nn as nn
import objgraph  # third-party, only needed for the debug prints below

network = nn.Linear(4, 4)
network = torch.jit.script(network)
path = "network.onnx"
arg = torch.randn(1, 4)

while True:
    if os.path.exists(path):
        os.remove(path)
    torch.onnx.export(network, arg, path)
    # debug tools, these don't affect the behaviour
    gc.collect()
    objgraph.show_growth()
    print([t.shape for t in objgraph.by_type("Tensor")[-4:]])
    print([t.storage().data_ptr() for t in objgraph.by_type("Tensor")[-4:]])
    print(gc.get_referrers(objgraph.by_type("Tensor")[-1]))
```
The final five lines inside the loop are only there to debug what happens; they are not necessary to reproduce the issue.
- `gc.collect()` forces a gc collection cycle, ensuring we're not accidentally counting dead objects
- `objgraph.show_growth()` shows, for each type whose object count has increased, the total number of objects of that type that currently exist. From this we can see that we're leaking 2 additional tensors per iteration.
- `print([t.shape ... ])` shows that the tensors we're leaking have shapes `(4,4)` and `(4)`, so they're just the weight and bias of the linear layer.
- `print([t.storage() ... ])` shows that the underlying buffer is always the same, so only the shallow `tensor` class instance is being leaked.
- `print(gc. ... )` shows that nothing is pointing to these newly created objects, so they _should_ be collected.
Example output after running for a while:
```
Tensor 1081 +2
[torch.Size([4, 4]), torch.Size([4]), torch.Size([4, 4]), torch.Size([4])]
[2369979263104, 2369977312000, 2369979263104, 2369977312000]
[]
Tensor 1083 +2
[torch.Size([4, 4]), torch.Size([4]), torch.Size([4, 4]), torch.Size([4])]
[2369979263104, 2369977312000, 2369979263104, 2369977312000]
[]
Tensor 1085 +2
[torch.Size([4, 4]), torch.Size([4]), torch.Size([4, 4]), torch.Size([4])]
[2369979263104, 2369977312000, 2369979263104, 2369977312000]
[]
```
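For reference, the same growth can be observed without objgraph, using a small helper based on `gc.get_objects()` (a rough sketch; it only counts GC-tracked `torch.Tensor` wrapper objects):
```python
import gc
import torch

def count_live_tensors():
    # Count the torch.Tensor wrapper objects currently tracked by the garbage
    # collector; in the repro above this count grows by 2 per export call.
    return sum(1 for obj in gc.get_objects() if isinstance(obj, torch.Tensor))
```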
## Related issues
#61263 seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak.
#28414 was closed as a duplicate of the previous issue, but better matches this issue.
## PyTorch version info (for Linux)
Collecting environment information...
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.10.2
Libc version: glibc-2.17
Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-41-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3080 Ti
GPU 1: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 515.48.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.11.0
[pip3] torchvision==0.11.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.3.0 h06a4308_520
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.2 py37h20f2e39_0
[conda] numpy-base 1.21.2 py37h79a1101_0
[conda] pytorch 1.10.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.11.0 py37 pytorch
[conda] torchvision 0.11.0 py37_cu113 pytorch
| 1 |
5,100 | 82,518 |
Split up `common_methods_invocations.py`?
|
triaged, needs research, better-engineering, module: testing
|
`common_methods_invocations.py` has grown to 22K lines and over 1MB in file size. One implication is that you can't open it in the GitHub UI or link to specific lines of code.
I propose creating an `opinfos` folder; where there are currently different categories, such as `UnaryUfuncInfo` or `ReductionOpInfo`, each category could become its own file.
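As a rough sketch of what this could look like (the module names below are made up for illustration), `common_methods_invocations.py` could be kept as a thin re-export shim so existing imports such as `op_db` keep working:
```python
# torch/testing/_internal/opinfo/ would hold the split-out files, e.g.:
#   core.py           # OpInfo, UnaryUfuncInfo, ReductionOpInfo, shared helpers
#   unary_ufuncs.py   # the UnaryUfuncInfo entries
#   reductions.py     # the ReductionOpInfo entries
# common_methods_invocations.py then just re-exports everything:
from torch.testing._internal.opinfo.core import *          # noqa: F401,F403
from torch.testing._internal.opinfo.unary_ufuncs import *  # noqa: F401,F403
from torch.testing._internal.opinfo.reductions import *    # noqa: F401,F403
```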
| 12 |