st46168
|
Suppose I have a tensor A with the following shape:
torch.Size([5, 16, 5000, 3])
I also have a mask of the same shape:
torch.Size([5, 16, 5000, 3])
If I apply this mask M directly to the tensor A via
A = A[M]
I end up with a flattened, one-dimensional tensor.
However, I would like to mask out only along dimension 2. In other words, I would like to get a tensor of the shape
torch.Size([5, 16, 5000 - N, 3])
where N is the number of entries for which mask M is False.
What is the right way to do this?
|
st46169
|
Anton_Z:
However, I would like to mask out only along dimension 2.
If you want to mask out only in dimension 2, then you must have a mask that matches dimension 2.
In your example, the mask must be shaped torch.Size([5000])
Only then could you do A = A[:, :, M]
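A minimal sketch of the 1-D mask approach (the tensor and mask values are illustrative):
import torch

A = torch.randn(5, 16, 5000, 3)
M = torch.rand(5000) > 0.1      # 1-D boolean mask over dimension 2 only
out = A[:, :, M]                # shape: [5, 16, M.sum(), 3]
print(out.shape)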
|
st46170
|
Thank you!
So there is no way to vectorize it along the batch and channel dimensions?
I am going to use precomputed masks during training (the tensor A is trainable, while M is not).
|
st46171
|
I have a dynamic shape, maybe (1, …). How can I view a tensor to this shape in C++ with libtorch?
I know tensor.view({1, 3, 4, 4}), but this is static, with dimensions known in advance.
What about tensor.view(dynamic_shape)? Thank you!
|
st46172
|
I’m not sure I understand the question correctly. You could still define dynamic_shape at runtime, which would make it “dynamic”, wouldn’t it?
|
st46173
|
I have a custom loss function defined like this:
import numpy as np
import torch

class Quaternion_Multiplicative_Error(torch.nn.Module):
    def __init__(self):
        print("QME optimized")
        super(Quaternion_Multiplicative_Error, self).__init__()
        self.conj = torch.tensor([1, -1, -1, -1], requires_grad=False)

    def qme(self, pred, true):
        true = torch.mul(true, self.conj)
        pro = self.hamilton_product(pred, true)
        img_part = pro[1:]
        norm = np.linalg.norm(img_part, ord=1)
        return 2 * norm

    def forward(self, pred, true):
        batch_size = pred.shape[0]
        return sum(self.qme(x, y) for x, y in zip(pred, true)) / batch_size
I need to use this custom loss function in another main class, which looks something like this:
class FusionCriterion_LearnParms(torch.nn.Module):
    def __init__(self, loss_pos="L1Loss", loss_ori="QMELoss", alpha=0.0, beta=-3.0):
        super(FusionCriterion_LearnParms, self).__init__()
        self.loss_pos = self.select_loss(loss_pos)
        self.loss_ori = self.select_loss(loss_ori)
        self.alpha = torch.nn.Parameter(torch.tensor([alpha], dtype=torch.double), requires_grad=True)
        self.beta = torch.nn.Parameter(torch.tensor([beta], dtype=torch.double), requires_grad=True)

    def select_loss(self, loss):
        if loss == "L1Loss":
            return torch.nn.L1Loss()
        elif loss == "MSELoss":
            return torch.nn.MSELoss()
        else:
            return Quaternion_Multiplicative_Error()

    def forward(self, predicted, actual):
        position_loss = (torch.exp(-self.alpha) * self.loss_pos(predicted[:, :3], actual[:, :3])) + self.alpha
        orientation_loss = (torch.exp(-self.beta) * self.loss_ori(predicted[:, 3:], actual[:, 3:])) + self.beta
        total_loss = position_loss + orientation_loss
        return total_loss
I get an error:
File "fusion.py", line 62, in qme
    true = torch.mul(true, self.conj)
RuntimeError: expected device cuda:0 but got device cpu
I have moved all of the classes above (nn.Module) to torch.device("cuda"),
and I was hoping all their member tensors would also be moved to "cuda".
|
st46174
|
nisharaichur:
self.conj = torch.tensor([1,-1,-1,-1], requires_grad=False)
Instead of self.conj = torch.tensor([1,-1,-1,-1], requires_grad=False), perhaps:
self.register_buffer('conj', torch.tensor([1, -1, -1, -1]))
so that it is transferred to the GPU as well?
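A minimal sketch of the buffer behavior (the class name is illustrative): buffers registered this way move with the module on .to()/.cuda(), but are not returned by parameters() and therefore are not optimized.
import torch
import torch.nn as nn

class WithBuffer(nn.Module):
    def __init__(self):
        super().__init__()
        # registered as a buffer: part of the module state, not a parameter
        self.register_buffer('conj', torch.tensor([1., -1., -1., -1.]))

m = WithBuffer().cuda()
print(m.conj.device)  # cuda:0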
|
st46175
|
So this means we can keep it as a constant, and this tensor will not be optimized, right?
Yes, it works.
Thanks!
|
st46176
|
I am a beginner with PyTorch, and I am writing code for time series forecasting with an LSTM. The LSTM consists of two stacked layers: the first LSTM layer includes 10 LSTM units, and its hidden states are passed to a second LSTM layer that includes a single LSTM unit. [architecture diagram omitted]
The following is my code:
def __init__(self, nb_features=1, hidden_size_1=100, hidden_size_2=100, nb_layers_1=5, nb_layers_2=1, dropout=0.4):
    super(Sequence, self).__init__()
    self.nb_features = nb_features
    self.hidden_size_1 = hidden_size_1
    self.hidden_size_2 = hidden_size_2
    self.nb_layers_1 = nb_layers_1
    self.nb_layers_2 = nb_layers_2
    self.lstm_1 = nn.LSTM(self.nb_features, self.hidden_size_1, self.nb_layers_1, dropout=dropout)
    self.lstm_2 = nn.LSTM(self.hidden_size_1, self.hidden_size_2, self.nb_layers_2, dropout=dropout)
    self.lin = nn.Linear(self.hidden_size_2, 1)

def forward(self, input):
    h0 = Variable(torch.zeros(self.nb_layers_1, input.size()[1], self.hidden_size_1))
    h1 = Variable(torch.zeros(self.nb_layers_2, input.size()[1], self.hidden_size_2))
    c0 = Variable(torch.zeros(self.nb_layers_1, input.size()[1], self.hidden_size_1))
    c1 = Variable(torch.zeros(self.nb_layers_2, input.size()[1], self.hidden_size_2))
    output_0, hn_0 = self.lstm_1(input, (h0, c0))
    output, hn = self.lstm_2(output_0, (h1, c1))
    out = torch.tanh(self.lin(output[-1]))
    return out
The code runs; however, the output is a straight line, even after tuning the hyperparameters (learning rate, dropout, activation) and increasing the epochs (e.g. 3000 epochs). The output is shown below.
[output plot omitted]
Could you please give me some suggestions to solve this problem? Many thanks.
|
st46177
|
Hi,
I think the problem is that the forward function keeps re-initializing h0, h1, c0, and c1, so you always use zero vectors as the hidden state. Try removing them from the forward function.
Also, I don’t think that initialization is necessary. As mentioned in the documentation:
If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
|
st46178
|
Hi,
Many thanks for your reply.
I have tried your suggestion and changed the code from:
h0 = Variable(torch.zeros(self.nb_layers_1, input.size()[1], self.hidden_size_1))
to
h0 = torch.ones(self.nb_layers_1, input.size()[1], self.hidden_size_1)
for h0, h1, c0, and c1; however, the result is still a straight line.
Additional information: if I use just one LSTM layer with 5 LSTM units and the same code (i.e. initializing both the hidden state and cell state as zeros), it predicts successfully, but adding one more single-unit LSTM layer breaks it.
|
st46179
|
Maybe for the second LSTM you need to forward the hidden state of your first LSTM, instead of initializing a new hidden state.
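A minimal sketch of that idea, reusing the names from the code above (this assumes hidden_size_1 == hidden_size_2, and slices the top nb_layers_2 layers of the first LSTM's final state):
output_0, (hn_0, cn_0) = self.lstm_1(input, (h0, c0))
# feed the first LSTM's final hidden/cell state into the second LSTM
output, (hn, cn) = self.lstm_2(output_0, (hn_0[-self.nb_layers_2:], cn_0[-self.nb_layers_2:]))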
|
st46180
|
I am compiling PyTorch from the latest git repository. How do I set the variables so that CMake detects NNPACK or MKL-DNN? I found variables like NO_MKLDNN, WITH_MKLDNN, MKLDNN_LIBRARY, MKLDNN_LIB_DIR, and MKLDNN_INCLUDE_DIR. How do I pass these variables to CMake?
Does NNPACK or MKL-DNN speed up double precision?
|
st46181
|
Bumping this up!
I really need to improve inference performance on CPU. After I found the benchmark of MKL-DNN, I want to give it a try. However, compiling PyTorch with MKL-DNN doesn’t feel native, because I have to struggle with variables like NO_MKLDNN, WITH_MKLDNN, MKLDNN_*, etc. I wish compiling PyTorch with MKL-DNN were much easier, just as easy as linking MKL or OpenBLAS.
|
st46182
|
Hi, Xiao Liang. You need to install MKL-DNN from conda first and then follow the installation steps from the README:
conda install -c mingfeima mkldnn
The performance speedup you can get strongly depends on your CPU type; the optimization from MKL-DNN is mostly for Xeon CPUs. We also have some BKMs (best known methods) for better CPU performance, and I uploaded some benchmark numbers at the linked page; you can take a look.
The optimization work is still in progress and we will continue to boost CPU performance. Conv performance should be much faster with MKL-DNN; the RNN optimization is currently under review.
If you run on a Xeon CPU, NNPACK is definitely not as performant as MKL-DNN; we double-checked this before.
Let me know if you have trouble compiling with MKL-DNN.
|
st46183
|
Hi, did you solve the problem? How do you install PyTorch with NNPACK on CPU?
Really looking forward to your reply.
|
st46184
|
Does PyTorch support a join operator like SQL’s?
I have a tensor a and a tensor b. I want to join the first column of a with the second column of b to get c, matching the second column of a against the first column of b.
Does any function support this? Or how can I achieve it with little time and memory?
a = torch.tensor([[11, 1],
[12, 1],
[12, 2]])
b = torch.tensor([[1, 21],
[1, 22],
[2, 23]])
c = torch.tensor([[11, 21],
[11, 22],
[12, 21],
[12, 22],
[12, 23]])
c = f(a,b)
|
st46185
|
I’m reading the paper “Dynamic Convolution: Attention over Convolution Kernels”.
I couldn’t understand the complexity of the attention, i.e.,
how to calculate O(π(x))? Could someone explain?
|
st46186
|
Hello,
I tried to use panoptic segmentation from Detectron2 on custom data, but I have no idea which labeling tool can export the COCO panoptic segmentation format.
Could anyone give me some advice?
Thank you.
|
st46187
|
I got `RuntimeError: cudaEventSynchronize in future::wait device-side assert triggered` when I use binary_cross_entropy.
I think this is because the input of BCELoss must fall into the range [0, 1].
My input is a product of two softmaxes, so in theory the product should never be greater than 1.
I think this may be related to floating-point precision?
If so, how can I solve this problem?
Can anyone help me? Thank you!
Here is my code:
cls_prob = F.softmax(cls_score, dim=1)
det_prob = F.softmax(det_score, dim=0)
predict = F.mul(cls_prob, det_prob)
loss = F.binary_cross_entropy(predict, label, size_average=False)
|
st46188
|
Hi,
Can you run your script with CUDA_LAUNCH_BLOCKING=1 and see what error message is printed, please?
|
st46189
|
Sorry, I think I left out some specific code.
Here is my complete code:
import torch
from wsddn.roi_pooling.modules.roi_pool import RoIPool
from wsddn.utils.network import FC
from wsddn.utils import network
import torch.nn.functional as F
import torch.nn as nn
from wsddn.vgg16 import VGG16
class WSDDN(nn.Module):
    feature_scale = 1.0 / 16
    n_classes = 21

    def __init__(self, classes=None):
        super(WSDDN, self).__init__()
        if classes is not None:
            self.classes = classes
            self.n_classes = len(classes)
        self.features = VGG16()
        self.roi_pool = RoIPool(7, 7, self.feature_scale)
        self.fc6 = FC(512 * 7 * 7, 4096)
        self.fc7 = FC(4096, 4096)
        self.classifier_head = FC(4096, self.n_classes, relu=False)
        self.detection_head = FC(4096, self.n_classes, relu=False)
        self._loss = None
        self._detection = None

    def forward(self, im_data, rois, labels):
        im_data = network.np_to_variable(im_data, is_cuda=True)
        im_data = im_data.permute(0, 3, 1, 2)
        rois = network.np_to_variable(rois, is_cuda=True)
        labels = network.np_to_variable(labels, is_cuda=True)
        features = self.features(im_data)
        pooled_features = self.roi_pool(features, rois)
        x = pooled_features.view(pooled_features.size()[0], -1)
        x = self.fc6(x)
        x = F.dropout(x, training=self.training)
        x = self.fc7(x)
        x = F.dropout(x, training=self.training)
        cls_score = self.classifier_head(x)
        det_score = self.detection_head(x)
        cls_predict = F.softmax(cls_score, dim=1)
        det_predict = F.softmax(det_score, dim=0)
        predict = F.mul(cls_predict, det_predict)
        y_predict = predict.sum(dim=0)
        y_predict = y_predict[1:]
        self._loss = self.build_loss(y_predict, labels)
        self._detection = predict
        return y_predict

    @property
    def detection(self):
        return self._detection

    @property
    def loss(self):
        return self._loss

    def build_loss(self, y_predict, labels):
        loss = F.binary_cross_entropy(y_predict, labels, size_average=False)
        # y_predict = torch.clamp(y_predict, min=1e-4, max=1 - 1e-4)
        # loss = -1 * torch.log(labels * (y_predict - 1.0 / 2) + 1 / 2).sum()
        return loss
The weird thing is that this problem only occurs after training for about 10000 iterations, so I’m waiting for it to show up again now.
|
st46190
|
Hi, this is the error message:
/pytorch/torch/lib/THCUNN/BCECriterion.cu:30: Acctype bce_functor<Dtype, Acctype>::operator()(Tuple) [with Tuple = thrust::tuple<float, float, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, Dtype = float, Acctype = float]: block: [0,0,0], thread: [14,0,0] Assertion `input >= 0. && input <= 1.` failed.
Traceback (most recent call last):
File "/home/tz/projects/wsdnn_pytorch/train.py", line 86, in <module>
predict = net(im_data, prior_boxes, gt_classes)
File "/home/tz/anaconda2/envs/dl-python3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/home/tz/projects/wsdnn_pytorch/wsddn/wsddn.py", line 53, in forward
self._loss = self.build_loss(y_predict, labels)
File "/home/tz/projects/wsdnn_pytorch/wsddn/wsddn.py", line 67, in build_loss
loss = F.binary_cross_entropy(y_predict, labels, size_average=False)
File "/home/tz/anaconda2/envs/dl-python3/lib/python3.5/site-packages/torch/nn/functional.py", line 1200, in binary_cross_entropy
return torch._C._nn.binary_cross_entropy(input, target, weight, size_average)
RuntimeError: after cudaLaunch in triple_chevron_launcher::launch(): device-side assert triggered
thank you for your help!!!
|
st46191
|
From the error message it seems that the input of your BCE loss is not between 0 and 1. The input you give should represent the probability of label 1, so it should be between 0 and 1.
|
st46192
|
I agree with you.
So can I just use torch.clamp to restrict the input?
I think the reason the input doesn’t fall into the range [0, 1] is floating-point precision.
|
st46193
|
If it is a floating-point precision error, then clamping will work, or shifting by the minimum and dividing by the maximum.
But first I would make sure that this really is a precision problem; basically, apply this fix only if you’re close enough to either 0 or 1, and otherwise raise an error.
|
st46194
|
Hi @albanD, I still see a similar issue in the newest PyTorch version (stable 1.4). I hope this issue will be fixed soon in the next PyTorch version.
|
st46195
|
This issue is due to user error (giving unexpected input to a function), not to PyTorch itself.
|
st46196
|
@albanD No, I am doing it like this:
criterion = nn.BCELoss()
pred = torch.sigmoid(pred)
loss = criterion(pred, target)
It still gives the error, but if I add a clamp, the error is resolved:
criterion = nn.BCELoss()
pred = torch.clamp(torch.sigmoid(pred), 0, 1)
loss = criterion(pred, target)
This means the output of the sigmoid is not in the range 0 to 1, or maybe it is a precision problem. However, suppose I implement an attention module, which uses a sigmoid to produce values in the [0, 1] range; it would hit the same problem whenever the result is not strictly within that range.
|
st46197
|
@ptrblck Hello, I am sorry for the late reply. After debugging for several months, I finally found the main problem. The debugging took this long because the error only shows up at specific times, so I had to run the training again and again to reproduce it; it finally came back several weeks ago. The problem is caused by NaN values in the prediction. This explains why the error did not always happen: it depends on your model's behavior. The error says that the value is not between 0 and 1, when in fact it is NaN. So next time it is better to detect NaN values before calculating the loss; use torch.isnan to make sure the prediction is not NaN. I also suggest that PyTorch produce a NaN-specific error instead of only reporting that the value is not between 0 and 1.
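A minimal sketch of such a guard (the pred and target names are illustrative):
if torch.isnan(pred).any():
    raise RuntimeError("NaN detected in predictions before computing BCE loss")
loss = F.binary_cross_entropy(pred, target)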
|
st46198
|
Thanks for the update. I like the suggestion about printing the actual invalid value.
Would you like to open a GitHub issue with this feature request?
|
st46199
|
I face the same problem as you. It takes very long to debug because it only happens now and then. Do you know why there could be NaN values in the prediction, and how to prevent that from happening?
|
st46200
|
I am facing the same trouble as the original poster. I multiply the results of two softmax outputs (softmax over two different dimensions), then sum the tensor over one dimension to get the final output scores, say a 20-d tensor. Here is the output score that triggers the CUDA AssertionError, specifically the value 1.0000e+00, which in theory should not happen.
I assume this is related to floating-point precision error. The error is not stable to reproduce: I got it sometimes around 3k steps and sometimes after 10k steps of training.
Does this imply that we should clamp the tensor whenever we use binary_cross_entropy? I think it might be a good idea to log which value is actually causing the AssertionError.
tensor([9.4490e-05, 1.3122e-06, 1.9130e-03, 1.1611e-04, 3.1499e-05, 7.9529e-05,
5.0480e-05, 1.0000e+00, 2.0515e-04, 1.4706e-06, 3.1726e-05, 1.7213e-09,
8.1568e-05, 6.2557e-06, 1.4758e-06, 2.2086e-04, 1.9921e-04, 7.1404e-05,
6.8685e-06, 1.0655e-04], device='cuda:0', grad_fn=<SumBackward1>)
cls_prob = F.softmax(cls_score, dim=1)         # across classes [2000, 20]
det_prob = F.softmax(det_score, dim=0)         # across proposals/detections [2000, 20]
predict = torch.mul(cls_prob, det_prob)        # shape: [2000, 20]
pred_class_scores = torch.sum(predict, dim=0)  # [20]
loss = F.binary_cross_entropy(pred_class_scores, label, size_average=False)
|
st46201
|
Your code might create values larger than 1. due to the limited floating point precision as seen here:
torch.manual_seed(8)
cls_score = torch.randn(2000, 20, device='cuda')
cls_score[:, 19] = 100.
det_score = torch.randn(2000, 20, device='cuda')
cls_prob = F.softmax(cls_score, dim=1) # across classes [2000,20]
det_prob = F.softmax(det_score, dim=0) # across proposals/detections [2000,20]
predict = torch.mul(cls_prob, det_prob) # shape: [2000,20]
pred_class_scores = torch.sum(predict, dim=0) # [20]
print((pred_class_scores > 1.).any())
> tensor(True, device='cuda:0')
print(pred_class_scores[19] - 1.)
> tensor(1.1921e-07, device='cuda:0')
so I think you should clamp the values before passing them to the loss function.
|
st46202
|
Hi, I am hitting the same problem as posted here. I checked the values after multiplying the two scores (computed by softmax), and sometimes it does give values larger than 1. It truly seems to be a precision problem.
I checked the solution of a GitHub repo (https://github.com/NVlabs/wetectron/blob/master/wetectron/modeling/roi_heads/weak_head/loss.py). The solution proposed in that repo is simply clamping the scores. I think clamping the values will cause zero gradients during backpropagation, but it seems there is no other solution right now.
|
st46203
|
Hi,
I wanted to have index_add_ operation applied on a batch.
For instance:
A = torch.FloatTensor([[[1, 2, 3], [1, 2, 3]], [[4, 5, 6], [4, 5, 6]], [[7, 8, 9], [7,8, 9]]])
ind = torch.LongTensor([[0, 1], [1, 2], [0, 2]])
to_add = torch.LongTensor([[[0,1],[0,1]],[[2,3],[2,3]],[[4,5],[4,5]]])
I would have liked a batched index_add_ operation
A.index_add_(2, ind, to_add)
and the expected output as:
output = torch.FloatTensor([[[1, 3, 3], [1, 3, 3]], [[6, 8, 6], [6, 8, 6]], [[11, 13, 9], [11, 13, 9]]])
Currently I am achieving this by looping through each tensor in the batch and calling index_add_ on each one. I would like to avoid this loop. Please let me know if there is a more efficient way of doing this.
Thanks.
|
st46204
|
Hi,
I just met the same problem as you, and I found torch.scatter_add_() helpful! You can give it a try.
Btw, I found that the gather and scatter methods in PyTorch solve this kind of problem well; they can be seen as a “universal” version of the index_* methods.
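A minimal sketch of the batched version with scatter_add_, reusing the tensors from the question (to_add is made float so the dtypes match; the index is expanded so each batch's indices broadcast over dim 1, implementing A[b, :, ind[b, k]] += to_add[b, :, k]):
import torch

A = torch.FloatTensor([[[1, 2, 3], [1, 2, 3]], [[4, 5, 6], [4, 5, 6]], [[7, 8, 9], [7, 8, 9]]])
ind = torch.LongTensor([[0, 1], [1, 2], [0, 2]])
to_add = torch.FloatTensor([[[0, 1], [0, 1]], [[2, 3], [2, 3]], [[4, 5], [4, 5]]])

index = ind.unsqueeze(1).expand(-1, A.size(1), -1)  # [3, 2, 2]
A.scatter_add_(2, index, to_add)
print(A)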
|
st46205
|
Hi,
In a batch of 128 I have extracted the features, computed their mean, and then the angles between the mean and the corresponding class features. The dimensions of the calculated angles are:
[8, 2, 128], …, [5, 2, 128], where the first index shows the total number of samples of a certain class, for, say, 8 classes.
Can anyone help me find the mean of the angles across these dimensions?
|
st46206
|
Thanks Kushaj for your reply.
I want to find the mean of the angles of the different classes. To make it simple, assume we have the angles of different classes stored in 3D tensors, i.e. (10, 2, 128), (13, 2, 128), etc., where the first dimension represents the number of samples of a certain class and (2, 128) is the dimension of the angles. Now I want to iterate over all these angles to find their mean.
Cheers,
Angelina
|
st46207
|
For the tensor (10,2,128), torch.mean(tensor, dim=(1,2)) would give you the mean along the dim 0. Is this what you are referring to?
|
st46208
|
Hello everyone,
I am trying to use my own pre-trained model to train another network.
My model looks as below. [model printout omitted]
I am trying to change conv11 to Conv1d(128, 15)… but how can I access this conv11 layer?
self.model = nn.Sequential(*list(self.model.children())[:-1]) deletes the whole model and returns an empty list [].
|
st46209
|
Solved by ptrblck in post #2
|
st46210
|
I guess you could access it directly via model.conv11.
PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier.
|
st46211
|
import torch
import numpy as np
x = torch.rand(4,4)
y = torch.rand(4,4)
print(x)
print(y)
print(torch.cat((x,y),0))
If I execute this code it will be like
tensor([[0.1211, 0.7577, 0.8376, 0.4488],
[0.6628, 0.8632, 0.9736, 0.8368],
[0.5848, 0.0872, 0.0469, 0.2834],
[0.8561, 0.6229, 0.3667, 0.1358]])
tensor([[0.1820, 0.8955, 0.3811, 0.5496],
[0.5745, 0.3199, 0.5228, 0.8269],
[0.8408, 0.6984, 0.7248, 0.1973],
[0.3045, 0.9651, 0.8701, 0.3180]])
tensor([[0.1211, 0.7577, 0.8376, 0.4488],
[0.6628, 0.8632, 0.9736, 0.8368],
[0.5848, 0.0872, 0.0469, 0.2834],
[0.8561, 0.6229, 0.3667, 0.1358],
[0.1820, 0.8955, 0.3811, 0.5496],
[0.5745, 0.3199, 0.5228, 0.8269],
[0.8408, 0.6984, 0.7248, 0.1973],
[0.3045, 0.9651, 0.8701, 0.3180]])
Is there a way that I could concat like
tensor([[0.1211, 0.7577, 0.8376, 0.4488],
        [0.1820, 0.8955, 0.3811, 0.5496],
        [0.6628, 0.8632, 0.9736, 0.8368],
        …
i.e. x row 1 comes first,
then y row 1, then x row 2, and so on?
Thank you for reading.
|
st46212
|
Solved by ptrblck in post #3
|
st46213
|
I think I solved it:
test = None
for k in range(x.shape[0]):
    if test is None:
        test = torch.cat((z[k:k+1], z[k + x.shape[0]:k + x.shape[0] + 1]), 0)
        print(test)
    else:
        test = torch.cat((test, z[k:k+1], z[k + x.shape[0]:k + x.shape[0] + 1]), 0)
I think this does the job.
Thank you.
|
st46214
|
Alternatively to the loop, you could also interleave the tensors directly, as seen here:
x = torch.rand(4, 4)
y = torch.rand(4, 4)
print(x)
print(y)
z = torch.cat((x, y), 0)
test = None
for k in range(x.shape[0]):
    if test is None:
        test = torch.cat((z[k:k+1], z[k + x.shape[0]:k + x.shape[0] + 1]), 0)
        print(test)
    else:
        test = torch.cat((test, z[k:k+1], z[k + x.shape[0]:k + x.shape[0] + 1]), 0)
print(test)

res = torch.cat((x.unsqueeze(1), y.unsqueeze(1)), 1).view(8, 4)
print((test == res).all())
> tensor(True)
|
st46215
|
I get a device mismatch error when using DataParallel for training with multiple GPUs.
[error screenshot omitted]
After debugging, I found that the DataParallel module doesn’t seem to work with submodules. The model is essentially Inception v1 / GoogLeNet. My model design template is as below:
class Submodule(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Sequential(...)
        self.b = nn.Sequential(...)
        ...

    def forward(self, x):
        return torch.cat([self.a(x), ...])

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = conv
        self.sublayer1 = Submodule(...)
        self.sublayer2 = Submodule(...)
        ...

    def forward(self, x):
        x = self.conv(x)
        x = self.sublayer1(x)
        x = self.sublayer2(x)
        ...
        return x
model = Model()
model = nn.DataParallel(model).cuda()
DataParallel works with the conv layers but not with the sublayers. It would be great if anyone could give me a pointer to debug this. I couldn’t find support in previous PyTorch forum threads.
|
st46216
|
It does work with submodules.
The problem is that you probably have some hardcoded device in the submodules,
e.g. calling .cuda() instead of .to(device), where device is taken from the forward input.
|
st46217
|
I tried changing it to .to(device) and got the same error. The problem is that I’m trying to use the DataParallel method, and I think it is not being applied to the submodules.
|
st46218
|
Can you post the model code?
The submodule has to be an nn.Module, an nn.ModuleList, or an nn.ModuleDict.
Any other container is not properly traced by the system.
|
st46219
|
My high-level understanding of pinned memory is that it speeds up data transfer from CPU to GPU… in some cases. I understand this is commonly used in dataloaders when copying loaded data from host to device.
When else would this be useful? I have been trying to use the tensor pin_memory() function, but I’m not seeing a significant speedup when copying a large matrix to the GPU.
This is my testing code:
import torch
import time
# Warm up
q = torch.rand(10000, 10000).cuda()
w = torch.rand(10000, 10000).cuda()
for i in range(10):
    qq = q * w
# Test pinning
b = torch.arange(1000000).pin_memory()
c = torch.arange(1000000)
print("BEFORE")
print("b is pinned?: ", b.is_pinned())
print("c is pinned?: ", c.is_pinned())
print("b is cuda?: ", b.is_cuda)
print("c is cuda?: ", c.is_cuda)
print("\nRESULTS")
torch.cuda.synchronize()
s = time.time()
b[:] = c # <<<<<< Time goes down without this, obviously, but what good is pinned memory if it always points to the same stuff?
d = b.to(torch.device("cuda"), non_blocking=True)
torch.cuda.synchronize()
print("Copy pinned (non-blocking): ", time.time() - s)
torch.cuda.synchronize()
s = time.time()
e = b.to(torch.device("cuda"), non_blocking=False)
torch.cuda.synchronize()
print("Copy pinned (blocking): ", time.time() - s)
torch.cuda.synchronize()
s = time.time()
f = c.to(torch.device("cuda"))
torch.cuda.synchronize()
print("Copy unpinned: ", time.time() - s)
print("\nAFTER")
print("b is pinned?: ", b.is_pinned())
print("c is pinned?: ", c.is_pinned())
print("d is pinned?: ", d.is_pinned())
print("e is pinned?: ", e.is_pinned())
print("f is pinned?: ", f.is_pinned())
print("b is cuda?: ", b.is_cuda)
print("c is cuda?: ", c.is_cuda)
print("d is cuda?: ", d.is_cuda)
print("e is cuda?: ", e.is_cuda)
print("f is cuda?: ", f.is_cuda)
Here is the results on a 2080Ti:
BEFORE
b is pinned?: True
c is pinned?: False
b is cuda?: False
c is cuda?: False
RESULTS
Copy pinned (non-blocking): 0.0015006065368652344
Copy pinned (blocking): 0.0006394386291503906
Copy unpinned: 0.0007956027984619141
AFTER
b is pinned?: True
c is pinned?: False
d is pinned?: False
e is pinned?: False
f is pinned?: False
b is cuda?: False
c is cuda?: False
d is cuda?: True
e is cuda?: True
f is cuda?: True
Now I would have expected the non-blocking pinned copy to be fastest, but it’s actually slower than the simple copy. The real time sink is the in-place assignment of new data (noted in the code above), but what good, then, is a pinned-memory tensor if its data never changes? In this example, I am treating b like a pinned-memory buffer of sorts. Is this the wrong way to use/think about pinned memory?
|
st46220
|
Using pinned memory allows you to copy the data asynchronously to the device, so the host won’t block on the copy. The bandwidth is limited by your hardware and the connection to your GPU; using pinned memory cannot exceed these hardware limitations.
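A minimal sketch of the usual overlap pattern (assuming a DataLoader created with pin_memory=True; the names are illustrative):
for data, target in loader:
    data = data.cuda(non_blocking=True)      # asynchronous host-to-device copy
    target = target.cuda(non_blocking=True)
    output = model(data)                     # kernels are queued behind the copy
Note that non_blocking=True only overlaps work if the source tensors are pinned; otherwise it falls back to a synchronous copy.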
|
st46221
|
Gotcha, so what sort of circumstance would lead to the GPU blocking the copy? I know pinned memory is recommended for loading data directly to the GPU, but it’s still not abundantly clear to me how it helps.
Is it primarily a way for the data loader to prefetch the next batch onto the GPU while the current batch is being processed? For example, suppose a network architecture performs some inference task on the GPU. Without pinned memory, execution would be:
Load batch to GPU
Execute inference
Load next batch to GPU
…
Do I understand correctly that with pinned memory, we would have
Load first batch to GPU
(concurrent with 3) Execute inference
(concurrent with 2) Load next batch onto GPU
…
Is that the general idea? If so, does it only execute the asynchronous copy if there is enough GPU RAM to accommodate it?
|
st46222
|
Hey,
is it possible to use a buffer for sending variables (with computation graphs) over a multiprocessing queue? I am currently gathering log-probability variables for policy gradients with multiple child processes, and the bottleneck at the moment is transferring the variables over to the parent process.
How can I make this transfer faster? If these were plain tensors I could just use a buffer, since I would know the size, but with variables that carry computation graphs I don’t know how to do it.
|
st46223
|
Yes, so I need the computation graph. Just getting the data would not be enough!
|
st46224
|
This was so long ago that I don’t remember if I was able to solve it. The last time I used policy gradients I went with hogwild-style training, where instead I share the weights over multiple processes and each process can update them asynchronously.
Another idea would be to use RPC. My only experience is with gRPC, where each servicer can run in its own process; maybe that can be used in a similar way? https://pytorch.org/docs/stable/rpc.html
|
st46225
|
Hello, I’m using torch.conj in my code, and I need to run it on a server that uses PyTorch 1.3.0, which does not implement this function (updating the PyTorch version on the server is not up to me).
The first option that comes to mind is to copy/paste the source code into my own module, but I have not been able to find it in the GitHub repository. Could anyone help me use this function on PyTorch 1.3.0?
Thanks!
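A hedged fallback sketch: for real-valued tensors, conjugation is the identity, and PyTorch 1.3 had no native complex dtype, so if (and this is an assumption about the data layout) the complex values are stored as a trailing dimension of size 2 (real, imag), one could negate the imaginary channel manually:
def conj(t):
    # assumes t[..., 0] holds the real part and t[..., 1] the imaginary part
    out = t.clone()
    out[..., 1] = -out[..., 1]
    return out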
|
st46226
|
pytorch.org: Extending PyTorch — PyTorch 1.7.0 documentation
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
    logging.info(f"func: {func.__name__}, args: {args!r}, kwargs: {kwargs!r}")
    if kwargs is None:
        kwargs = {}
    return super().__torch_function__(func, types, args, kwargs)
What type hints do I use for the arguments, and what type hint do I use for the return type: -> ReturnType?
|
st46227
|
Solved by ProGamerGov in post #2
|
st46228
|
After some testing, I think this is how you properly type hint __torch_function__:
from typing import Any, Callable, Dict, Optional, Tuple

class LoggingTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func: Callable, types: Tuple, args: Tuple = (), kwargs: Optional[Dict] = None) -> Any:
        if kwargs is None:
            kwargs = {}
        return super().__torch_function__(func, types, args, kwargs)
|
st46229
|
Hello everyone!
I am working on a problem that aims at improving pixel-wise semantic segmentation results. The approach requires two networks (1. a CNN, 2. an MLP), which are concatenated. The output of net1 (the CNN) is 3 numbers, which then serve as the input to net2 (the MLP). The output of net2 is a single number (not an array or higher-dimensional tensor), and the MSE loss is calculated at the output of net2.
Now I have to backpropagate and update the gradients of both networks using the calculated MSE loss. The backpropagation of net2 is straightforward, i.e. loss.backward(), but how do I backpropagate through net1 using the gradients of net2?
|
st46230
|
When you do loss.backward(), the gradients are calculated for both the networks and stored along with the parameters, assuming you are doing something like net2(net1(input)). Then according to how your optimizer is defined, when you do optimizer.step() the weights will get updated.
|
st46231
|
@Megh_Bhalerao
Thank you for answering. Please see the overview of the process:
x = net1(input)
x1 = f(x)   # some processing (a summation) to compute the input of net2
y = net2(x1)
loss = |y - gt|   # gt is the ground truth
loss.backward()
So, does this loss.backward() take care of calculating the gradients of both networks? The parameters of both networks will be passed to the optimizer.
|
st46232
|
Yes, to the best of my knowledge, because there is no reason the computational graph should break by applying f(x). And yes, if you pass both models’ parameters to the optimizer, loss.backward() will calculate the gradients of both networks.
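A minimal sketch of the joint setup (the names are illustrative; f must consist of differentiable tensor operations):
import itertools
optimizer = torch.optim.Adam(itertools.chain(net1.parameters(), net2.parameters()), lr=1e-3)

y = net2(f(net1(input)))
loss = criterion(y, gt)
optimizer.zero_grad()
loss.backward()    # fills .grad for the parameters of both networks
optimizer.step()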
|
st46233
|
How to separate the parameters in optimization?
def initialize_parameters(self):
    # user embedding, U
    self.U = nn.Embedding(self.num_user, self.edim_user)
    # item embeddings, V
    self.V_d1 = nn.Embedding(self.num_item_d1, self.edim_item)
    self.V_d2 = nn.Embedding(self.num_item_d2, self.edim_item)
    # domain 1
    self.weights_d1 = nn.ParameterList([nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[l], self.layers[l+1]), requires_grad=True)) for l in range(len(self.layers) - 1)])
    self.biases_d1 = nn.ParameterList([nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[l+1],), requires_grad=True)) for l in range(len(self.layers) - 1)])
    # domain 2
    self.weights_d2 = nn.ParameterList([nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[l], self.layers[l+1]), requires_grad=True)) for l in range(len(self.layers) - 1)])
    self.biases_d2 = nn.ParameterList([nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[l+1],), requires_grad=True)) for l in range(len(self.layers) - 1)])
    # shared
    self.weights_shared = nn.ParameterList([nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[l], self.layers[l+1])), requires_grad=True) for l in range(self.cross_layers)])

optimizer = torch.optim.SGD(self.parameters(), lr=self.learning_rate)

for name in model.state_dict():
    print(name)
OUTPUT
U.weight
V_d1.weight
V_d2.weight
weights_d1.0
weights_d1.1
weights_d1.2
weights_d1.3
biases_d1.0
biases_d1.1
biases_d1.2
biases_d1.3
weights_d2.0
weights_d2.1
weights_d2.2
weights_d2.3
biases_d2.0
biases_d2.1
biases_d2.2
biases_d2.3
weights_shared.0
weights_shared.1
weights_shared.2
What should I do if I want to regularise only the shared weights, i.e. weights_shared.0, weights_shared.1, weights_shared.2, …?
Using nn.Module’s default self.parameters(), all recognised params are passed together.
Is there any way I can split them up or specify them separately?
Also, can someone confirm whether this is the best way to initialise weights/biases manually?
I appreciate all help and suggestions. Thank you!!
|
st46234
|
Do you mean to ask how to train only specific layers of a neural network? If so, one could try something like this:
for param in model.parameters():
    param.requires_grad = False
This way you can freeze or unfreeze whichever layers you want.
|
st46235
|
No, I mean I want different hyperparameters for different layers.
Say,
layer 1: lr=0.01
layer 2: lr=0.01, weight_decay=0.01
I have no idea how to specify this, since self.parameters() passes everything at once.
|
st46236
|
Oh, in that case, check out the per-parameter options here.
Essentially, you need to create different parameter groups with different hyperparameters and pass them to the optimizer as a list of dicts instead of model.parameters().
For instance,
opt = optim.SGD([{'params': layer_1.parameters(), 'lr': 0.01}, {'params': layer_2.parameters(), 'lr': 0.01, 'weight_decay': 0.01}])
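A sketch adapted to the model above, applying weight decay only to the shared weights (this assumes the attribute names from the question and an illustrative learning rate):
optimizer = torch.optim.SGD([
    {'params': [p for n, p in model.named_parameters() if not n.startswith('weights_shared')]},
    {'params': model.weights_shared.parameters(), 'weight_decay': 0.01},
], lr=0.01)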
I hope that helps.
|
st46237
|
Hello,
I am new to PyTorch, and I have a question about Softmax() and CrossEntropyLoss().
In a multi-class classification task, I set dim=1 in Softmax(). I want to know whether I need to set a similar parameter in CrossEntropyLoss(); however, I did not find a parameter similar to dim. How is the dim parameter of Softmax() reflected in CrossEntropyLoss()?
Thanks!
|
st46238
|
Solved by ptrblck in post #3
nn.CrossEntropyLoss expects the class dimension in dim1 as explained here.
|
st46239
|
One of my guesses is that “dim” is set to a default value. For example, the default value is 1.
|
st46240
|
I have a question about register_forward_hook. Part of my code is as follows:
def hook(module, input, output):
    pass

with torch.no_grad():
    model.layer3[0].conv2.register_forward_hook(hook)
    embed = model(torch.unsqueeze(image_, 0))
It does not give me the intermediate features. I am new to PyTorch and I am not sure what the problem is.
|
st46241
|
Your current hook function doesn’t do anything with the activation, so you could add e.g. a print statement:
def hook(module, input, output):
    print(output.shape)

model = nn.Conv2d(1, 1, 3)
with torch.no_grad():
    model.register_forward_hook(hook)
    embed = model(torch.randn(1, 1, 4, 4))
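A common variant, as a hedged sketch, stores the activation for later use instead of printing it (the dict key is illustrative):
activations = {}
def save_hook(module, input, output):
    activations['conv2'] = output.detach()  # detach so the graph is not kept alive

model.layer3[0].conv2.register_forward_hook(save_hook)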
|
st46242
|
I loaded the dataset using a data loader as follows:
data_loader = torchvision.datasets.ImageFolder('/content/drive/My Drive/Dataset/malimg_paper_dataset_imgs',
transform = torchvision.transforms.Compose([
torchvision.transforms.Resize((224, 224)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])]))
The original dataset’s size is 1.1 GB, but in the data loader I have applied resizing and normalization; now I want to know what the size of the data loaded into the data loader will be. I can’t find anything related in the documentation. Thanks.
|
st46243
|
You are not creating a DataLoader, but an ImageFolder which is a Dataset.
The ImageFolder dataset will lazily load and process each sample. If you wrap it into a DataLoader via:
dataset = ImageFolder(...)
loader = DataLoader(dataset, batch_size=..., num_workers=...)
each worker of the loader will load a complete batch and add it to a queue.
By default the prefetch_factor in the DataLoader is set to 2, which will load 2*num_workers batches.
|
st46244
|
I did that; now how can I know the size of the loader, where the complete data is stored in the form of batches?
|
st46245
|
You can get the number of samples via len(dataset) and the number of batches via len(loader).
|
st46246
|
Yes, but I don’t want to know the number of samples or batches; I want to know how much memory those samples are holding.
|
st46247
|
If you are lazily loading the data, only 2 * num_workers * batch_size samples will be loaded into RAM; the rest stay on the drive. You can check the shape of a batch and calculate the needed RAM manually using the data type of your tensors.
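A minimal sketch of that calculation (assuming the loader yields (data, target) batches):
data, target = next(iter(loader))
bytes_per_batch = data.element_size() * data.nelement()
print(f"{bytes_per_batch / 1024**2:.1f} MB per input batch")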
|
st46248
|
I’m trying to build a model to classify MNIST.
I saw this notebook: https://www.kaggle.com/cdeotte/25-million-images-0-99757-mnist
and tried to copy what they did, except that I only want 1 CNN and not 15.
But the results I get have a very different accuracy, so I must be missing something.
Here is my code:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
Can you please help me convert it?
|
st46249
|
Based on the notebook, it seems the author creates CNNs containing 7 conv-bn blocks and a final linear layer, while your current model is quite different.
Am I missing the “right” model implementation, or is the posted code not supposed to replicate the Keras model?
|
st46250
|
Is there a recommended framework for managing, running, and recording experiments with different architectures and hyperparameters, or should I roll one from scratch for whatever I’m doing?
I mean a free and open source framework like pytorch.
|
st46251
|
Solved by ptrblck in post #2
Optuna or Weights & Biases would be two frameworks which come to my mind.
|
st46252
|
I am getting an error when trying to run a model with the CIFAR10 dataset. I am using the included PyTorch dataset to retrieve the data and the transformation transformations = transforms.ToTensor(), but I get the error: conv2d(): argument ‘input’ (position 1) must be Tensor, not tuple.
Here is the training loop I am using:
epochs = 1
for epoch in range(epochs):
    train_loss = 0
    val_loss = 0
    accuracy = 0

    # Training the model
    model.train()
    counter = 0
    for inputs, labels in train_loader:
        # Move to device
        #inputs, labels = inputs.to(device), labels.to(device)
        # Clear optimizers
        optimizer.zero_grad()
        # Forward pass
        output = model.forward(inputs)
        # Loss
        loss = criterion(output, labels)
        # Calculate gradients (backpropagation)
        loss.backward()
        # Adjust parameters based on gradients
        optimizer.step()
        # Add the loss to the training set's running loss
        train_loss += loss.item() * inputs.size(0)
        # Print the progress of our training
        counter += 1
        print(counter, "/", len(train_loader))

    # Evaluating the model
    model.eval()
    counter = 0
    # Tell torch not to calculate gradients
    with torch.no_grad():
        for inputs, labels in val_loader:
            # Move to device
            #inputs, labels = inputs.to(device), labels.to(device)
            # Forward pass
            output = model.forward(inputs)
            # Calculate loss
            valloss = criterion(output, labels)
            # Add loss to the validation set's running loss
            val_loss += valloss.item() * inputs.size(0)
            # Since our model outputs a LogSoftmax, find the real
            # percentages by reversing the log function
            output = torch.exp(output)
            # Get the top class of the output
            top_p, top_class = output.topk(1, dim=1)
            # See how many of the classes were correct
            equals = top_class == labels.view(*top_class.shape)
            # Calculate the mean (get the accuracy for this batch)
            # and add it to the running accuracy for this epoch
            accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
            # Print the progress of our evaluation
            counter += 1
            print(counter, "/", len(val_loader))

    # Get the average loss for the entire epoch
    train_loss = train_loss / len(train_loader.dataset)
    valid_loss = val_loss / len(val_loader.dataset)
    # Print out the information
    print('Accuracy: ', accuracy / len(val_loader))
    print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(epoch, train_loss, valid_loss))
The full error code is:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-189-55992f2f3f3c> in <module>
14 optimizer.zero_grad()
15 # Forward pass
---> 16 output = model.forward(inputs)
17 # Loss
18 loss = criterion(output, labels)
~/opt/anaconda3/envs/Test/lib/python3.7/site-packages/torchvision/models/resnet.py in forward(self, x)
194
195 def forward(self, x):
--> 196 x = self.conv1(x)
197 x = self.bn1(x)
198 x = self.relu(x)
~/opt/anaconda3/envs/Test/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/opt/anaconda3/envs/Test/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
343
344 def forward(self, input):
--> 345 return self.conv2d_forward(input, self.weight)
346
347 class Conv3d(_ConvNd):
~/opt/anaconda3/envs/Test/lib/python3.7/site-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight)
340 _pair(0), self.dilation, self.groups)
341 return F.conv2d(input, weight, self.bias, self.stride,
--> 342 self.padding, self.dilation, self.groups)
343
344 def forward(self, input):
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not tuple
I seem to get this quite a lot, and it’s always been a sticking point no matter what dataset I use.
My research led me to trying:
for inputs, labels in enumerate(train_loader):, which gave the error conv2d(): argument ‘input’ (position 1) must be Tensor, not int
for inputs, labels in next(iter(train_loader)):, which gave the error too many values to unpack (expected 2)
Thanks,
Dan.
|
st46253
|
The error message points to inputs, which is a tuple, while a tensor is expected.
Check your Dataset and make sure that the data and target samples are returned as tensors.
If you get stuck, feel free to post the code of your Dataset, so that we can take a look.
|
st46254
|
I want to standardize images.
Let's say I have a tensor of shape [20000, 3, 12, 12].
#images is my tensor of said shape
images = (images - images.mean(dim = [1,2,3])) / images.std(dim=[1,2,3])
RuntimeError: The size of tensor a (12) must match the size of tensor b (20000) at non-singleton dimension 2
Follow-up question:
If I want to standardize each color channel separately, meaning for each image I extract the mean and std of each color channel and normalize by those separately, how do I do that?
I tried using the built-in normalization transform but got another error I couldn't get past:
img = img.to(dtype=torch.float64)
img_mean = img.mean(dim=[1,2])
img_std = img.std(dim=[1,2])
img = TF.normalize(img,mean=[img_mean[0],img_mean[1],img_mean[2]],std=[img_std[0],img_std[1],img_std[2]])
return {'image':img,'target':target}
ValueError: std evaluated to zero after conversion to torch.float64, leading to division by zero.
Most search results advise changing the dtype of the image, but as can be seen I did that and the error still occurs.
|
st46255
|
You can pass the keepdims argument to use broadcasting:
images = (images - images.mean(dim = [1,2,3], keepdims=True)) / images.std(dim=[1,2,3], keepdims=True)
enterthevoidf22:
if i want to standardize each color matrix separately, meaning for each image i extract mean and std of each color and subtract by that separately - how do i do that?
Wouldn’t that be the standard normalization applied in batchnorm layers?
If so, you could use your code and use dim=[0, 2, 3] instead.
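A minimal sketch of per-image, per-channel standardization with broadcasting (the clamp guards against constant channels, which caused the zero-std error above; images is assumed to be [N, 3, H, W]):
mean = images.mean(dim=[2, 3], keepdim=True)                 # [N, 3, 1, 1]
std = images.std(dim=[2, 3], keepdim=True).clamp_min(1e-8)   # avoid division by zero
images = (images - mean) / std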
|
st46256
|
I am trying to create a module class that inherits from two classes as follows:
from torch import nn

class Module1(nn.Module):
    def __init__(self):
        nn.Module.__init__(self)
        self.l1_loss = nn.L1Loss()

class Module2(nn.Module):
    def __init__(self):
        nn.Module.__init__(self)

class Module12(Module1, Module2):
    def __init__(self):
        Module1.__init__(self)
        if hasattr(self, 'l1_loss'):
            print('l1_loss at position 1')
        Module2.__init__(self)
        if hasattr(self, 'l1_loss'):
            print('l1_loss at position 2')
When I run the code, “l1_loss at position 1” is printed but not “l1_loss at position 2”. It seems that when Module2.__init__(self) is called, the member variable self.l1_loss is deleted. Why? How can I create a module with multiple inheritance?
|
st46257
|
I think you could use:
class Module12(Module1, Module2):
    def __init__(self):
        super(Module12, self).__init__()
        if hasattr(self, 'l1_loss'):
            print('l1_loss at position 1')
        if hasattr(self, 'l1_loss'):
            print('l1_loss at position 2')

m = Module12()
which will print both statements. The reason the attribute disappeared in your version is that the second nn.Module.__init__ call re-initializes the module's internal registries (e.g. self._modules), dropping the already-registered l1_loss; cooperative super().__init__() runs each base initializer only once.
|
st46258
|
Hello,
I have to find and carry out a deep learning project for my master's degree, but I don't know what to choose. We will have 20 days to complete it, and my partner and I are beginners in deep learning; we have already done a project on image classification using transfer learning. Do you have ideas for an interesting beginner project?
|
st46259
|
@ptrblck @albanD @Deeply
Hello, I hope they can help us; we are really interested in hearing your opinion on this topic. I think it will be really illustrative for all of us.
|
st46260
|
20 days is not a lot of time and I would suggest to use at least an already available (and clean) dataset.
Given this requirement I would then look for projects, which both of you find interesting from a personal point of view, and try to make sure this topic meets the requirements from your university.
Do you have any recommended projects, topics, or any other information or are you completely free to pick your topic for 20 days?
|
st46261
|
They proposed that we either participate in a challenge or take a dataset and test a new algorithm on it, but we should bring a new/interesting approach to the implementation, and we haven't found a good or reachable idea to start with.
|
st46262
|
I don’t know if this would fit the criteria (so you should make sure your advisor is OK with it), but you could take a look at (past) competitions from e.g. DrivenData and see if new approaches could yield any new results/findings.
|
st46263
|
LsTam91:
…done a project in image classification using transfer learning…
Here is an idea that occurred to me but that I was not able to try due to lack of time. I'm not sure if it has been tackled before! My guess is not, but you can do some searching to find out.
We report classification accuracy in any classification problem. This metric, however, can be deceptive for many reasons: data imbalance, the percentage of incorrectly labeled samples (especially in the test set, if any), and the number of classes (the latter would be interesting to tackle).
Now, the simplest problems to consider are CIFAR-10 and CIFAR-100. I think the best reported performance is by this work, and the code is available in PyTorch (EffNet-L2 achieved 99.70% on CIFAR-10 and 96.08% on CIFAR-100).
The research question: is EffNet-L2 doing a better job on CIFAR-10 or on CIFAR-100?
The problem, hence, is how to statistically quantify model performance as the number of classes/categories increases. This should be an additional metric that complements classification accuracy. One dilemma that might affect our judgement is the 10% of samples (which I consider a bit low) used in the testing phase, as it might affect model generalization.
|
st46264
|
I am trying to write a regressor which uses a series of dense layers to predict a value.
The code is as follows:
The Network
import torch
import torch.nn as nn
import torch.nn.functional as f

class deep_regressor(nn.Module):
    def __init__(self):
        super(deep_regressor, self).__init__()
        self.linear_1 = nn.Linear(in_features=8, out_features=16)
        self.linear_2 = nn.Linear(in_features=16, out_features=32)
        self.output = nn.Linear(in_features=32, out_features=1)

    def forward(self, input_tensor):
        tensor = self.linear_1(input_tensor)
        tensor = f.relu(tensor)
        tensor = self.linear_2(tensor)
        tensor = f.relu(tensor)
        tensor = self.output(tensor)
        return tensor
The Script:
neurons = deep_regressor()
optimizer = torch.optim.Adam(neurons.parameters(), lr=0.01)
loss_function = nn.MSELoss()

for epoch in range(10):
    number = 0
    total_loss = 0
    accuracy = 0
    for data_rows, labels in train_loader:
        predictions = neurons(data_rows)
        calc_loss = loss_function(predictions, labels)
        optimizer.zero_grad()  # resets the gradients
        calc_loss.backward()
        optimizer.step()
    print("Epochs =\t" + str(epoch + 1))
    preds = neurons(torch.as_tensor(df, dtype=torch.float32))
    preds = preds.detach().numpy()
    rmse = math.sqrt(mean_squared_error(preds, target))
    print(rmse)
The output per epoch is as follows:
Epochs = 1
78.64161989373262
Epochs = 2
78.3528279726827
Epochs = 3
78.3800896760905
Epochs = 4
78.35181985605024
Epochs = 5
78.24301382034795
Epochs = 6
78.52033108135714
Epochs = 7
78.39965093292
Epochs = 8
78.2766145950348
Epochs = 9
78.39979733373761
Epochs = 10
78.39885957650147
Why is this happening? How do I fix it?
|
st46265
|
Let me explain something to you.
You see, neural networks are good and all, but sometimes they just don't fit better than other machine learning models for certain things.
Neural networks are mostly known to outperform other models on image and text data. You know why? Because they are very good at feature extraction and pattern recognition.
Now, you are performing a regression task, which probably means your data is structured. In that case there are models that work better than anything else on structured data; these models are called gradient boosters (e.g. GBoost, XGBoost, random forests, etc.).
How do they work?
Well, they are an ensemble of weak-learner algorithms, called decision trees, that come together to form a strong learning unit. This ensemble can be used to perform regression and classification tasks on structured data and can even give much better accuracy than other ML algorithms if you know what you are doing.
Text or image data -> neural networks.
Structured data -> gradient boosters.
|
st46266
|
Also, in your neural network, 3 layers might be overkill for an ordinary regression task.
Then again, if your data has really deep underlying patterns that even you cannot figure out, you can use more layers (just don't forget to put in a dropout layer so the network doesn't overfit on the training data and end up in a local minimum instead of the global minimum).
You can also add a non-linear transformation between your hidden layer and output layer (nn.ReLU()).
Hope these answers help.
|
st46267
|
I have already tried bagging and boosting. By your logic, time series forecasting (which also happens to be a form of regression) should not be done using a neural network.
Here is an example related to standard regression: https://ieeexplore.ieee.org/document/5596936
I think they know what they are doing!!
|