I have a convolutional network that takes images, but each image also carries a colored border that feeds extra information into the network. Now I want to compute a loss, but the usual loss functions also take the predicted border into account. The border is completely random; it is just an input to the system. I don't want the model to think it is performing badly just because it predicted the wrong border color. The border is added in the `Dataset`'s `__getitem__`:
def __getitem__(self, index):
    path = self.input_data[index]
    imgs_path = sorted(glob.glob(path + '/*.png'))
    # read light conditions
    with open(path + "/lightConditions.json", 'r') as file:
        lightConditions = json.load(file)
    # shift light conditions by one frame
    lightConditions.pop(0)
    lightConditions.append(False)
    frameNumber = 0
    imgs = []
    for img_path in imgs_path:
        img = cv2.imread(img_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        im_pil = Image.fromarray(img)
        # add a black border for dark frames, an orange one otherwise
        if lightConditions[frameNumber] == False:
            imgBorder = ImageOps.expand(im_pil, border=6, fill='black')
        else:
            imgBorder = ImageOps.expand(im_pil, border=6, fill='orange')
        img = np.asarray(imgBorder)
        img = cv2.resize(img, (256, 448))
        # img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5, interpolation=cv2.INTER_CUBIC)  # has been 0.5 for official data, new is fx=2.63 and fy=2.84
        img_tensor = ToTensor()(img).float()
        imgs.append(img_tensor)
        frameNumber += 1
    imgs = torch.stack(imgs, dim=0)
    return imgs
Then, during training, this happens:
for idx_epoch in range(startEpoch, nEpochs):
    # set the epoch in the sampler so shuffling is re-seeded each epoch
    val_loader.sampler.set_epoch(idx_epoch)
    # remember the start time to display the epoch duration
    startTimeEpoch = datetime.now()
    i = 0
    if processGPU == 0:
        running_loss = 0
    beenValuated = False
    for index, data_sr in enumerate(train_loader):
        # transfer data to the GPU without blocking, since this only
        # affects this single process
        data_sr = data_sr.cuda(processGPU, non_blocking=True)
        startTimeIteration = time.time()
        # remove all dimensions of size 1
        data_sr = data_sr.squeeze()
        # calculate the indices of the input images and GT images
        num_f = len(data_sr)
        # if model_type is 0 -> only predict the single frame marked as GT
        if cfg.model_type == 0:
            idx_start = random.randint(-2, 2)
            idx_all = list(np.arange(idx_start, idx_start + num_f).clip(0, num_f - 1))
            idx_gt = [idx_all.pop(int(num_f / 2))]
            idx_input = idx_all
        # else (model_type 1): input frames 0-3 and predict the next
        # cfg.dec_frames frames; all predicted frames become 'gt' images
        else:
            idx_all = np.arange(0, num_f)
            idx_input = list(idx_all[0:4])
            idx_gt = list(idx_all[4:4 + cfg.dec_frames])
        imgs_input = data_sr[idx_input]
        imgs_gt = data_sr[idx_gt]
        # get the predicted result
        imgs_pred = model(imgs_input)
I use cfg.model_type = 1. The model gives me new images that again carry colored borders. Normally the loss computation would follow here:
loss = criterion_mse(imgs_pred, imgs_gt)
But I can no longer use this. Does anyone know how to write a custom loss function that only considers certain parts of the tensor, or that knows which parts of the tensor represent which images?
Answer:
You can slice tensors just like in NumPy. An image batch has the dimensions NCHW. If `b` is your border size and the border is symmetric on all sides, simply crop the tensors:

loss = criterion_mse(imgs_pred[:, :, b:-b, b:-b], imgs_gt[:, :, b:-b, b:-b])
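To make the slicing concrete, here is a minimal NumPy sketch (the indexing semantics are identical for PyTorch tensors). It builds fake NCHW batches whose borders disagree but whose interiors match, and shows that the cropped MSE ignores the border. One caveat for the `__getitem__` above: the 6-pixel border is added *before* the resize to (256, 448), so its width in the final image is scaled by the resize; `b` should be the border width as measured in the resized image.

```python
import numpy as np

b = 6  # border width, measured in the final (resized) image

# fake NCHW prediction and ground truth with identical interiors
pred = np.zeros((2, 3, 64, 64), dtype=np.float32)
gt = np.zeros((2, 3, 64, 64), dtype=np.float32)

# make the (random) top borders disagree wildly
pred[:, :, :b, :] = 1.0
gt[:, :, :b, :] = 5.0

# full MSE is polluted by the border; cropped MSE is not
mse_full = np.mean((pred - gt) ** 2)
mse_cropped = np.mean((pred[:, :, b:-b, b:-b] - gt[:, :, b:-b, b:-b]) ** 2)

print(mse_full > 0.0)      # border mismatch inflates the full loss
print(mse_cropped == 0.0)  # interior-only loss ignores the border
```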