How to change the input size to match Inception V3 in PyTorch

Question

I'm trying to run a pretrained PyTorch InceptionV3 model on my own data.
The data size is (N, N).
The batch size is 128.

InceptionV3 requires (batch_size, 3, 299, 299) as input.

I tried to resize using the torchvision.transforms Resize function:

train_transforms = transforms.Compose([transforms.Resize((299, 299)),
                                       transforms.ToTensor()])

which did not work.

I also tried

train_transforms = transforms.Compose([transforms.Resize((3, 299, 299)),
                                       transforms.ToTensor()])

which also didn't work.

I'm already defining the batch size using a DataLoader:

train_loader = DataLoader(train_dataset, batch_size=128)

I also saw a suggestion at https://github.com/pytorch/vision/issues/396, but I do not understand how that number of neurons is calculated.
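
One likely culprit, separate from the spatial size: if each sample really is a single (N, N) array, the tensors have only one channel, while InceptionV3 expects three. A minimal sketch of a transform that handles both the resize and the channel replication, assuming the dataset yields PIL grayscale images (adapt if it yields tensors):

from torchvision import transforms

# Resize to 299x299 and replicate the single gray channel to three channels.
# Assumes the dataset yields PIL images (mode "L").
train_transforms = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.Grayscale(num_output_channels=3),  # 1 channel -> 3 identical channels
    transforms.ToTensor(),                        # shape (3, 299, 299), values in [0, 1]
])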


Answer 1

Score: 0

It seems that 299 is the size for evaluation; for training you need a bigger size, because of the auxiliary classifiers.

from torchvision import models
from torchvision import transforms
import numpy as np
from PIL import Image

# Evaluation mode: a 299x299 input works.
model = models.inception_v3(pretrained=True)
model.eval()

w = 299
# A random RGB image standing in for real data.
inputs = np.random.randint(low=0, high=255, size=(512, 512, 3)).astype(np.uint8)
img = Image.fromarray(inputs)
train_transforms = transforms.Compose([transforms.Resize((w, w)),
                                       transforms.ToTensor()])

inputs = train_transforms(img)
print(inputs.shape)                       # torch.Size([3, 299, 299])
print(model(inputs[None, ...])[0].shape)  # torch.Size([1000]) -- logits for the single image
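
In eval mode the model returns a plain logits tensor of shape (1, 1000), so indexing with [0] picks out the prediction vector for the single image in the batch.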

So in training, use a bigger size, for example 350:

from torchvision import models
from torchvision import transforms
import numpy as np
from PIL import Image

# Training mode (no model.eval()): the auxiliary classifiers are active.
model = models.inception_v3(pretrained=True)

w = 350
inputs = np.random.randint(low=0, high=255, size=(512, 512, 3)).astype(np.uint8)
img = Image.fromarray(inputs)
train_transforms = transforms.Compose([transforms.Resize((w, w)),
                                       transforms.ToTensor()])

inputs = train_transforms(img)
print(inputs.shape)                       # torch.Size([3, 350, 350])
print(model(inputs[None, ...])[0].shape)  # torch.Size([1, 1000]) -- main logits; train mode returns (logits, aux_logits)
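
For an actual training step, note that in training mode the model returns an InceptionOutputs named tuple holding both the main and the auxiliary logits, and the usual recipe is to combine them in the loss. A minimal sketch, assuming a classification setup; train_dataset, num_classes, and the optimizer settings are placeholders to adapt:

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import models

num_classes = 10  # placeholder: set to your own number of classes

model = models.inception_v3(pretrained=True, aux_logits=True)
# Replace both classification heads to match your classes.
model.fc = nn.Linear(model.fc.in_features, num_classes)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
model.train()

# train_dataset is assumed to yield (image, label) pairs with the transforms above applied.
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

for images, labels in train_loader:
    optimizer.zero_grad()
    outputs = model(images)  # InceptionOutputs(logits, aux_logits) in train mode
    # Weight the auxiliary loss by 0.4, as in the original Inception paper.
    loss = criterion(outputs.logits, labels) + 0.4 * criterion(outputs.aux_logits, labels)
    loss.backward()
    optimizer.step()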