
PyTorch tutorial series (7): convolution, pooling, fully connected layers, batch normalization, and upsampling


Conv2d

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
in_channels: number of input channels
out_channels: number of output channels
kernel_size: size of the convolving kernel
stride: stride of the convolution
padding: padding added to both sides of the input
Input: (C_in, H_in, W_in)
Output: (C_out, H_out, W_out)
C_in: number of input channels
C_out: number of output channels

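A minimal sketch of Conv2d in use; the channel counts and image size below are arbitrary choices for illustration:

import torch
import torch.nn as nn

# 3 input channels -> 16 output channels, 3x3 kernel, stride 1, padding 1
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)

x = torch.randn(1, 3, 32, 32)   # (N, C_in, H_in, W_in)
y = conv(x)
print(y.shape)                  # torch.Size([1, 16, 32, 32]); padding=1 keeps H and W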

Conv3d

torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
in_channels: number of input channels
out_channels: number of output channels
kernel_size: size of the convolving kernel
stride: stride of the convolution
padding: padding added to all sides of the input
Input: (C_in, D_in, H_in, W_in)
Output: (C_out, D_out, H_out, W_out)
C_in: number of input channels
C_out: number of output channels
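A minimal sketch of Conv3d on a volumetric input; the sizes are assumptions for illustration:

import torch
import torch.nn as nn

# Volumetric (or video) input: (N, C_in, D_in, H_in, W_in)
conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, stride=1, padding=1)

x = torch.randn(1, 1, 16, 32, 32)
y = conv3d(x)
print(y.shape)  # torch.Size([1, 8, 16, 32, 32])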

MaxPool2d

torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
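A minimal sketch of MaxPool2d; note that stride defaults to kernel_size when left as None, so the example below halves H and W (sizes are arbitrary):

import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2)  # stride defaults to kernel_size, i.e. 2

x = torch.randn(1, 16, 32, 32)
print(pool(x).shape)  # torch.Size([1, 16, 16, 16])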

MaxPool3d

torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
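The same idea in 3D, pooling over depth, height, and width (sizes assumed for illustration):

import torch
import torch.nn as nn

pool3d = nn.MaxPool3d(kernel_size=2)  # pools over D, H, and W

x = torch.randn(1, 8, 16, 32, 32)
print(pool3d(x).shape)  # torch.Size([1, 8, 8, 16, 16])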

Linear

torch.nn.Linear(in_features, out_features, bias=True)
If the previous layer outputs a 2D feature map of shape (C_out, H_out, W_out):
C_out: number of output channels of the previous layer
H_out: output height of the previous layer
W_out: output width of the previous layer
in_features = C_out * H_out * W_out

out_features is the number of output neurons you want.

If in_features is still unclear, see the sketch below.
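A minimal sketch showing how in_features comes from flattening the previous layer's output; all sizes are arbitrary:

import torch
import torch.nn as nn

# Suppose the previous conv layer produced (N, 16, 8, 8)
x = torch.randn(4, 16, 8, 8)

fc = nn.Linear(in_features=16 * 8 * 8, out_features=10)  # in_features = C_out*H_out*W_out = 1024

x = x.view(x.size(0), -1)  # flatten to (N, 1024)
y = fc(x)
print(y.shape)             # torch.Size([4, 10])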


  

BatchNorm2d

torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
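A minimal sketch of BatchNorm2d; num_features must equal the channel dimension C of the (N, C, H, W) input (sizes assumed):

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=16)  # num_features = C of the (N, C, H, W) input

x = torch.randn(4, 16, 32, 32)
print(bn(x).shape)  # torch.Size([4, 16, 32, 32]); normalization does not change the shape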

BatchNorm3d

torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
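Likewise for BatchNorm3d, which expects a (N, C, D, H, W) input (sizes assumed):

import torch
import torch.nn as nn

bn3d = nn.BatchNorm3d(num_features=8)  # expects (N, C, D, H, W)

x = torch.randn(4, 8, 16, 32, 32)
print(bn3d(x).shape)  # torch.Size([4, 8, 16, 32, 32])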

Upsample

torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)
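A minimal sketch of Upsample with nearest-neighbor interpolation; the scale factor and sizes are arbitrary:

import torch
import torch.nn as nn

up = nn.Upsample(scale_factor=2, mode='nearest')  # no learnable parameters

x = torch.randn(1, 16, 16, 16)  # (N, C, H, W)
print(up(x).shape)  # torch.Size([1, 16, 32, 32])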

ConvTranspose2d

torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')
H_out = (H_in − 1) × stride[0] − 2 × padding[0] + dilation[0] × (kernel_size[0] − 1) + output_padding[0] + 1
W_out = (W_in − 1) × stride[1] − 2 × padding[1] + dilation[1] × (kernel_size[1] − 1) + output_padding[1] + 1
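A minimal sketch checking the formula above; with kernel_size=4, stride=2, padding=1 the spatial size doubles (all numbers assumed):

import torch
import torch.nn as nn

# H_out = (16 - 1)*2 - 2*1 + 1*(4 - 1) + 0 + 1 = 32
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=8,
                            kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 16, 16, 16)
print(deconv(x).shape)  # torch.Size([1, 8, 32, 32])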

  

ConvTranspose3d

torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')

D_out = (D_in − 1) × stride[0] − 2 × padding[0] + dilation[0] × (kernel_size[0] − 1) + output_padding[0] + 1
H_out = (H_in − 1) × stride[1] − 2 × padding[1] + dilation[1] × (kernel_size[1] − 1) + output_padding[1] + 1
W_out = (W_in − 1) × stride[2] − 2 × padding[2] + dilation[2] × (kernel_size[2] − 1) + output_padding[2] + 1
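The same doubling trick in 3D (numbers assumed for illustration):

import torch
import torch.nn as nn

deconv3d = nn.ConvTranspose3d(in_channels=8, out_channels=4,
                              kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 8, 8, 16, 16)
print(deconv3d(x).shape)  # torch.Size([1, 4, 16, 32, 32])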

The difference between ConvTranspose and Upsample
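In short: Upsample enlarges the input with a fixed interpolation rule and has no learnable parameters, while ConvTranspose2d performs learned upsampling with a trainable kernel. A minimal sketch contrasting the two (sizes assumed):

import torch
import torch.nn as nn

up = nn.Upsample(scale_factor=2, mode='nearest')
deconv = nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 16, 16, 16)
print(up(x).shape)      # torch.Size([1, 16, 32, 32]); fixed interpolation
print(deconv(x).shape)  # torch.Size([1, 16, 32, 32]); same size, learned weights

print(sum(p.numel() for p in up.parameters()))      # 0
print(sum(p.numel() for p in deconv.parameters()))  # 16*16*4*4 + 16 = 4112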
