詹惠儿

2018-12-19


How to Recognize Handwritten Digits with Logistic Regression? (3)


Now we will start training. In each iteration, we perform the following tasks:

  1. Reset all gradients to zero.
  2. Perform a forward pass.
  3. Compute the loss.
  4. Perform backpropagation.
  5. Update all the weights.

```python
# Training the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Flatten each 28x28 image into a 784-dimensional vector
        images = Variable(images.view(-1, 28 * 28))
        labels = Variable(labels)

        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch: [%d/%d], Step: [%d/%d], Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1,
                     len(train_dataset) // batch_size, loss.item()))
```

(On PyTorch versions before 0.4, use `loss.data[0]` instead of `loss.item()`.)

Finally, we test the model with the following code.

```python
# Test the Model
correct = 0
total = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28 * 28))
    outputs = model(images)
    # The index of the largest logit is the predicted class
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()

print('Accuracy of the model on the 10000 test images: %d %%'
      % (100 * correct / total))
```
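The `torch.max(outputs.data, 1)` call above returns both the maximum logit and its index for each row; the index is the predicted class. A tiny standalone illustration (with made-up logits in place of real model outputs):

```python
import torch

# Fake logits for a batch of 2 images over 10 classes.
logits = torch.tensor([[0.1, 2.5, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0]])

# values holds the max logit per row, predicted holds its column index.
values, predicted = torch.max(logits, 1)
print(predicted.tolist())  # → [1, 9]

# Comparing against ground-truth labels counts the correct predictions.
labels = torch.tensor([1, 0])
correct = (predicted == labels).sum()
print(correct.item())  # → 1
```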

Assuming you executed all the steps correctly, you will get an accuracy of about 82%, which is far from today's state-of-the-art models, which use a special type of neural network architecture. For your reference, the complete code for this article is below:

```python
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable

# Hyper Parameters (defined first, since the data loaders below use batch_size)
input_size = 784
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST Dataset (Images and Labels)
train_dataset = dsets.MNIST(root='./data',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)

test_dataset = dsets.MNIST(root='./data',
                           train=False,
                           transform=transforms.ToTensor())

# Dataset Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

# Model
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        out = self.linear(x)
        return out

model = LogisticRegression(input_size, num_classes)

# Loss and Optimizer
# Softmax is internally computed by CrossEntropyLoss.
# Set parameters to be updated.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Training the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 28 * 28))
        labels = Variable(labels)

        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch: [%d/%d], Step: [%d/%d], Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1,
                     len(train_dataset) // batch_size, loss.item()))

# Test the Model
correct = 0
total = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28 * 28))
    outputs = model(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()

print('Accuracy of the model on the 10000 test images: %d %%'
      % (100 * correct / total))
```

Note that the hyperparameters are defined before the data loaders, since `batch_size` must exist before `DataLoader` is constructed.
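Once trained, the model can also classify a single image. The sketch below is self-contained: it re-declares the same `LogisticRegression` class from the article and feeds a random tensor in place of a real MNIST sample, so (with an untrained model) the predicted digit is arbitrary:

```python
import torch
import torch.nn as nn

# Same architecture as in the article: one linear layer producing 10 logits.
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        return self.linear(x)

model = LogisticRegression(784, 10)

# A single fake 28x28 grayscale image standing in for one MNIST sample.
image = torch.rand(1, 28, 28)

# Flatten to (1, 784), run the forward pass, and take the argmax over logits.
outputs = model(image.view(-1, 28 * 28))
_, predicted = torch.max(outputs.data, 1)
print('Predicted digit:', predicted.item())  # an integer in 0..9
```

In practice you would load a trained state with `model.load_state_dict(...)` before predicting; here the weights are freshly initialized purely to show the inference path.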
