Analyzing IMDB Movie Data with Keras

# Imports
import numpy as np
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline

np.random.seed(42)

1. Loading the data

This dataset comes preloaded in Keras, so one simple command gives us the training and test data. There is a parameter for how many words we want to look at. We have set it to 1000 here, but feel free to experiment with other values.

# Loading the data (it's preloaded in Keras)
# Note: downloading the dataset may require a VPN or proxy in some regions
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)

print(x_train.shape)
print(x_test.shape)

print(y_train.shape)
print(y_test.shape)
(25000,)
(25000,)
(25000,)
(25000,)

2. Examining the data

Notice that the data has already been preprocessed: every word is mapped to a number, and each review comes in as a sequence of those word indices. We will later turn each review into a vector; for example, if the word 'the' is the first word in our dictionary and a review contains 'the', the corresponding vector will have a 1 in that position. Also, because we kept only the top 1000 words, rarer words show up as the out-of-vocabulary code 2 (the load_data default), which is why the sample below contains so many 2s.

The output is a vector of 1s and 0s, where 1 indicates a positive review and 0 a negative review.

print(x_train[0])
print(y_train[0])
[1, 14, 22, 16, 43, 530, 973, 2, 2, 65, 458, 2, 66, 2, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 2, 2, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2, 19, 14, 22, 4, 2, 2, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 2, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2, 2, 16, 480, 66, 2, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 2, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 2, 15, 256, 4, 2, 7, 2, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 2, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2, 56, 26, 141, 6, 194, 2, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 2, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 2, 88, 12, 16, 283, 5, 16, 2, 113, 103, 32, 15, 16, 2, 19, 178, 32]
1
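Each integer in the sequence above is a word index. To see the words, you can invert the mapping from imdb.get_word_index(); here is a minimal sketch using a tiny hypothetical word_index fragment (decode_review is our own helper, not a Keras API; the shift by 3 and the special codes 1 = start-of-sequence, 2 = out-of-vocabulary are the load_data defaults):

```python
# Hypothetical fragment of the dictionary returned by imdb.get_word_index()
word_index = {"the": 1, "film": 2, "great": 3}

def decode_review(sequence, word_index, index_from=3):
    """Map Keras IMDB integer codes back to words.

    load_data shifts real word indices up by index_from (default 3) to make
    room for the special codes 1 (<START>) and 2 (<OOV>).
    """
    reverse_index = {i + index_from: w for w, i in word_index.items()}
    special = {1: "<START>", 2: "<OOV>"}
    return " ".join(special.get(i) or reverse_index.get(i, "<?>") for i in sequence)

print(decode_review([1, 4, 5, 2, 6], word_index))  # <START> the film <OOV> great
```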

3. One-hot encoding the data

Here we turn the input vectors into (0,1)-vectors. For example, if the preprocessed vector contains the number 14, then the processed vector will have a 1 at entry 14.

# One-hot encoding the output into vector mode, each of length 1000
tokenizer = Tokenizer(num_words=1000)
X_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
X_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
# print(x_train[0])
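sequences_to_matrix with mode='binary' simply marks which word indices occur in each review, discarding counts and order. A minimal numpy sketch of the same transformation (sequences_to_binary_matrix is our own illustration, not a Keras function):

```python
import numpy as np

def sequences_to_binary_matrix(sequences, num_words):
    """Each sequence becomes a num_words-long vector with a 1 at every
    word index that occurs in it (repeats collapse to a single 1)."""
    matrix = np.zeros((len(sequences), num_words))
    for row, seq in enumerate(sequences):
        for idx in seq:
            if idx < num_words:
                matrix[row, idx] = 1.0
    return matrix

demo = sequences_to_binary_matrix([[1, 14, 14, 22]], num_words=25)
print(demo[0, 14], demo[0, 2])  # 1.0 0.0
```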

We will also one-hot encode the output.

num_classes = 2
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
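to_categorical turns the 0/1 labels into two-column one-hot vectors. The same idea in plain numpy (to_one_hot is our own sketch, not the Keras function):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Label k becomes a vector with a 1 at position k: 0 -> [1, 0], 1 -> [0, 1]."""
    return np.eye(num_classes)[np.asarray(labels)]

print(to_one_hot([0, 1, 1], num_classes=2))
```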

4. Building the model

Build the model architecture here using a Sequential model. Feel free to experiment with different layers and sizes! You can also add dropout layers to reduce overfitting.

X_train.shape
(25000, 1000)
# TODO: Build the model architecture
model = Sequential()

model.add(Dense(128, activation='softmax', input_dim=1000))
model.add(Dropout(0.5))

# model.add(Dense(64, activation='softmax'))
# model.add(Dropout(0.2))

# model.add(Dense(32, activation='softmax'))
# model.add(Dropout(0.2))

model.add(Dense(8, activation='softsign'))
model.add(Dropout(0.1))

model.add(Dense(2, activation='softmax'))

# TODO: Compile the model using a loss function and an optimizer.
# Note: softmax/softsign hidden activations and MSE loss are experimental
# choices here; 'relu' hidden layers with 'categorical_crossentropy' are
# the more conventional setup for this classification task.
model.compile(loss='mean_squared_error',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_97 (Dense)             (None, 128)               128128    
_________________________________________________________________
dropout_60 (Dropout)         (None, 128)               0         
_________________________________________________________________
dense_98 (Dense)             (None, 8)                 1032      
_________________________________________________________________
dropout_61 (Dropout)         (None, 8)                 0         
_________________________________________________________________
dense_99 (Dense)             (None, 2)                 18        
=================================================================
Total params: 129,178
Trainable params: 129,178
Non-trainable params: 0
_________________________________________________________________
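The parameter counts reported by model.summary() can be verified by hand: a Dense layer with n_in inputs and n_out units has n_in * n_out weights plus n_out biases, and dropout layers add no parameters.

```python
def dense_params(n_in, n_out):
    # weight matrix plus bias vector
    return n_in * n_out + n_out

# The three Dense layers above: 1000 -> 128, 128 -> 8, 8 -> 2
layer_sizes = [(1000, 128), (128, 8), (8, 2)]
counts = [dense_params(i, o) for i, o in layer_sizes]
print(counts, sum(counts))  # [128128, 1032, 18] 129178
```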

5. Training the model

Run the model here. Experiment with different batch_size values and numbers of epochs!

model.fit(X_train, y_train, epochs=30, batch_size=1000)
Epoch 1/30
25000/25000 [==============================] - 1s 45us/step - loss: 0.2456 - acc: 0.6180
Epoch 2/30
25000/25000 [==============================] - 0s 13us/step - loss: 0.0914 - acc: 0.8818
...
Epoch 29/30
25000/25000 [==============================] - 0s 15us/step - loss: 0.0909 - acc: 0.8830
Epoch 30/30
25000/25000 [==============================] - 0s 17us/step - loss: 0.0907 - acc: 0.8834
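With 25000 training samples and batch_size=1000, each epoch performs 25 gradient updates, which is why the epochs above run so quickly:

```python
import math

# steps per epoch = number of batches needed to cover the training set
steps_per_epoch = math.ceil(25000 / 1000)
print(steps_per_epoch, steps_per_epoch * 30)  # 25 750 (updates per epoch, total over 30 epochs)
```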

6. Evaluating the model

You can evaluate the model on the test set, which gives you the model's accuracy. Can you get above 85%?

train_score = model.evaluate(X_train, y_train, verbose=0)
test_score = model.evaluate(X_test, y_test, verbose=0)

print("train_Accuracy: ", train_score[1])
print("test_Accuracy: ", test_score[1])
train_Accuracy:  0.90024
test_Accuracy:  0.86136
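The accuracy reported by model.evaluate amounts to comparing the argmax of the predicted probabilities with the argmax of the one-hot labels; a small numpy sketch (accuracy here is our own illustration of the metric, not the Keras internals):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of rows where the predicted class matches the true class."""
    return float(np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1)))

y_true = np.array([[1, 0], [0, 1], [0, 1], [1, 0]])          # one-hot labels
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.7, 0.3]])  # probabilities
print(accuracy(y_true, y_pred))  # 0.75 (three of four correct)
```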