I am using a TensorFlow model and training it with the following code:

model.fit(train_dataset,
          steps_per_epoch=10000,
          validation_data=test_dataset,
          epochs=20000)

Here steps_per_epoch is 10000 and epochs is 20000.
Is it possible to split the training time across multiple days, for example:

day 1:
model.fit(..., steps_per_epoch=10000, ..., epochs=10, ...)
model.fit(..., steps_per_epoch=10000, ..., epochs=20, ...)
model.fit(..., steps_per_epoch=10000, ..., epochs=30, ...)
day 2:
model.fit(..., steps_per_epoch=10000, ..., epochs=100, ...)
day 3:
model.fit(..., steps_per_epoch=10000, ..., epochs=5, ...)
day (n):
model.fit(..., steps_per_epoch=10000, ..., epochs=n, ...)
so that the total number of epochs works out to:

20000 = (day1 + day2 + day3 + ... + dayn)

Can I simply stop model.fit and start model.fit again on another day? Is that equivalent to running it once with epochs=20000?
Reply from a uj5u.com user:
You can save your model to a pickle file at the end of each day, then load it the next day and continue training.

Train the model on day 1:
import tensorflow_datasets as tfds
import tensorflow as tf
import joblib

# Load Fashion-MNIST and build the input pipelines
train, test = tfds.load(
    'fashion_mnist',
    shuffle_files=True,
    as_supervised=True,
    split=['train', 'test']
)
train = train.repeat(15).batch(64).prefetch(tf.data.AUTOTUNE)
test = test.batch(64).prefetch(tf.data.AUTOTUNE)

model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(28, 28, 1)))
model.add(tf.keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(tf.keras.layers.Dropout(rate=.4))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(rate=.4))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(rate=.4))
# softmax produces the class probabilities expected by from_logits=False
model.add(tf.keras.layers.Dense(10, activation='softmax'))

model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              optimizer='adam', metrics=['accuracy'])
model.summary()

# The batch size is fixed by the dataset pipeline (batch(64) above)
model.fit(train, steps_per_epoch=150, epochs=3, verbose=1)
model.evaluate(test, verbose=1)
joblib.dump(model, 'model_day_1.pkl')
Output after day 1:
Epoch 1/3
150/150 [==============================] - 7s 17ms/step - loss: 23.0504 - accuracy: 0.5786
Epoch 2/3
150/150 [==============================] - 2s 16ms/step - loss: 0.9366 - accuracy: 0.7208
Epoch 3/3
150/150 [==============================] - 3s 17ms/step - loss: 0.7321 - accuracy: 0.7682
157/157 [==============================] - 1s 8ms/step - loss: 0.4627 - accuracy: 0.8405
INFO:tensorflow:Assets written to: ram://***/assets
INFO:tensorflow:Assets written to: ram://***/assets
['model_day_1.pkl']
Load the model on day 2 and continue training:
model = joblib.load("/content/model_day_1.pkl")
model.fit(train, steps_per_epoch=150, epochs=3, verbose=1)
model.evaluate(test, verbose=1)
joblib.dump(model, 'model_day_2.pkl')
Output after day 2:
Epoch 1/3
150/150 [==============================] - 3s 17ms/step - loss: 0.6288 - accuracy: 0.7981
Epoch 2/3
150/150 [==============================] - 2s 16ms/step - loss: 0.5290 - accuracy: 0.8222
Epoch 3/3
150/150 [==============================] - 2s 16ms/step - loss: 0.5124 - accuracy: 0.8272
157/157 [==============================] - 1s 5ms/step - loss: 0.4131 - accuracy: 0.8598
INFO:tensorflow:Assets written to: ram://***/assets
INFO:tensorflow:Assets written to: ram://***/assets
['model_day_2.pkl']
Load the model on day 3 and continue training:
model = joblib.load("/content/model_day_2.pkl")
model.fit(train, steps_per_epoch=150, epochs=3, verbose=1)
model.evaluate(test, verbose=1)
joblib.dump(model, 'model_day_3.pkl')
Output after day 3:
Epoch 1/3
150/150 [==============================] - 3s 17ms/step - loss: 0.4579 - accuracy: 0.8498
Epoch 2/3
150/150 [==============================] - 2s 17ms/step - loss: 0.4078 - accuracy: 0.8589
Epoch 3/3
150/150 [==============================] - 2s 16ms/step - loss: 0.4073 - accuracy: 0.8560
157/157 [==============================] - 1s 5ms/step - loss: 0.3997 - accuracy: 0.8603
INFO:tensorflow:Assets written to: ram://***/assets
INFO:tensorflow:Assets written to: ram://***/assets
['model_day_3.pkl']
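As an alternative to pickling, Keras has a built-in save/load mechanism that persists the architecture, weights, and optimizer state, so a reloaded model resumes training exactly where it stopped. A minimal sketch follows, assuming TF 2.12+ for the .keras format; the tiny stand-in model, random data, and file name are illustrative, not the model from the answer above.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model so the sketch runs quickly
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')

x = np.random.rand(32, 4).astype('float32')
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)        # "day 1" training

model.save('model_day_1.keras')             # end of day: persist everything

# Next day: restore and continue; no re-compiling needed, and the
# optimizer state (e.g. Adam moments) is carried over
restored = tf.keras.models.load_model('model_day_1.keras')
restored.fit(x, y, epochs=1, verbose=0)     # "day 2" picks up where day 1 left off
```

The same pattern scales to the full Fashion-MNIST model: just swap the file name per day, as the pickle version above does.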
Reply from another uj5u.com user:
I think you are asking whether calling model.fit multiple times will continue training the model (rather than starting from scratch). The answer is yes, it will. However, each call to model.fit returns a new History object, so if you are capturing it, you may want to handle each one separately.

So running

model.fit(..., epochs=10)
model.fit(..., epochs=10)

will train the model for a total of 20 epochs.
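One way to handle the per-call History objects is to concatenate their history dicts into a single continuous curve; initial_epoch keeps the epoch numbering consistent across calls. A minimal sketch, using a tiny stand-in model and an illustrative variable name (merged is not a Keras API):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(loss='mse', optimizer='adam')

x = np.random.rand(16, 4).astype('float32')
y = np.random.rand(16, 1).astype('float32')

# "Day 1": epochs 1-2
h1 = model.fit(x, y, epochs=2, verbose=0)

# "Day 2": epochs 3-4; initial_epoch=2 resumes the epoch counter,
# so logs and callbacks see epochs 3 and 4, not 1 and 2 again
h2 = model.fit(x, y, epochs=4, initial_epoch=2, verbose=0)

# Stitch the two History objects into one continuous record
merged = {k: h1.history[k] + h2.history[k] for k in h1.history}
print(len(merged['loss']))  # one loss value per epoch across both calls
```

Note that epochs is the index of the final epoch, not a count: with initial_epoch=2 and epochs=4, the second call runs two more epochs.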
Please credit the source when reprinting. Original link: https://www.uj5u.com/houduan/493868.html