
[DeepLearning] Activation Functions, Optimizers, and Callback Functions

by YoungD 2023. 9. 28.
 

Goals

  • Find the best-performing combination of activation function and gradient-descent optimizer
  • Learn about callback functions that help with modeling (model checkpointing, early stopping)!
Data loading
In [1]:
from tensorflow.keras.datasets import mnist  # handwritten-digit dataset
In [2]:
# Split into train and test sets
(X_train,y_train), (X_test,y_test) = mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
In [3]:
# Check the shapes
(X_train.shape,y_train.shape), (X_test.shape,y_test.shape)
Out[3]:
(((60000, 28, 28), (60000,)), ((10000, 28, 28), (10000,)))
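The images come back as 28×28 uint8 arrays with pixel values 0–255. The models below train on the raw pixels; a common extra step (not done in this notebook) is scaling to [0, 1], which tends to make gradient descent better behaved. A minimal sketch with NumPy, using a synthetic stand-in array:

```python
import numpy as np

# Stand-in for one MNIST image: 28x28 uint8 pixels in 0..255
img = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Scale to float32 in [0, 1] -- what X_train / 255.0 would do for the real data
scaled = img.astype("float32") / 255.0

print(scaled.dtype, scaled.min(), scaled.max())
```

For the real data this would simply be `X_train = X_train / 255.0` (and likewise for `X_test`) before calling fit.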

 

Performance comparison of activation function + optimizer combinations

  1. sigmoid + SGD
  2. relu + SGD
  3. relu + Adam
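The three experiments differ only in the hidden-layer activation and the optimizer, so the network could also be built once with a small helper. A sketch (the build_model name and signature are my own, not from the notebook):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense

def build_model(activation):
    """5 hidden layers (64, 128, 256, 128, 64) with the given activation."""
    model = Sequential()
    model.add(Input(shape=(28, 28)))   # 28x28 grayscale image
    model.add(Flatten())               # 2D image -> 1D vector of 784 pixels
    for units in (64, 128, 256, 128, 64):
        model.add(Dense(units=units, activation=activation))
    model.add(Dense(units=10, activation='softmax'))  # 10 digit classes
    return model
```

With this, `build_model('sigmoid')` gives model1 and `build_model('relu')` gives model2/model3; only the compile step (SGD vs Adam) changes between experiments.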
In [4]:
# Import libraries
 
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer, Dense, Flatten
from tensorflow.keras.optimizers import SGD, Adam  # gradient-descent optimizer classes
In [5]:
# 1. sigmoid + SGD
 
# 1) Network architecture
 
# Skeleton
model1 = Sequential()
 
 
# Input layer
 
# Image data (2D -> 1D)
model1.add(Flatten())
 
 
# Hidden layers (5 layers: 64, 128, 256, 128, 64)
model1.add(Dense(units = 64, activation = 'sigmoid'))
model1.add(Dense(units = 128, activation = 'sigmoid'))
model1.add(Dense(units = 256, activation = 'sigmoid'))
model1.add(Dense(units = 128, activation = 'sigmoid'))
model1.add(Dense(units = 64, activation = 'sigmoid'))
 
 
# Output layer
model1.add(Dense(units = 10, activation = 'softmax'))
In [6]:
# 2) Compile: loss, optimizer, and metrics
model1.compile(loss = 'sparse_categorical_crossentropy',
               optimizer = SGD(learning_rate = 0.01),  # SGD default learning rate: 0.01
               metrics = ['accuracy'])
In [7]:
# 3) Train for 20 epochs
h1 = model1.fit(X_train, y_train, epochs = 20,
                validation_split = 0.2,
                batch_size = 128)
Epoch 1/20
375/375 [==============================] - 8s 14ms/step - loss: 2.3126 - accuracy: 0.1140 - val_loss: 2.3022 - val_accuracy: 0.1060
Epoch 2/20
375/375 [==============================] - 4s 12ms/step - loss: 2.3013 - accuracy: 0.1140 - val_loss: 2.3019 - val_accuracy: 0.1060
Epoch 3/20
375/375 [==============================] - 4s 12ms/step - loss: 2.3011 - accuracy: 0.1138 - val_loss: 2.3020 - val_accuracy: 0.1060
Epoch 4/20
375/375 [==============================] - 3s 7ms/step - loss: 2.3010 - accuracy: 0.1140 - val_loss: 2.3020 - val_accuracy: 0.1060
Epoch 5/20
375/375 [==============================] - 2s 6ms/step - loss: 2.3010 - accuracy: 0.1138 - val_loss: 2.3017 - val_accuracy: 0.1060
Epoch 6/20
375/375 [==============================] - 2s 6ms/step - loss: 2.3009 - accuracy: 0.1140 - val_loss: 2.3017 - val_accuracy: 0.1060
Epoch 7/20
375/375 [==============================] - 3s 8ms/step - loss: 2.3008 - accuracy: 0.1140 - val_loss: 2.3011 - val_accuracy: 0.1060
Epoch 8/20
375/375 [==============================] - 3s 8ms/step - loss: 2.3006 - accuracy: 0.1142 - val_loss: 2.3015 - val_accuracy: 0.1060
Epoch 9/20
375/375 [==============================] - 2s 6ms/step - loss: 2.3005 - accuracy: 0.1140 - val_loss: 2.3012 - val_accuracy: 0.1060
Epoch 10/20
375/375 [==============================] - 2s 6ms/step - loss: 2.3004 - accuracy: 0.1140 - val_loss: 2.3009 - val_accuracy: 0.1060
Epoch 11/20
375/375 [==============================] - 2s 6ms/step - loss: 2.3003 - accuracy: 0.1140 - val_loss: 2.3010 - val_accuracy: 0.1060
Epoch 12/20
375/375 [==============================] - 3s 8ms/step - loss: 2.3001 - accuracy: 0.1148 - val_loss: 2.3010 - val_accuracy: 0.1060
Epoch 13/20
375/375 [==============================] - 3s 7ms/step - loss: 2.3000 - accuracy: 0.1140 - val_loss: 2.3009 - val_accuracy: 0.1060
Epoch 14/20
375/375 [==============================] - 2s 6ms/step - loss: 2.2998 - accuracy: 0.1140 - val_loss: 2.3010 - val_accuracy: 0.1060
Epoch 15/20
375/375 [==============================] - 2s 6ms/step - loss: 2.2997 - accuracy: 0.1140 - val_loss: 2.3004 - val_accuracy: 0.1060
Epoch 16/20
375/375 [==============================] - 2s 7ms/step - loss: 2.2995 - accuracy: 0.1140 - val_loss: 2.3006 - val_accuracy: 0.1060
Epoch 17/20
375/375 [==============================] - 3s 7ms/step - loss: 2.2992 - accuracy: 0.1140 - val_loss: 2.3008 - val_accuracy: 0.1060
Epoch 18/20
375/375 [==============================] - 3s 8ms/step - loss: 2.2992 - accuracy: 0.1140 - val_loss: 2.3005 - val_accuracy: 0.1060
Epoch 19/20
375/375 [==============================] - 2s 7ms/step - loss: 2.2991 - accuracy: 0.1140 - val_loss: 2.3006 - val_accuracy: 0.1060
Epoch 20/20
375/375 [==============================] - 2s 6ms/step - loss: 2.2990 - accuracy: 0.1140 - val_loss: 2.2999 - val_accuracy: 0.1060
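The sigmoid + SGD model is stuck at ~11% accuracy, which is chance level for 10 classes: it is not learning at all. The usual culprit is the vanishing-gradient problem: the sigmoid derivative σ'(x) = σ(x)(1 − σ(x)) never exceeds 0.25, so backpropagating through 5 hidden layers multiplies the gradient by at most 0.25 per layer. A quick numeric check (plain Python, not from the notebook):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# The derivative peaks at x = 0
print(sigmoid_grad(0.0))  # 0.25

# Best case across 5 hidden layers: gradient shrinks by 0.25 each layer,
# so early layers receive almost no learning signal
print(0.25 ** 5)  # 0.0009765625
```

ReLU avoids this because its derivative is exactly 1 on the active side, which is why the next two experiments actually train.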

 

2. relu + SGD
In [8]:
# 1) Network architecture
 
# Skeleton
model2 = Sequential()
 
 
# Input layer
# Image data (2D -> 1D)
model2.add(Flatten())
 
 
# Hidden layers (5 layers: 64, 128, 256, 128, 64)
model2.add(Dense(units = 64, activation = 'relu'))
model2.add(Dense(units = 128, activation = 'relu'))
model2.add(Dense(units = 256, activation = 'relu'))
model2.add(Dense(units = 128, activation = 'relu'))
model2.add(Dense(units = 64, activation = 'relu'))
 
 
# Output layer
model2.add(Dense(units = 10, activation = 'softmax'))
 
 
# 2) Compile: loss, optimizer, and metrics
model2.compile(loss = 'sparse_categorical_crossentropy',
               optimizer = SGD(learning_rate = 0.01),  # SGD default learning rate: 0.01
               metrics = ['accuracy'])
 
h2 = model2.fit(X_train, y_train, epochs = 20,
                validation_split = 0.2,
                batch_size = 128)
 
Epoch 1/20
375/375 [==============================] - 3s 8ms/step - loss: 1.9368 - accuracy: 0.4890 - val_loss: 1.0459 - val_accuracy: 0.6582
Epoch 2/20
375/375 [==============================] - 3s 8ms/step - loss: 0.5854 - accuracy: 0.8158 - val_loss: 0.3824 - val_accuracy: 0.8844
Epoch 3/20
375/375 [==============================] - 2s 5ms/step - loss: 0.3553 - accuracy: 0.8939 - val_loss: 0.3114 - val_accuracy: 0.9076
Epoch 4/20
375/375 [==============================] - 2s 5ms/step - loss: 0.2957 - accuracy: 0.9119 - val_loss: 0.2811 - val_accuracy: 0.9149
Epoch 5/20
375/375 [==============================] - 2s 6ms/step - loss: 0.2659 - accuracy: 0.9187 - val_loss: 0.2454 - val_accuracy: 0.9236
Epoch 6/20
375/375 [==============================] - 2s 6ms/step - loss: 0.2373 - accuracy: 0.9274 - val_loss: 0.2440 - val_accuracy: 0.9258
Epoch 7/20
375/375 [==============================] - 3s 9ms/step - loss: 0.2168 - accuracy: 0.9333 - val_loss: 0.2410 - val_accuracy: 0.9276
Epoch 8/20
375/375 [==============================] - 2s 6ms/step - loss: 0.2023 - accuracy: 0.9380 - val_loss: 0.2150 - val_accuracy: 0.9380
Epoch 9/20
375/375 [==============================] - 2s 6ms/step - loss: 0.1881 - accuracy: 0.9422 - val_loss: 0.2168 - val_accuracy: 0.9358
Epoch 10/20
375/375 [==============================] - 2s 6ms/step - loss: 0.1761 - accuracy: 0.9462 - val_loss: 0.2094 - val_accuracy: 0.9382
Epoch 11/20
375/375 [==============================] - 3s 7ms/step - loss: 0.1651 - accuracy: 0.9489 - val_loss: 0.1988 - val_accuracy: 0.9410
Epoch 12/20
375/375 [==============================] - 4s 10ms/step - loss: 0.1569 - accuracy: 0.9513 - val_loss: 0.1887 - val_accuracy: 0.9452
Epoch 13/20
375/375 [==============================] - 2s 6ms/step - loss: 0.1468 - accuracy: 0.9537 - val_loss: 0.1957 - val_accuracy: 0.9424
Epoch 14/20
375/375 [==============================] - 2s 6ms/step - loss: 0.1395 - accuracy: 0.9567 - val_loss: 0.1995 - val_accuracy: 0.9431
Epoch 15/20
375/375 [==============================] - 2s 6ms/step - loss: 0.1321 - accuracy: 0.9585 - val_loss: 0.1969 - val_accuracy: 0.9420
Epoch 16/20
375/375 [==============================] - 2s 6ms/step - loss: 0.1263 - accuracy: 0.9598 - val_loss: 0.1850 - val_accuracy: 0.9475
Epoch 17/20
375/375 [==============================] - 3s 9ms/step - loss: 0.1202 - accuracy: 0.9611 - val_loss: 0.1783 - val_accuracy: 0.9502
Epoch 18/20
375/375 [==============================] - 3s 7ms/step - loss: 0.1151 - accuracy: 0.9632 - val_loss: 0.1740 - val_accuracy: 0.9498
Epoch 19/20
375/375 [==============================] - 2s 7ms/step - loss: 0.1102 - accuracy: 0.9644 - val_loss: 0.1714 - val_accuracy: 0.9518
Epoch 20/20
375/375 [==============================] - 2s 6ms/step - loss: 0.1056 - accuracy: 0.9662 - val_loss: 0.1739 - val_accuracy: 0.9512

 

3. relu + Adam
In [9]:
# 1) Network architecture
 
# Skeleton
model3 = Sequential()
 
 
# Input layer
# Image data (2D -> 1D)
model3.add(Flatten())
 
 
# Hidden layers (5 layers: 64, 128, 256, 128, 64)
model3.add(Dense(units = 64, activation = 'relu'))
model3.add(Dense(units = 128, activation = 'relu'))
model3.add(Dense(units = 256, activation = 'relu'))
model3.add(Dense(units = 128, activation = 'relu'))
model3.add(Dense(units = 64, activation = 'relu'))
 
 
# Output layer
model3.add(Dense(units = 10, activation = 'softmax'))
 
 
# 2) Compile: loss, optimizer, and metrics
model3.compile(loss = 'sparse_categorical_crossentropy',
               optimizer = Adam(learning_rate = 0.001),  # Adam default learning rate: 0.001
               metrics = ['accuracy'])
 
h3 = model3.fit(X_train, y_train, epochs = 20,
                validation_split = 0.2,
                batch_size = 128)
 
Epoch 1/20
375/375 [==============================] - 4s 7ms/step - loss: 0.6967 - accuracy: 0.8301 - val_loss: 0.2680 - val_accuracy: 0.9217
Epoch 2/20
375/375 [==============================] - 3s 9ms/step - loss: 0.2238 - accuracy: 0.9338 - val_loss: 0.2016 - val_accuracy: 0.9394
Epoch 3/20
375/375 [==============================] - 3s 9ms/step - loss: 0.1512 - accuracy: 0.9544 - val_loss: 0.1715 - val_accuracy: 0.9513
Epoch 4/20
375/375 [==============================] - 3s 8ms/step - loss: 0.1211 - accuracy: 0.9628 - val_loss: 0.1777 - val_accuracy: 0.9527
Epoch 5/20
375/375 [==============================] - 3s 7ms/step - loss: 0.1050 - accuracy: 0.9683 - val_loss: 0.1633 - val_accuracy: 0.9553
Epoch 6/20
375/375 [==============================] - 3s 8ms/step - loss: 0.0912 - accuracy: 0.9719 - val_loss: 0.1604 - val_accuracy: 0.9593
Epoch 7/20
375/375 [==============================] - 3s 9ms/step - loss: 0.0797 - accuracy: 0.9755 - val_loss: 0.1383 - val_accuracy: 0.9609
Epoch 8/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0746 - accuracy: 0.9763 - val_loss: 0.1639 - val_accuracy: 0.9603
Epoch 9/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0621 - accuracy: 0.9813 - val_loss: 0.1481 - val_accuracy: 0.9623
Epoch 10/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0582 - accuracy: 0.9818 - val_loss: 0.1465 - val_accuracy: 0.9635
Epoch 11/20
375/375 [==============================] - 4s 10ms/step - loss: 0.0593 - accuracy: 0.9820 - val_loss: 0.1555 - val_accuracy: 0.9639
Epoch 12/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0551 - accuracy: 0.9826 - val_loss: 0.1320 - val_accuracy: 0.9665
Epoch 13/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0461 - accuracy: 0.9858 - val_loss: 0.1784 - val_accuracy: 0.9620
Epoch 14/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0532 - accuracy: 0.9842 - val_loss: 0.1538 - val_accuracy: 0.9619
Epoch 15/20
375/375 [==============================] - 3s 8ms/step - loss: 0.0435 - accuracy: 0.9866 - val_loss: 0.1532 - val_accuracy: 0.9632
Epoch 16/20
375/375 [==============================] - 3s 9ms/step - loss: 0.0462 - accuracy: 0.9861 - val_loss: 0.1399 - val_accuracy: 0.9693
Epoch 17/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0385 - accuracy: 0.9880 - val_loss: 0.1551 - val_accuracy: 0.9682
Epoch 18/20
375/375 [==============================] - 3s 8ms/step - loss: 0.0366 - accuracy: 0.9886 - val_loss: 0.1609 - val_accuracy: 0.9653
Epoch 19/20
375/375 [==============================] - 3s 7ms/step - loss: 0.0404 - accuracy: 0.9882 - val_loss: 0.1641 - val_accuracy: 0.9696
Epoch 20/20
375/375 [==============================] - 5s 14ms/step - loss: 0.0350 - accuracy: 0.9895 - val_loss: 0.1528 - val_accuracy: 0.9702
In [10]:
import matplotlib.pyplot as plt
plt.figure(figsize=(15,5))
 
# sigmoid + SGD
plt.plot(h1.history['accuracy'], label="sigmoid+SGD train acc")
plt.plot(h1.history['val_accuracy'], label="sigmoid+SGD validation acc")
 
# relu + SGD
plt.plot(h2.history['accuracy'], label="relu+SGD train acc")
plt.plot(h2.history['val_accuracy'], label="relu+SGD validation acc")
 
# relu + Adam
plt.plot(h3.history['accuracy'], label="relu+Adam train acc")
plt.plot(h3.history['val_accuracy'], label="relu+Adam validation acc")
 
plt.legend()
plt.show()

 

Callback functions

  • Model checkpointing and early stopping
  • Model saving (ModelCheckpoint)
    • If training runs through every scheduled epoch, the model can end up overfitting -> ModelCheckpoint can save a well-generalized model partway through training!!
  • Early stopping (EarlyStopping)
    • With a large epoch count, performance often stops improving after a certain point -> wasted time -> training should stop early once the model is no longer improving
In [11]:
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
 
# Save the model mid-training
# Stop training mid-run
In [12]:
# Model checkpointing
# Path where checkpoints will be saved
model_path = '/content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_{epoch:02d}_{val_accuracy:0.2f}.hdf5'
mckp = ModelCheckpoint(filepath = model_path,      # save path
                       verbose = 1,                # 1: print a log line on save, 0: silent
                       save_best_only = True,      # save only when the monitored metric hits a new best
                       monitor = 'val_accuracy')   # metric used to judge the best model
 
# The callback object is created here --
# it does nothing until passed to fit()!
 
In [13]:
# Early stopping
early = EarlyStopping(monitor = 'val_accuracy',  # metric to watch
                      verbose = 1,               # print a log line when stopping
                      patience = 10)             # epochs to wait for improvement before stopping
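One option not used in this notebook: EarlyStopping can also roll the model back to its best weights when it stops, via restore_best_weights=True. Otherwise, when training halts, the model is left with the weights from the final (worse) epoch. A sketch:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Alternative (not used here): on stopping, restore the weights
# from the epoch with the best monitored value
early_best = EarlyStopping(monitor = 'val_accuracy',
                           patience = 10,
                           restore_best_weights = True,
                           verbose = 1)
```

With restore_best_weights=True you can often skip reloading a checkpoint file after training, although ModelCheckpoint is still useful for keeping the model on disk.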
 
In [14]:
# Modeling with the third combination (relu + Adam)
 
# 1) Network architecture
 
# Skeleton
model3 = Sequential()
 
 
# Input layer
# Image data (2D -> 1D)
model3.add(Flatten())
 
 
# Hidden layers (5 layers: 64, 128, 256, 128, 64)
model3.add(Dense(units = 64, activation = 'relu'))
model3.add(Dense(units = 128, activation = 'relu'))
model3.add(Dense(units = 256, activation = 'relu'))
model3.add(Dense(units = 128, activation = 'relu'))
model3.add(Dense(units = 64, activation = 'relu'))
 
# Output layer
model3.add(Dense(units = 10, activation = 'softmax'))
 
 
# 2) Compile: loss, optimizer, and metrics
model3.compile(loss = 'sparse_categorical_crossentropy',
               optimizer = Adam(learning_rate = 0.001),  # Adam default learning rate: 0.001
               metrics = ['accuracy'])
 
h3 = model3.fit(X_train, y_train, epochs = 1000,
                validation_split = 0.2,
                batch_size = 128,
                callbacks = [mckp, early])  # register both callbacks here

 

Epoch 1/1000
375/375 [==============================] - ETA: 0s - loss: 1.0119 - accuracy: 0.8026
Epoch 1: val_accuracy improved from -inf to 0.90000, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_01_0.90.hdf5
375/375 [==============================] - 7s 11ms/step - loss: 1.0119 - accuracy: 0.8026 - val_loss: 0.3360 - val_accuracy: 0.9000
Epoch 2/1000
370/375 [============================>.] - ETA: 0s - loss: 0.2448 - accuracy: 0.9289
Epoch 2: val_accuracy improved from 0.90000 to 0.93008, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_02_0.93.hdf5
375/375 [==============================] - 3s 9ms/step - loss: 0.2432 - accuracy: 0.9294 - val_loss: 0.2422 - val_accuracy: 0.9301
Epoch 3/1000
371/375 [============================>.] - ETA: 0s - loss: 0.1790 - accuracy: 0.9460
Epoch 3: val_accuracy improved from 0.93008 to 0.93983, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_03_0.94.hdf5
375/375 [==============================] - 3s 7ms/step - loss: 0.1794 - accuracy: 0.9459 - val_loss: 0.1993 - val_accuracy: 0.9398
Epoch 4/1000
375/375 [==============================] - ETA: 0s - loss: 0.1412 - accuracy: 0.9575
Epoch 4: val_accuracy improved from 0.93983 to 0.95100, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_04_0.95.hdf5
375/375 [==============================] - 3s 7ms/step - loss: 0.1412 - accuracy: 0.9575 - val_loss: 0.1870 - val_accuracy: 0.9510
Epoch 5/1000
373/375 [============================>.] - ETA: 0s - loss: 0.1177 - accuracy: 0.9645
Epoch 5: val_accuracy improved from 0.95100 to 0.95283, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_05_0.95.hdf5
375/375 [==============================] - 3s 7ms/step - loss: 0.1175 - accuracy: 0.9646 - val_loss: 0.1719 - val_accuracy: 0.9528
Epoch 6/1000
375/375 [==============================] - ETA: 0s - loss: 0.1016 - accuracy: 0.9687
Epoch 6: val_accuracy did not improve from 0.95283
375/375 [==============================] - 3s 9ms/step - loss: 0.1016 - accuracy: 0.9687 - val_loss: 0.1862 - val_accuracy: 0.9525
Epoch 7/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0918 - accuracy: 0.9713
Epoch 7: val_accuracy improved from 0.95283 to 0.95883, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_07_0.96.hdf5
375/375 [==============================] - 3s 8ms/step - loss: 0.0917 - accuracy: 0.9714 - val_loss: 0.1673 - val_accuracy: 0.9588
Epoch 8/1000
372/375 [============================>.] - ETA: 0s - loss: 0.0812 - accuracy: 0.9747
Epoch 8: val_accuracy improved from 0.95883 to 0.95925, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_08_0.96.hdf5
375/375 [==============================] - 3s 8ms/step - loss: 0.0815 - accuracy: 0.9745 - val_loss: 0.1573 - val_accuracy: 0.9592
Epoch 9/1000
368/375 [============================>.] - ETA: 0s - loss: 0.0765 - accuracy: 0.9764
Epoch 9: val_accuracy improved from 0.95925 to 0.95933, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_09_0.96.hdf5
375/375 [==============================] - 3s 7ms/step - loss: 0.0766 - accuracy: 0.9764 - val_loss: 0.1637 - val_accuracy: 0.9593
Epoch 10/1000
373/375 [============================>.] - ETA: 0s - loss: 0.0717 - accuracy: 0.9773
Epoch 10: val_accuracy did not improve from 0.95933
375/375 [==============================] - 4s 12ms/step - loss: 0.0718 - accuracy: 0.9773 - val_loss: 0.1699 - val_accuracy: 0.9575
Epoch 11/1000
375/375 [==============================] - ETA: 0s - loss: 0.0631 - accuracy: 0.9803
Epoch 11: val_accuracy improved from 0.95933 to 0.95975, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_11_0.96.hdf5
375/375 [==============================] - 5s 13ms/step - loss: 0.0631 - accuracy: 0.9803 - val_loss: 0.1742 - val_accuracy: 0.9597
Epoch 12/1000
374/375 [============================>.] - ETA: 0s - loss: 0.0680 - accuracy: 0.9792
Epoch 12: val_accuracy improved from 0.95975 to 0.96417, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_12_0.96.hdf5
375/375 [==============================] - 4s 11ms/step - loss: 0.0679 - accuracy: 0.9792 - val_loss: 0.1490 - val_accuracy: 0.9642
Epoch 13/1000
372/375 [============================>.] - ETA: 0s - loss: 0.0622 - accuracy: 0.9811
Epoch 13: val_accuracy did not improve from 0.96417
375/375 [==============================] - 6s 15ms/step - loss: 0.0621 - accuracy: 0.9811 - val_loss: 0.1663 - val_accuracy: 0.9615
Epoch 14/1000
375/375 [==============================] - ETA: 0s - loss: 0.0559 - accuracy: 0.9827
Epoch 14: val_accuracy improved from 0.96417 to 0.96650, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_14_0.97.hdf5
375/375 [==============================] - 4s 10ms/step - loss: 0.0559 - accuracy: 0.9827 - val_loss: 0.1450 - val_accuracy: 0.9665
Epoch 15/1000
375/375 [==============================] - ETA: 0s - loss: 0.0499 - accuracy: 0.9842
Epoch 15: val_accuracy did not improve from 0.96650
375/375 [==============================] - 4s 10ms/step - loss: 0.0499 - accuracy: 0.9842 - val_loss: 0.1676 - val_accuracy: 0.9623
Epoch 16/1000
373/375 [============================>.] - ETA: 0s - loss: 0.0510 - accuracy: 0.9838
Epoch 16: val_accuracy did not improve from 0.96650
375/375 [==============================] - 5s 12ms/step - loss: 0.0511 - accuracy: 0.9838 - val_loss: 0.1560 - val_accuracy: 0.9661
Epoch 17/1000
375/375 [==============================] - ETA: 0s - loss: 0.0475 - accuracy: 0.9854
Epoch 17: val_accuracy did not improve from 0.96650
375/375 [==============================] - 4s 10ms/step - loss: 0.0475 - accuracy: 0.9854 - val_loss: 0.1579 - val_accuracy: 0.9657
Epoch 18/1000
369/375 [============================>.] - ETA: 0s - loss: 0.0488 - accuracy: 0.9845
Epoch 18: val_accuracy did not improve from 0.96650
375/375 [==============================] - 4s 10ms/step - loss: 0.0493 - accuracy: 0.9844 - val_loss: 0.1709 - val_accuracy: 0.9615
Epoch 19/1000
369/375 [============================>.] - ETA: 0s - loss: 0.0457 - accuracy: 0.9859
Epoch 19: val_accuracy did not improve from 0.96650
375/375 [==============================] - 3s 7ms/step - loss: 0.0471 - accuracy: 0.9858 - val_loss: 0.1663 - val_accuracy: 0.9655
Epoch 20/1000
369/375 [============================>.] - ETA: 0s - loss: 0.0426 - accuracy: 0.9869
Epoch 20: val_accuracy did not improve from 0.96650
375/375 [==============================] - 4s 10ms/step - loss: 0.0425 - accuracy: 0.9870 - val_loss: 0.1714 - val_accuracy: 0.9662
Epoch 21/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0348 - accuracy: 0.9897
Epoch 21: val_accuracy improved from 0.96650 to 0.96717, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_21_0.97.hdf5
375/375 [==============================] - 3s 7ms/step - loss: 0.0348 - accuracy: 0.9897 - val_loss: 0.1524 - val_accuracy: 0.9672
Epoch 22/1000
372/375 [============================>.] - ETA: 0s - loss: 0.0322 - accuracy: 0.9904
Epoch 22: val_accuracy did not improve from 0.96717
375/375 [==============================] - 3s 7ms/step - loss: 0.0322 - accuracy: 0.9903 - val_loss: 0.1761 - val_accuracy: 0.9644
Epoch 23/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0396 - accuracy: 0.9879
Epoch 23: val_accuracy improved from 0.96717 to 0.96925, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_23_0.97.hdf5
375/375 [==============================] - 3s 8ms/step - loss: 0.0394 - accuracy: 0.9879 - val_loss: 0.1527 - val_accuracy: 0.9693
Epoch 24/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0334 - accuracy: 0.9898
Epoch 24: val_accuracy did not improve from 0.96925
375/375 [==============================] - 4s 9ms/step - loss: 0.0334 - accuracy: 0.9898 - val_loss: 0.1522 - val_accuracy: 0.9678
Epoch 25/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0347 - accuracy: 0.9897
Epoch 25: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 7ms/step - loss: 0.0349 - accuracy: 0.9896 - val_loss: 0.1601 - val_accuracy: 0.9690
Epoch 26/1000
373/375 [============================>.] - ETA: 0s - loss: 0.0359 - accuracy: 0.9891
Epoch 26: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 8ms/step - loss: 0.0359 - accuracy: 0.9891 - val_loss: 0.1640 - val_accuracy: 0.9638
Epoch 27/1000
374/375 [============================>.] - ETA: 0s - loss: 0.0278 - accuracy: 0.9919
Epoch 27: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 7ms/step - loss: 0.0278 - accuracy: 0.9919 - val_loss: 0.1692 - val_accuracy: 0.9672
Epoch 28/1000
373/375 [============================>.] - ETA: 0s - loss: 0.0290 - accuracy: 0.9910
Epoch 28: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 8ms/step - loss: 0.0292 - accuracy: 0.9909 - val_loss: 0.1607 - val_accuracy: 0.9692
Epoch 29/1000
372/375 [============================>.] - ETA: 0s - loss: 0.0291 - accuracy: 0.9916
Epoch 29: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 8ms/step - loss: 0.0289 - accuracy: 0.9916 - val_loss: 0.1792 - val_accuracy: 0.9686
Epoch 30/1000
375/375 [==============================] - ETA: 0s - loss: 0.0259 - accuracy: 0.9924
Epoch 30: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 9ms/step - loss: 0.0259 - accuracy: 0.9924 - val_loss: 0.1600 - val_accuracy: 0.9686
Epoch 31/1000
374/375 [============================>.] - ETA: 0s - loss: 0.0278 - accuracy: 0.9916
Epoch 31: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 8ms/step - loss: 0.0279 - accuracy: 0.9916 - val_loss: 0.1621 - val_accuracy: 0.9691
Epoch 32/1000
371/375 [============================>.] - ETA: 0s - loss: 0.0216 - accuracy: 0.9937
Epoch 32: val_accuracy did not improve from 0.96925
375/375 [==============================] - 3s 8ms/step - loss: 0.0216 - accuracy: 0.9937 - val_loss: 0.1942 - val_accuracy: 0.9687
Epoch 33/1000
374/375 [============================>.] - ETA: 0s - loss: 0.0302 - accuracy: 0.9921
Epoch 33: val_accuracy improved from 0.96925 to 0.97017, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_33_0.97.hdf5
375/375 [==============================] - 3s 8ms/step - loss: 0.0302 - accuracy: 0.9921 - val_loss: 0.1528 - val_accuracy: 0.9702
Epoch 34/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0215 - accuracy: 0.9937
Epoch 34: val_accuracy improved from 0.97017 to 0.97083, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_34_0.97.hdf5
375/375 [==============================] - 3s 7ms/step - loss: 0.0215 - accuracy: 0.9937 - val_loss: 0.1745 - val_accuracy: 0.9708
Epoch 35/1000
373/375 [============================>.] - ETA: 0s - loss: 0.0203 - accuracy: 0.9942
Epoch 35: val_accuracy improved from 0.97083 to 0.97292, saving model to /content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/dm_35_0.97.hdf5
375/375 [==============================] - 3s 7ms/step - loss: 0.0203 - accuracy: 0.9942 - val_loss: 0.1681 - val_accuracy: 0.9729
Epoch 36/1000
372/375 [============================>.] - ETA: 0s - loss: 0.0247 - accuracy: 0.9927
Epoch 36: val_accuracy did not improve from 0.97292
375/375 [==============================] - 3s 7ms/step - loss: 0.0247 - accuracy: 0.9927 - val_loss: 0.1671 - val_accuracy: 0.9716
Epoch 37/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0226 - accuracy: 0.9937
Epoch 37: val_accuracy did not improve from 0.97292
375/375 [==============================] - 4s 9ms/step - loss: 0.0224 - accuracy: 0.9938 - val_loss: 0.1605 - val_accuracy: 0.9709
Epoch 38/1000
375/375 [==============================] - ETA: 0s - loss: 0.0195 - accuracy: 0.9948
Epoch 38: val_accuracy did not improve from 0.97292
375/375 [==============================] - 2s 6ms/step - loss: 0.0195 - accuracy: 0.9948 - val_loss: 0.2027 - val_accuracy: 0.9675
Epoch 39/1000
371/375 [============================>.] - ETA: 0s - loss: 0.0252 - accuracy: 0.9929
Epoch 39: val_accuracy did not improve from 0.97292
375/375 [==============================] - 3s 8ms/step - loss: 0.0251 - accuracy: 0.9930 - val_loss: 0.1996 - val_accuracy: 0.9671
Epoch 40/1000
369/375 [============================>.] - ETA: 0s - loss: 0.0234 - accuracy: 0.9936
Epoch 40: val_accuracy did not improve from 0.97292
375/375 [==============================] - 3s 7ms/step - loss: 0.0234 - accuracy: 0.9936 - val_loss: 0.1808 - val_accuracy: 0.9698
Epoch 41/1000
372/375 [============================>.] - ETA: 0s - loss: 0.0170 - accuracy: 0.9953
Epoch 41: val_accuracy did not improve from 0.97292
375/375 [==============================] - 3s 8ms/step - loss: 0.0171 - accuracy: 0.9952 - val_loss: 0.2064 - val_accuracy: 0.9660
Epoch 42/1000
372/375 [============================>.] - ETA: 0s - loss: 0.0264 - accuracy: 0.9928
Epoch 42: val_accuracy did not improve from 0.97292
375/375 [==============================] - 3s 8ms/step - loss: 0.0264 - accuracy: 0.9928 - val_loss: 0.1778 - val_accuracy: 0.9704
Epoch 43/1000
370/375 [============================>.] - ETA: 0s - loss: 0.0171 - accuracy: 0.9955
Epoch 43: val_accuracy did not improve from 0.97292
375/375 [==============================] - 2s 6ms/step - loss: 0.0170 - accuracy: 0.9955 - val_loss: 0.1571 - val_accuracy: 0.9700
Epoch 44/1000
374/375 [============================>.] - ETA: 0s - loss: 0.0176 - accuracy: 0.9955
Epoch 44: val_accuracy did not improve from 0.97292
375/375 [==============================] - 2s 7ms/step - loss: 0.0176 - accuracy: 0.9955 - val_loss: 0.1823 - val_accuracy: 0.9724
Epoch 45/1000
373/375 [============================>.] - ETA: 0s - loss: 0.0139 - accuracy: 0.9959
Epoch 45: val_accuracy did not improve from 0.97292
375/375 [==============================] - 3s 7ms/step - loss: 0.0139 - accuracy: 0.9959 - val_loss: 0.2186 - val_accuracy: 0.9665
Epoch 45: early stopping
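Training stopped at epoch 45 because val_accuracy had not improved for 10 epochs after its peak at epoch 35 (0.97292), so the best checkpoint on disk is the epoch-35 file. The {epoch:02d} and {val_accuracy:0.2f} placeholders in model_path are ordinary Python format fields, so the saved filename can be reconstructed; load_model is the standard Keras loader for restoring it (the commented path below is the Drive path used above):

```python
# How ModelCheckpoint resolves its filename template
template = 'dm_{epoch:02d}_{val_accuracy:0.2f}.hdf5'
best_name = template.format(epoch=35, val_accuracy=0.97292)
print(best_name)  # dm_35_0.97.hdf5 -- matches the file saved at epoch 35

# To restore the best model later (requires the checkpoint file on disk):
# from tensorflow.keras.models import load_model
# best = load_model('/content/drive/MyDrive/Colab Notebooks/DeepLearning/data/digit_model/' + best_name)
```

The restored model can then be evaluated on the held-out test set with `best.evaluate(X_test, y_test)`.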