The previous code could only read and train on small datasets; with a large dataset it runs into memory pressure. Following the official Keras documentation, I therefore reworked the previous code to stream data with a generator.

Facial keypoint detection with Keras

Dataset: https://pan.baidu.com/s/1cnAxJJmN9nQUVYj8w0WocA

Step 1: Prepare the required libraries (a quick version check is sketched after the list)

  • tensorflow 1.4.0
  • h5py 2.7.0
  • hdf5 1.8.15.1
  • Keras 2.0.8
  • opencv-python 3.3.0
  • numpy 1.13.3+mkl
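
To make sure the environment matches the versions above, here is a minimal check, assuming the packages listed above are already installed:

 import tensorflow as tf
 import keras
 import h5py
 import cv2
 import numpy as np

 # Print the installed versions to compare with the list above
 print("tensorflow:", tf.__version__)
 print("keras:", keras.__version__)
 print("h5py:", h5py.__version__)
 print("opencv-python:", cv2.__version__)
 print("numpy:", np.__version__)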

Step 2: Prepare the dataset:

I cropped every image down to a 178*178 square and adjusted the original labels accordingly (a rough cropping sketch follows).
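
This is a minimal sketch of how such a center crop could be done; the folder names raw/ and 50000train/ are assumptions for illustration, and the keypoint labels would need the same offsets applied:

 import os
 import cv2

 src_dir = 'raw/'          # hypothetical folder with the original images
 dst_dir = '50000train/'   # hypothetical output folder for the cropped images
 crop = 178

 for name in os.listdir(src_dir):
     img = cv2.imread(os.path.join(src_dir, name))
     h, w = img.shape[:2]
     top = (h - crop) // 2     # vertical offset of the center crop
     left = (w - crop) // 2    # horizontal offset of the center crop
     face = img[top:top + crop, left:left + crop]
     cv2.imwrite(os.path.join(dst_dir, name), face)
     # Each keypoint (x, y) in the label file must be shifted by (-left, -top) as well.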

Step 3: Convert the images and labels to numpy arrays:

Parameters and the data-loading generator:

 trainpath = 'E:/pycode/facial-keypoints-master/data/50000train/'
 testpath = 'E:/pycode/facial-keypoints-master/data/50000test/'
 imgsize = 178
 train_samples = 40000
 test_samples = 200
 batch_size = 32
 def __data_label__(path):
     f = open(path + "lable-40.txt", "r")
     j = 0
     i = -1
     datalist = []
     labellist = []
     while True:

         for line in f.readlines():
             i += 1
             j += 1
             a = line.replace("\n", "")
             b = a.split(",")
             lable = b[1:]
             # print(b[1:])
             # Normalize the labels (optional)
             # for num in b[1:]:
             #     lab = int(num) / 255.0
             #     labellist.append(lab)
             # lab = labellist[i * 10:j * 10]
             imgname = path + b[0]
             images = load_img(imgname)
             images = img_to_array(images).astype('float32')
             # Normalize the images (optional)
             # images /= 255.0
             image = np.expand_dims(images, axis=0)
             lables = np.array(lable).astype('float32')   # labels are read as strings, so cast explicitly

             # lable =keras.utils.np_utils.to_categorical(lable)
             # lable = np.expand_dims(lable, axis=0)
             lable = lables.reshape(1, 10)
             # This is a generator: yield one (image, label) pair at a time
             yield (image, lable)
         # Rewind the file so the generator keeps producing data across epochs
         f.seek(0)
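
Before training, the generator can be sanity-checked by pulling a single sample from it:

 gen = __data_label__(trainpath)
 x, y = next(gen)
 print(x.shape, y.shape)   # expected (1, 178, 178, 3) and (1, 10) if the images were cropped as in step 2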

Step 4: Build the network:

A very simple network is used here.

     def __CNN__(self):
         model = Sequential()#178*178*3
         model.add(Conv2D(32, (3, 3), input_shape=(imgsize, imgsize, 3)))
         model.add(Activation('relu'))
         model.add(MaxPooling2D(pool_size=(2, 2)))

         model.add(Conv2D(32, (3, 3)))
         model.add(Activation('relu'))
         model.add(MaxPooling2D(pool_size=(2, 2)))

         model.add(Conv2D(64, (3, 3)))
         model.add(Activation('relu'))
         model.add(MaxPooling2D(pool_size=(2, 2)))

         model.add(Flatten())
         model.add(Dense(64))
         model.add(Activation('relu'))
         model.add(Dropout(0.5))
         model.add(Dense(10))
         return model
 # Since this is a regression problem, no softmax is used on the output layer

The model.summary() output:

_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 176, 176, 32) 896
_________________________________________________________________
activation_1 (Activation) (None, 176, 176, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 88, 88, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 86, 86, 32) 9248
_________________________________________________________________
activation_2 (Activation) (None, 86, 86, 32) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 43, 43, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 41, 41, 64) 18496
_________________________________________________________________
activation_3 (Activation) (None, 41, 41, 64) 0
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 20, 20, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 25600) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 1638464
_________________________________________________________________
activation_4 (Activation) (None, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 650
=================================================================
Total params: 1,667,754
Trainable params: 1,667,754
Non-trainable params: 0
_________________________________________________________________
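
The output shapes in the summary follow directly from the layer arithmetic: each unpadded 3*3 convolution shrinks the spatial size by 2, and each 2*2 max-pooling halves it (rounding down). A quick check of the flattened size:

 size = 178
 for _ in range(3):           # three conv + pool stages
     size = (size - 2) // 2   # 3x3 'valid' convolution, then 2x2 pooling
 print(size, size * size * 64)   # 20 25600, matching flatten_1 above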

Step 5: Train the network:

 def train(model):
     # print(lable.shape)
     model.compile(loss='mse', optimizer='adam')
     # optimizer = SGD(lr=0.03, momentum=0.9, nesterov=True)
     # model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
     epoch_num = 14
     learning_rate = np.linspace(0.03, 0.01, epoch_num)
     change_lr = LearningRateScheduler(lambda epoch: float(learning_rate[epoch]))
     early_stop = EarlyStopping(monitor='val_loss', patience=20, verbose=1, mode='auto')
     check_point = ModelCheckpoint('CNN_model_final.h5', monitor='val_loss', verbose=0, save_best_only=True,
                                   save_weights_only=False, mode='auto', period=1)

     model.fit_generator(__data_label__(trainpath), callbacks=[check_point, early_stop, change_lr],
                         steps_per_epoch=int(train_samples // batch_size),
                         epochs=epoch_num, validation_steps=int(test_samples // batch_size),
                         validation_data=__data_label__(testpath))

     # model.fit(traindata, trainlabel, batch_size=32, epochs=50,
     #           validation_data=(testdata, testlabel))
     model.evaluate_generator(__data_label__(testpath), steps=10)

 def save(model, file_path=FILE_PATH):
     print('Model Saved.')
     model.save_weights(file_path)

 def predict(model, image):
     # Predict the keypoints for a single image
     image = cv2.resize(image, (imgsize, imgsize))
     image = image.astype('float32')   # assign the cast; astype() is not in-place
     image /= 255
     image = np.expand_dims(image, axis=0)   # the model expects a batch dimension

     result = model.predict(image)
     result = result * 1000 + 20   # map the output back to pixel coordinates (inverse of the label scaling)

     print(result)
     return result

Training uses fit_generator, with a linearly decaying learning rate applied through a LearningRateScheduler, plus early_stop and checkpoint callbacks. Note that the generator above yields one sample per step while steps_per_epoch is computed as train_samples // batch_size, so each epoch actually sees only that many single images; a batched variant is sketched below.
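
A minimal sketch of a batched generator, keeping the same label-file format and helpers as above (the function name __data_label_batched__ and its structure are assumptions for illustration, not part of the original code):

 def __data_label_batched__(path, batch_size=32):
     # Yields batches of (batch_size, 178, 178, 3) images and (batch_size, 10) labels, forever.
     while True:
         with open(path + "lable-40.txt", "r") as f:
             imgs, labels = [], []
             for line in f:
                 b = line.strip().split(",")
                 imgs.append(img_to_array(load_img(path + b[0])).astype('float32'))
                 labels.append(np.array(b[1:], dtype='float32'))
                 if len(imgs) == batch_size:
                     yield np.stack(imgs), np.stack(labels)
                     imgs, labels = [], []
             # any partial batch left at the end of the file is simply dropped

With this generator, steps_per_epoch = train_samples // batch_size corresponds to one full pass over the training data.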

Step 6: Validate on an image

 import tes_main
 from keras.preprocessing.image import load_img, img_to_array
 import numpy as np
 import cv2
 FILE_PATH = 'E:\\pycode\\facial-keypoints-master\\code\\CNN_model_final.h5'
 imgsize = 178
 def point(img, x, y):
     cv2.circle(img, (int(x), int(y)), 1, (0, 0, 255), 10)   # cv2.circle expects integer pixel coordinates

 Model = tes_main.Model()
 model = Model.__CNN__()
 Model.load(model,FILE_PATH)
 img = []
 # path = "D:\\Users\\a\\Pictures\\face_landmark_data\data\\test\\000803.jpg"
 path = "E:\pycode\\facial-keypoints-master\data\\50000test\\049971.jpg"
 # image = load_img(path)
 # img.append(img_to_array(image))
 # img_data = np.array(img)
 imgs = cv2.imread(path)
 # img_datas = np.reshape(imgs,(imgsize, imgsize,3))
 image = cv2.resize(imgs, (imgsize, imgsize))
 rects = Model.predict(model,imgs)

 for x, y, w, h, a,b,c,d,e,f in rects:
     point(image,x,y)
     point(image,w, h)
     point(image,a,b)
     point(image,c,d)
     point(image,e,f)

 cv2.imshow('img', image)
 cv2.waitKey(0)
 cv2.destroyAllWindows()

The complete code is as follows:

 from tensorflow.contrib.keras.api.keras.preprocessing.image import ImageDataGenerator,img_to_array
 from keras.models import Sequential
 from keras.layers.core import Dense, Dropout, Activation, Flatten
 from keras.layers.advanced_activations import PReLU
 from keras.layers.convolutional import Conv2D, MaxPooling2D,ZeroPadding2D
 from keras.preprocessing.image import load_img, img_to_array
 from keras.optimizers import  SGD
 import numpy as np
 import cv2
 from keras.callbacks import *
 import keras

 FILE_PATH = 'E:\\pycode\\facial-keypoints-master\\code\\CNN_model_final.h5'
 trainpath = 'E:/pycode/facial-keypoints-master/data/50000train/'
 testpath = 'E:/pycode/facial-keypoints-master/data/50000test/'
 imgsize = 178
 train_samples =40000
 test_samples = 200
 batch_size = 32
 def __data_label__(path):
     f = open(path + "lable-40.txt", "r")
     j = 0
     i = -1
     datalist = []
     labellist = []
     while True:

         for line in f.readlines():
             i += 1
             j += 1
             a = line.replace("\n", "")
             b = a.split(",")
             lable = b[1:]
             # print(b[1:])
             # Normalize the labels (optional)
             # for num in b[1:]:
             #     lab = int(num) / 255.0
             #     labellist.append(lab)
             # lab = labellist[i * 10:j * 10]
             imgname = path + b[0]
             images = load_img(imgname)
             images = img_to_array(images).astype('float32')
             # Normalize the images (optional)
             # images /= 255.0
             image = np.expand_dims(images, axis=0)
             lables = np.array(lable).astype('float32')   # labels are read as strings, so cast explicitly

             # lable =keras.utils.np_utils.to_categorical(lable)
             # lable = np.expand_dims(lable, axis=0)
             lable = lables.reshape(1, 10)

             yield (image, lable)
         # Rewind the file so the generator keeps producing data across epochs
         f.seek(0)

 ###############
 # Build the CNN model
 ###############

 # Model wrapper class
 class Model(object):
     def __CNN__(self):
         model = Sequential()  # input: 178*178*3
         model.add(Conv2D(32, (3, 3), input_shape=(imgsize, imgsize, 3)))
         model.add(Activation('relu'))
         model.add(MaxPooling2D(pool_size=(2, 2)))

         model.add(Conv2D(32, (3, 3)))
         model.add(Activation('relu'))
         model.add(MaxPooling2D(pool_size=(2, 2)))

         model.add(Conv2D(64, (3, 3)))
         model.add(Activation('relu'))
         model.add(MaxPooling2D(pool_size=(2, 2)))

         model.add(Flatten())
         model.add(Dense(64))
         model.add(Activation('relu'))
         model.add(Dropout(0.5))
         model.add(Dense(10))
         model.summary()
         return model

     def train(self,model):
         # print(lable.shape)
         model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
         # optimizer = SGD(lr=0.03, momentum=0.9, nesterov=True)
         # model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
         epoch_num = 10
         learning_rate = np.linspace(0.03, 0.01, epoch_num)
         change_lr = LearningRateScheduler(lambda epoch: float(learning_rate[epoch]))
         early_stop = EarlyStopping(monitor='val_loss', patience=20, verbose=1, mode='auto')
         check_point = ModelCheckpoint('CNN_model_final.h5', monitor='val_loss', verbose=0, save_best_only=True,
                                       save_weights_only=False, mode='auto', period=1)

         model.fit_generator(__data_label__(trainpath), callbacks=[check_point, early_stop, change_lr],
                             steps_per_epoch=int(train_samples // batch_size),
                             epochs=epoch_num, validation_steps=int(test_samples // batch_size),
                             validation_data=__data_label__(testpath))

         # model.fit(traindata, trainlabel, batch_size=32, epochs=50,
         #           validation_data=(testdata, testlabel))
         model.evaluate_generator(__data_label__(testpath), steps=10)

     def save(self,model, file_path=FILE_PATH):
         print('Model Saved.')
         model.save_weights(file_path)

     def load(self,model, file_path=FILE_PATH):
         print('Model Loaded.')
         model.load_weights(file_path)

     def predict(self, model, image):
         # Predict the keypoints for a single image
         print(image.shape)
         image = cv2.resize(image, (imgsize, imgsize))
         image = image.astype('float32')   # assign the cast; astype() is not in-place
         image = np.expand_dims(image, axis=0)   # the model expects a batch dimension

         result = model.predict(image)

         print(result)
         return result
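
The complete code above only defines the Model class; a minimal driver, assuming the file is saved as tes_main.py (the module name imported in step 6), might look like:

 if __name__ == '__main__':
     m = Model()
     cnn = m.__CNN__()
     m.train(cnn)    # fit_generator with the callbacks defined above
     m.save(cnn)     # writes the weights to FILE_PATH for the validation step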

  
