First, download the code from here:

https://github.com/ageitgey/face_recognition#face-recognition

Then install all the required dependencies (the project can be installed with pip3 install face_recognition, which needs dlib built with Python bindings); I used Python 3.5.
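
If the install worked, a quick sanity check is simply importing both packages under the interpreter you plan to use; a minimal sketch (the __version__ attributes are assumed to exist, as in typical builds):

    import dlib
    import face_recognition

    # If these imports succeed, dlib was built with Python bindings for this interpreter
    print(dlib.__version__)
    print(face_recognition.__version__)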

Go into the examples directory and run the demos; under the hood they mostly just call into dlib through face_recognition's wrappers, for example:

    face_locations = face_recognition.face_locations(rgb_frame)
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
    face_landmarks_list = face_recognition.face_landmarks(rgb_frame)

I pulled in OpenCV and tweaked the display a bit so the webcam tracks the facial landmark points in real time. It is painfully slow, and not especially accurate either.

The code is simple:

import face_recognition
import cv2

# This is a super simple (but slow) example of running face recognition on live video from your webcam.
# There's a second example that's a little more complicated but runs faster.

# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)

# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    obama_face_encoding,
    biden_face_encoding
]
known_face_names = [
    "Barack Obama",
    "Joe Biden"
]

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_frame = frame[:, :, ::-1]

    # Find all the faces in the frame of video
    # (the recognition part is commented out; only the landmarks are used here)
    # face_locations = face_recognition.face_locations(rgb_frame)
    # face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
    face_landmarks_list = face_recognition.face_landmarks(rgb_frame)

    # Loop through each face in this frame of video
    # for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
    #     # See if the face is a match for the known face(s)
    #     matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
    #
    #     name = "Unknown"
    #
    #     # If a match was found in known_face_encodings, just use the first one.
    #     if True in matches:
    #         first_match_index = matches.index(True)
    #         name = known_face_names[first_match_index]
    #
    #     # Draw a box around the face
    #     cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
    #
    #     # Draw a label with a name below the face
    #     cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
    #     font = cv2.FONT_HERSHEY_DUPLEX
    #     cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Draw the landmark points of each face in this frame of video
    for face_landmarks in face_landmarks_list:
        facial_features = [
            'chin',
            'left_eyebrow',
            'right_eyebrow',
            'nose_bridge',
            'nose_tip',
            'left_eye',
            'right_eye',
            'top_lip',
            'bottom_lip'
        ]

        for facial_feature in facial_features:
            for point in face_landmarks[facial_feature]:
                cv2.circle(frame, point, 2, (0, 0, 255))  # small red dot at each landmark point

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
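
Most of the time goes into running the HOG face detector on a full-resolution frame on every iteration. The usual fix, and what the repo's "faster" webcam example does, is to detect on a downscaled copy of each frame and only draw on the original. A minimal sketch of that idea applied to the landmark version above (the 1/4 scale factor here is just a reasonable starting point):

    import cv2
    import face_recognition

    video_capture = cv2.VideoCapture(0)

    while True:
        ret, frame = video_capture.read()
        if not ret:
            break

        # Detect on a quarter-size copy: far fewer pixels for the face detector to scan
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_small_frame = small_frame[:, :, ::-1]

        for face_landmarks in face_recognition.face_landmarks(rgb_small_frame):
            for feature_points in face_landmarks.values():
                for (x, y) in feature_points:
                    # Scale the points back up before drawing on the full-size frame
                    cv2.circle(frame, (x * 4, y * 4), 2, (0, 0, 255))

        cv2.imshow('Video', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    video_capture.release()
    cv2.destroyAllWindows()

This trades some landmark precision for speed, since the detector and the 68-point predictor both see a smaller face.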
