
January 25, 2020

FACE RECOGNITION API

Let's begin this blog with a short introduction to Face-api.js.

Face-api.js is a JavaScript library built on top of TensorFlow.js. Now the question is: what is TensorFlow.js?

Here is a short introduction from its official documentation site:

“TensorFlow.js is a JavaScript library for training and deploying machine learning models in the browser and Node.js.”

If you want to learn more, visit TensorFlow.js.

Let’s come back to our topic: what does Face-api.js do?

Face-api.js gives us ready-made face detection functionality that works with the built-in cameras of computers and mobile devices, so you can build applications such as:

  • Secure login with automatic face detection
  • Identifying people in a picture
  • Attendance with face detection
  • Facial expression detection

And many more, as far as your creative mind takes you.

So, let's start with our first question below.

What is Face recognition?

Face recognition is biometric software that recognizes or identifies a person by analyzing and comparing the patterns in the person's facial contours. It is mostly used for security purposes, with the potential for a wide range of government and enterprise applications. Face recognition technology has received huge attention, and for good reason!


Let’s move on to the actual implementation of Face-api.js.

We’ll cover the following topics one by one to get a crystal-clear idea of face recognition with Face-api.js:

  1. JavaScript code to start the webcam.
  2. What is a model?
  3. Which models are we going to use?
  4. How to draw the face detection box or boxes and detect the expressions?

Let's start with our prerequisites. Since we are using Face-api.js, we need the face-api.js library itself, along with the models that will help us detect faces and expressions.

We’ll come back soon to the topic of what models are.

Meanwhile, download the library from here: https://github.com/justadudewhohacks/face-api.js/tree/master/dist

and the models we are going to use from here:

https://github.com/WebDevSimplified/Face-Detection-JavaScript/tree/master/models
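
With both downloads in place, your project can be laid out like this (the model file names come from the linked repositories, so the exact list in your copy may vary slightly):

face-recognition/
├── index.html
├── style.css
├── script.js
├── face-api.min.js
└── models/
    ├── tiny_face_detector_model-weights_manifest.json
    ├── tiny_face_detector_model-shard1
    ├── face_landmark_68_model-weights_manifest.json
    ├── face_recognition_model-weights_manifest.json
    ├── face_expression_model-weights_manifest.json
    └── ... (plus the matching shard files for each model)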


JavaScript code to start the webcam.

Starting with index.html:

index.html

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Face Recognition</title>
    <link rel="stylesheet" href="style.css">
    <script defer src="face-api.min.js"></script>
    <script defer src="script.js"></script>
</head>

<body>
    <video id="video" width="720" height="560" autoplay muted></video>
</body>

</html>
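
Both scripts are loaded with the defer attribute, so they execute only after the document has been parsed, and in order: face-api.min.js runs first, which makes the faceapi global available to script.js.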

and style.css

style.css

body {
    margin: 0;
    padding: 0;
    width: 100vw;
    height: 100vh;
    display: flex;
    justify-content: center;
    align-items: center;
}

canvas {
    position: absolute;
}


/* 
video {
    width: 720px;
    height: 560px;
} */
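
A quick note on this stylesheet: position: absolute takes the canvas out of the normal document flow, so it can sit on top of the video that the flexbox body centers on the screen, instead of being pushed next to it.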

Here is a simple code snippet to start the webcam through JavaScript:

script.js

const video = document.getElementById('video')

function startVideo() {
  // getUserMedia returns a Promise that resolves with the camera stream
  navigator.mediaDevices.getUserMedia({ video: {} })
    .then(stream => {
      video.srcObject = stream
    })
    .catch(err => console.error(err))
}

startVideo();
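
One thing to keep in mind: browsers expose navigator.mediaDevices.getUserMedia only in a secure context, so serve the page over https or from localhost (for example with a local development server) instead of opening the HTML file directly from disk.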

Moving on to the next step, which will answer the question: what is a model?


What is a model?

Here is the description from the official site of TensorFlow.js.

In machine learning, a model is a function with learnable parameters that maps an input to an output. The optimal parameters are obtained by training the model on data. A well-trained model will provide an accurate mapping from the input to the desired output.

In TensorFlow.js there are two ways to create a machine learning model:

  1. Using the Layers API where you build a model using layers.
  2. Using the Core API with lower-level ops such as tf.matMul(), tf.add(), etc.

To learn more, follow this link: Models and Layers, which clears up the concept of models.
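
To make the distinction concrete, here is a minimal sketch of both approaches (this assumes TensorFlow.js is loaded and available as the tf global; the shapes are made up purely for illustration):

// 1. Layers API: assemble a model from layer objects
const layersModel = tf.sequential()
layersModel.add(tf.layers.dense({ units: 1, inputShape: [3] }))

// 2. Core API: express a similar linear mapping with low-level ops
const weights = tf.randomNormal([3, 1])
const bias = tf.zeros([1])
function coreModel(input) {
  // y = input · weights + bias
  return tf.matMul(input, weights).add(bias)
}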


Which models are we going to use?

For our example, we are using the following models:

  1. tinyFaceDetector
  2. faceLandmark68Net
  3. faceRecognitionNet
  4. faceExpressionNet

We have stored all the required models in the models directory. Here is the code snippet to load the models and then call the startVideo() function:

script.js


Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
  faceapi.nets.faceExpressionNet.loadFromUri('/models')
]).then(startVideo)

function startVideo() {
  navigator.mediaDevices.getUserMedia({ video: {} })
    .then(stream => {
      video.srcObject = stream
    })
    .catch(err => console.error(err))
}
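
Note that loadFromUri('/models') fetches the model files over HTTP relative to the server root, so the models directory has to be reachable through the same web server that hosts the page.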

How to draw the face detection box or boxes?

To draw the face detection rectangles on faces while playing the video stream from the device camera, we need a canvas on top of the video element shown in the browser.

Here is the sample code snippet below:

script.js

startVideo()

video.addEventListener('play', () => {
  // create a canvas from the video element we created above
  const canvas = faceapi.createCanvasFromMedia(video);
  // append the canvas to the body (or to whichever DOM element you prefer)
  document.body.append(canvas)
  // displaySize lets us match the canvas dimensions to the video screen so
  // the detections are drawn in the right place on the streaming video
  const displaySize = { width: video.width, height: video.height }
  faceapi.matchDimensions(canvas, displaySize)
  setInterval(async () => {
    const detections = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions()).withFaceLandmarks().withFaceExpressions()
    const resizedDetections = faceapi.resizeResults(detections, displaySize)
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
    faceapi.draw.drawDetections(canvas, resizedDetections)
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)
    faceapi.draw.drawFaceExpressions(canvas, resizedDetections)
  }, 100)
})

The detectAllFaces method detects all faces in the video; it takes the video element and an instance of TinyFaceDetectorOptions as parameters. The withFaceLandmarks method detects face parts like the nose, lips, eyes, and jawline. The withFaceExpressions method detects the expression on the face, such as happy, neutral, or angry.
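
To get a feel for what these chained calls return, you can log the first result from inside the setInterval callback (a small sketch; detection.box and expressions are the relevant fields on each result object):

// inside the setInterval callback, after `detections` is computed
if (detections.length > 0) {
  // bounding box of the first detected face
  console.log(detections[0].detection.box)  // { x, y, width, height, ... }
  // a probability per expression, e.g. happy, sad, angry, neutral
  console.log(detections[0].expressions)
}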

The resizeResults method recalculates the size of the face detections to match the video screen dimensions, so that they fit the detected faces accurately. It takes the detections and displaySize as parameters, where displaySize holds the height and width of the video element.

The getContext method gets the canvas's 2D drawing context. Before the new detections are drawn, the clearRect method clears the previous ones from the canvas.

The drawDetections method draws a rectangle on each detected face; it takes the canvas and resizedDetections as parameters, where resizedDetections holds the recalculated results produced by resizeResults.

The drawFaceLandmarks method outlines face parts like the nose, lips, and eyes; it takes the canvas and resizedDetections as parameters.

The drawFaceExpressions method labels the detected expression on the face; it takes the canvas and resizedDetections as parameters.

Who am I?

Santosh N. S. More

Senior Software Engineer,

Innovecture, Pune, India.

Thanks to,

https://github.com/justadudewhohacks/face-api.js/

https://www.tensorflow.org/

Here is a funny face of mine. :-)


