Face Recognition using Transfer Learning

Anshraj Shrivastava
4 min read · Jun 6, 2020

Transfer learning takes the knowledge gained while solving one problem and applies it to a different but related problem.

For example, the knowledge gained while learning to recognize cars can be used to some extent to recognize trucks.

Traditional Learning vs Transfer Learning

Pre-Training

When we train a network on a large dataset (for example, ImageNet), we learn all the parameters of the neural network from scratch, and only then is the model fully trained. This pre-training may take hours or days, even on a GPU.

Deep learning systems and models are layered architectures that learn different features at different layers (hierarchical representations of layered features). These layers are then finally connected to the last layer (usually a fully connected layer, in the case of supervised learning) to get the final output. This layered architecture allows us to utilize a pre-trained network (such as Inception V3 or VGG) without its final layer as a fixed feature extractor for other tasks.

The key idea here is to just leverage the pre-trained model’s weighted layers to extract features but not to update the weights of the model’s layers during training with new data for the new task.
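The fixed-feature-extractor idea can be sketched as follows (a minimal example assuming TensorFlow's Keras API; the ImageNet weights download automatically on first use):

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# VGG16 without its final classifier is a fixed feature extractor:
# a 224x224 RGB image maps to a 7x7x512 block of features.
extractor = VGG16(weights="imagenet", include_top=False,
                  input_shape=(224, 224, 3))
extractor.trainable = False  # weights are never updated for the new task

# A downstream classifier for the new task is then trained on these features.
batch = preprocess_input(np.random.rand(1, 224, 224, 3) * 255.0)
features = extractor.predict(batch)
```
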

Fine Tuning

We can also fine-tune the pre-trained CNN on the new dataset. If the new dataset is similar to the original dataset used for pre-training, the same weights are a good starting point for extracting features from it.

  1. If the new dataset is very small, it’s better to train only the final layers of the network to avoid overfitting, keeping all other layers fixed: remove the final layers of the pre-trained network, add new layers, and retrain only the new layers.
  2. If the new dataset is very large, retrain the whole network, initializing the weights from the pre-trained model.
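The two cases above boil down to how many layers get their `trainable` flag set. A minimal sketch, assuming a Keras model (the helper function name is my own):

```python
from tensorflow.keras.applications.vgg16 import VGG16

def set_fine_tuning(model, small_dataset):
    """Freeze all pre-trained layers for a small dataset (case 1);
    unfreeze everything for a large one (case 2)."""
    for layer in model.layers:
        layer.trainable = not small_dataset

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

set_fine_tuning(base, small_dataset=True)   # case 1: train only a new head
set_fine_tuning(base, small_dataset=False)  # case 2: retrain the whole network
```
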

Project:

Github link: https://github.com/rajansh87/Face-Recognition-using-Transfer-Learning.git

Requirements:

  1. Anaconda Jupyter Notebook
  2. Python3
  3. Install modules such as TensorFlow, Keras, NumPy, OpenCV.
  4. A system with a webcam attached. (Note: if you use an external webcam, change the capture index in the data-creation code to cv2.VideoCapture(1).)

Initial Steps:

  1. Create a Workspace for the project.
  2. Create a directory called “Dataset” containing two sub-directories, “train” and “test”. Inside each of these, create a sub-directory named after the person whose face is to be recognized; it will store that person’s images. (Note: recognizing multiple people requires one such sub-directory per person.)
  3. Download and store the Haar Cascade model for frontal face detection in your workspace. (Note: it can be downloaded from my Github repository.)

Step 1: Creating Dataset:

a. Replace the path in the variable “file_name_path” with the path where the person’s pictures should be stored.

Step 2: Creating Model:

a. Loading the VGG16 pre-trained model with its pre-computed weights and biases.
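In Keras this is a one-liner (shown here as a sketch; the ImageNet weights for the convolutional base, roughly 58 MB, download automatically on first use):

```python
from tensorflow.keras.applications.vgg16 import VGG16

# include_top=False loads the convolutional base with its pre-trained
# ImageNet weights but without the original 1000-class classifier.
model = VGG16(weights="imagenet", include_top=False,
              input_shape=(224, 224, 3))
model.summary()  # the architecture of the loaded model
```
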

b. The architecture of the loaded model.

Here, False means the layer is frozen and can’t be trained, whereas True means it is not frozen and can be trained again.

c. Freeze all the layers except the last 4 (i.e. the top 4):
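In Keras, the last entries of `model.layers` are the topmost layers, so the freeze can be sketched as (assuming the VGG16 base loaded above):

```python
from tensorflow.keras.applications.vgg16 import VGG16

model = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze everything except the last four layers, so only the top of
# block5 adapts to the new face data.
for layer in model.layers[:-4]:
    layer.trainable = False
for layer in model.layers[-4:]:
    layer.trainable = True

# Verify: False = frozen, True = trainable
for i, layer in enumerate(model.layers):
    print(i, layer.__class__.__name__, layer.trainable)
```
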

d. Create a function that returns the fully connected head of the network.

e. Add the fully connected head back onto the VGG model.

f. Final Structure of our new model.
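Steps d–f can be sketched together as follows (the layer sizes, the helper name `fc_head`, and `num_classes = 2` are illustrative assumptions; use one class per person sub-directory):

```python
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

def fc_head(bottom_model, num_classes):
    """Build the new fully connected head on top of the frozen base."""
    top = Flatten()(bottom_model.output)
    top = Dense(512, activation="relu")(top)
    top = Dense(256, activation="relu")(top)
    return Dense(num_classes, activation="softmax")(top)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # keep the pre-trained features fixed

num_classes = 2  # one class per person sub-directory (illustrative)
model = Model(inputs=base.input, outputs=fc_head(base, num_classes))
model.summary()  # the final structure of our new model
```
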

Step 3: Loading Our Created Dataset:
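A sketch of the loading step, assuming the Dataset/train and Dataset/test layout described above (the augmentation settings and the helper name `load_dataset` are illustrative):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def load_dataset(train_dir, test_dir, img_size=(224, 224), batch_size=32):
    """Build an augmented training generator and a plain test generator
    from the one-sub-directory-per-person layout."""
    train_datagen = ImageDataGenerator(rescale=1.0 / 255,
                                       rotation_range=20,
                                       horizontal_flip=True)
    test_datagen = ImageDataGenerator(rescale=1.0 / 255)
    train_gen = train_datagen.flow_from_directory(
        train_dir, target_size=img_size,
        batch_size=batch_size, class_mode="categorical")
    val_gen = test_datagen.flow_from_directory(
        test_dir, target_size=img_size,
        batch_size=batch_size, class_mode="categorical")
    return train_gen, val_gen

# train_gen, val_gen = load_dataset("./Dataset/train", "./Dataset/test")
```
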

Step 4: Training Model:
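The training step, written as a small helper so it applies to the model and generators built above (the loss/optimizer choice and hyper-parameters are illustrative starting points, not tuned values):

```python
from tensorflow.keras.optimizers import RMSprop

def train(model, train_gen, val_gen, epochs=5):
    """Compile and fit the transfer-learning model, then save it."""
    model.compile(loss="categorical_crossentropy",
                  optimizer=RMSprop(learning_rate=1e-3),
                  metrics=["accuracy"])
    history = model.fit(train_gen, epochs=epochs,
                        validation_data=val_gen)
    model.save("face_recognition_vgg16.h5")
    return history
```
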

Step 5: Testing Model:

So, this VGG16 model for face recognition using transfer learning is complete, and it can now recognize a person from a given image.
