Video Background Removal using Python with Interactive Deployment.
Image segmentation is the process of partitioning an image into multiple segments or regions. Its most familiar use is in video-conferencing apps such as Zoom, Google Meet, and Microsoft Teams, where a person can apply various background effects to their video.
Today we are going to develop a model that segments a person from the background and replaces the background with an image. The model will be deployed on TrueFoundry for you to try.
Key takeaways from this blog
- Using MediaPipe's selfie (person) segmentation for background removal.
- Deploying the model in a Gradio app that accepts an uploaded video or webcam input.
- Deploying the model using TrueFoundry.
We will start by installing the required libraries.
pip install mediapipe mlfoundry servicefoundry gradio
import cv2
import mediapipe as mp
import numpy as np
import matplotlib.pyplot as plt
from base64 import b64encode
from IPython.display import HTML
import mlfoundry as mlf
import servicefoundry.core as sfy
from servicefoundry import Build, DockerFileBuild, Service, Resources
Previewing Test Video
Kaggle does not preview videos by default, so as a workaround we embed the video into the page source as a base64-encoded data URI.
def play(filename):
    html = ''
    video = open(filename, 'rb').read()
    src = 'data:video/mp4;base64,' + b64encode(video).decode()
    html += '<video width=1000 controls autoplay loop><source src="%s" type="video/mp4"></video>' % src
    return HTML(html)

play('../input/videos-for-segmentation/Dance - 32938.mp4')
Creating Mediapipe Objects
mp_drawing = mp.solutions.drawing_utils                    # drawing helpers
mp_drawing_styles = mp.solutions.drawing_styles            # preset drawing styles
mp_face_mesh = mp.solutions.face_mesh                      # face landmark mesh
mp_selfie_segmentation = mp.solutions.selfie_segmentation  # person/background segmentation
mp_objectron = mp.solutions.objectron                      # 3D object detection
Importing Video using CV2
cap = cv2.VideoCapture('../input/videos-for-segmentation/Dance - 32938.mp4')
Finalizing Frame widths
frame_width = int(cap.get(3))    # CAP_PROP_FRAME_WIDTH
frame_height = int(cap.get(4))   # CAP_PROP_FRAME_HEIGHT
size = (frame_width, frame_height)
fps = cap.get(cv2.CAP_PROP_FPS)
result = cv2.VideoWriter('result.mp4',
                         cv2.VideoWriter_fourcc(*'mp4v'),
                         fps, size)
tik = 0  # frame counter
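The heart of the pipeline is the compositing step: MediaPipe's selfie segmentation returns a soft person-probability mask per frame, and we keep frame pixels where the mask is confident while filling the rest from the background image. A minimal sketch of that step (the 0.5 threshold and the tiny synthetic arrays below are illustrative assumptions; in the real loop the mask comes from `mp_selfie_segmentation.SelfieSegmentation().process(rgb_frame).segmentation_mask`):

```python
import numpy as np

def replace_background(frame, mask, background, threshold=0.5):
    """Keep pixels where the person-probability mask exceeds the
    threshold; fill the rest from the background image."""
    # Broadcast the single-channel mask across the 3 color channels
    condition = np.stack((mask,) * 3, axis=-1) > threshold
    return np.where(condition, frame, background)

# Tiny synthetic demo: an all-white 2x2 "frame", a black background,
# and a mask that marks only the left column as "person"
frame = np.full((2, 2, 3), 255, dtype=np.uint8)
background = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[0.9, 0.1], [0.9, 0.1]], dtype=np.float32)
out = replace_background(frame, mask, background)
# Left column keeps the frame (white); right column shows the background (black)
```

In the video loop, `frame` is each `cap.read()` result (converted to RGB before calling `process`), and the composited output is written with `result.write(...)`.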
Face mesh Object
Now that our model is ready, we will deploy it.
Login to TrueFoundry
Using our API key, we are going to log in to the platform.
Writing Deployment Script
In this script, we will download the test video and background image from Drive and then run our model on them. We write the script to deploy.py using the %%writefile cell magic.
requirements = sfy.gather_requirements("deploy.py")
requirements['opencv-python-headless'] = '4.0.0.21'  # match the base image's OpenCV version
requirements['chardet'] = '3.0.4'
requirements['jinja2'] = '3.1.2'

reqs = []
for package, version in requirements.items():
    reqs.append(f'{package}=={version}')

with open('requirements.txt', 'w') as f:
    for line in reqs:
        f.write(line + '\n')
We are going to use the OpenCV Docker image by jjanzic and then deploy our code.
%%writefile Dockerfile
FROM jjanzic/docker-python3-opencv:opencv-4.0.0
COPY ./requirements.txt /tmp/
RUN pip install -U pip && pip install -r /tmp/requirements.txt
COPY . /app
WORKDIR /app
ENTRYPOINT python deploy.py
The directory structure should look like this.
Creating Service for deploying Model
Here, we will use DockerFileBuild, which uses the Dockerfile above for image creation and deployment.
We will limit the model to 2.5 GB of memory and 3.5 CPU cores using the Resources class.
service = Service(
    name="face-service",
    image=Build(build_spec=DockerFileBuild()),
    ports=[{"port": 8080}],                                # port is an assumption
    resources=Resources(cpu_limit=3.5, memory_limit=2500), # 3.5 cores, 2.5 GB
)
service.deploy(workspace_fqn="<your-workspace-fqn>")
The model is Deployed here: https://face-service-final-arsh-dev.tfy-ctl-euwe1-develop.develop.truefoundry.tech/
The above code is also available in the Kaggle notebook linked below.
- TrueFoundry: https://truefoundry.com/
- Kaggle Notebook: https://www.kaggle.com/d4rklucif3r/image-segmentation-deployment