Saturday, Oct 22, 2022

As part of the course from OpenCV, this is the explanation of the Virtual Makeup - LipsColor project.

This project shares some notions and code with the EyeGlasses project that won't be repeated here.

This time the goal is to apply lipstick to the lips of a face.

The steps performed by the program are:

Load the images and detect the landmarks

This is very similar to the same part in the EyeGlasses project; the only change is in the landmarks chosen: they are the ones from 48 to 67, corresponding to the mouth region.

imOut, pointsOut = fbc.normalizeImagesAndLandmarks((1000, 1000), imDlib,
                                                   np.array(landmarks))

lipsPoints = pointsOut[48:68]

For example:

Face with lips points overlaid

Smile :)

The following tasks differ slightly depending on whether the face is smiling or not. In the first case the teeth must be masked out, so a simple algorithm to detect whether the face is smiling is run.

It is based on the ratio between the distance of two jaw points and the distance of the outer lip corners.

def smileDetection(landmarks):
    rightLipPoint = landmarks[54]
    leftLipPoint = landmarks[48]

    rightJawPoint = landmarks[12]
    leftJawPoint = landmarks[4]

    jawWidth = np.linalg.norm(np.array(rightJawPoint) - np.array(leftJawPoint))
    lipWidth = np.linalg.norm(np.array(rightLipPoint) - np.array(leftLipPoint))

    ratio = lipWidth / jawWidth
    return ratio > 0.5
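As a quick sanity check of the heuristic, consider some made-up coordinates (hypothetical values, for illustration only) where the mouth spans 60 pixels of a 100-pixel-wide jaw:

```python
import numpy as np

# Synthetic landmark positions (hypothetical coordinates, not from a real face)
leftJaw, rightJaw = np.array([20, 60]), np.array([120, 60])
leftLip, rightLip = np.array([45, 80]), np.array([105, 80])

jawWidth = np.linalg.norm(rightJaw - leftJaw)  # 100.0
lipWidth = np.linalg.norm(rightLip - leftLip)  # 60.0
ratio = lipWidth / jawWidth                    # 0.6, above the 0.5 threshold
```

With these points the ratio is 0.6, so the face would be classified as smiling.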

Find a suitable mask of the lips

To process only the part regarding the lips, a mask must be extracted. First, a bounding box of the lips points is created:

rectLips = bbox(lipsPoints)

where bbox is

def bbox(points):
    minX = np.min(points[:,0])
    maxX = np.max(points[:,0])
    minY = np.min(points[:,1])
    maxY = np.max(points[:,1])

    width = maxX - minX
    height = maxY - minY
    return minX, minY, width, height

The next step is to cut the lips from the whole image to work only on that part:

newLipsPoints = np.zeros_like(lipsPoints)
for i,p in enumerate(lipsPoints):
    newLipsPoints[i] = np.array([p[0] - rectLips[0], p[1] - rectLips[1]])
lipsImage = cut(imOut, rectLips)
lipsImageMasked = cutOutsideLips(lipsImage, newLipsPoints)

newLipsPoints is the set of points translated so they correspond to the original lips points on the lips-only image. cut simply crops the image to the region of the lips.

Lips region crop
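cut is not shown in the post; a minimal version consistent with the (x, y, width, height) rectangle returned by bbox could be:

```python
import numpy as np

def cut(image, rect):
    # rect is (x, y, width, height), as returned by bbox
    x, y, w, h = rect
    return image[y:y + h, x:x + w]
```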

while cutOutsideLips creates a mask from the convex hull of the points

def cutOutsideLips(lipsImage, lipsPoints):
    hull = cv2.convexHull(lipsPoints)
    tmpMask = np.zeros((lipsImage.shape[0], lipsImage.shape[1],3), np.uint8)
    cv2.drawContours(tmpMask, [hull], -1, (255,255,255), -1)

    return tmpMask

This mask is used to process only the region around the points on the lips. The image we are working on is the one with the mask applied:

onlyLips = (cv2.multiply(lipsImage.astype(float)/255, lipsImageMasked.astype(float)/255) * 255).astype(np.uint8)

where the lips region and the mask are converted to floating point, rescaled to [0,1], multiplied together, and then converted back to bytes in the range [0,255].

Only lips crop
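The effect of this normalized multiplication can be seen on a toy example (NumPy elementwise multiplication behaves like cv2.multiply for same-shaped float arrays):

```python
import numpy as np

# Toy 2x2 image with a mask that keeps only the top-left pixel
image = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.zeros((2, 2, 3), dtype=np.uint8)
mask[0, 0] = (255, 255, 255)

# Rescale both to [0, 1], multiply, scale back to [0, 255]
onlyKept = (image.astype(float) / 255 * mask.astype(float) / 255 * 255).astype(np.uint8)
# The top-left pixel keeps (about) its original value, everything else goes black
```

Pixels under the white part of the mask survive; everything else is multiplied by zero.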

To avoid also including the teeth region, the function findLipsMask tries to create a mask that contains only the lips. The image is converted to the HLS color space. The heuristic to remove the teeth is that, on the saturation channel, only the pixels with a value over a threshold (here fixed at 90) are kept white in the mask; the low-saturation teeth pixels stay black. A dilation operation is then used to fill the gaps that may remain in the mask.

def findLipsMask(lipsImage, isSmiling : bool):
    hlsImage = cv2.cvtColor(lipsImage, cv2.COLOR_RGB2HLS)
    h,l,s = cv2.split(hlsImage)
    tmpMask = np.zeros_like(s)
    # keep only the high-saturation pixels, removing the teeth
    if isSmiling:
        tmpMask[s > 90] = 255
    else:
        tmpMask[s > 0] = 255
    # fill small gaps left in the mask
    element = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
    tmpMask = cv2.dilate(tmpMask, element, iterations=2)
    return tmpMask

The mask created from the saturation channel
The mask after the dilation operation

The mask could be further improved by applying a different dilation kernel, using more iterations, or by choosing another method to separate the teeth from the lips.

The mask must be put back in place on the original image to perform the operations that apply the lipstick.

1mask = cv2.merge((mask,mask,mask))
2result = np.zeros_like(imOut)
3result[rectLips[1]:rectLips[1] + rectLips[3], rectLips[0]:rectLips[0] + rectLips[2]] = mask
4result = cv2.GaussianBlur(result, (25,25), 0)
5alpha = result.copy()
6result = (cv2.multiply(imOut.astype(float)/255, result.astype(float)/255) * 255).astype(np.uint8)

Line 3 restores the mask at its position on a full-size black canvas, and line 4 blurs the result to better blend the changes into the final image. The blurred mask is saved as the alpha map (line 5) and then multiplied with the image to obtain only the lips portion (line 6).

The portion of the lips, blurred for better blending

Change the saturation of the lips and apply to the final image

Ok, time to finally make the change to the lips:

1hlsResult = cv2.cvtColor(result, cv2.COLOR_RGB2HLS)
2h,l,s = cv2.split(hlsResult)
3indices = s > 0
4s_new = s.copy()
5s_new[indices] = np.clip(s_new[indices].astype(int) + 30, 0, 255).astype(np.uint8)
6hlsNew = cv2.merge([h,l,s_new])
7recombined = cv2.cvtColor(hlsNew, cv2.COLOR_HLS2RGB)
8result = alphaBlend(imOut, recombined, alpha)

First (lines 1 and 2) the result is converted to the HLS color model; then the saturation channel is enhanced by increasing it (here by 30), but only on the non-black pixels (line 3). The whole image is reassembled and alpha blended with the original image, using the blurred mask as the alpha map.
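alphaBlend is not defined in the snippets above; a plausible implementation, taking the blurred three-channel mask as a per-pixel alpha map, might be:

```python
import numpy as np

def alphaBlend(background, foreground, alpha):
    # alpha is a 3-channel uint8 mask: 255 takes the foreground, 0 keeps the background
    a = alpha.astype(float) / 255
    blended = background.astype(float) * (1 - a) + foreground.astype(float) * a
    return blended.astype(np.uint8)
```

Because the mask was Gaussian blurred, the intermediate alpha values produce a smooth transition between the recolored lips and the surrounding skin.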

Result