To make a convincing deepfake (an AI-generated fake video or audio clip), you usually need to train a neural model on a large amount of reference material. Generally, the larger your dataset of photos, video, or audio, the more eerily accurate the result will be. But now, researchers at Samsung's AI Center have devised a method for training a model to animate a face from an extremely limited dataset: just a single photo. And the results are surprisingly good.
The researchers achieve this effect (as spotted by Motherboard) by training their algorithm on "landmark" facial features (the general shape of the face, the eyes, the mouth, and so on) extracted from a public repository of 7,000 images of celebrities gathered from YouTube.
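To make the "landmark" idea concrete, here is a minimal sketch of how such facial landmark points can be pulled from a single photo using the off-the-shelf dlib library. This is purely illustrative; the article does not say which tools or landmark format Samsung's researchers actually used, and the model path below refers to dlib's standard pretrained 68-point predictor.

```python
# Minimal sketch: extracting 68 facial landmark points from one photo with dlib.
# Assumption: this stands in for whatever landmark extractor the researchers used.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# dlib's standard pretrained 68-point landmark model (downloaded separately)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image: np.ndarray) -> np.ndarray:
    """Return a (68, 2) array of (x, y) landmark coordinates
    for the first face detected in the image."""
    faces = detector(image, 1)  # upsample once to help find smaller faces
    if not faces:
        raise ValueError("no face detected")
    shape = predictor(image, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()])
```

Landmarks like these capture the geometry of a face (jawline, eyes, mouth contour) rather than its texture, which is what lets a model learn general facial motion from many people and then apply it to a single new photo.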
From…
from The Verge – All Posts http://bit.ly/2QhQmZs