A New AI System Could Make Lip-Sync Dubbing Accurate

Scientists have developed a new Artificial Intelligence (AI)-based system that can edit an actor's facial expressions to match dubbed voices. The new system could make dodgy lip-sync dubbing a thing of the past.

According to research presented at the SIGGRAPH 2018 conference in Vancouver, Canada, the system, called Deep Video Portraits, can also correct gaze and head pose in video conferencing and opens up new possibilities for video post-production and visual effects.

Co-author Christian Richardt from the University of Bath in Britain said, "This technique could also be used for post-production in the film industry where computer graphics editing of faces is already widely used in today's feature films."

The new system could help the film industry save time and reduce post-production costs. Deep Video Portraits can also animate the whole face, including the eyes, eyebrows, and head position, in videos, using controls familiar from computer graphics face animation.

One of the researchers, Hyeongwoo Kim from the Max Planck Institute for Informatics in Germany, said, "It works by using model-based 3D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and head position of the dubbing actor in a video."

"It then transposes these movements onto the 'target' actor in the film to accurately sync the lips and facial movements with the new audio," Hyeongwoo added.

"Deep Video Portraits modifies the appearance of a target actor by transferring head pose, facial expressions, and eye motion with a high level of realism," co-author Christian Theobalt added.
