
The Dawn of the AI Age – Detecting Facial Reenactment against Fake Videos

Photo 1
 

Facial recognition has become part of everyday life: looking into the camera of our mobile phones to unlock them is now a routine security measure. Going one step further, facial reenactment, which reproduces a person’s facial movements and facial features, has also become common in producing animations, virtual reality (VR) products and entertainment programmes. But what if someone reenacted you from a video of yours and created fake videos?


Seeing the chain of security issues that could result from facial reenactment, and the emergence of ChatGPT-like techniques that could bring even more fake videos, Professor Cheung Yiu Ming, Chair Professor (Artificial Intelligence) of the Department of Computer Science, was inspired to start his project “Facial Reenactment in a Monocular Video Stream for Speaker Verification and Its Applications”, which was funded under the Research Grants Council (RGC) Senior Research Fellow Scheme (SRFS) 2023/24.

 

Professor Cheung’s SRFS-awarded AI Project

 

Having been at HKBU for over 20 years, Professor Cheung has enjoyed a long research journey in a close-knit and friendly teaching and research environment. Comprehensive support from the Department and the Faculty of Science, such as additional studentship funding for recruiting research postgraduate students, has empowered him to realise his ambitions through his research and to contribute to the artificial intelligence (AI) community and society.


This SRFS-awarded project is one of Professor Cheung’s ambitious yet practical projects, integrating his expertise in machine learning and visual computing with a focus on fundamental and applied research in the AI field. The Project focuses on detecting facial reenactment in fake videos, which are generated by transferring a source face shape onto a target face while preserving the appearance and identity of the target face. To fill this challenging research gap, Professor Cheung and his team strive to develop an effective method for detecting facial reenactment for speaker verification, and to apply the resulting techniques to two applications: fake video detection and visually speech-independent speaker verification.
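At a high level, this kind of detection can be framed as deciding whether a clip of a speaker’s face shows reenactment artefacts. The sketch below is only an illustrative assumption and is not the Project’s method: it supposes the frames have already been cropped to the face region (a hypothetical preprocessing step) and uses a crude frame-to-frame statistic with a made-up threshold, simply to show where such a detector would sit in a fake-video pipeline; a real system would rely on learned features.

# Illustrative sketch only -- NOT the Project's actual method.
# Assumption: `frames` is a list of face-cropped greyscale frames (2-D numpy arrays).
import numpy as np

def temporal_inconsistency_score(frames):
    """Mean absolute change between consecutive face frames.

    Reenacted videos can show subtle frame-to-frame artefacts; a real
    detector would use learned features, but a simple statistic shows the idea.
    """
    diffs = [np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
             for i in range(len(frames) - 1)]
    return float(np.mean(diffs)) if diffs else 0.0

def is_probably_reenacted(frames, threshold=12.0):
    """Flag a clip as suspicious when its score exceeds a (hypothetical) threshold."""
    return temporal_inconsistency_score(frames) > threshold

# Toy usage with random arrays standing in for a face-cropped video clip.
rng = np.random.default_rng(0)
clip = [rng.integers(0, 256, size=(64, 64)) for _ in range(30)]
print(is_probably_reenacted(clip))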
 

The Pathway to Progress: Impact and Implications


As the methodologies are novel, unique and feasible, the project outputs promise considerable scientific value and social impact. First, the resulting techniques, theoretical findings and empirical discoveries will serve as promising countermeasures against the malicious use of facial reenactment techniques. “In particular, the underlying techniques can provide new promising tools for identifying fake videos,” said Professor Cheung. Second, the prototype systems, to be applied to fake video detection and visually speech-independent speaker verification, will benefit not only researchers and AI developers, but also users in cyberspace administration, financial and insurance companies, smart-lock manufacturing and mobile phone manufacturing. The Project is therefore expected to have significant market value in the long run. Third, Professor Cheung expects to train and nurture a number of research postgraduate students and young information technology (IT) talents through this project.
 

Photo 2

The procedure of facial reenactment, given a source image and a target video.

 

Curiosity Lights the Way


Professor Cheung envisions working on this Project over the coming five years. The project deliverables include research papers and prototype systems. In the long run, he hopes to develop and launch the systems on the market to benefit society.


“Stay curious and keep doing what you think is right,” said Professor Cheung, who encourages aspiring researchers to make an impact in their own fields. He pointed out that cultivating critical thinking and creativity is of utmost importance: “To achieve this, we need to foster a curious mindset, and embrace creativity to generate innovative ideas and solutions.”


Another key piece of advice from Professor Cheung is to learn from experienced researchers, for instance by seeking their guidance and insightful suggestions and by visiting their research groups. He also finds it vital to attend relevant workshops and conferences to network with and learn from peers and experts, and to develop the in-depth skills required to conduct research.

 

About the Researcher

 

Photo 3
 

Professor Cheung Yiu Ming is Chair Professor (Artificial Intelligence) in the Department of Computer Science, Dean of the Institute for Research and Continuing Education (IRACE), and Associate Director of the Institute of Computational and Theoretical Studies at HKBU. He is a Fellow of IEEE, AAAS, IET, BCS and AAIA. As an RGC Senior Research Fellow, he receives a fellowship grant of HK$7.8 million over a period of 60 months. Since 2019, he has been ranked among the world’s top 1% most-cited scientists in the field of Artificial Intelligence and Image Processing by Stanford University for five consecutive years. He was elected a Distinguished Lecturer of the IEEE Computational Intelligence Society, and was named a Chair Professor of the Changjiang Scholars Program by the Ministry of Education of the People’s Republic of China for his dedication and exceptional achievements in his academic career. He currently serves as Editor-in-Chief of IEEE Transactions on Emerging Topics in Computational Intelligence.