As it becomes easier to create hyper-realistic digital characters using artificial intelligence, much of the conversation around these tools has centered on misleading and potentially dangerous deepfake content. But the technology can also be used for positive purposes—to revive Albert Einstein to teach a physics class, talk through a career change with your older self, or anonymize people while preserving facial communication.
To encourage the technology’s positive possibilities, Media Lab researchers and their collaborators at UC Santa Barbara and Osaka University have assembled an open-source, easy-to-use character-generation pipeline that combines AI models for facial gestures, voice, and motion, and can be used to create a variety of audio and video outputs.
The pipeline also marks its output with a traceable, human-readable watermark that distinguishes it from authentic video content and shows how it was generated, an addition intended to help prevent malicious use.
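The article does not describe the watermark's actual format. As a minimal illustrative sketch only, not the researchers' scheme, one way to attach a provenance mark that is both human-readable and traceable is to append a JSON record containing the generating models and a hash of the output; every field name below is a hypothetical assumption:

```python
# Hypothetical sketch of a provenance watermark: a human-readable JSON
# record, tied to the media bytes by a SHA-256 hash for traceability.
# This is NOT the pipeline's published format; names are illustrative.
import hashlib
import json

def make_provenance_tag(media_bytes: bytes, models: list) -> str:
    """Build a human-readable JSON tag whose hash ties it to the output."""
    record = {
        "generated": True,
        "models": models,  # e.g. the face, voice, and motion models used
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # traceability
    }
    return json.dumps(record, indent=2)

def tag_output(media_bytes: bytes, models: list) -> bytes:
    """Append the tag after a marker so viewers and tools can locate it."""
    tag = make_provenance_tag(media_bytes, models)
    return media_bytes + b"\n<!--PROVENANCE\n" + tag.encode() + b"\n-->"

video = b"...generated video bytes..."
tagged = tag_output(video, ["face-gesture", "tts-voice", "motion"])
```

A trailing record like this keeps the original bytes intact while letting any tool (or person) inspect how the clip was produced; a robust deployment would instead embed the mark in container metadata or in the pixels themselves.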