Google's New AI Detection Tool SynthID and What It Means for Creative Fields

Submitted by jon on Thu, 08/31/2023 - 07:07

Google's artificial intelligence research lab DeepMind recently announced a new AI detection tool called SynthID. SynthID is designed to identify images created by AI systems like DeepMind's own Imagen image generator. It works by imperceptibly watermarking AI-generated images so they can later be identified as synthetic.

In a conversation with The Verge, DeepMind CEO Demis Hassabis explained that SynthID marks images in a way that doesn't alter them visually but allows detection tools to easily identify them as AI-generated. The goal is to help combat disinformation and "deepfakes" - fabricated images and media designed to mislead.

This development has intriguing implications, especially as generative techniques like GANs and diffusion models enable the creation of increasingly realistic fake media. While DeepMind is starting with images, it's easy to see how similar detection methods could be applied to other areas like audio and music.

In this post, I'll summarize the SynthID announcement and then explore potential impacts for various creative industries if AI detection capabilities evolve quickly. How will we identify which content is human-made versus machine-made? And how might that change creative fields like music, film, journalism, and more?

The Growing Power of Generative AI

First, some background on why AI detection tools are becoming necessary...

In recent years, AI capabilities have advanced tremendously thanks to computational power and algorithms like GANs. GANs work by pitting two neural networks against each other - one generates content while the other evaluates it. This adversarial competition encourages them to improve until the generated output is indistinguishable from reality.
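The adversarial loop described above can be sketched in a deliberately tiny form. In this illustrative toy (my own sketch, not anything DeepMind has published), the "real data" is a single number, the generator is one learnable scalar, and the discriminator is a one-feature logistic classifier. Real GANs use deep networks and batches of samples, but the alternating update structure is the same:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D "GAN": the real data is the single value 5.0.
# The generator is just a learnable scalar g (its "fake sample").
# The discriminator is a logistic classifier D(x) = sigmoid(w*x + b).
real, g, w, b, lr = 5.0, 0.0, 0.0, 0.0, 0.1

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # by descending the binary cross-entropy loss.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * g + b)
    grad_w = -(1 - d_real) * real + d_fake * g
    grad_b = -(1 - d_real) + d_fake
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * g + b)
    grad_g = -(1 - d_fake) * w
    g -= lr * grad_g

print(g)  # the fake sample drifts toward the real value 5.0
```

As the generator's output approaches the real data, the discriminator loses its ability to separate the two, which is exactly the "indistinguishable from reality" endpoint of the adversarial competition.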

Generative models can now create strikingly realistic images, audio, video, and text. Apps like DALL-E 2, Stable Diffusion, and Jasper showcase this creative potential (strictly speaking, today's image apps rely on diffusion models rather than GANs). They allow nearly anyone to instantly generate images, text, and more simply by describing what they want.

However, these generative AIs also enable new forms of misinformation and fraud. Forged faces and vocal recordings could be used for political disinformation or financial scams. The ability to create synthetic media at scale poses risks ranging from reputational damage to large-scale societal harms.

Enter AI Detection Systems

In response, researchers are developing AI systems focused on detection. These aim to identify machine-generated content and add attribution so people know it wasn't created by a human.

Google's new SynthID is an early example focused on images. It works by adding a hidden "watermark" that's imperceptible to humans but identifiable by AI detectors. DeepMind is rolling it out first for Imagen users on Google Cloud's Vertex AI platform.
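To make the idea concrete, here is a deliberately naive sketch of an invisible watermark. DeepMind hasn't published how SynthID actually works, and a least-significant-bit mark like this one would not survive the compression or resizing that SynthID is designed to withstand; it only illustrates the core concept of a change no human can see that a detector can read back:

```python
# Naive invisible watermark: hide a short bit pattern in the least-significant
# bits of pixel values. NOT SynthID's method -- just the core concept.

SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit watermark

def embed(pixels, bits=SIGNATURE):
    """Overwrite the lowest bit of the first len(bits) pixels."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # shifts brightness by at most 1/255
    return marked

def detect(pixels, bits=SIGNATURE):
    """Check whether the signature is present in the lowest bits."""
    return [p & 1 for p in pixels[:len(bits)]] == bits

row = [200, 201, 199, 150, 151, 148, 90, 91, 30, 31]  # one row of 8-bit pixels
print(detect(embed(row)), detect(row))  # True False
```

Each pixel changes by at most one brightness level out of 255, which no human would notice, yet the detector recovers the mark exactly. Production watermarks embed the signal far more robustly, typically with a learned model rather than raw bit manipulation.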

Over time, Hassabis hopes SynthID could become an open standard adopted more widely across the internet. However, he acknowledges generative AIs and detection tools will be locked in an endless arms race as methods evolve on both sides.

What Could AI Detection Mean for Creative Fields?

So what are the implications if AI detection capabilities mature for areas like audio, video, and text? How might the ability to distinguish "real" from "fake" content impact creative professions? Here are some possibilities:


Music

  • Songs and instrumental music generated by AI tools like Jukebox could be detectable. An AI "watermark" could identify tracks composed by machines versus humans.
  • Music platforms might integrate detection to flag AI content. This could disrupt royalty collection and attribution.
  • Artists may intentionally incorporate synthetic mixes and mastering while keeping the core song human-made. Hybrid human/AI music could become a trend.


Film and Video

  • AI video and voice synthesis opens the door to deepfake videos. Detection methods would try to identify generated faces, voices, and footage.
  • Movies could use AI-generated backgrounds, vehicles, and crowds while keeping principal footage human-made. Detection would enable preferential treatment for "original" content.
  • Films may increasingly integrate both human and AI-generated content while using attribution to assign royalties.


Writing and Journalism

  • Text generated by systems like GPT-3 could be detected through analysis of statistical patterns, watermarks, etc.
  • News sites could automatically flag articles or sections written by AI. This would disrupt ad revenue and compensation for synthetic content.
  • Freelance writers and journalists may need to verify their work was human-generated to get assignments or payment. Those using AI tools could see income decline.
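The "statistical patterns, watermarks" idea for text has a concrete analogue in recent research: bias a language model's sampling toward a pseudo-random "green list" of tokens seeded by the preceding token, then detect by measuring how often that bias shows up. The sketch below is a heavily simplified, hypothetical version of that scheme (the vocabulary and selection rule are invented for illustration; real implementations bias logits inside an actual language model):

```python
import hashlib

def is_green(prev_token, token):
    """Pseudo-randomly assign roughly half of all (prev, next) token
    pairs to a 'green list' seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens):
    """Fraction of tokens that land on the previous token's green list.
    Ordinary text should score near 0.5; watermarked generation, which
    deliberately favors green tokens, scores far higher."""
    hits = [is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:])]
    return sum(hits) / len(hits)

# A stand-in for a language model whose sampling is biased toward green
# tokens (tiny invented vocabulary, not a real model).
vocab = ["the", "a", "cat", "dog", "sat", "ran", "on", "in", "mat", "sun"]

def watermarked_next(prev, candidates):
    for cand in candidates:
        if is_green(prev, cand):
            return cand
    return candidates[0]  # no green candidate available: fall back

tokens = ["the"]
for _ in range(30):
    tokens.append(watermarked_next(tokens[-1], vocab))

print(green_fraction(tokens))  # far above the ~0.5 expected of ordinary text
```

The detector needs no access to the model itself, only the seeding rule, which is what makes statistical watermarks attractive for the platform-level flagging scenarios above.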


Art and Design

  • AI art created by systems like DALL-E 2 and Midjourney could automatically embed attribution markers.
  • Online art marketplaces might require attribution flags to list pieces for sale or determine pricing. "AI-generated" works could trade at a discount.
  • Generative design tools may frequently get credited for designs instead of the humans who guided the process.

There are countless other potential examples across music, movies, journalism, photography, fashion, architecture, product design, and more. In a data-driven world, we may see a proliferation of benchmarks, quotas, and guidelines around human creativity versus machine creativity across industries.

Key Questions Around Implementation

As you can see, the implications span from attribution to compensation to value judgments around "real" vs. "fake" creative work. Some key questions include:

  • How will detection systems distinguish human vs. machine creativity in ambiguous cases? What if an AI is guided by a human throughout the process or vice versa?
  • Who will build and maintain the databases required for detection across various domains? Will there be centralized standards or fragmented approaches?
  • Should detection be mandatory for commercial creative work? Who should enforce proper attribution?
  • How will creative industries be disrupted if synthetic content garners less pay and lower perceived value?
  • What if generative AIs eventually produce novel, groundbreaking content surpassing humans? How long until detection shuts them out rather than promotes them?

A Recurring Tension Between Old and New

Stepping back, there are parallels between the questions raised by AI detection tools and previous technological shifts like photography and sampling:

  • Critics initially derided photography as a soulless mechanical process rather than art. It took time to appreciate the unique creative opportunities cameras provided.
  • When hip hop pioneers started sampling older records, lawsuits abounded. Eventually sampling became an art form in its own right, with licensing and attribution resolving the legal issues.

Each transition resulted in tensions between existing creators and disruptive new technologies. We see hints of this today as artists grapple with AIs invading fields like visual art, music, and storytelling that seemed intrinsically human.

But historically, we've adapted to new tech while continuing to value exceptional human creativity built upon it. The best human creators embrace new tools, integrating them alongside uniquely human ingenuity, emotion, and perspective.

Initial reactions may cast machines versus humans as rivals. But in time, the most creative minds discover how to harmoniously combine our complementary strengths.

Moving Forward

For now, detection systems are in their infancy, and the arms race with generative AIs has only just begun. The companies leading the AI revolution have strong incentives to build identification and attribution mechanisms proactively as their capabilities advance.

However, this emerging technology raises many questions without clear answers. As AI detection evolves, we need open debate involving creators, technologists, businesses, policymakers, and the broader public to determine how it should be applied.

This is uncharted but fascinating territory at the intersection of technology, creativity, ethics, and society. If history is any guide, striking the right balance will be challenging but critical. By learning from the past and looking ahead, I'm hopeful we can embrace these tools to augment human creativity in exciting new ways. But we have some thorny discussions ahead...

So what's your take? I'm eager to hear your thoughts on where these technological trends might lead us! Please share your feedback below.