DEFEND REALITY

Reality Defender™ is excited to continue growing our community of users who help identify fake images and videos. Please leave your email address and we will contact you when we are ready for you to join!

RESPONSIBILITY

Reality Defender is intelligent software built to run alongside digital experiences (such as browsing the web) to detect potentially fake media. Similar to virus protection, it scans every image, video, and other piece of media a user encounters against known fakes, lets users report suspected fakes, and runs new media through various AI-driven analysis techniques to detect signs of alteration or artificial generation.
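
As a rough illustration of the flow described above, the Python sketch below checks each media item against a catalog of known fakes, runs it through a set of analysis models, and records user reports. All names, data structures, and thresholds here are hypothetical assumptions for illustration, not Reality Defender's actual API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MediaItem:
    url: str
    content_hash: str
    media_type: str                      # "image", "video", "audio", ...

@dataclass
class ScanResult:
    known_fake: bool
    alteration_score: float              # 0.0 (likely authentic) .. 1.0 (likely fake)
    flags: list[str] = field(default_factory=list)

class RealityScanner:
    def __init__(self, known_fake_hashes: set[str],
                 detectors: list[Callable[[MediaItem], float]]):
        self.known_fake_hashes = known_fake_hashes   # catalog of known fakes
        self.detectors = detectors                   # AI-driven analysis models
        self.user_reports: list[MediaItem] = []      # community-reported media

    def scan(self, item: MediaItem) -> ScanResult:
        # Step 1: check the item against known fakes.
        if item.content_hash in self.known_fake_hashes:
            return ScanResult(known_fake=True, alteration_score=1.0,
                              flags=["known-fake"])
        # Step 2: run AI-driven analyses and keep the highest suspicion score.
        scores = [detect(item) for detect in self.detectors]
        score = max(scores, default=0.0)
        flags = ["possible-alteration"] if score > 0.7 else []
        return ScanResult(known_fake=False, alteration_score=score, flags=flags)

    def report_suspected_fake(self, item: MediaItem) -> None:
        # Step 3: community reports feed back into training the detectors.
        self.user_reports.append(item)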
Reality Defender is built on the AI Foundation's Human-AI collaboration platform, which brings together the best innovations from leading research communities. We also partner with content creators to establish and use an "Honest AI watermark" to clearly identify and call out AI-generated text, images, audio, and videos.
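
To illustrate the idea of an "Honest AI watermark" check, here is a minimal sketch assuming the watermark is a keyed digest stored in a metadata field; the field name, the digest scheme, and the key handling are all assumptions for illustration (a production scheme would more likely use public-key signatures).

import hashlib
import hmac

WATERMARK_FIELD = "honest-ai-watermark"   # assumed metadata key, for illustration only

def verify_honest_ai_watermark(metadata: dict[str, str],
                               content_hash: str,
                               creator_key: bytes) -> bool:
    # Returns True only if the media declares itself AI-generated and the
    # declaration matches the content (here a keyed digest over the content
    # hash; a real scheme would likely use public-key signatures instead).
    declared = metadata.get(WATERMARK_FIELD)
    if declared is None:
        return False                      # no watermark: not declared as AI-generated
    expected = hmac.new(creator_key, content_hash.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(declared, expected)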

Our community of users, who report suspected fakes, flag false positives, and train the AI agents, is critical to the ongoing success of Reality Defender, because there are many dimensions of "fake" media. For example, almost every fashion image online has been altered, but such alterations matter far less than an altered news image. Important factors include how much the media has been altered, why it was altered, and how significant the alteration is to the overall context.
At the core of our social responsibility efforts is a nonprofit established to anticipate and counteract the dangers of AI by giving all of us the tools to protect our lives, prosperity, dignity, and human agency. We spend considerable resources and energy thinking through, and educating the public about, the risks of AI, especially those that could destabilize human societies or prevent us from enjoying the vast benefits of AI's advancement.

Reality Defender is the first of the Guardian AI technologies that we are building on our responsibility platform. Guardian AI is built around the idea that everyone should have their own personal AI agents working with them through human-AI collaboration, initially to protect against the current risks of AI, and ultimately to build value for individuals as society changes as a result of AI.
We will continue to be at the forefront of building products that offset the current risks of AI, moving from simply detecting fakes to actively combating fake reality and the bias and manipulation that underlie it.

The solutions we are building focus on personal AI to combat the societal risks inherent in AI.

We are taking a long-term leadership role in building and promoting beneficial AI.