AI Security Risk Management

AI Deep Fakes

Developing shrewdness to discern reality

in an AI Deep Fake world amid the pressures of risk and persecution.

In the new world of artificial intelligence (AI) invading everywhere, the horrifying reality is that AI can create “deep fakes” of a person and make that AI-generated individual deliver persuasive statements. AI can mimic a person’s voice, mannerisms, and likeness with terrifying accuracy. AI is also used to spread disinformation, causing reality to be interpreted differently from its actuality.

In the dangerous world of front line Gospel advancement, AI can be used by hostile actors to cause havoc among the naive. There are at least 4 significant ways AI can be used against the Gospel worker and organization, though the implementation of these 4 methods is limited only by one’s nefarious creativity:


Videos:

In terms of AI Deep Fake videos, there is something called “Generative adversarial networks” (GANs).

“The GAN system consists of a generator that generates images from random noises and a discriminator that judges whether an input image is authentic or produced by the generator. The two components are functionally adversarial, and they play two adversarial roles like a forger and a detective literally. After the training period, the generator can produce fake images with high fidelity.” (Rand, 3)

It is difficult to detect deep fake videos. “Essentially, as GANs improve the image resolution that they can create, deepfakes and real images will become indistinguishable, even to high-quality detectors.” (Rand, 11). The programs that try to detect AI-generated videos cannot keep up.
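The forger-and-detective dynamic described in the quote above can be sketched in a few lines of code. The toy model below is purely illustrative (all names, numbers, and hyperparameters are invented for this sketch, and it works on single numbers rather than images): a “generator” starts out producing obviously fake samples, a “discriminator” learns to tell real from fake, and each adversarial round pushes the generator’s output closer to the real data until the two are hard to distinguish.

```python
import math, random

# Minimal 1-D sketch of the GAN forger-vs-detective idea (illustrative only).
# Generator G(z) = z + b tries to imitate "real" data centered near REAL_MEAN;
# discriminator D(x) = sigmoid(w*x + c) tries to tell the two apart.
random.seed(0)

REAL_MEAN = 4.0          # the authentic data the forger imitates
b = 0.0                  # generator parameter (starts far from the truth)
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(3000):
    # Detective step: improve at labeling real vs. generated samples.
    gw = gc = 0.0
    for _ in range(batch):
        x_real = REAL_MEAN + random.gauss(0, 1)
        x_fake = random.gauss(0, 1) + b
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        gw += -(1 - d_real) * x_real + d_fake * x_fake
        gc += -(1 - d_real) + d_fake
    w -= lr * gw / batch
    c -= lr * gc / batch

    # Forger step: shift b so generated samples fool the detective.
    gb = 0.0
    for _ in range(batch):
        x_fake = random.gauss(0, 1) + b
        gb += -(1 - sigmoid(w * x_fake + c)) * w
    b -= lr * gb / batch

print(f"generator output is now centered near {b:.2f}")
```

After training, the generator's samples sit near the real data's center, which is exactly why, at scale, detectors struggle: the training process explicitly optimizes the fakes until the detective can no longer tell the difference.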

An AI video could be generated of a loved one being kidnapped. In this case, verification would require both technical analysis of the video to determine whether it is AI-generated and a security question or pre-agreed word, unknown to the internet, for proof of life.

A slightly different use of AI-altered video is the creation of a false narrative. This variation is termed “shallow fakes.” “Shallow fakes are videos that have been manually altered or selectively edited to mislead an audience. A classic contemporary example in this genre is a video that appears to show Speaker of the U.S. House of Representatives Nancy Pelosi slurring her words during an interview. The video was edited to slow down her speech, thus making her seem intoxicated.” (Rand, 8)


Voice Cloning:

Phone apps can capture someone’s voice, which can then be used over the phone to mimic someone close to you. For example, a father received a phone call from his “son” asking for $9,000 to get him out of jail. The money was sent, and only then did the father learn it was a deep fake (Rand, 4). Whether it is loved ones calling for help or calling to say they have been kidnapped, there is no end to the trouble. Again, a simple risk mitigation is to ask a security question, unknown to the internet, for proof of identity.
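The security-question mitigation amounts to a simple challenge-response check with a pre-shared secret. The sketch below illustrates the idea in code; the question, answer, and function name are hypothetical examples, not a real product. Hashing the answer means it is never stored in plain text, and a constant-time comparison avoids leaking clues through response timing.

```python
import hashlib, hmac

# Pre-agreed secret, shared in person and never posted online.
# (Hypothetical example values for illustration only.)
CHALLENGE = "What did we eat the night the power went out?"
STORED_ANSWER_HASH = hashlib.sha256("cold lentil soup".encode()).hexdigest()

def verify_caller(spoken_answer: str) -> bool:
    """Return True only if the caller knows the pre-shared answer."""
    # Normalize so small differences in phrasing don't cause a false reject.
    candidate = hashlib.sha256(spoken_answer.strip().lower().encode()).hexdigest()
    # compare_digest runs in constant time, resisting timing attacks.
    return hmac.compare_digest(candidate, STORED_ANSWER_HASH)

print(verify_caller("Cold lentil soup"))  # the real family member passes
print(verify_caller("pizza"))             # an impostor's guess fails
```

The key property is that the secret exists only in the memories of the people who agreed on it, so no amount of scraping the internet lets an AI impostor answer correctly.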


Images:

AI-generated images can be used to build fake social media profiles that look real and make a person appear trustworthy.


Generative text:

AI text generators can produce disinformation, propaganda, and even fake news reports, causing confusion and conflict.

Create an AI Risk Mitigation Plan

  1. Assess where in the risk cycle an AI generated deep fake would most likely be used.

  2. Create a security question, key word, or phrase agreed upon ahead of time, known only to the humans involved and not available on the internet, to verify that the entity you are speaking with is human.

  3. Confirm identities before sharing information that can put people in danger.

  4. Fact check.

  5. Do a Reverse Image Search: “One of the most frequently cited tools is reverse image search. Using reverse image search, a user can help validate the authenticity of a suspicious image or video by taking a screen capture of the image or video and running it through Google’s or a third party’s reverse image search platform.” (Rand, 15)

  6. Confirm the source, and that it is confirmed elsewhere by other reliable entities.

  7. Train your team and organization in media literacy. (Rand, 17). This means they are aware of disinformation and deep fakes, and learn how to question and confirm.

  8. Confirm the Source, again.
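The reverse image search in step 5 rests on compact image “fingerprints” that survive resizing and recompression. As a rough illustration of that underlying idea (not Google’s actual algorithm), a difference hash records whether each pixel is brighter than its neighbor and then matches images by counting differing bits. Everything below is a simplified sketch on tiny synthetic grayscale grids.

```python
def dhash(pixels):
    """Difference hash: one bit per adjacent-pixel brightness comparison."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes (lower = more similar)."""
    return sum(x != y for x, y in zip(a, b))

# Tiny 4x5 grayscale "images" (synthetic values for illustration):
# the second is the first brightened slightly, the third is unrelated.
original = [[10, 40, 90, 200, 250],
            [12, 44, 95, 210, 240],
            [ 9, 38, 88, 205, 245],
            [11, 41, 92, 198, 251]]
recompressed = [[value + 3 for value in row] for row in original]
unrelated = [[250, 200, 90, 40, 10],
             [240, 210, 95, 44, 12],
             [245, 205, 88, 38,  9],
             [251, 198, 92, 41, 11]]

d_near = hamming(dhash(original), dhash(recompressed))  # small distance
d_far  = hamming(dhash(original), dhash(unrelated))     # large distance
print(d_near, d_far)
```

Because the hash depends only on relative brightness, a slightly brightened or recompressed copy produces a nearly identical fingerprint, while an unrelated image does not; this is why a screen capture of a suspicious image can still match the original it was taken from.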

    Jesus made it clear in Scripture that we are to be “as shrewd as snakes and as innocent as doves” (Matthew 10:16). Shrewdness implies learned skills, which include media literacy to spot AI-generated deep fakes of all kinds. He is worthy.


    Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation: A Primer, RAND Corporation (rand.org), July 2022.


