U.S. intelligence officials issued a rare alert revealing that Iran and Russia have deployed artificial intelligence to advance their efforts to manipulate American voters ahead of the November election.
An Office of the Director of National Intelligence official told reporters that U.S. investigators had previously seen AI-powered influence campaigns overseas but confirmed this week that “this is now happening here.”
“The risk to U.S. elections from foreign AI-generated content depends on the ability of foreign actors to overcome restrictions built into many AI tools and remain undetected, develop their own sophisticated models, or strategically target and disseminate such content,” the official said. “Foreign actors are behind in each of these three areas. Nonetheless, the [intelligence community] and its partners are closely watching this as election day nears.”
Earlier this year, U.S. intelligence agencies branded AI a “malign influence accelerant” that was enhancing foreign adversaries’ ability to quickly generate content and target audiences with manipulated audio and video.
Intelligence officials said Monday that the AI deployed by adversaries ahead of the November election has changed the speed of influence campaigns but has not, so far, overhauled how the operations are developed.
“Information operations are the threat and AI is an enabler,” an official said. “Generative AI is helping to improve and accelerate aspects of foreign influence operations but thus far the [intelligence community] has not seen it revolutionize such operations.”
Microsoft has similarly reported that it has not seen evidence of AI-fueled deepfakes changing the way online manipulators dupe audiences, as security professionals had once feared.
Microsoft Threat Analysis Center’s Clint Watts said earlier this month that people on social media were surprisingly good at recognizing fake content portraying foreign leaders and candidates in the upcoming election.
Mr. Watts told the Billington CyberSecurity Summit that crowds proved “remarkably brilliant about detecting deepfakes.”
“It’s when they’re alone that they tend to get duped,” Mr. Watts told the summit. “So the setting matters, public vs. private.”
Mr. Watts said subtle changes to video are more effective at tricking viewers than fully AI-generated content, and he noted that Russian influence actors had shifted back to those smaller-scale video manipulations.
The Senate Foreign Relations Committee is digging into the threat of new tech tools used by foreign adversaries to promote digital authoritarianism in place of democracy.
Sen. Mitt Romney, Utah Republican, said at a committee hearing on Tuesday that America needed tech tools and capabilities to push back against authoritarians because foreign adversaries do not care about norms.
Jamil Jaffer, founder of George Mason University’s National Security Institute, said he feared America’s adversaries would not heed strongly worded letters from the Senate and America’s diplomats.
“If you look around the world today, we are the leaders in AI but that position is not guaranteed,” Mr. Jaffer told lawmakers. “In fact, if we adopt the approach the Europeans have taken — which is regulate, regulate, and regulate — we’re likely to lose that edge.”