
A news report that representatives from Elon Musk’s Department of Government Efficiency (DOGE) fed sensitive data from across the Education Department into artificial intelligence (AI) software to probe the agency’s programs and spending has prompted cybersecurity experts to red-flag the risk of data exfiltration.

“The risk of data exfiltration across GenAI services is very real, especially given the value of such sensitive government agencies’ financial data to our adversaries and bad actors,” Acuvity CEO Satyam Sinha said in an email. “While many providers adhere to requirements such as GovCloud and FedRAMP, not all providers do. We have to exercise an abundance of caution and an additional layer of security.”

Data gleaned by DOGE members using AI at the Education Department included personally identifiable information for people who manage grants, as well as sensitive internal financial data, according to a Washington Post investigation, which cited two people familiar with the matter.

The AI software was accessed through Microsoft Corp.’s Azure to examine every dollar the department disburses, from contracts to grants to work trip expenses, the Post reported.

Privacy advocates on Monday said Musk and his DOGE team’s actions over the past three weeks around data handling represent the largest data breach in U.S. history. Their access reportedly covered classified information, millions of Americans’ sensitive personal and financial data, and Treasury Department systems that control everything from Social Security payments to tax refunds.

“Immense volumes of personal data” from a high percentage of Americans were compromised, exposing them to identity theft, stalking and blackmail, John Davisson, senior counsel and director of litigation at the Electronic Privacy Information Center, said in a panel discussion. The tranche of data covered Social Security numbers, taxpayer information, dates of birth, addresses, zip codes, patient IDs, marital status, and more, he added.

“This violates the constitutional right to personal information and [causes] long-range damage to individuals,” Davisson said.

Davisson and Lisa Gilbert, co-president of Public Citizen, a progressive advocacy organization, argue that Musk’s team in essence took over the government’s most basic functions, using the Technology Transformation Services section of the General Services Administration as a “Swiss army knife” to overhaul the federal bureaucracy.

[On Saturday, 19 states sued President Donald Trump and the Treasury Department, accusing them of violating federal law by granting Musk’s aides access to a sensitive federal payments database. “Musk and DOGE have no authority to access Americans’ private information and some of our country’s most sensitive data,” said New York Attorney General Letitia James, who is leading the complaint. “I am taking action to keep our information secure.”]

Processing sensitive data from the government or any organization through AI tools raises important cybersecurity considerations, said J Stephen Kowski, field chief technology officer at SlashNext.

“Modern AI-powered security controls and real-time threat detection should be standard practices when handling such sensitive information, especially given the potential for data exposure to foreign adversaries or cybercriminals,” Kowski warned. “Organizations working with government systems should implement comprehensive security measures that combine AI safeguards with human oversight to protect sensitive information while maintaining operational efficiency.”

As GenAI spreads throughout organizations and state and federal agencies, using the technology to interrogate vast, raw datasets to accelerate “time to opinion” makes sense on a purely technical level.

But, as with any hurried mass adoption of GenAI, there come security considerations — a calculus that often goes ignored or unnoticed, security experts note.

“It raises some serious questions around privacy and the transit of sensitive data, and the governance being applied to how data privacy is being managed, especially for personnel files, project/program plans, and anything impacting intelligence or defense,” Casey Ellis, founder of Bugcrowd, a crowd-sourced cybersecurity firm, said in an email.