Gen AI Struggles With Privacy: Data Protection Tech Offers a Solution
According to Kurt Rohloff, CTO and co-founder of privacy-enhanced secure data collaboration software vendor Duality Technologies, the recent bipartisan move to ban the AI platform DeepSeek from U.S. government devices signals far more than just national security concerns—it’s a red flag for the broader trajectory of generative AI.
“DeepSeek’s potential vulnerabilities are a symptom of a larger, more pressing issue with how society is trying to deploy generative AI,” Rohloff says. “The privacy architecture of most GenAI systems simply isn’t designed for the regulatory realities many sectors face.”
In May 2025, Senators Bill Cassidy and Jacky Rosen introduced legislation to bar DeepSeek from federal contracts, citing the platform’s acknowledgment that it routes user data to China. For Rohloff, however, this kind of reactive regulation only scratches the surface. “The foundational problem persists: generative AI platforms bring serious risks of structural privacy flaws, and existing security measures aren’t cutting it.”
This unease is shared by consumers as well. A recent Prosper Insights & Analytics survey found that 58.6% of consumers are extremely or very concerned about their privacy being violated by AI using their data. Rohloff believes this sentiment is well-founded, especially in sectors like government, finance, and healthcare, where the stakes of data mishandling are existential.
[Chart: Prosper Insights & Analytics, “How Concerned Are You About Privacy Being Violated From AI Using Your Data?”]
“The promise of GenAI is real,” he adds, “but so is the risk.”
Generative AI models are fueled by data—immense volumes of it. Yet what makes them so powerful also makes them uniquely vulnerable. “These systems have the potential to consume everything you feed them: user prompts, documents, even behavioral cues,” Rohloff explains. “But unlike traditional software, they learn from and sometimes regurgitate that data. That creates a massive attack surface.”
Many organizations, he notes, don’t fully understand what data their AI systems are ingesting. Without proper oversight, confidential or regulated information can unintentionally enter model training cycles or be exposed during inference. The result can be catastrophic, particularly in regulated sectors where data exposure carries legal, ethical, and financial consequences.
“In critical sectors, even a minor lapse could mean leaked state secrets, manipulated financial trades, or breached patient records,” Rohloff warns. “And once trust is lost in these systems, it can take decades to rebuild.”
He points to technical threats like model inversion attacks, in which bad actors reconstruct training data by repeatedly querying a model, and prompt injection, in which cleverly crafted inputs override safety controls and extract restricted information. These aren’t theoretical issues; they’re live threats that are already being tested.
Recent government action is beginning to reflect the urgency. In January 2025, the White House issued Executive Order 14179, which aims to boost U.S. AI leadership while emphasizing the importance of secure development practices. In April, the Office of Management and Budget released memoranda directing agencies to establish standards around AI testing, monitoring, and the handling of personally identifiable information.
“These are encouraging steps,” Rohloff says, “but we can’t audit or regulate our way out of flawed design. Privacy needs to be built in at the architecture level, not patched on after deployment.”
Financial incentives also raise the stakes. The 2024 IBM Cost of a Data Breach report put the healthcare industry’s average breach cost at $9.8 million, the highest of any sector. Rohloff sees that as an urgent reminder that the costs of under-secured AI are already tangible.
For Rohloff, the solution lies in a category of techniques known as Privacy-Enhancing Technologies, or PETs. Of these, Fully Homomorphic Encryption (FHE) stands out.
“FHE lets us run computations on encrypted data without ever decrypting it,” he explains. “It flips the traditional model on its head. Data can remain protected at every step—at rest, in transit, and in use.”
This innovation addresses a core vulnerability in current AI pipelines. Traditionally, sensitive data must be decrypted before an AI model can process it, leaving it briefly exposed. FHE eliminates that exposure altogether, making it possible to perform even complex machine learning operations without ever viewing the plaintext data.
“The point is to make strong encryption usable in real-world AI deployments,” says Rohloff. “And we’re finally getting there.”
His company, Duality Technologies, has helped drive this shift by developing tools that apply FHE to real-world applications like finance, healthcare, and cross-enterprise data collaboration. Open-source platforms like OpenFHE—evolved from earlier libraries like PALISADE—have also accelerated adoption by offering practical, performance-optimized implementations of multiple FHE schemes.
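To make the idea concrete, the sketch below shows roughly what “computing on encrypted data” looks like with OpenFHE’s CKKS scheme. It is adapted from the library’s published getting-started examples rather than any Duality product code; the parameter values and sample data are illustrative, and exact API names can vary between OpenFHE versions.

```cpp
// Minimal sketch: arithmetic on encrypted vectors with OpenFHE (CKKS).
// Illustrative parameters only; not a production-hardened configuration.
#include "openfhe.h"
#include <iostream>
#include <vector>

using namespace lbcrypto;

int main() {
    // Configure a CKKS context that supports one level of multiplication.
    CCParams<CryptoContextCKKSRNS> params;
    params.SetMultiplicativeDepth(1);
    params.SetScalingModSize(50);
    params.SetBatchSize(8);

    CryptoContext<DCRTPoly> cc = GenCryptoContext(params);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);

    // Key generation; the secret key never leaves the data owner.
    auto keys = cc->KeyGen();
    cc->EvalMultKeyGen(keys.secretKey);

    // Encrypt two vectors of sensitive values.
    std::vector<double> x1 = {0.25, 0.5, 0.75, 1.0};
    std::vector<double> x2 = {4.0, 3.0, 2.0, 1.0};
    auto c1 = cc->Encrypt(keys.publicKey, cc->MakeCKKSPackedPlaintext(x1));
    auto c2 = cc->Encrypt(keys.publicKey, cc->MakeCKKSPackedPlaintext(x2));

    // Arithmetic runs directly on ciphertexts; plaintext is never exposed.
    auto cSum  = cc->EvalAdd(c1, c2);
    auto cProd = cc->EvalMult(c1, c2);

    // Only the secret-key holder can decrypt the (approximate) results.
    Plaintext sum, prod;
    cc->Decrypt(keys.secretKey, cSum, &sum);
    cc->Decrypt(keys.secretKey, cProd, &prod);
    sum->SetLength(4);
    prod->SetLength(4);
    std::cout << "sum  = " << sum << "\nprod = " << prod << std::endl;
    return 0;
}
```

The addition and multiplication happen entirely in ciphertext form, which is the property that lets an AI pipeline operate on data it can never actually read.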
The implications of FHE go beyond individual data protection. “It enables confidential collaboration across organizations,” Rohloff explains. “Think hospitals working together on patient analytics without ever revealing personal records. Or financial institutions conducting joint fraud detection without compromising proprietary data.”
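As a hypothetical sketch of that collaboration pattern, the example below has two hospitals encrypt local cohort counts under a shared public key while an analytics service aggregates the ciphertexts without ever holding a decryption key. This is illustrative OpenFHE-style code, not a production design; a real deployment would typically use threshold (multiparty) key generation, which OpenFHE supports, so that no single organization controls the full secret key.

```cpp
// Sketch: cross-organization aggregation on encrypted data.
// Each hospital encrypts its own statistics; the aggregator never decrypts.
#include "openfhe.h"
#include <iostream>
#include <vector>

using namespace lbcrypto;

int main() {
    CCParams<CryptoContextCKKSRNS> params;
    params.SetMultiplicativeDepth(1);
    params.SetScalingModSize(50);
    params.SetBatchSize(4);

    CryptoContext<DCRTPoly> cc = GenCryptoContext(params);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);

    // In this simplified sketch one party holds the secret key; in practice
    // a threshold key shared across the hospitals would replace it.
    auto keys = cc->KeyGen();

    // Each hospital encrypts its local cohort counts under the public key.
    std::vector<double> hospitalA = {12, 7, 3, 9};
    std::vector<double> hospitalB = {5, 11, 8, 2};
    auto ctA = cc->Encrypt(keys.publicKey, cc->MakeCKKSPackedPlaintext(hospitalA));
    auto ctB = cc->Encrypt(keys.publicKey, cc->MakeCKKSPackedPlaintext(hospitalB));

    // The analytics service sums the ciphertexts; it holds no secret key
    // and learns nothing about either hospital's records.
    auto ctTotal = cc->EvalAdd(ctA, ctB);

    // Only the key holder decrypts, and only the aggregate is revealed.
    Plaintext total;
    cc->Decrypt(keys.secretKey, ctTotal, &total);
    total->SetLength(4);
    std::cout << "combined cohort counts: " << total << std::endl;
    return 0;
}
```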
This capability is increasingly vital as AI workflows stretch across jurisdictions and regulatory frameworks. FHE helps organizations maintain compliance with laws like HIPAA, GDPR, and CCPA by ensuring that data is never processed in an unencrypted state.
Equally important, FHE prevents the AI model itself from “learning” anything sensitive. “Even if the model is compromised,” Rohloff says, “it doesn’t have access to the actual data. That’s a game-changer for trust and resilience.”
For Rohloff, adopting FHE and other PETs isn’t just about technical hygiene. It’s a strategic imperative. “Waiting for regulations to force your hand is a losing strategy,” he warns. “Technical leaders must act now to secure AI’s future.”
That means making PETs part of an organization’s AI strategy from the start, not as a compliance afterthought. It requires vetting tools for data handling risks, demanding encrypted-by-default architectures, and investing in secure development skills across technical teams.
“We need a culture shift,” Rohloff says. “It’s not enough to trust the vendors. Leaders must ask hard questions, fund the research, and collaborate across sectors to set new norms.”