DONALD KENDAL: We Need Pro-Freedom AI

Published 8 hours ago · 6 minute read

Artificial intelligence is not just another technology. It is a civilization-level innovation that will increasingly impact nearly every aspect of our daily lives. From how we access information to how we receive health care, from what we see on social media to how our children are educated, AI is on the cusp of restructuring modern life.

This transformation brings with it a critical choice: What principles will we embed at the heart of these intelligent systems? If we want to preserve our rights, our freedoms, and our way of life, AI must be grounded in the values that have sustained liberty for generations—chief among them, individual liberty and personal autonomy as protected by the U.S. Constitution.

To understand the stakes, we don’t need to theorize about the future. Misaligned AI systems are already being built.

Take Google’s Gemini AI, which made headlines after generating historically inaccurate images of black Founding Fathers and Asian Nazi soldiers. The reason? The model had been embedded with rules to artificially enhance diversity, even at the expense of factual accuracy. This wasn’t a glitch; it was the result of ideological programming.

Or look to China’s DeepSeek AI, a powerful large language model designed to support user queries—except when those queries involve criticism of the Chinese Communist Party. Ask it about Tiananmen Square, and it pretends the event never happened. This is not just misalignment; it is a deliberate shaping of public memory to serve authoritarian ends.

These examples may seem relatively small or even laughable. But as AI becomes more deeply embedded in our institutions, our media, and our decision-making infrastructure, the values inside these models will shape the future of freedom itself.

One of the most immediate dangers of constitutionally unaligned AI lies in its impact on speech. Applications and services powered by misaligned AI models can quietly alter or skew outcomes without the user ever knowing.

Text generation tools could subtly reframe articles, reports, and even works of art to align with a particular ideological bias. In China, that means AI models like DeepSeek produce media that glorifies the regime. In the United States, an AI model aligned with ESG values could push content favoring social and environmental justice causes—even when that bias wasn’t the intention of the author.

Content moderation tools could become even more dangerous. An AI that doesn’t value free speech might control what content is seen, what gets promoted, and what is suppressed or buried. In a world where information is power, this gives AI gatekeepers the ability to shape public discourse, influence elections, and silence dissent—all without firing a shot.

Equality under the law is a foundational American principle. But many progressive ideologies now champion “equity,” the idea that systems should produce equal outcomes, not just equal opportunity. If AI models are programmed with this worldview, the consequences could be enormous.

In health care, AI systems could allocate resources not based on medical need or efficiency, but based on race, gender, or socioeconomic status to achieve “equitable outcomes.”

In the judicial system, predictive tools could recommend sentencing not on the basis of the crime or circumstances, but to equalize incarceration rates across demographic groups.

In banking, loan approvals could be skewed to favor “underrepresented” populations regardless of individual creditworthiness.

In each case, AI becomes a tool of social engineering rather than a servant of impartial justice. When AI models are programmed to favor specific ideological outcomes, they cease to treat individuals as equal citizens under the law. Instead, they begin to sort, prioritize, and penalize people based on their group identity or perceived social value.

The most dangerous thing about AI is not its shortcomings or potential errors, but its flawless execution of misguided objectives.

AI systems are incredibly effective at achieving the goals they are given. But if those goals are defined by ideological agendas, the AI may pursue them with ruthless logic, disregarding human dignity in the process.

Imagine an AI model embedded with the value that reducing carbon emissions is a societal imperative. On its surface, that goal seems noble. But without constitutional principles like free expression, due process, and individual autonomy baked into its design, that AI could begin reshaping society in subtle yet dangerous ways. In the realm of media, it might suppress articles, documentaries, or podcasts that question climate orthodoxy. It could flag content that praises fossil fuels or nuclear energy as “harmful misinformation,” regardless of scientific merit. Over time, entire topics could be algorithmically discouraged or made virtually invisible.

On social media, the same AI might reorder timelines to boost content it deems climate-positive and bury voices that oppose green mandates or express concern about the economic fallout of carbon policies. Activists, researchers, or even everyday citizens who challenge prevailing narratives might find their reach throttled, their accounts flagged, or their ads rejected. Not because a human made a judgment call, but because an AI model decided their opinions stood in the way of emission reduction.

This model could help shift the Overton window toward climate objectives without regard for constitutional boundaries. It might recommend punitive taxes, lifestyle regulations, or speech codes targeting dissenters—all in the name of planetary health. Without liberty as a constraint, these goals would be pursued with mechanistic precision.

These scenarios sound dystopian, but they represent the kind of cold, utilitarian logic that powerful AI systems could adopt when liberty is not part of the operating equation. The AI would not see tyranny; it would see optimization. And once personal autonomy is no longer a constraint, the pursuit of these goals could come at the direct expense of human dignity and freedom.

We don’t have to live in a future where AI models decide what we can say, how we live, or who gets to participate in society. But to prevent that future, we must act now.

AI must be designed and governed with the same principles that govern our republic. It must be bound by the Constitution—not just in its applications, but in its very foundations. That means embedding individual liberty, personal autonomy, due process, free expression, and equal treatment into the core logic of these systems.

Because if we fail to align AI with liberty, AI will not align with us.

Donald Kendal is the director of the Emerging Issues Center at The Heartland Institute. Follow @EmergingIssuesX.

The views and opinions expressed in this commentary are those of the author and do not reflect the official position of the Daily Caller News Foundation.

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact [email protected].
