Editorial
Will AI Divide Us or Bring Us Together?
Whether we love it or hate it, artificial intelligence is here to stay.
Concerns about AI’s environmental footprint, ethical risks, and impact on human creativity are valid. At the current pace of AI growth, data centers could, by 2030, consume as much water annually as ten million Americans and generate as much carbon pollution as a dozen coal-fired power plants. Artists, writers, and filmmakers are already debating whether AI will dilute original expression or automate creative labor out of existence.
As powerful as these concerns are, AI continues to grow and is being adopted across many sectors. So the question isn’t “Should we accept AI?” but “How do we guide it toward the betterment of humanity?”
The future depends on how intentionally we build, regulate, and use it.
AI’s Fork in the Road: Will It Divide Us or Bring Us Together?
The term Artificial Intelligence only became mainstream in the last few years, but the underlying technology has been shaping our digital lives for much longer. Well before ChatGPT was a household name, social media companies were using early AI systems—internally called “machine learning models” or “ranking algorithms”—to predict which videos, posts, and ads we were most likely to engage with.
Once these machine-learning systems kicked in on Facebook in the early 2010s, social media changed forever. The platforms that had promised to connect us, inform us, and empower us ended up doing the opposite. They pushed people further apart and fueled division in ways almost no one saw coming. Research later showed that Facebook’s algorithmic changes fed users content that evoked the strongest emotions, which made political discourse angrier, more extreme, and less true. And because these systems tended to show people more of what they already believed, they intensified what researchers call the “echo chamber effect,” a dynamic where individuals are repeatedly exposed to information that aligns with their existing views, which makes their beliefs feel stronger and makes it harder to understand where other people are coming from.
But this new wave of AI is a different animal entirely. While the AI of the 2010s could feed you content based on what you’d watched in the past, today’s AI shapes how you understand the world by generating its own arguments. Old AI curated the world for you. New AI constructs it for you. That’s a much bigger leap in power—and a much bigger risk if mismanaged.
Large Language Models (LLMs) like ChatGPT carry the same dual possibility social media did, but with exponentially more influence: they could either divide us or bring us together. In the best-case scenario, they could help us see the world more clearly by synthesizing thousands of sources at once, cutting through noise, and offering context without partisan bias. But early testing shows how easily they can do the opposite. An analysis by Johns Hopkins University found that chatbots tell us what we want to hear, generating biased answers that could further divide us on controversial issues.
This is exactly why guardrails shouldn’t be optional—they should be foundational. Transparent auditing can help expose whether an AI system skews left, right, or toward sensationalism. Public oversight can create standards for how recommendation engines are allowed to shape information flows, similar to how we regulate financial markets or environmental safety. And clearer transparency requirements—such as forcing platforms to disclose why a user is being shown specific content or how a model arrives at an answer—give people the ability to understand, challenge, or opt out of algorithmic influence.
The lesson from the last decade is simple: when we build powerful tools without transparency or accountability, the tools end up shaping us more than we shape them. If we want this generation of AI to bring people together rather than fracture us further, the guardrails have to be built in now—not after the damage is already done.
How AI Could Reduce Polarization Instead of Fueling It
Despite the risks, AI holds enormous potential to counter polarization if we use it wisely. One powerful shift is reframing AI as a tool that helps us think better rather than think for us.
For example, researchers found that when people used AI tools to suggest language during difficult political conversations, it helped them communicate more calmly and reduced hostility—without influencing their opinions on the issues at hand. AI can highlight shared values that often get buried under heated rhetoric. It can summarize complex issues without oversimplifying them. It can detect misinformation in real time and flag inaccuracies before they spread.
AI can also help translate public input into actionable policy. Builders’ AI tool, Ima, is doing exactly this in Texas. Ima gathers stories and practical ideas from everyday Texans about healthcare. Instead of simply collecting data, Ima synthesizes common themes and identifies where the public already agrees—something traditional politics often ignores. The findings will be shaped into real policy proposals rooted in what Texans actually want, proposals that will be brought directly to state lawmakers.
That’s AI as an amplifier of democratic voice.
A Future Defined by the Choices We Make Today
AI’s impact is not predetermined. It can widen divides, deepen misinformation, and calcify ideological bubbles—or it can broaden participation, elevate shared values, and help us understand one another more clearly.
The difference lies in our choices. If we demand accountability, transparency, and public benefit as core principles, AI can strengthen community and civic life rather than erode it. If we treat it as a tool, not an oracle, it can help us think more deeply rather than think less.
The question isn’t whether AI will change the world. It’s whether we will shape AI in ways that serve all of us.
—Alex Buscemi (abuscemi@buildersmovement.org)
Art by Matthew Lewis
Keep Reading
Will AI Divide Us or Bring Us Together?
How ‘Hustle Culture’ is Dividing Us—and How We Can Stop It