The Hidden Costs of Artificial Intelligence

“If Canada becomes complacent about human oversight, algorithms will write rules instead of people.”
Via Policy Options

From the frontlines of war to the front pages of climate crises, artificial intelligence (AI) is outpacing the rules designed to govern it. At SPPGA, students and faculty are investigating how its rapid rise is reshaping power, accountability, and human judgment. 

Before Canada Spends Billions, Control the Killer Robots 

Canada is embracing artificial intelligence, creating a new Minister of AI and Digital Innovation, while committing to its largest military spending increase since the Cold War. MPPGA student Nishtha Gupta warns that these two priorities are on a collision course. 

In her Policy Options article, “Before Canada spends billions more on defence, let’s control the killer robots,” Gupta examines how Canada’s AI priorities are converging with its commitment to raise military spending to 5% of GDP by 2035. She points to disturbing examples from Gaza, where AI targeting systems have reduced human oversight to as little as 20 seconds per strike despite error rates of roughly 10%, and from Ukraine, where Russian drones have terrorized civilians. As Canada prepares its largest military investment since the 1950s, Gupta argues, it must establish clear red lines: “If Canada is preparing to invest at levels not seen since the 1950s, the military must keep humans firmly and accountably in charge.” Read the full article.

Data Centers Drain Water as AI Expands 

In their Canadian Dimension article “Water woes from data centers,” SPPGA Professor M.V. Ramana and CARE program visiting research student Justine Babin discuss how data centers, the backbone of AI, are consuming enormous amounts of water and threatening local communities. Google’s Council Bluffs facility alone used 980 million gallons of drinking water in 2023, accounting for nearly a quarter of the city’s total supply. In Alberta, where the provincial government is promoting itself as a prime location for AI data centers, the Sturgeon Lake Cree Nation has accused a proposed project of threatening their livelihoods. Ramana and Babin argue that Canada should require transparency from tech companies regarding their water and energy use before approving further projects. Read the full article.

SMRs Won’t Solve AI’s Energy Crisis 

Beyond water, AI’s expansion requires massive amounts of energy. Ontario is investing $21 billion in small modular reactors (SMRs) at Darlington to meet rising electricity demand. But SPPGA Director Allison Macfarlane, former chair of the U.S. Nuclear Regulatory Commission, urges caution. In CBC’s “Small nuclear reactors: Why Canada is investing billions,” Macfarlane warns that SMRs “have never been built, so there’s a lot of uncertainty about the cost, especially.” While she supports keeping existing nuclear plants running, she is skeptical of unproven solutions: “I don’t want us spending lots of money on something that’s going to take a very long time to become a reality.” Watch the CBC video here.

Professor M.V. Ramana has raised similar doubts about tech companies’ nuclear ambitions. In “Big Tech’s Nuclear Lies” for Counterpunch, he argues that SMRs won’t be built in time to power the data centers driving AI’s growth, warning that these announcements serve as a “dangerous distraction” while companies continue to expand their use of fossil fuels. Read the full article.

AI Images Deceive During Climate Emergencies 

As wildfires rage and climate disasters intensify, AI-generated misinformation is creating new dangers. Speaking to Business in Vancouver for “‘Stumbled right into it’: AI images trick wildfire expert during crisis,” Heidi Tworek, Canada Research Chair in Communications and professor at SPPGA, explains how visual deception has become a “staple feature of any kind of climate emergency.” While image manipulation has always existed, Tworek notes that advances in AI now let nearly anyone produce convincing false images, complicating emergency response when accurate information is most critical. Read the full article.

Tworek and Gupta also co-authored “Harmful Hallucinations: Generative AI and Elections,” a report examining how generative AI threatens democratic processes worldwide. Read the report.

Learn more about the research of Nishtha Gupta, M.V. Ramana, Allison Macfarlane, and Heidi Tworek.