Insights from the AI Symposium: Building inclusive, transparent and innovative AI

By University of Ottawa

Office of the Vice-President, Research and Innovation (OVPRI)

Panelists on stage at the uOttawa AI Symposium seen from above.
Artificial intelligence has rapidly woven itself into the fabric of our daily lives, influencing how we learn, heal, create and govern. This disruptive technology cannot be tackled in silos — it demands collaboration across disciplines to ensure responsible implementation that doesn’t worsen inequality, harm the environment or erode critical thinking.

The uOttawa AI Symposium brought together researchers from every single faculty to debate and explore AI from multiple angles, offering insights for technologists, policymakers and the public alike.

From the start, one message rang clear: AI is not just a technical tool. It’s a socio-technical system that is both shaping and shaped by our world. The day’s conversations revealed both the promise and the peril of this technology, with experts calling for innovation that is inclusive, transparent and sustainable.

Driving innovation across sectors

Panelists spoke about the many ways AI is already transforming medicine, law, education and telecommunications by boosting efficiency and unlocking new possibilities.

  • Professor Tanya Schmah spoke about using AI in medical imaging to detect anomalies, allowing researchers to identify changes in brain structures linked to Alzheimer’s disease.
  • In telecommunications, Professor Melike Erol-Kantarci predicted that 6G networks will arrive by 2030 to support AI’s growing demand for speed and bandwidth, calling AI “the next Internet.”
  • Professor Richard Naud explained how AI and brain science are advancing together. Reinforcement learning, a technique that allows algorithms to “learn” by trial and error and receive “rewards” for good decisions (see the sketch after this list), helps researchers model how human brains process serotonin. At the same time, insights from neuroscience inspire new AI designs, such as energy-efficient chips that mimic brain function.

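To make the idea concrete, here is a minimal, hypothetical sketch of reinforcement learning in Python: tabular Q-learning on a toy corridor task. It is not the panelists’ code, and every state, reward and parameter below is illustrative, but the trial-and-error loop and the reward-driven update are the core of the technique described above.

```python
# Minimal reinforcement learning sketch (tabular Q-learning).
# Toy task: a 5-state corridor; the agent starts at state 0 and is
# rewarded only upon reaching state 4. All values are illustrative.
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's estimate of future reward per (state, action)
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: explore randomly with probability EPSILON
        # (or when the estimates are tied), otherwise exploit.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Reward-driven update: nudge the estimate toward what was
        # actually received plus the best estimate of what comes next.
        Q[state][a] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][a]
        )
        state = next_state

print([round(max(q), 2) for q in Q])  # estimates rise toward the goal
```

The update’s error term, the gap between the reward the agent expected and the one it received, is analogous to the “reward prediction error” signals that neuroscientists compare against neuromodulator activity in the brain.
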
While these examples illustrate exciting opportunities for growth and discovery, other discussions highlighted the need for governance frameworks tailored to sector-specific innovations.

Strengthening AI infrastructure

Researchers said that the key to building trustworthy AI systems lies in robustness (resilience to errors), explainability (understanding how an AI system reaches its decisions) and energy efficiency, and that reliable, scalable AI begins with strong research into its foundation, architecture and learning rules.

Computer scientist Diana Inkpen highlighted research on large language models (LLMs), the technology that forms the basis of tools like ChatGPT, Gemini and Copilot. Her work focuses on understanding how LLMs represent knowledge. These models learn patterns from massive datasets but, crucially, lack genuine reasoning and contextual understanding; recognizing this gap is vital as these tools become increasingly embedded in everyday life.
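
As a rough illustration of the gap Inkpen points to, here is a toy “bigram” language model in Python. This is a hypothetical sketch, nothing like a real LLM in scale or architecture, but it shows in miniature what “learning patterns” means: the model predicts the next word purely from counted co-occurrences, with no reasoning and no context beyond a single preceding word.

```python
# Toy "bigram" model: predicts the next word from co-occurrence
# counts alone. Illustrative only; real LLMs are neural networks
# trained on vastly larger corpora, but both capture statistics.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: the "pattern" being learned
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the most frequent next word seen in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # a frequent continuation, not understanding
print(predict("sat"))  # 'on': pure statistics from the corpus
```

Scaled up by many orders of magnitude, this pattern-matching becomes remarkably fluent, which is exactly why the missing reasoning and context can be so easy to overlook.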

Social justice, equity and inclusion in AI

Because AI is shaped by the world around it, this technology risks amplifying systemic inequities and biases that disproportionately affect marginalized communities. Speakers warned that without deliberate oversight, AI could deepen inequality and environmental harm.

Examples of such social effects ranged from deepfake harassment targeting women and girls to algorithmic bias in health care and policing. AI’s environmental footprint, driven by the massive water and energy consumption of data centres, was another concern.

Legal ethics Professor Amy Salyzyn added that while the market for specialized legal AI tools has surged, access to such technology remains concentrated among large firms with substantial resources. This creates a two-tier system in which well-funded organizations benefit from advanced tools, while smaller firms and individuals risk being left behind, a situation that widens the justice gap.

To prevent bias and automated discrimination, researchers stressed the need for inclusive design, deliberative processes and participatory governance that empowers marginalized voices. Building equitable access to AI requires frameworks that prioritize safety over blind trust.

A crowd of people sitting at conference tables, seen from above.
AI as a socio-technical tool

“AI mirrors the data that trained it and the values of those who design and use it — making transparency and accountability essential to protect democracy.”

Symposium panelists on misinformation and democracy

Misinformation and democracy: AI’s role in shaping public discourse

These concerns reflect a broader truth: AI is far more than a technical tool, and it is neither neutral nor impartial. As Professor Kelly Bronson put it, “it was not immaculately conceived.”

Professor Elizabeth Dubois illustrated this through her research on political communication, showing how AI tools reshape the information landscape: deepfakes spread misinformation and erode trust while AI-driven engagement platforms enable hyper-targeted outreach. This raises concerns about representation, transparency and the potential for echo chambers.

Panelists pointed to recent elections where AI-generated content fueled disinformation campaigns that challenged civic engagement and democratic norms. This shift highlights another societal impact of AI: it has reshaped access to reliable, high-quality information, undermining public trust and threatening democratic processes.

AI mirrors the data used to train it and the values of those who design and use it, making transparency and accountability all the more essential to protecting democracy.

Panelists on stage at the uOttawa AI Symposium seen from above.
Innovation vs. regulation

“Far from hindering progress, well-designed policies create the stability that innovation needs to thrive.”

Symposium panelists on trust, transparency and oversight

Trust, transparency and oversight: Balancing innovation and regulation

Trustworthiness emerged as a central theme. As Professor Jason Millar, an expert in ethical engineering, put it: “Public trust depends on systems proving they’re trustworthy.” Transparency and accountability are critical to making systems safer and more reliable.

For example, AI-generated fake case citations have already made their way into court filings, highlighting the risks of relying on tools that lack proper oversight. To counter such abuses, Teresa Scassa, a professor of information law, called for enforceable regulations, audits, documentation and ongoing transparency checks.

While concerns about regulation stifling innovation are understandable, panelists agreed that clear rules are essential for building public trust and making AI truly trustworthy. Far from hindering progress, well-designed policies create the stability that innovation needs to thrive.

These principles extend to education, where questions about AI’s role in learning are growing. Heavy reliance on AI for homework can reduce retention and understanding, raising concerns about long-term learning. Microsoft even prohibits AI use by children under 13.

However, AI is everywhere; it is the future. Researchers concluded that instead of banning it, we must teach students to use it ethically and responsibly. Students must learn how to think critically, understand AI’s limits and verify information for accuracy.