Members of the media may directly contact the following experts on this topic:
Jasmin Manseau (English and French)
Assistant Professor, Telfer School of Management
jasmin.manseau@telfer.uOttawa.ca
Professor Manseau’s research focuses on emerging AI technologies, particularly generative AI and conversational assistants. He explores the dark side of technology, examining how digital systems can foster dehumanization and how biometric innovations raise profound ethical concerns around surveillance and privacy.
“AI can sometimes contribute to subtle forms of dehumanization by reducing human knowledge, social interactions and intellectual labour to ‘datapoints’ that can be reused or reproduced by machines. In that sense, when AI systems reuse or simulate the work or intellectual voice of scholars without clear recognition, it raises broader questions about how these technologies may reshape the value, dignity and recognition of human contributions.”
David Fewer (English and French)
Part-time Professor, Faculty of Law – Common Law Section, and Interim Director of the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC).
Professor Fewer’s research tackles intellectual property and technology law issues. He can discuss copyright, personality-rights, and consumer-protection issues surrounding this topic.
“First, there are serious copyright questions. Where a company trains a model on protected works and then markets a commercial product designed to replicate the distinctive manner of those works, there is a strong argument that the conduct may cross into unfair reproduction of protectable expression. That issue is not yet settled in Canadian law, but it raises a serious and credible infringement case.
Second, there is a clear personality-rights problem. If a product uses the names and identities of real writers and academics to market AI-generated advice, the law will ask whether it is commercially exploiting their persona without consent. In Canada, that can engage misappropriation of personality, and, in some provinces, statutory privacy protections aimed directly at unauthorized promotional use of a person’s name or likeness.
Third, there is the consumer-protection issue. Even if the company adds qualifiers like “inspired by,” the real question is the general impression created for users. If consumers are led to believe that named authors endorsed the tool, participated in it, or are effectively giving the advice, that risks constituting a misleading representation.”
Mayur Joshi (English only)
Assistant Professor of Information Systems and holder of the Father Edgar Thivierge Fellowship in AI and Transformation of Work, Occupations, and Organizations at the Telfer School of Management.
Professor Joshi’s research interests lie at the intersection of information systems and organization theory. He recently argued for conceiving AI as emerging from relations among human and algorithmic actors.
“The ethical concern is obvious, but I think the deeper issue is epistemic. There is a fundamental difference between mimicry and knowledge production. Tools that learn from my own inputs, with my ongoing involvement, may approximate aspects of my expertise because I remain part of the learning loop. While systems like this do have access to user inputs, they primarily capture textual traces and editing patterns, rather than the situated judgment, problem framing, and standards of evaluation that constitute expertise.
“As a result, inferring expertise from published artifacts risks reproducing expression while missing the underlying knowledge production. This moment pushes us to design AI systems that engage expertise relationally, co-produced with experts, rather than trying to extract and commodify it from static textual traces.”
Rafid Mahmood (English only)
Assistant Professor, Telfer School of Management
Professor Mahmood’s research targets applications of large-scale AI systems, examining how predictive and prescriptive models are built to automate decision-making and how to better manage the data-centric pipelines needed to operationalize those models.
“If a company uses the identity of a real person to make machine-generated content seem more credible, without permission, that raises serious concerns about consent, misrepresentation, and public trust. This becomes an issue of governance. AI products need clearer standards for consent and accountability when they draw on the recognizable identity of real people.”