Will AI help or hinder democracy? Will social media bring citizens together in lively debate or will misinformation drive them apart? What role do citizens play in shaping our digital future?
Prominent Canadian experts in AI and big tech joined their U.S. counterparts to debate these questions at our third Responsible Tech Summit, co-hosted by the Consulate General of Canada in New York and All Tech Is Human.
Close to 300 people from civil society, government and the tech sector itself attended to better understand the multitude of issues facing our society and how we can shape our digital future to better align with the public interest.
Canada is well-positioned to lead this global discussion. “Canada was the first country to launch a national AI strategy to commercialize Canadian AI technologies, advance federal AI standards, and invest in world-class training and talent,” said Canada’s Deputy Consul General André Frenette.
“But we know we can’t forge the path ahead alone. Canada and the U.S. are each other’s oldest allies and partners, and our work together on digital technologies is just getting started.”
“Tackling complex tech and society issues, such as the impact of generative AI, requires a broad mix of backgrounds and stakeholders to come together to understand values, best practices, and the best pathway forward,” added David Ryan Polgar, Founder and President of All Tech Is Human.
“Too often, however, it feels as if tech innovation is moving significantly faster than our ability to understand its impact and create appropriate guardrails and policies.”
The event kicked off with a fireside chat with Get Media Savvy Executive Director Julie Scelfo in conversation with MIT sociologist Dr. Sherry Turkle about her decades-long research on how technology is shaping mental health and well-being.
Turkle highlighted how the prevalence of smartphones and other technology is negatively impacting early childhood development, impeding young people’s ability to read social cues and develop healthy relationships.
Urging policy makers, advocates, and academia to come together, Turkle called for a consumer movement to make technology more humane: “Even though it’s going to be a very long fight, I believe it’s a fight that we’re going to win because the damages and the harms are just too great. I think of this as a break the glass moment, where it’s time to break the glass and say it’s now or never.”
Next, Munk School Director Peter Loewen moderated a panel on generative AI and elections, featuring CIGI Fellow Dr. Samantha Bradshaw, Canadian tech entrepreneur Shuman Ghosemajumder, and Freedom House’s Research Director on Tech and Democracy Allie Funk.
Dr. Bradshaw spoke about how large language models can enable disinformation campaigns that are more persuasive, more personalized, harder to track, and deployable at a larger scale than was previously possible.
Ghosemajumder said widespread layoffs of trust and safety teams across the U.S. tech industry were particularly worrying given the rapid advancements in generative AI and its potential for misuse by cybercriminals.
Funk highlighted how generative technology is already shaping elections, pointing to an example from earlier this year of a deepfake audio clip of a Nigerian presidential candidate talking about vote rigging.
However, Funk added, “A lot of issues with disinformation campaigns and the reliability of information spaces are societal problems, not technical problems. It’s about building resilience as a society, because regulations and technical solutions are only going to get you so far.”
Noted Facebook whistleblower Frances Haugen took the stage to discuss the state of online safety with Nabiha Syed, CEO of The Markup. Currently a Senior Fellow at McGill University’s Center for Media, Technology, and Democracy, Haugen has spent the past two years advocating for tech regulation and educating governments and litigators on online safety issues.
Calling for better access to social media data, Haugen said, “We need to demand transparency. Researchers need to be able to ask questions and get answers. We need public data feeds…. It has to be mandatory. It has to be backed up by law.”
Haugen urged social advocates and technologists to work together and leverage their respective strengths, emphasizing that both groups must come together to identify what harms are being perpetuated online and how they can be addressed.
Columbia University law professor Tim Wu joined Tech Policy Press Editor Justin Hendrix for a conversation on the future of tech policy. Reflecting on his time as an advisor to President Biden from 2021 to 2023, Wu said he was proud of the administration’s work on antitrust and competition policy, citing the U.S. government’s ongoing cases against Google, Facebook, and Amazon as examples of an effort to rebalance the power that has accumulated in Big Tech.
He lamented, though, that the administration’s push for privacy and child safety legislation has been less successful despite broad public support.
The day wrapped up with a panel on digital spaces and public goods, moderated by Camille Carlton, Senior Policy Manager at the Center for Humane Technology. Speaking about her experience on the now-dismantled Trust and Safety team at Twitter, Theodora Skeadas reiterated the importance of collaboration between people inside and outside of Big Tech companies to deliver healthier, more inclusive spaces.
WAYE Founder Sinead Bovell pointed to Twitter’s Community Notes feature as an example of optimizing a social media platform to promote content with input from diverse users, demonstrating that it is possible to align the public interest with shareholder value.
Tech Global Institute Executive Director Sabhanaz Rashid Diya called for tech companies to consult a broader spectrum of stakeholders, particularly when entering new markets and operating in less established democracies.
Author Patrick K. Lin said a consumer protection or product liability approach to tech regulation was an under-explored avenue for governing Big Tech, citing the Federal Trade Commission’s 2019 fine against Facebook as an example of the approach being used effectively to boost accountability and transparency.