CEO Message

Welcome to the October edition of Profound Connections.

In this issue of the newsletter, we launch Early Career Spotlight, where you’ll meet outstanding young researchers working on topics ranging from artificial intelligence (AI) to theoretical astrophysics to sustainable agriculture. This month we feature Computer Science PhD candidate Domenic Rosati, who is conducting pioneering research on AI safety at Dalhousie University in Halifax.

This issue’s Researcher Spotlight profiles Kate Larson, Professor of Computer Science at the University of Waterloo and Research Scientist at DeepMind. Kate’s research focuses on multiagent systems and reinforcement learning, as well as applications of AI to support sustainable development and climate-related initiatives.

September was a busy month for the Profound Impact team. We celebrated the fifth annual Profound Impact Day on Monday, September 16, 2024. Inaugurated in 2019, Profound Impact Day is a celebration of the world’s diverse leaders, changemakers, and researchers who are leaving their mark on the global community through their initiatives, influence, and impact.

This year we recognized Roger Grosse (University of Toronto), Ali Ouni (ETS Montréal), and Liam Paull (Université de Montréal), the three winners of the CS-Can|Info-Can Outstanding Early Career Computer Science Researcher Award. Professors Ouni and Paull participated in a panel discussion with Feridun Hamdullahpur, former President and Vice-Chancellor of the University of Waterloo and inaugural winner of the Impactful Actions Award.

Celebrations also included a conversation between Dr. Kelly Lyons, Professor in the Faculty of Information and the Department of Computer Science at the University of Toronto and Chair of the CS-Can|Info-Can Awards Committee, and Profound Impact’s Sherryl Petricevic. Kelly and Sherryl talked about the vital importance and benefits of industry-academia collaboration and the challenges of early-career research.

Check out the conversations with Kelly Lyons and the award winners on Profound Impact’s YouTube channel.

Profound Impact participated in the launch of Women Funding Women Inc. (WFW) Waterloo Region on September 18. This exciting event featured passionate conversations with forward-thinking investors, ambitious founders, and dedicated ecosystem leaders. We’re proud to be part of the powerful movement to bridge the funding gap for women-led businesses and reshape the future of angel investing. 

We were pleased to sponsor the Accelerator Centre’s She Talks Tech event, a conversation focused on fostering a supportive environment for the next generation of women in STEM, on September 26. I participated in a panel conversation with other women entrepreneurs, innovators, creatives, and academics who have overcome barriers in their STEM careers. A full recording of the event can be found here.

Profound Impact team members Jacqueline Watty and Sherryl Petricevic were on hand to support our partner, Innovation Factory, southwestern Ontario’s business accelerator for tech innovation, at the 14th annual LiONS LAIR pitch competition in Hamilton on September 26. LiONS LAIR showcases the best local talent and innovation in Brant, Halton, Hamilton and Norfolk. Congratulations to prize winners Infinite Harvest Technologies, Yellowbird Diagnostics, Inc., Maman Biomedical and DOUBL.

Want to see firsthand how the revolutionary matchmaking features of our AI-powered platform, Research Impact, can transform your research projects by finding the perfect funding match? Join us for University Research Impact Demo Day on the 23rd. Sign up here to participate.

As always, thank you for your support and we hope you enjoy this month’s edition of Profound Connections!

Sherry Shannon-Vanstone

Domenic Rosati
Computer Science PhD Candidate, Dalhousie University
Head of Artificial Intelligence, Scite

In 2023, senior AI researchers and company executives, including Nobel Prize winner Geoffrey Hinton, Yoshua Bengio and Sam Altman, signed a statement noting that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Domenic Rosati, a Computer Science PhD candidate at Dalhousie University in Halifax, is an early-career researcher whose work focuses on understanding these AI safety risks, both current and potential, and on constructing and measuring defenses against bad actors who might purposely fine-tune large language models to achieve harmful goals.

As the son of tech entrepreneurs, Domenic grew up around computers, attended summer computer camp and learned to program while very young. “At the time, personal computers weren’t what they are now. With no apps to work with, I naturally picked up programming at a very early age,” he says.

As a teenager, Domenic lost some of his enthusiasm for technology and became more interested in the humanities. “I was really good at programming and already knew the material that my high school offered in computer science courses. I took history courses instead,” he adds.

Domenic studied history at Carleton University in his hometown of Ottawa. “For most of my course work, I employed methods from computational linguistics. That got me back into computing,” he explains.

After completing his undergraduate degree, Domenic applied to conduct graduate work in history using these methods. After rejections from the University of Toronto and McGill University, he applied to and was accepted into the library and information science program at Dalhousie. He says, “I very quickly understood that working as a librarian or archivist wasn’t for me. I wanted to do deeper technical work, so I started taking master’s-level computer science courses. By the final year of my program, I was exclusively studying computer science.”

Domenic’s master’s research was in machine learning for natural language processing, conducted at a time when deep learning was quite new and used primarily for computer vision tasks.

“In 2016, I had a moment where I had to choose between pursuing a PhD in computer science or continuing to work in industry,” he says. With a young family to support, he decided to stay in industry, joining a start-up that applied machine learning to video analysis for movie studios. As Director of Machine Learning and Natural Language Processing, Domenic worked for two years in both research and engineering roles.

In 2018, Domenic founded a start-up that focused on reinforcement learning. “It didn’t work out but was a really good experience,” he notes. 

In 2020, Domenic joined Scite, a platform that uses deep learning and natural language processing to deliver a new type of citation index for literature discovery, as one of its first employees. He says, “We were lucky to build a generative AI experience for doing research tasks, including writing, before ChatGPT.”

Domenic’s work at Scite drew him back to his roots in computational linguistics. “The motivation to pursue my PhD studies was having industry experience with generative AI early on, before the emergence of ChatGPT, and understanding that customers could use it in really unsafe and irresponsible ways. I knew that someone needed to think, on a regulatory and ethical level, about the safety implications of using these models,” he adds.

His work as an early provider of a generative AI solution for research writing tasks led him to think about the next generation of technology, prior to its development and release. Domenic says, “I’m really concerned about the use of large language models, not as they are today, but as they start to be used to accomplish tasks autonomously, as agents in the world. That very strongly motivated me to try to resolve some of the AI safety issues.”

The main focus of Domenic’s PhD research is a particular type of threat arising from the ability of large language models and other neural networks to be trained to competently perform a wide spectrum of tasks. “Some of the tasks that you can make large language models do are very harmful. For example, autonomously hacking websites, developing security exploits, generating hate speech or misinformation campaigns,” he says.

He notes that some in the field classify large language model research as a dual-use risk problem. “The better you make the technology, the easier it is to use for harmful purposes. There is a massive market incentive for commercial companies and the research community to develop more and more capable large language models. This comes at the risk of those models being able to autonomously perform harmful tasks much better. Completely autonomous agents could take a series of harmful actions.”

Safety guards are the mainstream method used for AI safety: if a model in a commercial or research application is tasked to perform a harmful action, it refuses and explains why the action is harmful. The standard paradigm is to develop better and better safety guards that refuse more and more harmful tasks.

Domenic’s research is not about building better safety guards, but about preventing their removal through training: if safety guards can be easily removed, they don’t matter.

Domenic will continue to work on his PhD for the next several years. When considering his next career steps, he notes that his focus will be on foundational rather than applied research, whether as a university researcher, in industry, or as a technical voice helping to inform the government regulation that will shape AI for the future.

This early-career researcher’s work in machine learning, natural language generation systems and AI safety is truly pioneering and key to defending against the risks of advanced AI systems.