Welcome to the October edition of Profound Connections.
In this issue of the newsletter, we launch Early Career Spotlight, where you’ll meet outstanding young researchers working on topics ranging from artificial intelligence (AI) to theoretical astrophysics to sustainable agriculture. This month we feature Computer Science PhD candidate Domenic Rosati, who is conducting pioneering research on AI safety at Dalhousie University in Halifax.
This issue’s Researcher Spotlight profiles Kate Larson, Professor of Computer Science at the University of Waterloo and Research Scientist at DeepMind. Kate’s research focuses on multiagent systems and reinforcement learning, and applications of AI to support sustainable development and climate-related initiatives.
September was a busy month for the Profound Impact team. We celebrated the fifth annual Profound Impact Day on Monday, September 16, 2024. Inaugurated in 2019, Profound Impact Day is a celebration of the world’s diverse leaders, changemakers, and researchers who are leaving their mark on the global community through their initiatives, influence, and impact.
This year we recognized Roger Grosse, University of Toronto, Ali Ouni, ETS Montréal, and Liam Paull, Université de Montréal, the three winners of the CS-Can|Info-Can Outstanding Early Career Computer Science Researcher Award. Professors Ouni and Paull participated in a panel discussion with Feridun Hamdullahpur, former President and Vice-Chancellor at the University of Waterloo and inaugural winner of the Impactful Actions Award.
Celebrations also included a conversation between Dr. Kelly Lyons, Professor in the Faculty of Information and the Department of Computer Science at the University of Toronto and Chair of the CS-Can|Info-Can Awards Committee, and Profound Impact’s Sherryl Petricevic. Kelly and Sherryl talked about the vital importance, challenges and benefits of industry-academia collaboration, as well as the challenges of early-career research.
Check out the conversations with Kelly Lyons and the award winners on Profound Impact’s YouTube channel.
Profound Impact participated in the launch of Women Funding Women Inc. (WFW) Waterloo Region on September 18. This exciting event featured passionate conversations with forward-thinking investors, ambitious founders, and dedicated ecosystem leaders. We’re proud to be part of the powerful movement to bridge the funding gap for women-led businesses and reshape the future of angel investing.
We were pleased to sponsor the Accelerator Centre’s She Talks Tech event, a conversation focused on fostering a supportive environment for the next generation of women in STEM, on September 26. I participated in a panel conversation with other women entrepreneurs, innovators, creatives, and academics who have overcome barriers in their STEM careers. A full recording of the event can be found here.
Profound Impact team members Jacqueline Watty and Sherryl Petricevic were present to support our partner, Innovation Factory, southwestern Ontario’s business accelerator for tech innovation, at the 14th annual LiONS LAIR pitch competition in Hamilton on September 26. LiONS LAIR showcases the best local talent and innovation in Brant, Halton, Hamilton and Norfolk. Congratulations to prize winners Infinite Harvest Technologies, Yellowbird Diagnostics, Inc., Maman Biomedical and DOUBL.
Want to see firsthand how the revolutionary matchmaking features of our AI-powered platform, Research Impact, can transform your research projects by finding the perfect funding match? Join us for University Research Impact Demo Day on the 23rd. Sign up here to participate.
As always, thank you for your support and we hope you enjoy this month’s edition of Profound Connections!
Kate Larson is an acclaimed artificial intelligence (AI) researcher whose work focuses on multi-agent systems and brings together computer science, mathematics, and economics.
As the daughter of a biology professor at Memorial University in Newfoundland, Kate thought she might like to study biology. “The trouble was that everyone in the department knew me as Katie, my father’s daughter,” she says. A very good course in first-year mathematics sparked her interest and led her to major in the subject as an undergraduate at Memorial.
Kate received an NSERC Undergraduate Research Award, and it was Sherry Mantyka, her supervisor for the research project that she conducted related to the award, who encouraged Kate to attend graduate school. She explains, “Sherry was very supportive and an important mentor. She submitted my project, which was on math and cognitive science, to a research conference. That was a really big deal!”
When looking at options for graduate school, Kate initially considered math programs in Canada, but was intrigued by the advice given by a researcher she met at a conference to study computer science in the United States. “I chose to focus on a field I knew nothing about in a place that was unfamiliar to me,” says Kate.
Kate earned a Master’s degree in Computer Science at Washington University in St. Louis and went on to complete her PhD in Computer Science at Carnegie Mellon University in Pittsburgh. The committee for her doctoral dissertation, Mechanism Design for Computationally Limited Agents, included computer scientists and a microeconomic theorist, exemplifying the multi-disciplinary nature of her research.
Kate’s research focuses on multiagent systems and reinforcement learning, and on applications of AI to support sustainable development and climate-related initiatives. She is especially interested in research challenges that emerge when cooperation is made the heart of AI systems. Issues associated with cooperation are pervasive and important, found at scales ranging from daily routines like driving on highways, scheduling meetings and working collaboratively, to global challenges like peace, commerce, and pandemic preparedness. With AI-powered machines playing an increasingly important role in our lives, it will be essential to equip them with the capabilities necessary to cooperate and to foster cooperation.
Kate’s work has earned her a Province of Ontario Early Researcher Award, the Canadian Association of Computer Science/Association d’informatique canadienne (CACS/AIC) Outstanding Young Researcher Prize, a University of Waterloo Research Chair and the Pasupalak AI Fellowship. She currently splits her research time as a Professor in the Cheriton School of Computer Science at the University of Waterloo, and as a Research Scientist at DeepMind in Montreal.
Kate’s role as a researcher and professor has included serving as graduate student supervisor and PhD thesis examiner and committee member for many students during the 20 years she has been on faculty at the University of Waterloo. She is also an active member of the university community, having acted as Director of Undergraduate Studies for the Cheriton School of Computer Science during the COVID pandemic and as a member of the University Senate representing the Faculty of Mathematics. Kate has also been active in outreach activities for female high school students and in events for university students to promote Computer Science as a career option for women.
The international scientific community has benefitted from Kate’s expertise in her roles as a member of the International Joint Conferences on Artificial Intelligence (IJCAI) Board of Trustees, program chair for the IJCAI 2024 conference, and member of the Computing Research Association (CRA) and CS-Can|Info-Can Boards of Directors. She is Co-Editor-in-Chief of the Journal of Autonomous Agents and Multiagent Systems, and has served as Associate Editor and member of the editorial board for a range of AI scientific journals.
Kate strongly believes that the advancement of AI will benefit greatly from collaborative research that incorporates a diversity of ideas and backgrounds, leading to the consideration of a range of interesting questions. “Issues affecting those with lower incomes, women, and minority populations need to be addressed in AI research. A homogenous group of researchers will develop tools to solve a narrow set of problems,” notes Kate. Kate Larson’s research on cooperation in AI, her emphasis on diversity and interdisciplinary research teams, and her work in both academia and industry are leading the way to machines learning to find common ground to address a wide range of global challenges.
In 2023, senior artificial intelligence (AI) researchers and company executives, including Nobel Prize winner Geoffrey Hinton, Yoshua Bengio and Sam Altman, signed a statement noting that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Domenic Rosati, a Computer Science PhD candidate at Dalhousie University in Halifax, is an early-career researcher whose work focuses on understanding these AI safety risks, both current and potential, and on constructing and measuring defenses against bad actors who might purposely fine-tune large language models to achieve harmful goals.
As the son of tech entrepreneurs, Domenic grew up around computers, attended summer computer camp and learned to program while very young. “At the time, personal computers weren’t what they are now. With no apps to work with, I naturally picked up programming at a very early age,” he says.
As a teenager, Domenic became less enthusiastic about technology and more interested in the humanities. “I was really good at programming and already knew the material that my high school offered in computer science courses. I took history courses instead,” he adds.
Domenic studied history at Carleton University in his hometown of Ottawa. “For most of my course work, I employed methods using computational linguistics. That got me back into computing,” he explains.
After completing an undergraduate degree in linguistics, Domenic applied to conduct graduate work in history using these methods. After rejections from the University of Toronto and McGill University, he applied to and was accepted into the library and information science program at Dalhousie. He says, “I very quickly understood that working as a librarian or archivist wasn’t for me. I wanted to do deeper technical work, so I started taking master’s-level computer science courses. By the final year of my program, I was exclusively studying computer science.”
Domenic’s master’s research was in machine learning for natural language processing, conducted at a time when deep learning was quite new and used primarily for computer vision tasks.
“In 2016, I had a moment where I had to choose between pursuing a PhD in computer science or continuing to work in industry,” he says. With a young family to support, he decided to move from academia to industry, where he joined a start-up company doing machine learning for video analysis for movie studios. As Director of Machine Learning and Natural Language Processing, Domenic worked for two years in both research and engineering roles.
In 2018, Domenic founded a start-up that focused on reinforcement learning. “It didn’t work out but was a really good experience,” he notes.
In 2020, Domenic joined Scite, a platform that uses deep learning and natural language processing to deliver a new type of citation index for literature discovery, as one of its first employees. He says, “We were lucky to build a generative AI experience for doing research tasks, including writing, before ChatGPT.”
Domenic’s work at Scite attracted him back to his roots in computational linguistics. “The motivation to pursue my PhD studies was having industry experience with generative AI, early on before the emergence of ChatGPT, and understanding that customers could use it in really unsafe and irresponsible ways. I knew that someone needs to think, on a regulatory and ethical level, about the safety implications of using these models,” he adds.
His work as an early provider of a generative AI solution for research writing tasks led him to think about the next generation of technology, prior to its development and release. Domenic says, “I’m really concerned about the use of large language models, not as they are today, but as they start to be used to accomplish tasks autonomously, as agents in the world. That very strongly motivated me to try to resolve some of the AI safety issues.”
The main focus of Domenic’s PhD research is a particular type of threat arising from the ability of large language models and other neural networks to be trained to competently conduct a wide spectrum of tasks. “Some of the tasks that you can make large language models do are very harmful. For example, autonomously hacking websites, developing security exploits, generating hate speech or misinformation campaigns,” he says.
He notes that large language model research is what some classify as a dual-use risk problem. “The better you make the technology, the easier it is to use for harmful purposes. There is a massive market incentive for commercial companies and the research community to develop more and more capable large language models. This comes at the risk of those models being able to autonomously perform harmful tasks much better. Completely autonomous agents could take a series of harmful actions.”
Safety guards are the mainstream method used for AI safety. With the use of safety guards, if a model in a commercial or research application is tasked to perform a harmful action, it will refuse and will explain why it’s harmful. The standard AI paradigm is to develop better and better safety guards that refuse more and more harmful tasks.
Domenic’s research is not about better safety guards, but how to prevent their removal through training. If safety guards can be easily removed, they don’t matter.
Domenic will continue to work on his PhD for the next several years. When considering his next career steps, he notes that his focus will be on foundational rather than applied research. That work could be conducted as a university researcher, in industry or as a technical voice to help inform government regulations to build AI for the future.
This early-career researcher’s work in machine learning, natural language generation systems and AI safety is truly pioneering and key to defending against the risks of advanced AI systems.
I’m excited to share Profound Impact’s plans for this year as we amplify our focus on research and researchers across Canada and internationally.
Our new Research Spotlight column, which debuts this month highlighting Canada’s role as an AI research leader, will feature stories about emerging research and collaboration in areas including Artificial Intelligence, Quantum Information Processing, Sustainability, Alternative Energy, Climate Change, Biomanufacturing, Social Innovation, and Technology and Society.
We’ll also introduce some of the world-renowned researchers working in these areas to transfer their research results from the lab to innovative products and services. This month you’ll meet Professor Doina Precup from McGill University, who conducts fundamental research on reinforcement learning and works on AI applications in areas that have a social impact.
March 8 is International Women’s Day and this year’s theme is Embrace Equity. The Profound Impact team is delighted to be working with the Waterloo Region Chapter of Women in Communications and Technology and community organizations from across the region to present a series of IWD2023 events throughout the month of March to celebrate the women of Waterloo Region. We’ll share information about these events in upcoming issues of Profound Connections.
I know that you’ll be impressed by the accomplishments of Adrija Jana, featured in this month’s Impact Story. At just 18 years of age and just beginning her studies in English Literature at the University of Delhi in India, this exceptional young woman has made great impact as a poet, researcher, social activist, artist and active citizen.
Enjoy this month’s edition of Profound Connections, and we hope you are having a great start to a healthy, happy and prosperous 2023!
Artificial intelligence (AI) has been featured in popular culture for decades. From the giant robots that kidnapped Lois Lane and were taken down by Superman in the 1941 animated film The Mechanical Monsters, to HAL 9000, the AI antagonist in 2001: A Space Odyssey, to the currently ubiquitous AI portrait generators, artificial intelligence has been portrayed as a promise, a threat and a cool tool.
At Profound Impact, our newly launched Research Impact product uses AI and data analytics tools to automatically match research collaborators with multiple online sources of funding opportunities and with potential industry partners to create competitive grant applications.
But what is AI and what role do Canada’s researchers play in advancing the field?
Canada’s Advisory Council on Artificial Intelligence states that AI represents a set of complex and powerful technologies that will touch or transform every sector of industry and that has the power to address challenging problems while introducing new sources of sustainable economic growth.
In 2017, in partnership with the Canadian Institute for Advanced Research (CIFAR), Canada launched the Pan-Canadian Artificial Intelligence Strategy. The country’s national AI strategy, the first in the world, has a stated vision that “by 2030 Canada will have one of the most robust national AI ecosystems in the world, founded upon scientific excellence, high-quality training, deep talent pools, public-private collaboration and our strong values of advancing AI technologies to bring positive social, economic and environmental benefits for people and the planet.”
AI research in Canada is currently centred in three national AI institutes: the Alberta Machine Intelligence Institute (Amii) in Edmonton, the Vector Institute in Toronto and Mila in Montreal. These not-for-profit organizations work in partnership with research universities and companies conducting AI research and development across Canada.
Four key strategic priorities have been identified as part of the Pan-Canadian Artificial Intelligence Strategy:
Advancing AI Science
Fundamental and applied research in areas including machine learning, natural language processing, autonomous vehicles, games and game theory and human-AI interaction.
AI for Health
AI-based approaches to health and healthcare that leverage Canada’s strength in health research and publicly-funded healthcare systems.
AI for Energy and the Environment
Innovative solutions to protect the environment and deal with the effects of climate change.
AI Commercialization
Funding and incentives for Canadian companies to develop AI technology and products.
The three hubs of AI excellence in the Pan-Canadian Artificial Intelligence Strategy are recognized internationally for their research expertise and results, their training of the next generation of AI researchers and practitioners, and their transfer of scientific knowledge to industry.
Alberta-based Amii’s team includes 28 Fellows (including 23 Canada CIFAR AI Chairs) and eight Canada CIFAR AI chairs at universities across Western Canada. Amii researchers are pioneers and leaders in fields including Reinforcement Learning, Precision Health, Games and Game Theory, Natural Language Processing, Deep Learning and Robotics and work with a range of companies to translate research results to innovative products across industry sectors.
The Vector Institute was launched in March 2017 in partnership with the University of Toronto, the University of Guelph and the University of Waterloo to work with research institutions, industry, and incubators and accelerators across Canada to advance AI research and drive its application, adoption and commercialization.
Three key pillars in the Vector Institute’s three-year strategy are research, industry partnerships and thought leadership. Currently, the Vector Institute comprises more than 600 active researchers and professionals from across the country. More than 40 industry sponsors, representing a broad range of industries including health care, finance, advanced manufacturing, telecommunications, retail and transportation, collaborate with Vector Institute researchers on projects related to opportunities in AI.
The fourth pillar in the institute’s strategy and a focus of research is health, including responsible health data access for research, the use of machine learning tools, methods to analyze de-identified health data, and the creation of a secure data platform for applied AI research. Vector programs, including the Smart Health initiative and the support of Pathfinder Projects, facilitate the use of AI-assisted technologies in the health sector and the deployment of machine learning tools in hospitals across Ontario.
Mila was founded in 1993 by Professor Yoshua Bengio of the Université de Montréal as a research lab to bring together researchers with a shared vision for the ethical development and advancement of AI. In 2017, the scope of Mila was expanded through collaboration between the Université de Montréal and McGill University and work with academic institutions Polytechnique Montréal and HEC Montréal.
Now a non-profit research institute, Mila also works with Quebec universities including Université Laval, Université de Sherbrooke and École de technologie supérieure. It brings together more than 1,000 researchers, including 51 Canada CIFAR AI Chairs, with expertise in machine learning theory and optimization, deep learning, computer vision and robotics, reinforcement learning, computational neuroscience and natural language processing.
In addition to conducting leading-edge research, Mila also works closely with 87 industry partners via collaborative research and technology transfer to facilitate the use of AI in company processes and product development. And the Mila Entrepreneurship Lab fosters student entrepreneurship from ideas to business projects through mentorship and funding. Eighteen Mila start-ups operate in Montreal, Toronto, New York City, Addis Ababa and Germany, working on the use of AI in medicine, finance, neuroscience and transportation.
Canada continues to fund emerging AI research institutes including the Centre for Innovation in Artificial Intelligence Technologies (CIAIT) at Seneca College of Applied Arts and Technology in Toronto and the Durham College Hub for Applied Research in Artificial Intelligence for Business Solutions (the AI Hub) in Oshawa, Ontario. At CIAIT, Seneca researchers will collaborate with industry partners to find AI solutions in sectors ranging from advanced manufacturing and commerce to creative media and finance. The AI Hub provides industry partners with access to AI expertise, state-of-the-art facilities and student talent to integrate AI solutions into products and business operations.
Canada’s strengths and global leadership in AI are powered by the investments made by the Government of Canada in AI research at institutions across the country. These investments are developing the adoption of artificial intelligence across Canada’s economy, connecting researchers and the next generation of AI professionals with industry partners to facilitate commercialization and advancing the development and adoption of AI standards to be used in Canada and around the world.
Researcher Spotlight: Doina Precup
Growing up in Romania, Doina Precup enjoyed science fiction featuring benign and helpful robots. That interest, plus the influence of her mother (a computer science professor), and the other women in her family with successful careers in science, were early draws for Professor Precup to the field of artificial intelligence.
Doina Precup is an associate professor at McGill University and head of the Montreal office of DeepMind. In addition to teaching at McGill, she is a core academic member at Mila, a Canada CIFAR AI Chair, a Fellow of the Royal Society of Canada, a Fellow of the CIFAR Learning in Machines and Brains program and a senior member of the Association for the Advancement of Artificial Intelligence.
Dr. Precup conducts fundamental research on reinforcement learning with a focus on AI applications in areas, such as health care, that have a social impact. At DeepMind, a subsidiary of Google, she leads a team of scientists, engineers and ethicists dedicated to using AI to advance science and solve real-world problems.
Dr. Precup’s focus on creating social impact goes beyond her work in the research laboratory. To address the issue of gender imbalance in science and technology, she co-founded and serves as an advisor to the CIFAR-OSMO AI4Good Lab, an organization that encourages women to study and work in artificial intelligence via a seven-week AI training program for undergraduate and graduate students who identify as women. Dr. Precup was also one of four renowned Canadian AI researchers who signed a letter sent in 2017 to Canadian Prime Minister Justin Trudeau, asking that Canada announce its support for the call to ban lethal autonomous weapons systems at the United Nations Conference on the Convention on Certain Conventional Weapons (CCW).
Her work as an award-winning AI researcher dedicated to solving problems to benefit humanity, her leadership in building a diverse and inclusive culture in AI and her support and mentorship of emerging talent have established Doina Precup as a respected and distinguished member of the AI research community in Quebec, Canada and internationally.