AJung Moon, Founder of ORI

Elyssa Macfarlane

Tell me a bit about yourself and how you ended up where you are now?

Well, I’m AJung. I’m passionate about helping engineers innovate and build autonomous technologies (AI/robots) with ethics in mind. I founded and lead a think tank called the Open Roboethics Institute (ORI, formerly known as the Open Roboethics initiative), which has been spearheading discussions on the social and ethical implications of robotics and advanced technologies since 2012. I’m a roboticist with a mechatronics engineering background, which I think is the source of my solution-driven nature. A few months ago, I started a sister company with fellow ORI execs called Generation R Consulting, which translates these complex and seemingly philosophical discussions about ethics into actionable and practical strategies. That way companies can actually know what challenges they have, and have proactive and tangible ways to address them as they use and develop these innovative technologies. I think it’s safe to say that I’m passionate about roboethics, and I happily wear many different hats because of it.

The whole discussion of ethics in AI/machine learning and robotics used to be super niche back when I started about 10 years ago. I got into mechatronics because I thought I was going to be the next Bill Gates of robotics – and because my sister was good at everything but technology-related stuff, so it was a natural choice. But then I realized that, with the skills I was learning at university, I could build whatever I wanted and there was nothing to stop me. Just imagine: anyone who knows a bit of electronics, programming, and hardware can put together sensors and actuators to build a killer robot. From then on I became really interested in studying roboethics, and picked up a minor in Philosophy.

I eventually found incredible people at the CARIS lab at UBC and happily spent my graduate studies, in part, developing robots that negotiate with people, and at other times, hopping from the UN to Parliament to high schools to talk about the ethics and social implications of these technologies.

How did you develop the idea for ORI? Can you explain a bit about the division between the non-profit side and the for-profit side of the business model?

The idea for ORI came out of a discussion with my lab mates at the CARIS lab. We were discussing how roboethics discussions really didn’t cross disciplinary and national boundaries: Koreans would discuss developing a charter in Korea, in Korean; conversations between philosophers would have a hard time crossing over to engineers, and vice versa. We wanted to take on the idea of talking about roboethics issues in a way that is accessible to a wider audience, and to explore approaches that are as inclusive of different stakeholders as possible. At the end of the day, we are all in the business of designing our increasingly robotic future, whether we realize it or not. So we started by demonstrating how responses to public surveys and machine learning algorithms can be used to program socially appropriate robot behaviours. We also ran a series of public polls to highlight really interesting ethical issues that are unique to robotics and AI, especially in the domains of autonomous cars, care robots, and military robots.

Now that we have grown into an Institute, ORI plans to continue our public-facing research projects – we generate a knowledge base on this topic that society can benefit from, and make the results open and public. But there’s a huge limitation to this, which is that there are companies already using these technologies that do not know how to address their AI/roboethics needs. Generation R Consulting aims to address the fact that ethics in AI and robotics is really an elephant in the room in the corporate world. Some companies don’t know that added machine autonomy comes with inherent ethical and social challenges. Many companies know it, but are afraid that talking about ethics will just put unrealistic constraints on their developers and operations. We essentially help them figure out what their challenges are and what they can do about them, in a way that enables their team rather than stifles innovation.

What are some of the implications of AI, robotics and advanced tech that we are facing today?

Oh boy. I could write a book on this. But let me not start on that right now. One obvious implication of AI, robotics, and advanced technologies is that they have changed, and will continue to change, the way we behave, make our decisions, and live our lives. The discussion around autonomous cars and how they will change the way people get from point A to point B is now old news in mainstream media. The technology obviously promises a set of positive impacts, but it has also raised ethical and social issues. Not only are we starting to think about how human drivers’ jobs will be affected by the introduction of such technologies, we are also forced to think about the greater economic implications of the slide towards machines on the human-autonomy to machine-autonomy spectrum.

When I testified as an expert witness at one of the senate hearings in Ottawa this year, one of the questions a senator asked me was what role a government should play in this domain, and who should be responsible for retraining the displaced workforce. I think the interaction I had with them on Parliament Hill really highlighted the fact that technology is challenging everyone to think about these questions of ethics and social values with a heightened sense of urgency. I do hope that this sense of urgency leads people to take a proactive approach and come up with practical, socially beneficial solutions, rather than instill fear around the increase in machine autonomy.

Do you often hear the claim that we’re in the midst of a sort of dystopian future where robots and AI pose a threat to human civilization? What’re your thoughts on this topic?

I really think it’s a simplification of the whole landscape of issues we have at hand. I have been fascinated by the question of “What should a robot/AI do?” in my research because it is the basis of so many ethical issues that are uniquely raised by added machine autonomy. But much more interesting is how technologists today are stuck struggling to address these questions in their everyday engineering practice, without the appropriate tools and resources they need. We don’t go through engineering or computer science education and magically come out at the end of the university tunnel as expert ethicists. We get some professional ethics training, but not training in how to foresee the implications of the design decisions we make, or how to make the ones that are aligned with an appropriate set of values. If the people designing and deploying these technologies are not provided with the tools and resources necessary to address their ethics challenges and questions (e.g., is what I’m building going to replace people’s jobs? What’s the actual risk of my predictive algorithm making a false prediction?), then I think we are really going to be in dystopian trouble. There’s lots of work being done all across the world, especially that of the IEEE GIECAS led by the amazing John Havens, actively trying to fix this. I think these interdisciplinary initiatives should be strongly supported so that we don’t end up with the dystopian future, and instead have concrete guidelines that help transform the ethics discussion into enablers that help people innovate better.

What’re some characteristics that you hold that you think have helped you throughout your personal and professional development?

I am not afraid of embracing the niche, or of being a minority in a group. I have found myself in minority situations throughout my life. I was one of two Asian students in my elementary school when my family first immigrated to Canada. I am a female in the male-dominated field of robotics and engineering. Within robotics, I am again a minority in my specialization – roboethics. Within the roboethics community, I often find myself a minority because there are often not a lot of technical experts on the panel or speaker list. I also happen to be the youngest on many of the committees and panels that I am a member of. So I have taken on this implicit role as an ambassador for whatever minority group I happen to be representing that day, translating discussions and viewpoints from one group to another. I think that has really helped me become comfortable with being uncomfortable, and to have confidence in what I am and what I represent even in uncomfortable situations. It also relates to the kind of risk-taker I see myself as.

What’re some of the projects in AI/robotics that fascinate you right now?

One of the projects that I am working on right now is super exciting. I’m working with Hallie Siegel – former editor of a large robotics news network called Robohub – and seven other amazing roboticists from across Canada on a project called the Canadian Robotics Network. We are trying to pave the way for a national robotics strategy in Canada, which I think is essential if Canada is to invest in our future in this technology. Countries such as the UK, South Korea, and even Singapore have roadmaps, yet Canada is the only G7 country without a national strategy in place. We just received an NSERC grant to hold a stakeholder meeting on this topic, and I think it’s a great opportunity for us to learn from how others have tackled such projects and build one that is very much our own. We have a dream team of participants and speakers coming to Vancouver this September to discuss this, and I am really excited to be helping organize it.