Dr. Marian Croak has spent decades working on groundbreaking technology, with over 200 patents in areas such as Voice over IP (VoIP), which laid the foundation for the calls we all use to get things done and stay in touch during the pandemic. For the past six years she’s been a VP at Google, working on everything from site reliability engineering to bringing public Wi-Fi to India’s railroads.
Now, she’s taking on a new project: making sure Google develops artificial intelligence (AI) responsibly and that it has a positive impact. To do this, Marian has created and will lead a new center of expertise on responsible AI within Google Research.
I sat down (virtually) with Marian to talk about her new role and her vision for responsible AI at Google. You can watch parts of our conversation in the video above, or read on for a few key points she discussed.
Technology should be designed with people in mind.
“My graduate studies were in both quantitative analysis and social psychology. I did my dissertation on societal factors that influence inter-group bias as well as altruistic behavior. And so I’ve always approached engineering with that kind of mindset, looking at the impact of what we’re doing on users in general. […] What I believe very, very strongly is that any technology that we’re designing should have a positive impact on society.”
Responsible AI research requires input from many different teams.
“I’m excited to be able to galvanize the brilliant talent that we have at Google working on this. We have to make sure we have the frameworks and the software and the best practices designed by the researchers and the applied engineers […] so we can proudly say that our systems are behaving in responsible ways. The research that’s going on needs to inform that work, the work we’re doing with engineering better solutions, and it needs to be shared with the outside world as well. I am thrilled to support teams doing both pure research and applied research; both are valuable and absolutely necessary to ensure technology has a positive impact on the world.”
This area is new, and there are still growing pains.
“This field, the field of responsible AI and ethics, is new. Most institutions have only developed principles, and they’re very high-level, abstract principles, in the last five years. There’s a lot of dissension, a lot of conflict in terms of trying to standardize on normative definitions of these principles. Whose definition of fairness, or safety, are we going to use? There’s quite a lot of conflict right now within the field, and it can be polarizing at times. And what I’d like to do is have people have the conversation in a more diplomatic way, perhaps, than we’re having it now, so we can truly advance this field.”
Compromise can be tough, but the result is worth it.
“If you look at the work we did on VoIP, it required such a huge organizational and business shift in the company I was working for. We had to bring teams together that were very contentious — people who had domain expertise in the internet and could move in a fast and furious way, along with others who were very methodical and disciplined in their approach. Huge conflicts! But over time it settled, and we were able to really make a huge difference in terms of being able to scale VoIP in a way that allowed it to handle billions and billions of calls in a very robust and resilient way. So it was more than worth it.”
(Photo credit: Phobymo)