
Panel: Digital Transformation with Responsible AI

Posted Oct 06, 2021 | Views 1.9K
# TransformX 2021
SPEAKERS
Beena Ammanath
Executive Director @ Deloitte AI Institute & Founder @ Humans For AI

Beena is the Executive Director of the Global Deloitte AI Institute and Founder of the non-profit Humans For AI. She also leads Ethical Tech & AI for Deloitte. Beena is an award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services and industrial domains with companies such as HPE, GE, Thomson Reuters, British Telecom, Bank of America, E*TRADE and a number of Silicon Valley startups. A well-recognized thought leader in the industry, she also serves on the Advisory Board at Cal Poly College of Engineering and has been a Board Member and Advisor to several startups. Beena thrives on envisioning and architecting how data, artificial intelligence and technology in general can make our world a better, easier place to live for all humans.

Karen Silverman
CEO and Founder @ The Cantellus Group

Karen is a recognized thought leader in technology governance and has founded an experts-based consulting group to advise leaders in business and government on how to better oversee and manage the AI and other frontier technologies that they are bringing into their operations. A lawyer by training and a retired Latham partner, she is now focused on the broader set of strategic and risk issues associated with these breathtaking technologies. Karen sits on the World Economic Forum Global AI Council, and her new group is a WEF Global Innovator. She also serves as the Outside General Counsel for HIMSS, the global digital health society. Karen's work has appeared in the WEF Agenda, MIT Sloan Management Review, CogX, the ABA Science and Technology magazine and elsewhere.

Caroline Lair
Founder @ The Good AI

Caroline Lair is the founder of The Good AI, the first community of AI companies and talent on a mission to help deliver on the Sustainable Development Goals; The Good AI supports more than 180 projects from 30 countries to date. She is also the co-founder of Women in AI, an international community of 6,000+ members working toward a gender-inclusive AI. Prior to this, Caroline worked in various business positions, most recently at Snips, building a private-by-design AI voice assistant (acquired by Sonos in November 2019), and at the venture capital firm HCVC as an investor and partner. Caroline holds two master's degrees, in business (EM Lyon, France) and international relations (Lyon III, France).

Kay Firth-Butterfield
Head of Artificial Intelligence and Member of the Executive Committee @ World Economic Forum

Kay Firth-Butterfield is Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum, and is one of the foremost experts in the world on the governance of AI. She is a Barrister, former Judge and Professor, technologist and entrepreneur who has an abiding interest in how humanity can equitably benefit from new technologies, especially AI. Kay is an Associate Barrister (Doughty Street Chambers) and Master of the Inner Temple, London, and serves on the Lord Chief Justice’s Advisory Panel on AI and Law. She co-founded AI Global, was the world’s first Chief AI Ethics Officer in 2014, and created the #AIEthics Twitter hashtag. Kay is Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles. She is on the Polaris Council for the Government Accountability Office (USA), the Advisory Board for the UNESCO International Research Centre on AI, and AI4All. Kay has advanced degrees in Law and International Relations and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic and social changes arising from the use of AI. She has been consistently recognized as a leading woman in AI since 2018 and was featured in the New York Times as one of 10 Women Changing the Landscape of Leadership.

SUMMARY

Hosted by The World Economic Forum. With the continued acceleration and adoption of artificial intelligence, businesses and business leaders face more challenges in ensuring AI systems are trustworthy and are developed ethically and equitably while respecting individuals’ privacy. In this expert panel, we discuss policy frameworks to help realize the benefits of AI while mitigating the risks, and how tech companies can and should partner with the public sector.

TRANSCRIPT

Kay Firth-Butterfield (00:00): So welcome all of you to this session on digital transformation with responsible AI. I'm Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum and a member of the Executive Committee, and I'm going to be your moderator today. It's not often in AI that you see a panel of all women and a female moderator, so this is hugely exciting for me. Also, I'm going to be talking to fascinating, clever women, all of whom are leaders in their chosen fields. So whilst the benefits of accelerated adoption of artificial intelligence are worth millions, businesses and business leaders are now facing more challenges to ensure that AI systems are trustworthy and are being developed ethically and equitably, whilst respecting individuals' privacy.

Kay Firth-Butterfield (01:34): Today, we're going to suggest some solutions and guidance for businesses in this period of difficult transformation. Some of the questions you will have heard before, but I know that the answers you hear will be new and insightful. I'd like this to be a conversation, and I hope that the panelists will jump in to add to answers, making our time together even richer. But to start, let me turn to you, Beena. You're a technologist who's worked in a number of industries and also founder of Humans For AI. And Caroline, you are an investor and co-founder of Women in AI and The Good AI. So you've both started AI nonprofits. Can you tell us why, and why you felt that they were needed? Beena, perhaps I could ask you to go first.

Beena Ammanath (02:33): Okay. What a great question to start with, and I am so excited. Let me just say I'm so excited to be on this all-women panel today. So the reason I started Humans For AI is really that I've always been an advocate of getting more women into tech, women in STEM. And once AI started becoming real, it went into hyper mode, because AI truly needs diversity in everything, right from design to development to scaling it out. We hear a lot about bias in AI. One of the most basic fixes is bringing in diversity of thought. When I say diversity of thought, that means different genders, races, ethnicities, geographic backgrounds, cultural backgrounds, education backgrounds. The more diversity we can bring to AI, the more robust and reliable that AI solution will be. And the more equitable.

Beena Ammanath (03:30): So I started looking at the AI teams that I myself was building and my peers were building, and we saw that there was a real problem in the AI teams, and no company was really addressing it. And one of the ways I felt we could was really by driving more AI literacy, specifically targeted at women and underrepresented minorities, so that they had basic AI fluency to at least engage in AI conversations, read an AI news article and understand it. And hopefully then say, okay, I want to be part of AI. How do I get in? Because I think there are [inaudible 00:04:08] programs around teaching coding or teaching machine learning or data science, but getting women and URMs to that stage is what Humans For AI is trying to do. It is registered as a nonprofit and we have a pretty global team, because I think getting more humans as part of AI is absolutely crucial for AI's own success.

Kay Firth-Butterfield (04:33): Super, thank you very much. And as you well know, I agree with you entirely. Caroline?

Caroline Lair (04:40): Yes. Thanks for having me today. I'm also really happy to be with so many brilliant women. So on my end, I co-founded Women in AI four years ago, also because, exactly like [inaudible 00:04:52], we were feeling the lack of women in the field. And we felt as well that it was also a super opportunity for women to contribute to shaping an inclusive AI and empower themselves through this. So the way we're doing that is that we've built this community that today has 8,000 members from 130-plus countries, and we provide different programs in education or research to empower them. We do public speaking and these kinds of empowering programs. And last year I also started another project, which is named The Good AI. And here again, it's a community. So I'm a community builder. It's a community of AI startups and talent that are focusing on AI to address the Sustainable Development Goals. And today it's around 180 startups and 150 talents, all working together for good.

Kay Firth-Butterfield (05:49): Thank you. Well, starting a community is obviously a really good way of making sure that we increase that diversity, but also starting communities and building even more structure around them and making sure that we're all working together is something that I've been very passionate about, and it's part of my work at the forum. And Karen, you're CEO of the Cantellus Group, and you're formerly a senior partner of a multinational law firm. I know that you founded Cantellus to help stakeholders navigate AI transformation. Can you help us a little bit with the age-old question of whether AI governance and regulation of AI are the same, and why that difference, if there is any, matters for businesses, governments and society?

Karen Silverman (06:47): Certainly. And I'm just going to also echo my thanks and appreciation for being here with this group of women. So I actually do think that there's an important distinction between regulation and governance, and it's in part a temporal one: we're sort of in the process now of seeing regulations emerge, and regulation brings with it, I think, the connotation of capital-R regulation, enforcement and compliance obligations that come along with that. I think governance is a much broader concept and one that we can and should be working on now. And it's one of the things that we very much focus on at the Cantellus Group.

Karen Silverman (07:31): And in that, we're looking at governance a little bit more strategically. So it's everything from looking at opportunities, to risks, to steps that one can use to mitigate risk, and also compliance. But it's a much broader concept, and I think a much richer one, when we think about governance. And it looks not just to the structures, I think, but to the outcomes of that work people are doing to think carefully and plan a little bit better for the ingestion of a complicated technology like artificial intelligence and any of its close cousins. I wouldn't want to confine the concept just to AI, but I do see a difference.

Kay Firth-Butterfield (08:18): Thank you. And you've been part, I think, of some of the work that we've been doing around helping boards to understand AI better, and their duties as board members when they're thinking about AI, and also now with the C-suite as well. Why does it matter that those people across a whole organization actually understand AI governance, Karen?

Karen Silverman (08:50): I think, well, one very straightforward reason it matters is because they've got fiduciary duties that encompass these issues, and whether it's missed opportunities or missed risks, that lands at the board and with management. So that's the narrow reason. The broader reason really goes to these cultural issues I think all of us have been discussing, which is that by setting a tone, and not just talking about something but demonstrating a commitment to responsible technology, the rest of the organization can start to orchestrate solutions and anticipate problems in time to either avert or mitigate them. I think in the absence of that sort of leadership, these sorts of issues might be given lip service.

Karen Silverman (09:44): They might be given a little bit of soft time, but they don't become programmatic. And so one of the things that we're really focusing on is what the explicit accountability structures look like within an organization, but even more than that, what the resourcing looks like. Are you putting your most in demand people in charge of this, are you giving them budgets to really do the work that they need to do so that we can move from a pretty aspirational space to something that's a little bit more contextual and has has more traction. So I think that has to start with leadership.

Kay Firth-Butterfield (10:23): Absolutely. I of course agree. But Beena, you work a lot with all different types of businesses and governments. How should organizations educate and train employees, whether they're the executives that Karen was just talking about, or product managers, product designers, engineers, on this idea of developing foundational AI and responsible AI at the same time?

Beena Ammanath (10:57): Yeah, so true. I love the way Karen articulated setting the tone for the entire organization, but really, AI is a team sport. Everybody in the organization, every employee, needs to understand what the guiding principles around ethics are that the organization is anchoring on, and be empowered to actually raise concerns where necessary. What happens today is that most of it gets [inaudible 00:11:30] just around data scientists or the engineering teams. Whereas I think that the sourcing officers, the procurement leaders, the finance team, the HR team, everybody, no matter where they sit within the organization, has to understand the AI ethics principles that are relevant for their industry, for their business, and how they fit in with existing processes.

Beena Ammanath (11:54): So that's one side, but the other side is also thinking about the processes that the employees adhere to, whether it is for sourcing or procurement: providing training and including checkpoints within the processes where you're checking for ethical implications. Even if you're not developing an AI product, if you're using an AI product, if you're buying an AI product, there should be checkpoints so that the employee is empowered to ask the right questions. Not everybody understands, and especially around ethics, it is a gray and fuzzy area, and it is up to the organization to be able to provide training in a way that can empower the employees. And when I say training in a way, it means we have so many training tools now. It doesn't need to be just a boring PowerPoint presentation. You can gamify the training, you can give points, and make sure that employees are raising those concerns. So I think empowering every employee is an absolutely crucial factor for ethics to succeed within an organization.

Kay Firth-Butterfield (13:04): And Beena, do you think that empowering every employee is also empowering whistleblowers if they find something is wrong?

Beena Ammanath (13:17): Absolutely. And that's what I think: you don't need to be a whistleblower if there are processes that actually empower and enable you to raise the concerns, and leadership and the board, to Karen's point, are listening and addressing those issues. You don't need to do it in a secretive way if the processes are set up well and the employees have been trained on how to use those processes.

Karen Silverman (13:46): Can I jump in, Kay? [inaudible 00:13:48] It's so interesting you described it that way, Beena. I think that's really helpful. It's funny: when I think of ethics, I think of it as part of the overall governance project. It's an element of it. But it's also where things could go.

Karen Silverman (14:05): And when you look at a lot of AI principles, they're quite aspirational, where you're leaning into fairness, or you're leaning into transparency, all terms that need to be defined within a very specific context. But there's an opportunity here certainly to avert disaster, but also possibly to accelerate our progress in some ways.

Karen Silverman (14:27): So I would just be curious about your reactions, all of you, to that concept, because I've been really struggling with this nomenclature, as Kay knows, and we've had long conversations about it. But I'm trying to balance it in my own mind as a concept also.

Caroline Lair (14:46): I usually prefer not to use the word ethics, and I use responsible AI, because ethics is such a complex and broad concept. And for businesses, I think responsible makes a bit more sense, and below the responsible hat I will add the six pillars of ethical AI, which I guess we all know: fairness, inclusivity, reliability, privacy, transparency, accountability. And then it takes us a bit everywhere, I guess, through those different pillars. But this is how usually I work with them, and governance is really coming in some way to make those principles practical, I'd say, on a daily basis in businesses, because these are big concepts. Sometimes they seem obvious. Everyone wants to live in a fair society, everyone is looking for equity, but unfortunately it's not that easy.

Caroline Lair (15:47): And so we need, I would say, even sustainable governance practices to help businesses put that in place. So it goes through awareness, literacy, having a clear shared purpose in the company, having accountability, as you mentioned earlier, aligning principles with action. I often use this example: for instance, what is [inaudible 00:16:12]? For example, what constitutes informed consent, who can consent, what level of detail regarding the providers involved, procedure steps, potential outcomes, and alternate approaches is required? When we talk about personal consent, what does that mean practically? These are, for instance, examples of practical practices for businesses.

Kay Firth-Butterfield (16:40): Caroline, thank you for that. And I know that you do a lot of work, as you said, with startups that obviously come from a point of thinking about society, because they're working on the SDGs. But just take us back a little bit, and let's look through the lens of a startup at how they're going to think about responsible AI. It's a different matter when you're a startup and resources are limited, as opposed to a much more mature company that can just get on with doing the responsible AI piece, should they choose to. So what's the balance there?

Caroline Lair (17:30): The way I see startups in the field of AI and responsible AI might be a bit different from what I've been experiencing with startups at The Good AI. For me, startups are vehicles that go super fast. They're able to iterate and try things in a very limited time, where companies are way slower to move. And so, from what I see, the very, very innovative solutions are coming from those change makers, from startups, because they are also attracting great talent, young change-maker talent, new ways of thinking. And so of course they're limited in terms of means, for sure, but most of them are raising. And we're fortunate enough to have a lot of investors that are more and more into sustainable investment, ESG investing, and that are looking at these startups to enable them.

Caroline Lair (18:34): And this is also our role at The Good AI, I would say: to help them recruit the best talent and also facilitate the communication and the connection with companies. Because, as you were mentioning, Beena, I know it is teamwork, it's community work. So our work at The Good AI is also to facilitate the relationship between startups and incumbents, the big companies, because they don't have the same rhythm and they don't always have the same goals. So we're here in the middle as well to help both parties leverage the best of the relationship.

Kay Firth-Butterfield (19:12): That's really interesting, Caroline. I was doing some work with the Indian government on how to move from their principles to practice in their national AI strategy. And one of the things we were talking about was whether startups should have a bye from having to deal with responsible AI, because they didn't have the money to deal with responsible AI. And my view was absolutely not; startups should be bound by the same rules as everybody else. So Karen, as more companies and countries adopt AI, we are beginning to see regulation come out. But until we get these regulations developed, adopted, and of course tested by us lawyers, what frameworks do you think companies should be using and thinking about in their development?

Karen Silverman (20:13): Yeah, and it's a great question. And I have a thought about the startups as well in this space, which is that... I will get to that, but startups often have the most to lose by getting this wrong too. There are very often one or two applications that are core to their business that can touch consumers and employees very intimately, whether it's a dating website or mental health or physical health. If they don't earn and keep the trust of their users or their employees, they lose engagement and they lose their premise. And so investors in startups and developers of startups are really, many of them anyway, on sort of the bleeding edge of this and should be very focused on these issues. But in this loose framework way that I think you're getting at, Kay, which is a much more agile environment.

Karen Silverman (21:10): And we've all touched on aspects of the way that I think about this, which is certainly the accountability and resourcing, sort of the tone at the top and a demonstration of the tone at the top; it's not just enough to have the words. And I think about it as an A, B, C, D, because that's how people remember things. The A is around asking and articulating: what is this tool meant to do? What is it meant not to do? Ask the boundary kinds of questions and the purpose kinds of questions and the alignment kinds of questions. The B is around the behavioral alignment: are we set up to really support the outcomes that we're looking for? So are we incentivizing our developers to take an extra period of time to think about these issues, or are we just incentivizing them to get out the door quickest?

Karen Silverman (22:02): So making sure that you're not undermining yourself by having the wrong alignment of incentives. The C is continuous monitoring, because I think that's just very different, and whatever that means for you in context is going to be different: what does it mean to be and stay vigilant around the actual impact of these tools and their performance? And then the D is documentation, which I think is just near and dear to all the lawyers' hearts: what do you, and do you not, want in your record for demonstrating how seriously you took this? And the goal, I think, is to demonstrate that for September 2021, this was the best one could do, because all of these steps are going to be evaluated from September 2022, and it'll look very different.

Karen Silverman (22:53): And so making sure that the documentation really reflects the hard work that's being done and the reasons for that work and the objectives of that work. So I think if you did nothing else, but just get more explicit and fix the accountability problem, I think you'd go a long way to addressing what is embedded in most of these draft regulations and even some of the past ones which is around these core six principles articulated variously. And distributing ethics responsibility and governance responsibility across the organization, not housing it in one religious organization, which is what we see a lot. All right. So that's how I think about it, which is if you just go through these steps or sort of, then there's sort of AI life cycle steps.

Karen Silverman (23:48): And then the only other thing I would say, and then I will be quiet, is I actually think it's really important to distinguish, when we're talking, between what we're asking humans to do and what we're asking technology to do, and the interaction between the two. So I think about it actually as responsible use of trustworthy technology. So when we talk about transparency, are we talking about transparency of process and accountability, and appealability and explainability and all sorts of those things, or interpretability? Or are we talking about transparency within the tool itself? The solution can be on either side of that equation, but conflating them will just confuse everybody who's not immediately in the room while you're deciding it. So you've got to be really explicit about who you're asking to do what, and why.

Beena Ammanath (24:35): Can I add something Kay?

Kay Firth-Butterfield (24:36): Yeah. Of course.

Beena Ammanath (24:37): I absolutely love what Karen said, especially the A, B, C, D. The reality is nobody has it fully figured out. We are all figuring this out together. There is no playbook for it. There are no exact guidelines saying this is what you need to do. That's why you'll see a lot of frameworks and principles. And at the end of the day, it's looking at your own organization and figuring out what's relevant for the industry that you're in, the work that you're doing, because even within an industry, based on the use case, the guidelines for responsible AI could be different, right?

Beena Ammanath (25:16): Like say the hospitals. There's the patient bed management system. And then there inaudible 00:25:25 system. There's diagnosis system, each one of them, how to be responsible is going to be very different. Content is so important. And that the reality that it's not all fully figured out and laid out. I actually see it as our opportunity to shape it, to bring our collective thoughts together and let's figure it out together. It's not one size fits all. It is going to be very specific to the context of your use case.

Kay Firth-Butterfield (25:57): It's wonderful.

Caroline Lair (25:58): Kay, as well, I might add something. When you mentioned the startups earlier, I think I answered a bit beside the point, because actually the startups we are selecting at The Good AI are specifically focused on developing responsible AI solutions. So for instance, and this is where it's getting super interesting, we have a startup working in the recruitment area, and they are really working on producing fair algorithms that are not biased on gender, for instance. They are really specifically working on that. We have other startups working, for instance, on private-by-design computer vision solutions to monitor public spaces. So they're really focusing on taking these ethical AI principles and adding them directly into their value proposition. So these are startups that we are pushing and basically helping, because we really believe they can come up with something concrete that big companies will be able to leverage very soon.
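As a concrete illustration of what "not biased on gender" can mean in practice, here is a minimal, hypothetical sketch of the kind of selection-rate check a recruitment startup might run over a screening model's outputs. The data, function names, and the four-fifths (80%) threshold are illustrative assumptions, not any panelist's or company's actual method.

```python
# Hypothetical sketch: checking a screening model's shortlisting decisions
# for gender skew using per-group selection rates and a "four-fifths" rule.
# All data, names, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """Map each group to its selection rate (shortlisted / total)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
    for group, shortlisted in decisions:
        counts[group][0] += int(shortlisted)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

def passes_four_fifths(rates, threshold=0.8):
    """Demographic-parity-style check: the lowest group's selection rate
    should be at least `threshold` times the highest group's rate."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= threshold

# Toy output of a screening model: (gender, was_shortlisted) pairs.
decisions = [("F", True), ("F", False), ("F", True),
             ("M", True), ("M", True), ("M", False)]
rates = selection_rates(decisions)
print(rates)                      # e.g. {'F': 0.67, 'M': 0.67}
print(passes_four_fifths(rates))  # True: rates fall within the 80% band
```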

Kay Firth-Butterfield (27:08): That's very helpful, Caroline, because certainly this work needs to be done. We, across the forum, ask all of our startup members to actually sign an ethical use of tech statement. And at the moment we're working on adding responsible AI for consideration within the ESG framework. And I want to really ask this question of all of you: do you think we need to add AI into ESG, or do you think ESG already covers it? One of the things that I'm finding, and Caroline, I'd like you to address this first if I can, is that VC companies are much more interested in making sure that there is responsible AI in the companies that they're investing in. And on the other side, we are seeing the big investors say, okay, we want to make sure that we're investing in companies that are using and obviously adopting responsible AI. So Caroline, from the VC side, are you seeing that?

Caroline Lair (28:26): So yes, indeed we're seeing this. There is a real will from VCs, from investors, to identify responsible AI startups and responsible AI-based investments. This said, this is still, I think, a bit complex to really assess so far. Look at us: we're still talking about how to transform ethical principles into real practice within businesses. So I guess this is still the beginning, and still hard to assess. But to answer your question, I think AI can really help with ESG investing, because basically you can use AI to collect and analyze more information than ever before when accounting for ESG risks and opportunities. So it can really help investors to assess in a better way. And so yes, it could help, but I think it is really the beginning, honestly.

Kay Firth-Butterfield (29:35): Karen, Beena?

Beena Ammanath (29:38): [crosstalk 00:29:38] Yeah, I can speak to how we are doing it at a large company like Deloitte, where we are really bringing those core things together. We've set up an office of purpose, defined a role called Chief Purpose Officer, and we've made four core commitments, and they include technology ethics, sustainability, and diversity and inclusion. So bringing all these fuzzy areas, which are just so crucial to the purpose of a company, together under one umbrella and finding those interconnections is how we are addressing it at Deloitte.

Kay Firth-Butterfield (30:22): And Karen, you see a lot of companies.

Karen Silverman (30:25): I do. And I think right now the majority of companies that I see are interested in CSR or ESG, and they're engaged in that conversation. I would say a minority of them see technology as a piece of that yet, but it's a growing minority. I mean, or it's a shrinking minority; it's more and more of [inaudible 00:30:47], but it's still very early stages. And there is a little bit of a movement out there and a debate about whether S stands for sustainability or social impact.

Karen Silverman (30:56): And for the people who think it stands for social impact. I think the technology piece is an obvious fit for those who are focused on the sustainability piece. I think it still fits, but you might need to add another letter to make it explicit. So ESG and T is something we are hearing more of. And just to be really clear I don't think it's the only way in, for corporate engagement on these issues at all. I think we should be a little bit cautious to... I think we need to note that it is included there and should be included there, but I think it's not sufficient to put it there, if that makes sense.

Kay Firth-Butterfield (31:35): Thank you.

Caroline Lair (31:35): To maybe give an example, if I may, of another startup that is working on this kind of measure to assess some ESG criteria: there is a French startup called Carbometrics, and basically they're building this AI that enables mapping any company's greenhouse gas emissions, I guess, so they can cover all emissions across scopes one, two and three, upstream and downstream. And then it makes it possible to compare players of a given sector on a consistent scope. So this is one example of where you can really use AI to help investors measure these climate-oriented criteria.
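To make the "consistent scope" idea concrete, here is a minimal, hypothetical sketch of comparing companies in one sector on the same emissions boundary (scopes 1–3) normalized by revenue. All figures, company names, and the intensity metric are made-up assumptions for illustration, not how the startup mentioned above actually works.

```python
# Hypothetical sketch: comparing companies on a consistent emissions scope
# (scope 1 + 2 + 3) per unit of revenue. All numbers are invented.
from dataclasses import dataclass

@dataclass
class CompanyEmissions:
    name: str
    scope1_t: float    # direct emissions, tonnes CO2e
    scope2_t: float    # purchased-energy emissions, tonnes CO2e
    scope3_t: float    # value-chain (upstream + downstream), tonnes CO2e
    revenue_musd: float

    def total_t(self) -> float:
        return self.scope1_t + self.scope2_t + self.scope3_t

    def intensity(self) -> float:
        """Tonnes CO2e per $M revenue: a size-adjusted comparison basis."""
        return self.total_t() / self.revenue_musd

sector = [
    CompanyEmissions("AcmeCo", 1200, 800, 15000, 450),
    CompanyEmissions("BetaCorp", 300, 400, 9000, 120),
]
# Rank on the same scope boundary so the comparison is apples to apples.
for c in sorted(sector, key=CompanyEmissions.intensity):
    print(f"{c.name}: {c.intensity():.1f} tCO2e / $M revenue")
```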

Kay Firth-Butterfield (32:19): That's wonderful, Caroline. Thank you. So moving on to thinking about how multiple stakeholders work together on responsible AI: obviously, being at the forum, we are a public-private organization committed to that multi-stakeholder dialogue. So Caroline, how should private companies be thinking about partnering with nonprofit organizations, government and academia to solve today's most pressing challenges in responsible AI and innovation?

Caroline Lair (32:56): So I will talk about the experience that I know. So for instance, Women in AI, which is a community and a nonprofit. So the answer is like: yes, please, companies, go and work with existing communities, with existing nonprofits and academics, because it's a treasure for you. I mean, as I was saying a bit earlier, companies are moving a bit slowly, though they are the ones that have the most impact. So we absolutely need to enable them, give them the right tooling. And there are so many resources out there. And as [inaudible 00:33:35] was mentioning, the resources are there. Most of the time it's experts, it's talent, it's startups, and they represent diversity as well. So we have a wonderful community of women, and we have thousands of women all around the globe.

Caroline Lair (33:50): And this is great, super important, because for instance, Women in AI, we have 8,000 members, 130 countries. So in terms of diversity, you cannot do better. And it's really a huge talent pool for any company, also to recruit more women into tech or non-tech roles in relation with AI solutions. We are also developing educational programs from teenagers to executives, syllabi, and we have research programs. So they'd better really leverage this; this already exists. And on top of it you have facilitators, enablers, like The Good AI again, that will facilitate communication between those two universes. But yes, that's why I'm often very optimistic, because I think, and I see that every day, we have the resources, we have the talents, we have the solutions. We just need to connect the dots and build some bridges.

Kay Firth-Butterfield (34:47): Super. Thank you. And Beena, you've got lots of experience of working in the tech sector and in working in public-private partnerships for responsible AI. Can you tell us some anecdotes from that experience?

Beena Ammanath (35:02): Absolutely. And I was smiling when Caroline was responding, because I don't think any company can succeed in their digital transformation initiatives without these partnerships, because the technology world that we are living in is extremely complex and it's very noisy. So you need a way to make sense of it. If you remember, a few years ago we all started seeing the rise of the chief innovation officer, or innovation teams being set up. And at the heart of it, it was really to be able to keep a pulse across what's happening in academia, what's happening in research groups, what's happening in nonprofits, to be able to understand what is relevant. Where it gets noisy and confusing is exactly in how you make it relevant for the problem you're trying to solve.

Beena Ammanath (35:56): And you asked me to share an example, and one example is: as a business executive, I know what my problem is. In one of my prior roles, it was like [inaudible 00:36:08] when a jet engine might fail. So I know that's the problem, and I want to be able to predict it so that I can prevent unplanned downtime or flight delays. But I don't know what's the best way to solve for it. Is there a mature AI product that I can buy off the shelf, train on my data set and deploy? If not, is there a startup working on this problem that I could potentially acquire, acqui-hire or invest in to make them get to the solution faster? So I start with mature AI products. Next is startups. Next is research and academia.

Beena Ammanath (36:46): And next is if none of these three exists, then I look at, do I use my precious data science resources house to solve for this problem? Or do I partner with another organization who can build it for me? So this is literally the thinking that most innovation goal officers go through us as to how to solve the problem the fastest way, and to get to the solution because that's actually impacting the business. And once you get through this, then you also need to figure out what are the risks involved with it? What are the regulations around it? What are the regulations around the data usage, the privacy? Is it different based on whether the data came from China versus whether it came from EU or from the US, which it absolutely can be.

Beena Ammanath (37:37): So being able to understand all the different pieces that's needed to solve for a business problem, you absolutely need all these partnership and ecosystem. Having this ecosystem is absolutely crucial. And Kay that's one of the big reasons I joined Deloitte was to set up this Deloitte AI Institute, which looks at it from a very applied AI lens from foreign enterprises. And to get to that solution in the fastest way, by keeping a pulse of this ecosystem, I don't think any company can succeed in their digital transformation without leaning into all these partnerships.

Kay Firth-Butterfield (38:18): Absolutely. I agree with you entirely, Beena. And obviously I know that both Deloitte and, as Karen can tell us, the Cantellus Group are members of the World Economic Forum. And so, Karen, I just wanted to ask you: why do you think that is valuable to you as a young company, and that public-private partnership, why is it useful in the AI transformation space?

Karen Silverman (38:45): Yeah. I mean, for all the reasons we've been talking about, which is that this is a team sport; it requires all the different perspectives. People are thinking about difficult issues with very different lenses and mindsets. Even on this call, you've got a diversity of industry represented. And particularly for us, where we're trying to translate into that context, that sort of deep context for clients, it's really important that we understand how that conversation is going in other parts of the universe, so that we can start to navigate for clients that way. And then also we can bring back what we learn to the conversation that's happening at the forum and places like the forum. I also think that it's important at this very moment, whether it's really enterprises, corporations, companies right now, or startups, to participate in the conversation that governments are having around their own AI strategies and their regulation strategies.

Karen Silverman (39:56): And a lot of the anxiety around what that regulation is going to look like is around not knowing. And by participating in the conversation, you're not going to know what the outcomes are, but you can start to understand what the thinking is. And I think it has salutary benefits all the way around on that front. And I would say on the multi-stakeholder piece, it's an external function, just like you've talked about, but I think it's also an internal function. I was sitting in a really interesting meeting the other day where we were actually talking about fairness, and in the room we had a very multi-stakeholder type team. I think we literally had a bioethicist, a roboticist, and me, a lawyer, on the call with the client, who had a data scientist, an HR manager, and an executive marketing manager.

Karen Silverman (40:51): And we started talking about concepts of fairness, and what the data scientist meant by fairness was very different from what the HR manager meant by fairness. And we spent a good hour just talking through those concepts from an internal perspective. No one was right or wrong. It was very helpful for them each to understand where that language was even becoming obscuring or complicating, or creating false alignment where everybody thought they were in agreement, but on very different bases. So I think the value of this multi-stakeholder conversation, if it's engaged in with respect and some degree of latitude where people can explore safely, is just rich. It just accelerates the whole process of getting to smart places that actually work well for people, as opposed to feeling like something that doesn't fit, that makes sense but doesn't work. Anyway, so I think it has these two different pieces to it, at least in my very limited experience compared to the others'.

Kay Firth-Butterfield (42:05): Thank you, Karen. And I agree with you entirely that internal teams are going to trip themselves up if they all have different ideas of what these things mean. And that's maybe the next holy grail of thinking about responsible AI: coming up with better descriptions and operationalization, as Caroline was talking about amongst her startups, of some of these procedures that we need to put in place.

Kay Firth-Butterfield (42:39): So diversity has come up on a number of occasions: first of all with Beena, not only with your nonprofit, but also in how you think about teams in business; you were just talking about diversity of thought, Karen; and Caroline, you talked about being able to bring in diversity through perhaps employing more women in your AI department. And as we close this out, I'm just wondering whether anyone wants to add anything to those different types of diversity that we've already looked at.

Karen Silverman (43:28): I'm going to mention generational, just as the mother of a young woman. I know several of us have grown and growing children. I think they have so much to say in how they think about the world, not all of it correct necessarily, but I think if we're building products and we're building a future, we've just got to include them in the conversation. And when I'm dealing with the senior-most people at organizations, that tends not to happen as a matter of course. And so we have to go out and make that happen. So that's just another dimension I would add to the equation.

Beena Ammanath (44:07): That's a great point, Karen.

Caroline Lair (44:09): Yes, definitely: women, minorities, the youth, altogether.

Kay Firth-Butterfield (44:16): Thank you. And the final word to you, Beena, on this diversity point.

Beena Ammanath (44:26): Yeah. I will just say I have seen it too often becoming a checkbox. But put real thought into it: as you think about your AI products or solutions, have you really thought about all the ways this could have an impact? And that absolutely needs diversity of thought to a level which you would not have considered for a traditional software engineering project 10 years ago. I think for AI, and for intelligence, it's absolutely crucial to have that diversity.

Kay Firth-Butterfield (45:00): Absolutely. Well, thank you very much. I have to say, the thing that I often remember about generational diversity is having will.i.am in a Global AI Council meeting, I think Karen was probably there, and him standing up and saying, I'm the only person under 45 here, and where are all the young people? That drove us to actually create the [inaudible 00:45:34] youth council. And so I think, on that note of diversity being seen in such a richness of different ways, I want to say thank you to Beena, Karen and Caroline for your wise words and such an engaging and wonderful conversation. Thank you.

Caroline Lair (45:57): Thanks so much Kay.

Karen Silverman (45:58): Okay.

Caroline Lair (46:01): Thanks Beena. Thanks Karen.

Karen Silverman (46:02): Thank you both.

