AI Exchange

AI Regulation is Coming: How Should You Prepare?

Posted Oct 06
# TransformX 2021
# Fireside Chat
Eva Kaili
Chair of Science & Technology #C4AI #STOA @ European Parliament

Eva Kaili is a Member of the European Parliament, part of the Hellenic S&D Delegation since 2014. She is the Chair of the Future of Science and Technology Panel in the European Parliament (STOA) and the Centre for Artificial Intelligence (C4AI), and a Member of the Committees on Industry, Research and Energy (ITRE), Economic and Monetary Affairs (ECON), and Budgets (BUDG), and the Special Committee on Artificial Intelligence in a Digital Age (AIDA). Eva is a member of the delegation to the ACP-EU Joint Parliamentary Assembly (DACP), the delegation for relations with the Arab Peninsula (DARP), and the delegation for relations with the NATO Parliamentary Assembly (DNAT). In her capacity, she has been working intensively on promoting innovation as a driving force of the establishment of the European Digital Single Market. She has been the draftsperson of multiple pieces of legislation in the fields of blockchain technology, online platforms, big data, fintech, AI, and cybersecurity, as well as the ITRE draftsperson on the Juncker plan EFSI2 and, more recently, the InvestEU program. She has also been the Chair of the Delegation to the NATO PA in the European Parliament, focusing on the Defence and Security of Europe. Prior to that, she was elected a Member of the Hellenic Parliament (2007-2012) with the PanHellenic Socialist Movement (PASOK). She also worked as a journalist and newscaster before her political career. She holds a Bachelor's degree in Architecture and Civil Engineering and a Postgraduate degree in European Politics.


Eva Kaili, Chair of Science & Technology, European Parliament, joins Michael Kratsios, Managing Director at Scale AI and former CTO of the United States, to discuss risk-based approaches to AI policy and governance in Europe and elsewhere. Together they discuss how to translate human rights from the physical world to the digital world, and the approach the European Union (EU) has taken in this regard to regulate AI. At what level will AI be regulated? In which industry sectors is AI regulation a priority? What level of access to, and governance of, AI deployments will regulators require? Join this session to hear what the most important goals are for AI regulators, and how the innovation community can better prepare.


Nika Carlson (00:22): Next up, we're delighted to welcome Eva Kaili. Eva Kaili is a Member of the European Parliament. She has been working intensively on promoting innovation as a driving force of the European Digital Single Market. She has been the draftsperson of multiple pieces of legislation in the fields of blockchain technology, online platforms, big data, fintech, AI, and cybersecurity, as well as the ITRE draftsperson on the Juncker plan EFSI2 and, more recently, the InvestEU program. She is the Chair of the Future of Science and Technology Panel in the European Parliament (STOA) and the Centre for Artificial Intelligence (C4AI), and a member of multiple parliamentary committees and delegations. She also worked as a journalist and newscaster before her political career. She holds a bachelor's degree in architecture and civil engineering and a postgraduate degree in European politics. Eva is joined by Michael Kratsios, Managing Director at Scale AI and previously the fourth CTO of the United States. Michael, over to you.

Michael Kratsios (01:44): Eva, thank you so much for joining us here today at TransformX. We're so delighted to have you. I know you have been an incredible driving force behind European innovation and technology and the entire agenda for so long. So thank you so much for joining us.

Eva Kaili (02:01): Thank you for having me. It's great to have worked with you under your leadership as well, and to be able to build transatlantic bridges together on these emerging technologies, which really have no borders. I think the challenges we have to address require discussions like this one. So thank you for having me.

Michael Kratsios (02:28): Well, wonderful. I think we'll jump right in. I think one of the biggest things on everyone's mind in the world of AI policy is what the EU has been doing over the past year. As many folks know, there has been a proposed set of AI regulations in Europe, which could very much transform the way this industry is developing, not only in Europe but across the world. So to kick things off, I would love to hear your perspective on these upcoming regulations. Maybe we could start with a quick summary of what the proposed regulations are and what the thinking was behind how this all came together earlier this year?

Eva Kaili (03:15): So basically, I have the feeling that what we're trying to achieve is to keep leading the global rule-setting of the internet, as we did with GDPR, which everybody welcomed. And I think the same is now happening with the AI Act, which follows the DSA and DMA, the regulations for the big online platforms. So the AI Act, the Artificial Intelligence Act, is trying to provide legal certainty and also solve European problems. Because as you are aware, we are 27 different member states with different languages, tax, and legal systems. And online, you don't have these barriers. So we needed a regulation that would also translate our offline rights into the digital layer, online. And the pandemic has been a catalyst. So the main topics that I think we will have to discuss, and that bring some controversy and differing opinions and create debates in Brussels, are how to create this legal certainty and decide what should be banned initially.

Eva Kaili (04:29): So it has a list of what should be banned. And this is the initial text, because the public consultation is finishing, and then we are expecting to have a discussion, with the file arriving at the European Parliament, where we start amending this text, which means it'll be completely different at the end of this process. So it bans specific techniques that AI systems use, and it takes a high-risk, low-risk approach in terms of the responsibilities that companies have to fulfill. It talks about an Artificial Intelligence Board, though we don't know yet whether the data protection authorities should take the responsibility to apply, implement, and enforce what will be decided. Because as you know, these systems, especially since deep learning, change during the life cycle of the AI system. So you cannot just have one check.

Eva Kaili (05:27): You have to be able to have this monitoring process that we didn't have for GDPR. So it's introducing this board, and then I think it's going to be very interesting to discuss the conformity assessment, which means basically how much self-regulation and self-assessment will take place to meet European standards, and whether this is going to happen at the European level or at the member-state level. And in the end, I have the feeling, since I saw some comments from the tech industry, that we will have to see how we can establish common standards, as discussed recently for an international act on artificial intelligence. Because again, these are technologies that go beyond borders, and it's very difficult to ensure that companies can be compliant in the EU but not beyond. So these are the challenges ahead, and these are the main topics I believe we need to discuss further.

Michael Kratsios (06:29): Yeah. So one thing you mentioned, which I think has gotten a lot of coverage over the past year, has been this question of high-risk versus low-risk AI technologies. And I think the way the initial proposal was structured, and please correct me if I'm wrong, was that low-risk technologies were not going to be covered by the act, but if you were part of this high-risk category, then there were a number of steps you had to take in order to be able to deploy that technology within the EU. So can you speak a little bit about what examples were in mind when this concept of low versus high risk was introduced, and how do you see that shaping up in the months ahead?

Eva Kaili (07:17): So the truth is it's also going to cover low-risk applications, but maybe with fewer requirements, in order to allow innovation to take place if there is no harmful AI threatening the main, high-risk sectors: for example, the health sector, the justice system, transportation, autonomous vehicles. And when I talk about justice, I mean law enforcement or border control that could happen in an automatic or autonomous way. But again, it remains to be seen whether it's going to be classified entirely by sector, or whether we're also going to have different grades of applications. And of course it will be very important to understand the mission and the reason why an application is being designed.

Eva Kaili (08:12): So we will have several levels of requirements included, plus we hope that we will manage to create this legal certainty, especially for harmful artificial intelligence, by introducing human oversight of these AI systems. Because as you know, at this point, among the systems being tested, we don't see a perfect system, of course. They are designed by humans; they are not perfect. And also, addressing and post-market monitoring of bias will be mandatory. I think these are the main issues we will have to discuss. But especially in the health sector, where AI algorithms could create discrimination or exclusion, I think this will be of high priority, and those systems will have more requirements than the rest of the AI systems.

Michael Kratsios (09:13): And just for many of us who are not familiar with the way the legal structures are set up in the EU: if this act ultimately got enacted, would the enforcement around the checks for some of these high-risk AI applications be done by a centralized body in the EU, in Brussels, or would each individual nation within the EU have its own process to think through that?

Eva Kaili (09:42): So this would be the discussion of the Artificial Intelligence Board that has already been introduced, but we don't know if it's going to remain like this, or if it's going to be the data protection authorities that will have the responsibility to apply, implement, and enforce what we will decide. The data protection authorities have responsibilities through the member states, but there will be, of course, coordination at the central level. Still, I think the checks and the inspections will most probably take place at the national level, and then we're going to have a monitoring capacity and authority at the European level. I think it also has to do with the size of the company and how it can influence the systems, whether it addresses the European market in total or just specific member states. So it depends. There's going to be a matrix that will be created in order to address the different requirements in a horizontal and vertical way.

Michael Kratsios (10:55): Yes. Yeah, absolutely. I think the other thing that's gotten a lot of press here in the US, and I'm curious how you view it, is this question around the way the AI algorithm can ultimately be monitored, or rather assessed, by these independent bodies. And in order for that to happen, what level of IP or source code would need to be disclosed to this body in order for there to be an actual, accurate assessment of it? I think there's some skepticism about how excited companies are to share those types of secrets with government authorities. But how do you view those trade-offs, and how do you see that discussion playing out?

Eva Kaili (11:47): This discussion has just started. So, who will have access to the source code? I think it's going to take a lot of time until we reach a conclusion, but I have the feeling authorities will have some access to the source code in specific cases, mainly probably high-risk ones, and it will be independent authorities or central government authorities. But I understand that we need to be clear, not to allow gray zones of interpretation. I know that this is mainly what the industry wants, and this is what we have to work on now. So for example, when we talk about banning manipulation with subliminal techniques, this is something that needs to be explained, because even targeted advertisement could fall under this scope. And I believe we need to be more clear about it.

Eva Kaili (12:48): We need to decide what the limits are. For example, if it involves children, users under 18, we have to have stricter requirements. And these subliminal techniques that work beyond our consciousness, I believe, should not be used. And then there's a question, of course, also about deepfakes, because they are being developed now for several reasons, for services or even for fun. But at the same time, I see that the industry is also developing detection techniques.

Eva Kaili (13:29): So this goes in parallel, and we will have to see if there should be at least a notification that you are not watching a real video, that you are watching a deepfake, or that you are talking to a bot. So these are things that I believe will be included, in the end, in this Artificial Intelligence Act. But still, it's hard to say. I can just tell you that you highlighted very well the issues on which we will have debates, and I cannot foresee how we're going to move, but there is a lot of, let's say, sensitivity, and I think a majority among my colleagues, in favor of respecting privacy. And I have the feeling that it will be an interesting and quite specific, let's say, AI act.

Michael Kratsios (14:27): Yeah, no, I think when this came out earlier this year, certainly among the AI community, it sent a lot of shock waves through the system and woke people up to the reality that a lot of this is coming down the pipe. But for a lot of us who don't track the timing of this as closely, do you have a sense of, or maybe you could walk us through, the process by which an act like this would ultimately get implemented? Is this something that we should be worrying about tomorrow, or is it next year? How does the timing work to ultimately get this finalized?

Eva Kaili (15:02): So let me say, I understand the concerns that are being raised, but at the same time I see the US also preparing to act. And the UK has already acted and has a stricter, hard law for children's rights online, for example. So everybody's trying to transform and transfer human rights and law from offline to online. So I have the feeling that the EU might have started first, but I think more countries will follow, because there needs to be a new rule book for the internet. In our case, since the public consultation has concluded, we are expecting the allocation of the file to committee. This means it will take a couple of months at least; then I would expect that in 2022, before summer, we will have agreed at the level of the European Parliament.

Eva Kaili (16:00): And then we will proceed to the trilogues. So by the end of '22, I believe we will have something more concrete. And then usually there is a buffer zone, a time to implement it. I mean that the requirements need time for the companies to adjust; usually it's a maximum of two years to comply. So I would expect it by then, and I already see companies trying to anticipate the code of conduct they should follow in order to be resilient to the new AI act that will be released by the European Union.

Michael Kratsios (16:44): Yeah, no, I think this segues very much into some other things we want to discuss today. As you probably know, we have a number of folks from industry joining us here today. And as you discussed a little bit, what should industry leaders be doing now to prepare for something that may ultimately not need to be fully implemented until 2023 or beyond? Are there steps that folks can be taking now to prepare as these rules make their way through the process?

Eva Kaili (17:16): Listen, first of all, the legal obligations could be applicable immediately. I mean, the strict ones are those that are a translation of existing law to the online world. Introducing new requirements, I think, could take a bit more time, and we already have the different pieces of legislation on cybersecurity and on the DSA and DMA, which will all come together in the puzzle of regulating the internet at the European Union level. But I have the feeling that we will have an interesting debate on specific use cases to draw direct conclusions. And I understand that one of the biggest debates around AI will actually be facial recognition and biometrics, whether we are going to completely ban them in public spaces or not. So these are the discussions where we will have to see how the different groups position themselves, because, as I said, even within our groups, in the Social Democrats party and in the EPP party, there are different opinions. So we will have to see in the end how this will be described, and it's going to be a very interesting dialogue.

Michael Kratsios (18:42): No, it's so true. And I guess we've all seen many companies, even US companies, that have taken the position that they won't be involved in facial recognition technologies or sales for a certain period of time, until they themselves can think through what the implications of those may be. So I think there's a lot of thinking going on both in industry and in government on these issues. One thing I'm a little bit curious about your thoughts on: I previously spent time in government, and you're in government now. And as these new emerging technologies have proliferated, what you're seeing is a changing set of roles, and oftentimes responsibilities, for people who are now tasked with doing things that they hadn't necessarily thought they had to worry about before.

Michael Kratsios (19:34): And we've seen in the US the creation of new data science roles, or even technology roles you may not have seen otherwise. And I think a lot of companies are only now going through the process of thinking about the implications of AI on society and what kind of teams need to be brought together to bear on that. A great example is Microsoft, which has a board of sorts set up that evaluates any new AI use cases. They have an internal conversation about it, think through all the implications, and then that board can ultimately make the decision. But in your sense, what have you seen, or are there ways that industry and even government can be thinking a little more carefully about some of these issues that they've never had to deal with before?

Eva Kaili (20:25): Well, actually, I think the pandemic acted as a catalyst, so that more policymakers will start to understand, will try to understand, how an algorithm works, and whether and how much we can intervene to regulate these emerging technologies that we expected to give us solutions. Before the pandemic, I remember everybody was afraid of AI, scared that they might not have freedom of choice. And suddenly I see a lot of my colleagues, the Commission, whoever works in the European Union, trying to understand to what extent it affects their work and how much they have to understand in order to legislate properly, in order to allow innovation to happen, but to make sure there will be control of these technologies, that they will be complementary and not replace, at least, decision making.

Eva Kaili (21:29): And when we talk about AI, the definition can start with simple automation and could reach a point, like superintelligence, where we cannot understand the decisions being taken. And if those decisions can also be implemented in an autonomous way, this means we have minimal control. But I see that a lot of events are taking place, a lot of discussions. We have established the Centre for Artificial Intelligence in the Parliament. There is a special committee discussing with all the stakeholders in order to gain knowledge and to see what the topics are that we have to understand, and how they could influence every sector of our lives. Because this is a transformation that will definitely change all the business models, our social contracts, workers' rights, and also the business models of the biggest companies.

Eva Kaili (22:29): So I think that since most stakeholders at least agree that we need to do something about it, and that they need to have legal certainty before they develop even more, I have the feeling that it's going to be welcomed, and it's going to be mature enough that it will be a positive step for the future of the internet. With GDPR, we also saw that it was needed in order to avoid Cambridge Analytica-type cases. That was an eye-opening case, where we understood that if we don't do something about it, things could reach a point where democracies will be threatened by manipulation through these intelligent systems. So I think, for the time being, besides the issues that we discussed, I have the feeling that it's being received as a balanced proposal.

Eva Kaili (23:30): It has a risk-based approach, which is welcomed. So it will not burden the smaller companies, the ones that are not causing any harm, so that innovation can happen. Actually, it will facilitate the exchange of data and the possibility to develop these AI systems beyond the borders of nation states as well. And I understand that the biometric discussion, the mass surveillance concerns, has been worrying the companies the most. And there should be an exhaustive dialogue about when, and if, it should be used. But I have the feeling that everybody's educating themselves through forums and discussions like the one you have initiated. So I'm quite optimistic about the results.

Michael Kratsios (24:22): No, I think you're totally right. I think having regulatory certainty around emerging technologies is something that can provide quite a lot of benefit to the innovation community broadly. If they have an understanding of what's out there in the sense of government regulation, I think it allows them to innovate much more freely. You mentioned a little bit about this risk-based approach, and the United States took this position about a year ago, where they took the same risk-based methodology but applied it in a slightly different manner. And I think their approach was to essentially direct the individual agencies that were responsible for certain types of AI applications.

Michael Kratsios (25:12): Whether it's the Food and Drug Administration for certain health issues, or the Department of Transportation for autonomous vehicles, individually tasking those agencies to think about how their own regulatory processes will be impacted by this technology. And I think the US is still in a wait-and-see mode, where each of those agencies is still formulating the impacts that AI will have on their area, but many people are saying that hopefully sometime this year those initial reports will come out. So if you zoom out a little bit, we see that, as you mentioned, the UK has put something together, the US is moving ahead, the EU is also... How do you, as a policymaker, think about cross-border regulatory harmonization? There are a lot of innovators here, and when they develop their AI application, they would love to be able to sell it both in Greece and in the United States. So how should innovators be thinking about what the future holds if individual countries are trying to do similar things, but ultimately do them slightly differently?

Eva Kaili (26:19): Okay. That's a very good question. So I think basically we first have to see the direction of this proposal, and then discuss what I mentioned before. You remember, we were discussing this quite a long time ago: how we can set some common standards that we could respect at an international level. And we started talking about the Democratic Alliance, but in the end, I think we also have to think about how we can achieve at least minimum standards beyond this alliance. So an international accord, like an international law on artificial intelligence. I think at this point the companies, you mentioned Microsoft, for example, should start having an internal ethical review, to have their own procedures, self-assessment, and self-regulation. We've set up this board where we can have people from the biggest organizations, ones that are global, beyond the European borders at least.

Eva Kaili (27:32): And like the OECD, the IEEE Standards Association for engineers, and the ILO, the International Labour Organization, in order to discuss what the standards are that we can all agree on. And I know that with your initiative and your leadership, you also started the same process at the highest level of the G7 and G20. So I think it's very important to be able to agree on these minimum standards and decide what should be a code of conduct and what should be enforced by law. I think this would be a very good beginning. For example, the IEEE is discussing now how they can have a code of conduct for engineers and technologists. This means that once you give them the task to develop something for your business, for your business model, they have a lot of room to design it in an ethical way.

Eva Kaili (28:33): Nobody knows many details about how the technology is being used, and instead of just following, let's say, the conduct that says maximize the profit for the company, they can have an ethical layer in their decision-making procedures. As technologists, the developers, the computer scientists have such power that I believe it's a pity to just spend it developing the system exactly the way it's being asked, or in the easiest way; it has to be done in an ethical and, I would say, more mature way, a mission-driven approach. And the choices and the actions of those people will actually be what we are trying to, let's say, decide whether to make part of international law. I think we have room to move and agree on common standards, even with China, for example, because I saw recently that they were trying to have something like a GDPR, or to give some control over the data being recorded in public spaces. We don't know what happens in the implementation, but at least the intentions are there. And I believe this is a good starting point.

Michael Kratsios (29:54): Yeah, I think you hit this spot on. I think what you're seeing, or have seen for many years now, is this deep interest among specific international bodies in trying to come together and figure out what those core principles around artificial intelligence are that, as liberal democracies or as allies, we can all agree to. And I think what we're starting to see now is how you can go from those principles themselves, which everyone aspires to and agrees to, and actually implement them with the force of law. And I think that's what's been the fascinating process over the last year and a half, where we're slowly moving to implementation. So it'll be interesting to see. I think for the last portion of this conversation, there's one topic that's been fascinating for me.

Michael Kratsios (30:48): And I think something that you have been working on for quite a long time is this question around driving innovation within Europe. There's a lot of discussion around the differences between how Silicon Valley or the US system operates versus Europe, and why one place maybe has more startups than the other. And you've done a lot of work around the digital single market in Europe. So I'm curious, as you look at the horizon: a big piece of the AI law that was proposed this year talked about how to spur AI innovation in Europe. I would love to hear your thoughts on what you think the barriers are that the EU is facing today when it comes to driving increased innovation, and how you are thinking about overcoming them.

Eva Kaili (31:45): Okay. Again, a very interesting question. So I understand that we first have to harmonize the environments of the EU in order to create a very strong European single market. And this would also require a one-stop shop, let's say, for authorizing applications that come from the US or China. I think this is something we will achieve a bit later, but already, by having an AI act in place, it will, let's say, move us faster in this direction.

Michael Kratsios (32:19): I think what would be super fascinating for some folks listening, and it may be a little piece of history that people don't know about: if you rewind the clock to, I don't know, maybe the year 2000, and you're an internet entrepreneur who wanted to start a business in the EU, what was that like then, and how is it different now? Because I think there have been tremendous strides in the way the ecosystem has changed, and I think it's important to remember how far we've come.

Eva Kaili (32:48): I can give you two hints. First of all, you had roaming charges applying among member states. So you would pay an extremely high bill if you traveled among different member states in Europe, and so you couldn't have access to the internet. This changed three, four years ago. At the same time, it's funny that you mention it, because just four years ago I worked on the geo-blocking file. This means we removed geo-blocking for online content, because you could try to enter a site from a different member state and be redirected somewhere else, to see something different. It was pure discrimination among European citizens: different pricing, different targeted advertisements. And we managed to remove these online barriers. But in the end, I think we already have some centralized licensing procedures.

Eva Kaili (33:45): So I think we are slowly, slowly getting there. And if we go back to 2000, yes, we actually are very close to completing this task. I would say we're not trying to compete at the level of having the Silicon Valley of the US. We know it's a different culture, and of course liquidity there is easier to get if you're a startup, but then you have such a great market here. It's so big. And if you manage to succeed in Europe, then you can have a very, let's say, strong company that could overcome the barriers that we have, provide very interesting solutions, and become global from Europe. So I see huge potential in making this effort in Europe, and I understand the differences, but we are trying to have a human-centric and trusted approach.

Eva Kaili (34:46): We want to prioritize the respect of human rights and values. We want to have freedom of choice. We want to have freedom of our thoughts, because this is like the next discussion about newer technologies. We want to have privacy, even, let's say, privacy of our thoughts. And sharing all our metadata, not just in the EU but also beyond, doesn't really ensure your privacy anymore. You don't know if you're being manipulated by this targeted content that pops up. So we will try, let's say, to stay ahead in this, I would call it, global kind of cold tech war. But we have to do it in the way that Europe always does.

Eva Kaili (35:37): So we want to have quality of life for our citizens. And I have the feeling that until now it seems we are quite ahead in terms of research, in terms of developing new technologies. I think our strength lies there. So GDPR was an example. I think you saw recently that there was the enforcement of GDPR against WhatsApp, through Facebook, for violating it and not being transparent about the use of personal data. I don't know if in the end it's going to be, I don't know, 220 million, but it shows that this GDPR framework is working. So we are concerned that we have to act now. We are listening to thinkers, thought leaders, even of the US, and they're calling on us, eager for us to act faster. They have this high expectation that we should set this rule book, and then have the US on board, of course. So our ambition is not to compete; it is to collaborate with you. I think this is what we have to start discussing now. And I have the feeling that as of this year, also with the new administration, we have to start discussing again how we can make this happen.

Michael Kratsios (37:13): Mm-hmm (affirmative), I think you're very right. I think GDPR is a great example of a policy domain where there is almost universal agreement that something needs to be done, and in the US, despite the issue being front and center in the internet policy debate for decades, nothing was able to get through Congress or even reach the President's desk; it just has not materialized. The European Union stepped up and filled that void. In some ways I think there were a lot of theatrics or fireworks about what the implications would be, and we see today, years out from it, that the internet is still alive and companies are still doing their thing. So it shows that some of this action can make a big difference.

Eva Kaili (38:11): But Michael, I really think that besides the governance model for the internet and the rules we want to set, it's very important to think about what kind of future we want to have and how we can give more control and choices to citizens. So I'm not pro-banning technologies, but I would love to give them choices: to choose how to use technologies, to decide if they want to be data subjects and generators, and if they want to be rewarded for the use of their own personal data. They should have these different choices, so they can also have the right to refuse the use of their own data by companies. So I have the feeling this is what we should try to achieve: to give choices and control to the citizens. Because we are still aware of what's happening, but I don't know if we will stay aware of how these systems are developing.

Michael Kratsios (39:17): Yeah, you're totally right. I think as AI becomes more advanced and these applications are more widespread, it becomes less and less clear to individual users or citizens where these technologies are being applied, and in what ways. I think those disclosure requirements, and other ways of informing the public about how this data and this technology are being used, are something that most people can seriously get behind and support.

Eva Kaili (39:50): If you think that we have already started discussing things that still seem like science fiction, like predictive policing, or control of your thinking, or reading your mind and trying to go back and recover whether you had thought of something or done something, we can, I think, see the challenges that lie there, and also credit scoring. We've seen Black Mirror stories on that, and how this can be used in a way that would not be fair and could lead to more inequalities and discrimination. And that is a society that will not be able to use AI for good; it's going to come as a disruption that is not going to be positive. And this is not just a transatlantic effort. It should be an effort at a global level, maybe at UN level, to see how we can enforce these principles.

Michael Kratsios (40:57): Yeah, I think you're so right. I think the proliferation of AI technologies is going to expand far, far beyond Europe and the United States in the years ahead, and helping bring others along to an understanding of these fundamental principles and applications is critically important; it is something that many governments are thinking about and worrying about. And as you mentioned, we've seen authoritarian regimes twist AI in a way that essentially creates these social credit scores being used to target ethnic minorities, and many other things. Those are the types of use cases that we are obviously uncomfortable with in the West, and being able to create frameworks that allow AI to be used in the ways that we all hopefully intend for it is the core goal.

Eva Kaili (41:51): Yeah, I think we can all agree that when we hear about emotion recognition, like recognizing someone's sexuality or political orientation to exploit these vulnerabilities in order to make profit, we are all concerned, and we understand it goes beyond succeeding as a company. It's an existential question of what the future we would like to have should be, and whether we should act now, and in which direction. And I also believe, since you were very young when you took over so many responsibilities, that you had a better understanding of how these technologies can influence our decision-making process, and how they can also unconsciously change how we think about things, even our political decisions, or promote specific behaviors.

Eva Kaili (42:54): So since we've seen the role that they play, and since we all hope that with new technologies we manage to overcome these global challenges, we cannot not at least try to do something about it, especially because, if we do, we could also exchange data for good faster. I remember in the beginning of the pandemic, inside the EU, not transatlantically, we couldn't exchange data, because the data being collected followed different standards. They couldn't be merged, and we couldn't understand how the pattern of the disease was progressing, and which treatments were working better, because we had different languages and legal systems. We were collecting to different standards, and this made it very difficult for any AI system to be able to come up with a solution, or even a decision, in the future.

Michael Kratsios (43:56): No, it's true. I think we saw that in the US as well: the necessity of starting to break down the barriers between individual agencies for data sharing around the pandemic. It was something that probably could not have been done without the pandemic in place, but I think it has shown the intense value associated with data sharing. And as I'm sure you've seen, the National Security Commission on AI here in the United States put out quite a number of recommendations earlier this year, and one core one was around better data sharing between NATO countries, essentially the EU and the US, and what kind of difference it can ultimately make. But thank you so much for joining this conversation. It's been an absolute pleasure to chat about these issues. I know you have an incredible set of responsibilities ahead of you as you try to ingest all the public comments and come to a conclusion. But thank you so much for taking the time today.

Eva Kaili (44:57): Michael, thank you so much for such an interesting discussion, because, as you see, we are trying through these kinds of discussions and similar platforms to understand what the way to proceed should be, and, let's say, to begin to have an AI Act. Because with this dynamic process and these exponential technologies, they develop so fast that whatever we do now, even GDPR, is outdated, and we have to come back and try to understand how it develops and what the new challenges are. So I think trying to empower citizens with these technologies and trying to benefit society should be the priority. And of course, monetizing techniques and new business models should be allowed, in a certain way, to take place. So I hope we will also manage to meet physically soon, as soon as it's easier to fly around, and I hope we will be able to have you in the European Parliament before the end of this legislative process.

Michael Kratsios (46:10): I would love that. Well, thank you so much and it's been an honor chatting with you.

Eva Kaili (46:16): Thank you. Thank you so much.
