
Panel: Why Do Businesses Fail at Machine Learning?

Posted Oct 06, 2021 | Views 2.7K
# TransformX 2021
SPEAKERS
Drew Conway
Head of Data Science @ Two Sigma Private Investments

Drew Conway is a prominent data scientist, entrepreneur, author, and advisor to startups, academic institutions, and government agencies. He's most widely known for his creation of the Data Science Venn Diagram, which was foundational to the field. Drew currently leads data science for Two Sigma Private Investments, where he drives differentiated investment decisions across private equity, venture, real estate, and ESG investing. Prior to joining Two Sigma, Drew was the founder and CEO of Alluvium, an enterprise AI company, and co-founder of DataKind, a global non-profit of pro bono data scientists.

Cassie Kozyrkov
Chief Decision Scientist @ Google Cloud

As Chief Decision Scientist at Google Cloud, Cassie Kozyrkov advises leadership teams on decision process, AI strategy, and building data-driven organizations. She is the innovator behind bringing the practice of Decision Intelligence to Google, personally training over 15,000 Googlers. Prior to joining Google, Cassie worked as a data scientist and consultant. She holds degrees in mathematical statistics, economics, psychology, and neuroscience.

Deepna Devkar
Vice President, Machine Learning & Data Platform @ CNN

Deepna Devkar is Vice President of Machine Learning and Data Platform Engineering and heads up CNN’s Data Intelligence team. She works to understand the CNN audience across devices and build recommendation systems that increase user engagement across all CNN brands.

Prior to joining CNN, Deepna held positions at Viacom and Dotdash (formerly About.com), building and leading data teams in digital media. Before her transition into data science and engineering, she spent over a decade in scientific research, studying the psychological and neural underpinnings of human behavior.

Deepna is a founding member of Chief, a private network of women executives aimed at putting more women in leadership. In 2018, she was recognized as a Corporate Champion in Folio's Top Women in Media awards. She has enjoyed working on projects ranging across audience segmentation, content recommendation and personalization, and search engine and revenue optimization. She is most passionate about evangelizing data and building the next generation of tech leaders.

Deepna received her Ph.D. in Computational Neuroscience from the University of Texas, and also holds a master's and bachelor's degree in psychology. She moved to New York for a post-doctoral fellowship at New York University. She ended up leaving academia but has made New York her new home.

Jaclyn Rice Nelson
Co-founder @ Tribe AI

Jaclyn spent the majority of her career at Google partnering with enterprise companies and incubating new products. She was a founding member of the growth team at CapitalG, Alphabet's growth equity firm, where she advised growth-stage tech companies like Airbnb and Stripe on scaling their technical infrastructure, data security, and leveraging machine learning for growth. In 2019, Jackie and Noah Gale founded Tribe AI to make AI more accessible to companies of all sizes and industries, while also building a new type of career path for top technical talent that emphasizes specialization and freedom. Now, Tribe AI is a highly selective community of 150 machine learning engineers, researchers, and data scientists from industry leaders like Google, Tesla, and Netflix, helping companies solve their toughest business problems using ML.

SUMMARY

Hosted by Tribe AI. Poor data quality. Inability to access the right talent. Failure to get models into production. When it comes to moving up the AI adoption curve, what's really holding businesses back? In this panel, you'll learn how technical leaders at enterprises like Google, CNN, and Two Sigma think about building higher-performing teams and operationalizing machine learning projects to deliver business value in production.

TRANSCRIPT

Jaclyn Rice Nelson (00:32): Hi, everyone, welcome to our session on why businesses fail at machine learning. I'm Jaclyn Rice Nelson, co-founder of Tribe AI, a community of top machine learning engineers who partner with companies to build innovative applied machine learning solutions, and I'll be your host for today. And I'm so excited to introduce my guests. We are joined by some of the most impressive people in data science and machine learning: Cassie Kozyrkov, Drew Conway, and Deepna Devkar. They're all data celebrities in their own right, so you may already be familiar with them. But in case you're not, I'll take a stab at embarrassing them before we dig in.

Jaclyn Rice Nelson (01:11): Cassie is our first victim. She is the Chief Decision Scientist at Google, where she designed the analytics program and personally trained over 20,000 Googlers in statistics, decision making, and machine learning. Damn, am I allowed to say that? She's also an active writer and public speaker, and she's incredibly gifted at making complex concepts incredibly easy to understand. If you haven't already, I highly recommend checking out her content. She's a master of one-liners, definitions, and analogies. One of my favorites is her description of machine learning as thing labeling, which I can't get out of my head, and I hope we get many more incredible sound bites today.

Jaclyn Rice Nelson (01:53): Our next victim is Drew Conway. He is lovingly referred to by our Tribe community as the LeBron of data science. And like Cassie, he's great at breaking down complex concepts. He famously designed the Venn diagram of data science, explaining the field as a combination of hacking, math and stats, and domain expertise. A lot of that I hope we dig into today. He's also a former entrepreneur turned data leader at Two Sigma, leading all of their data initiatives. They are a leading financial services firm, and his team uses data science to drive differentiated investment decisions across private equity, venture, real estate, and ESG investing. So, lots to dig in on.

Jaclyn Rice Nelson (02:41): And last, but definitely, definitely not least, Deepna, who is the VP of machine learning and data platform engineering at CNN Digital. She leads a cross-functional data intelligence team that is developing a more holistic view of CNN users and increasing engagement across all CNN brands. More specifically, they are building a platform to serve as a foundation for all of CNN's data needs, a truly tremendous undertaking that I'm really excited to hear more about. And on top of that, she's building a recommendations approach to help CNN users, and I'm a grateful user, stay informed on the issues they care about most while also respecting their privacy. And then lastly, another topic I really hope we dig into is developing technical talent to have a strong sense of business acumen, and in doing so really building the next generation of data leaders.

Jaclyn Rice Nelson (03:33): I am incredibly grateful for these efforts, and just really excited to dive in across all of these themes. And so, before we start, let's get back to the theme of our talk, why we're all here. Cassie really inspired the title, why businesses fail at machine learning. And so, I'd love to start with you. Can you tell me a bit more about why you think so many businesses fail at machine learning?

Cassie Kozyrkov (03:58): Thank you, Jackie. It is a great honor to be here. I'm excited. And yeah, I'm excited to preach this whole how-not-to-fail-at-machine-learning thing. I think the secret is that people think that there is one machine learning. It sounds like one discipline. And actually there are two, and they are completely different disciplines. They're as different as cooking, which is innovating with recipes in the kitchen, and building microwaves and other appliances from scratch. So, if you think about something that might call itself kitchen science, what do we do there? Are we building the kitchen? Or are we innovating in recipes and serving dishes at scale? And this analogy holds for machine learning and AI as well.

Cassie Kozyrkov (04:42): The research side of things is the side where we're building newer and better algorithms. If that's what you're going to do, of course you need that PhD-level understanding of how those algorithms work, because how are you going to build a newer and better one if you have no idea how they work? But a lot of these things have already been developed. And maybe your goal is not to make and sell a new algorithm for other people's general-purpose use. Maybe what you want to do is innovate in what you use these things for. In which case, the only reason that you're going to build your own one from scratch is if you can't get one off the shelf that already meets your needs. But it is a super complicated and sophisticated discipline at scale in its own right, because serving amazing results to users and solving problems with machine learning is a non-trivial thing. And so, that's the applied side of machine learning.
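[Editor's note: as a minimal sketch of the off-the-shelf, applied route Cassie describes, consider the snippet below. The dataset and model choice are illustrative assumptions, not anything named in the panel.]

```python
# A minimal sketch of the "applied ML" route: reach for an off-the-shelf
# algorithm and focus on the problem, not the algorithm internals.
# The dataset and model here are illustrative choices, not from the panel.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)  # no novel research required
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```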

Cassie Kozyrkov (05:40): When we confuse the research side, which is developing cooler and better algorithms, with the applied side, which is serving amazing models and predictions at scale to users, that's when things go wrong. If you start hiring the wrong kind of worker and build out your applied machine learning team only with the PhDs who develop algorithms, what could possibly go wrong? So, understand what you're selling. Understand whether what you're selling is general-purpose tools for other people to use, or whether you're building solutions and selling the solutions. That's really, really important. And you have to hire for that, you have to plan for that, and you have to orient everything around that. Hiring 20 experts in the other machine learning is not going to help you with your goals.

Jaclyn Rice Nelson (06:36): So, Deepna and Drew, I'm dying to hear from you both, particularly as I know you are frequently hiring data scientists and machine learning engineers. So, Deepna, let's start with you.

Deepna Devkar (06:48): Yeah, sure. So, I definitely agree with what Cassie said here. The thing that I will add is that not every company is going to need both of those two sub-disciplines, and within them it's also very important to define the business problem. Because oftentimes when I get interviewed as a data science leader, or as an engineering leader, to build something out from scratch at a company, people will go immediately to the solutions. We're looking to build a recommendation engine. We're looking to build a fraud detection system. These kinds of things are solutions, not the problems. I think oftentimes companies, when they're hiring for their team, whether it's the data leader or the ICs that are going to actually do the hands-on work, go to the tools and the solutions rather than defining the problem. And that does not set people up for success, and does not set the company up for success either. So I think it's very important to start from the problem, and then let the experts who you hire define what is needed to build that out and solve it.

Jaclyn Rice Nelson (08:02): I see so much smiling from Cassie. So, I know we're all kindred spirits here. I'm curious if you can say a little bit more about what defining the problem well looks like.

Cassie Kozyrkov (08:14): I just want to add to that. Deepna, you're spot on. And also, you're exactly right, let's remind people that they shouldn't even assume that machine learning is the thing they need at all. Hey, regardless of how many machine learnings there are, one, two, or 17, the job is to understand what your problem is, and then solve it the right way with the right tools. And frankly, if that's adding two and two together, getting four, and leaving it at that, you should be happy with having solved the right problem in the right way. So, there's a lot of this data fetishism. There's a lot of this AI fetishism, like let's all be companies that say that we do AI, and it's really painful to watch people misuse it. At best you waste a lot of time, and at worst you create the suboptimal solution for your problem.

Jaclyn Rice Nelson (09:14): Yeah, that makes a ton of sense. Drew, I think this is the perfect place to shift to you. Maybe just speak about how you think about what the problems are that you're solving, and then also get back to some of these challenges that you've seen companies face and why they fail.

Drew Conway (09:32): Yeah, again, I agree with what Cassie and Deepna have said. It reminded me of so many times in my life and my career where I've watched organizations and groups go down the wrong path. But thinking specifically about this issue of addressing the right question, one of the things that I've seen, which may be a second-order issue here, is how do you actually get folks in the business to adopt the thing that you're building for them, to actually implement the change in behavior that the machine learning, or the result of that machine learning, will provide.

Drew Conway (10:05): What I found is that oftentimes technical folks go a lot to what Deepna was saying before. They're told at a high level what a business process is. They map that to a set of data that they may be familiar with. And then they combine that with a set of tools or methods that they know well, and they say, "I can build that classifier, or I can model that forecast," and provide you with a piece of information without having any even remote understanding of what is the business process that this is going to fit into, and how is the user at the end going to take that information to inform a decision in a new way.

Drew Conway (10:40): Oftentimes, I've seen folks come in, and I say this in a loving way, but folks who are machine learning people, or data scientists, or even engineers, they have a lot of expertise. And they may parachute into an organization like the pocket-protector brigade and say, "Listen, I know how to solve your problem for you, even if you've been doing this for 25 years. Let me show you this new piece of technology that will solve the problem for you." And you get zero adoption that way. To dovetail off of what Cassie was saying, when this works correctly, it may very well be that we're not talking about a machine learning problem at all. We may just be talking about a visibility issue, and we're going to tie together a bunch of datasets that a user didn't have visibility into before. And now you're providing that, and even that alone feels like magic, like you're magically able to see things that they weren't able to see before.

Drew Conway (11:32): And then the last thing I'll say is about when you really are truly talking about transforming a business process through automation. Whether I'm building a risk model that's going to change the underlying risk calculation that you have as a business manager, or I'm going to change the pricing model that you're using to price this thing that you care about, you have to figure out the minimally invasive way to present that information to an end user, in a way that integrates as seamlessly as possible into the process that they're already using. Otherwise the cost of getting them to change how they're doing their work is very high, particularly because, for all of us, in our experience, we work with a bunch of really smart people. They're good at what they do. And what we're trying to do is provide some augmentation on top of that decision making. You have to earn that place at the table with them. And if you try to come over the top, it almost never works [inaudible 00:12:27].

Jaclyn Rice Nelson (12:28): So it sounds like you have to start with understanding what it is you're doing, what type of machine learning you're doing, and who you need. You need to really understand the problems that you're setting out to solve, not jumping to the technical solutions quite yet. I think Drew was really building on that, which is going even deeper to understand who your users are. The way I like to think about that at Tribe is that it's essentially product, right? You're building some kind of data product or solution. And so, we all have to put on our product hats, and really do that investigation work upfront to understand who are the end users? What are the problems they have? What are the things they need? What does the data look like? What are the constraints there? And it sounds like that is how you get started in the most effective way.

Drew Conway (13:25): I'll just give a very short example from something that is probably, at the outset, the least technical or machine-learning-oriented thing, which is that many years ago, I was working with New York City in the mayor's office, thinking about how we could apply data to the problem of building inspection and safety ratings for buildings. This wasn't my innovation. It was the innovation of the folks who were there at the time. And it's not at all really that innovative when you think about it now. But before we put any effort technically into what we were doing, everyone had to do a ride-along with a building inspector, to actually sit in the car with a building inspection team and see the world how they see it. Drive through Soho and the East Village and out to Brooklyn, and observe what they would see out in the field.

Drew Conway (14:17): You would see them look at curb sides, where a building was a three story walk up, but had three times as much garbage as they might expect. And they would see that on the curb, and they'd say, "I bet you that building has an illegal apartment." That's not there. They're not part of the Department of Sanitation, but they see that. And then for you as a data scientist or someone working on the technical team, you can map that to something you can actually measure and if you didn't understand how they thought about the world you'd never ever thought of that.

Deepna Devkar (14:47): To piggyback off of Drew's comment, that's a wonderful example. I think it's also important because I've now been at companies that are much smaller in size and companies that are way bigger, and I think the silos get bigger as the companies get bigger. And so, what happens oftentimes at big companies is these kinds of introductions to who our users are going to be, who the stakeholders are going to be, what their actual requirements are, all happen in the beginning. And then off we go in a silo, build a solution, come back six months later, and now the target has moved, because all of us are undergoing quick transformations. And so, I think it's important to involve the users at every step of the journey and establish those checkpoints, just like we would do for any external-facing users as well.

Cassie Kozyrkov (15:42): Yeah, I was going to say, I will put it out there that data scientists tend to have some amazing, sophisticated, impressive skills, sure. But they may or may not have the skill of understanding what is worth doing, and what success actually looks like. Now, some data scientists do develop the skill, but it is possible to become a data scientist without ever learning this. And so, if you hire a data scientist, you have no guarantee that this person is able to understand the business. Not just understand the business, but also understand all the politics and personalities of the business, understand how prioritization works, understand how the market works, and really figure out what's worth doing and how to measure success. And that should be your first stop. But a data scientist is not necessarily skilled at doing that.

Cassie Kozyrkov (16:38): So, your safest way is to identify the person who has that skill, and to let them kick off the project. What I suggest at Google to data scientists is: look, you can do as much analytics as you like. That is, looking at the data and seeing if there might be something inspiring that you might want to alert a decision maker to, as much as you like. But let's not be taking any of it too seriously, let's not be automating anything, and let's make sure that if we're actually going to do something proper, it is because someone who was able to understand what's worth doing kicked that part off. So, everything is analytics until proven otherwise. And if you want to do statistics, or machine learning, or AI, it has to be someone with that kind of deep understanding of what's worth doing who's kicking it off.

Cassie Kozyrkov (17:30): The other thing that very technical folks get caught up on is they love details. They love technical details. I've got this really silly little example that I teach in a workshop. I say, "We're going to do a silly pet food example. We're going to ask: do cats prefer Friskies, mouse, or organic beef for dogs?" Silly made-up things. And I say, "You've got three burning questions. What are these questions?" And then the engineers shout back at me: wait, which cats? And wait, why these foods? And then they want to get into the technical nitty-gritty. And what they really should have said, and what they say so rarely, is: to what end? What decision would we want to make with this? Why is it worth doing? But instead you follow this beautiful, easy path down into the technical details.

Cassie Kozyrkov (18:34): The trouble is, when you've done that, it's really hard. You've sunk all this work in, and you're sensitive to sunk costs, the sunk cost fallacy. It's really difficult to say, "Okay, I've spent two days talking about technical nitty-gritty. Now, let me revisit whether it was even worth having had all those conversations." So, you just talk yourself into doing something that you ought not to have been doing. So, yeah, that's my take. Always have someone in the room who is able to understand what's worth doing, whom you trust to do that. Make sure that they are the ones kicking off the project and helping design the metrics. Yeah, they might need some help in order to formalize their language, get those operationalizations right, put it in mathematical terms that a data scientist can work with. But they should be there, and they should be sponsoring and pushing that project. What are your thoughts, folks?

Jaclyn Rice Nelson (19:30): I tend to very much agree. And when we take on projects at Tribe, we actually always pair the data scientists and machine learning engineers or data engineers, whoever's needed technically for the product, with a product manager. And the theory is that with that pairing you enable people to be specialists at the things they're best at. One thing that I think is so unique about this group is that you all are hybrids. And so, I'd love to just take a moment and dig in there. How do organizations get their Deepna, Cassie, or Drew? And Deepna, since this is a topic you've spent a lot of time thinking about, how do you help data scientists who might be interested, because not everyone will be, to build that competency and that business acumen, if it is something they're interested in?

Deepna Devkar (20:24): Yeah, that's a great question. And honestly, the product managers who work with data science, analytics, and ML teams are truly the unsung heroes, because they really do take a project that could have been completely unused and unadopted, and make it very palatable and user-friendly for the business. So, I'm very fortunate to actually have them on my team. I didn't previously. When I started at companies that were, I guess, less mature and were just starting out, we didn't have counterparts to each data scientist or engineer. Now we do. So, the way that my team is structured now is that for the ML side of the house we have a technical lead, like a director of engineering, and then we have a director of product. Same thing goes for the platform side, same thing goes for the analytics side. That really makes a difference.

Deepna Devkar (21:21): We previously also had a delivery function, which was very much new to me. But all of these things, I will say, are great to have when the company is in a very mature state. I mean, if I pitched the same solution to a startup, people would just say, "Well, no, we're not going to be able to afford three people or three leads for a problem that we don't know yet." So, it really depends on where you are as a company.

Deepna Devkar (21:47): But if you can't have all of these people and all of these roles to do their individual jobs, then I would say you really need a strong leader that has all of these skill sets. Because without that, you're really going to fail. When you're building data products, and when you're in the business of building data products, you need a very holistic view into the business. You need someone who can understand the business problem and the user requirements, define success, define what the metrics are going to look like, and then actually do the technical work, and then circle back on the delivery portion of it, which is actually staying with the users until they have adopted the tool, and then continuing to optimize, iterate, things like that.

Deepna Devkar (22:34): So, yeah, coming back to your question on how do you train data scientists to be this way? I think it's asking all of the right questions at every step of their journey. So, if they did some analysis, like, let's say, Cassie's really silly but wonderful example, oftentimes, all of a sudden, when you realize what she was actually looking for, the data scientists will go, "Oh, well, yes, I wasn't even thinking about that." So, I think asking those questions at every meeting, every one-on-one, and pushing them to think outside the technical toolbox really helps them think in that direction.

Deepna Devkar (23:13): In my experience, I've also found that the good kind of data scientists, the ones who really want to make an impact on the business, even PhDs who've left academia for those reasons, because they really want to drive fast impact, really lean into it. They actually say, "I want to get better at this, can you challenge me in this way so I can change my thinking in that direction?" So, I think it's important to make that part of the interview process as well, which I suppose we'll get to at some point.

Drew Conway (23:47): Sure, I agree wholeheartedly there. I think one of the things that I've found to be useful, for me, in my previous role as a startup founder working in startups, and even now at a larger firm, is this: most of the time you are working with relatively small teams. Even in a place like Google, which is huge and has many, many data scientists, functionally there are smaller teams working on projects. And so, what I found to be really useful, even for the junior data scientists we're hiring, is to really challenge them to own part of that problem.

Drew Conway (24:23): In my current role, we are very lucky in one sense in that we have a captive user base. We have a set of discretionary investors who have deal flow that they're looking at, whether it's real estate, or our impact and ESG investing, or private equity. They have their expertise, and in that business, the private equity business, the lanes are pretty distinct. Real estate investors invest in real estate, healthcare investors invest in healthcare. And so, I think about building tools that are going to support the decision-making process of those investors, and I have a data scientist to whom I say, "Listen, what we really need help on is how to best underwrite this particular kind of deal. There's a component of that, that we don't quite yet have an understanding of, and I want to hire you, and I want you to own that."

Drew Conway (25:13): What owning it means is that the way you will ultimately be judged as a data scientist, and how we'll think about your annual review, the things people care about in their professional world, is how you were able to solve the problem for that business, and how effective they were at adopting it. I'm not judging you by the depth of your neural network, the width of your data set, or how many different technical tools you're able to use. And I will help you develop those relationships, help you understand the problem, help you understand the environment we're developing this stuff in, and really try to give you agency, and everything that comes after that.

Drew Conway (25:53): And that, I think, really addresses this question of whether every data scientist wants to do that. I think it's true that not everyone does, and there's certainly room for folks who can be much more mechanical in how they think about, for example, if our error rates are too high, can we improve the error rate of this, and really go deep and test a bunch of different methods, or do a bunch of feature engineering within this context? For sure. But for those folks who I want to own things, the technical side of the house is maybe 50%, and it might even be less than that. It's really about understanding how you get into that decision-making process and build that relationship with the user, so that ultimately you know, okay, this is how I can help that business get better.

Jaclyn Rice Nelson (26:34): So, it sounds to me like that's really a screening question for you when interviewing.

Drew Conway (26:40): It is. I mean, sometimes I will ask it explicitly, other times not, particularly for junior candidates, where they may not have that expectation coming in. For more senior candidates, I really want to get a sense of how they think through a problem. And Jackie, I know this is true in your case, and I imagine it's true for everybody: when my team gets asked to get involved in a particular project, or we're thinking about building something, the bounds of that problem are very loose. You get this ambiguous business problem from the business owner: we want to increase our throughput of businesses of this type by X percent. Even me saying that feels more well-defined than the typical problem statement I would get from an end user.

Drew Conway (27:22): And so, when I'm interviewing a more senior candidate, I will present them with a question that is as loosely defined as that and say, "Let's think about what are the steps that you would take to try to unpack that question." Most people, understandably, will initially go down the path of a technical answer. Okay, let's talk about data. Let's talk about method. Let's talk about even defining the problem. But if I push a candidate and say, "Well, how would you understand how the user in this case is actually going to make a decision and influence the business from that?", that's really where you get that separating hyperplane of candidates. Some candidates will know the process, or have some experience dealing with it, because that's the ugliest part of this whole process: how do you get people to actually do the thing that you want them to? And others will sometimes just hand-wave it away and say, "Well, I built this great thing. Why wouldn't they use it?"

Jaclyn Rice Nelson (28:13): That's so interesting. I think we could take this in a lot of different directions. I think we're throwing almost an extra wrench in here, which is that we're looking for people who have the role-specific set of skills that you need to solve problems in your business. And so, I think, Cassie, what I interpret you to say is that what data science or machine learning means at company one can mean something completely different at company two. Well, then naturally, you need different people for those jobs. And so, you can't just hire someone who says data science.

Jaclyn Rice Nelson (28:49): What that presumes is that someone in the hiring position has a deep level of understanding around what needs to be done. I think one of the challenges I see a lot is that people are often hiring that person to be the one who understands what to do. And therefore, you end up in this hilarious chicken-and-egg problem: how do you bring in the right person if you don't even really know what it is you're solving? And so, I'm curious to know if you all have seen that, and any advice you might have for how to tackle that challenge, which is actually building the right team from the beginning?

Deepna Devkar (29:28): Yeah, I think it goes back to the first two things that we talked about, which is defining the right problem and defining the right skills. But as we look for that talent, I often see job descriptions that are super generalized, or sometimes, in big organizations, they're literally copied and pasted from some other team just to expedite the process and get the ball rolling. But I think that, really, again, ends up being a waste of time, because then the recruiters aren't going to actually know exactly what to look for. The candidates who are getting those inbound requests and coming to you saying, "Hey, I might be interested in this role," are just looking at the title and the level, and then the rest of it is just garbage, garbage, garbage. And then, again, just like data, garbage in, garbage out. You interview these people, and they don't exactly know what the role is going to be, and then you might end up going in the wrong direction.

Deepna Devkar (30:25): I think the more clearly you can define the role, without giving away whatever your company's secret sauce is, even going down to as much detail as "in the first six months your expectations are going to be X, Y, and Z," the better. It doesn't matter what it's called, because, again, in big organizations we have nebulous titles like data engineer two, or data engineer three, which could span several different job families: data engineer, analytics engineer, machine learning engineer. So, I think the more clearly you can define that, the more it sets everybody up for success. We're really trying to do that ourselves, even customizing the interview panel, the data challenge, the sets of questions that we ask. Really thinking about all of these things in minute detail before even starting the interview process has, I would say, helped us quite a bit [crosstalk 00:31:21].

Cassie Kozyrkov (31:21): I think it's also worth understanding that the incentives here are fundamentally misaligned. From the perspective of the company and the project, it makes the best sense to be crystal clear about specifically what skills are required where. From the perspective of the candidate, the opposite is true. You want to be less specific, because the more general you can claim your area of expertise is, the bigger your remit, and the more useful you seem. And so, on the one hand, you want to say, "Okay, we need just this, can you do this small thing?" But on the other hand, where is the individual growth for the candidate? For the person's career to become large and broad, that tends to be where people want to go.

Cassie Kozyrkov (32:16): And so, there are these pulls in different directions. And it's about creating job ladders that properly de-risk, for example, somebody very good with a very narrow... not narrow, with a very unique blend of skills leaving, and then all your projects exploding. So, you try to de-risk that by having the narrower job ladder, where you can have more people who will be copies of one another who could fit that, but then you're not making good use of their special skills, what's unique about them, how they might be the data scientist who's also maybe a visual artist. Where do you fit that in? But then if you lose that person, you can't do something that was enabled because you had a special unicorn.

Cassie Kozyrkov (33:05): So, really, this trick of designing for careers, and designing for skills, and designing for roles, it is completely non-trivial. A lot of the incentives just make the problem harder, not easier. And though it has no perfect solution, probably the best thing is not to be naive about it. I know we all are not. But I do see a lot of communication out there on the wild internet that's very wide-eyed about what a machine learning engineer is, or what a data scientist is, or what it means to lead in this space. So, it's worth putting that out there.

Drew Conway (33:47): Absolutely. Jackie, I was just going to add one quick thing on that, which is that I've never understood the pull to be very specific in terms of education background or skill-set background in a job posting. I mean, I have a PhD in political science, and despite my best efforts, I've yet to see that as a requirement for any data science position out there in the world. Yet in practice, some of the best data scientists I've ever hired have come from a social science background. And so, I think having a much wider perspective on what it means to think creatively with data is a prerequisite.

Drew Conway (34:23): The last thing I would add on this, in terms of the interview process, and this is more advice to folks who may be our peers who are actually hiring and in the position of doing this interviewing: when I've spoken to colleagues in various roles, in various companies, when you're in a more senior position, especially if you're interviewing a more junior person, someone who's going to be an IC, the tendency may be to try to get really close to the metal with them, and start to talk really technically about a specific kind of algorithm or methodology, and getting into what do you know about this thing?

Drew Conway (35:00): But in practice, at least in my opinion, if you're good in the management position, your role is not to be prescriptive about the tool someone is going to use and what they're going to know about it. Once they're sitting at their desk, they have access to Google, they have access to Stack Overflow, they can look things up and understand them. But when you're in that interview, try to get a sense of what you can learn from this person, and how you can help them in their career growth. What is the thing that the two of you can do together in this role? Because ultimately, that interview is asking: is this someone who I feel I can spend 50, 60, whatever-plus hours a week interacting with and building something together, so that by the time you get to the other end of that process, both of you and the whole team have grown with that? And so, having a bit more of an elevated view of how you want to have that conversation has always helped me in hiring good people.

Jaclyn Rice Nelson (35:51): I'm curious, Drew. It sounds like you have taken what feels like a very enlightened, more open-minded view, and I know, Deepna, you went through Insight, and I heard, Drew, you've been an advisor to Insight. And so, I know you've hired people from lots of different backgrounds, and it sounds like you've had success with that. I'm curious, also, if you've seen any trends across optimizing for industry specialty or technical specialization, and how you think about the tradeoffs in team composition?

Drew Conway (36:34): Sure. I can take a first crack at that one. I couldn't agree more with all the points that both Cassie and Deepna were making around the idea that you can't be this one all-encompassing data scientist, and that's on the back of what Cassie said. The data science Venn diagram is often accused of setting this impossible standard that no one could ever meet. And that's right. It wasn't meant to represent a person's skills, but essentially what goes into this new, emerging discipline that we didn't really fully understand yet. What I think is borne out of the history over the last 10-plus years of thinking about this problem is that, to me, the real crux of the problem is: do you understand, whether it's an industry or a problem set or a decision that's being made, what goes into that, and how you map that to data plus technology.

Drew Conway (37:23): And so, I think industry specialization is really important, because you may come to the problem with a bunch of knowledge that accelerates your ability to be effective. So, that is super important. And I think the degree to which it should be influencing your hiring process is a bit dependent on the business. Certainly, if I were hiring data scientists for a healthcare startup, I'd want you to really understand that domain, because the cost of being wrong can be very, very high. Versus if I'm in online retail writing recommender systems: yeah, you may really deeply understand this particular retail market, but the cost of being wrong is a bit lower, and so there's less of an issue there.

Drew Conway (38:04): The thing that I have found, in terms of a skill set that has really become important to me in my hiring, is: how well do you articulate that result to someone who's a non-expert? Can you stand in front of a room with a bunch of business decision makers and articulate the complex thing that you did, or the simple thing that you did, and the reason for making it simple, in a compelling and convincing way? That's a very difficult thing to hire for. I mean, one of the early mistakes I made as a new manager was borrowing from my academic training and saying, "Well, the interview should basically be like a job talk. Stand in front of us and give us effectively a job talk on some work that you did." And that was terrifying for most people, and was an absolute failure.

Drew Conway (38:48): And so, thinking about how to lessen that intensity and bring it to something that is more just getting at, through conversation, how someone thinks, how they think about articulating a result, or even a problem, back to you has been critical. And then, on the job, testing that really early, giving people the opportunity to do that very early in their tenure, so that if there's a gap in what they need to do, you can address it pretty quickly.

Jaclyn Rice Nelson (39:15): Yeah, that all deeply resonates, and I think it's something that all three of you are just exceptional at. So, I learn from you all on this exact topic every time. I think I would have been very disappointed in us if we ended a Scale conference without talking about the data and the data infrastructure. So, I think that is a huge, huge point. And maybe just to jump off of one thing you said as well around feasibility, I think that's a great place for us to close out with our last round of questions.

Jaclyn Rice Nelson (39:53): And so, Cassie, one of the things that I love that you said, and I'm interpreting and paraphrasing you, so please correct me if I misconstrue this at all, is that in order to get people excited about AI and machine learning, we had to make it futuristic. We had to make it sci-fi so that people would care. But the reality is that the things that really drive impact, drive results, are boring. They're things that exist. They're sort of more off the shelf. They've been done before. They're not necessarily the novel thing that's the latest cutting edge of research that's going to move the needle for the business. And so, I'm curious to hear from you all: what sorts of applications that might be perceived as boring or more run of the mill do you find most exciting and move the needle the most, either for your business or other businesses you've seen?

Cassie Kozyrkov (40:48): I find myself really excited by, I don't know, maybe taking an excessively long view of civilization here, but for me, data science starts with the advent of writing, because that's where we as a species end up having better memory. So, I like to joke that the reason we use our tools is that they're better than us at things. Pen and paper is better than me at remembering nine-digit numbers, for example. And so, machine learning and AI, they're just another set of tools. What I enjoy thinking about is: where did we take some shortcuts and start doing things that, for example, don't feel right for how we communicate?

Cassie Kozyrkov (41:41): Writing is kind of a silly thing. But what else were we going to do 5,000, or whatever it was, years ago? As someone who does write for fun, who writes a lot of blogs, I will tell you that that is not the speed at which my brain goes. I would much rather talk through an issue. But for me to be able to, for example, talk out an email, and then say "send," and be quite sure that no garbage got sent, that's still a stretch, unless I've got a human checking any kind of speech-to-text thing for me.

Cassie Kozyrkov (42:15): But any application where we might be able to get back to the most natural way of communicating, more visual, more speaking, fewer of those steps we took in between that involve the keyboard and the pen and the paper, which were good for their time but maybe don't need to be around forever. Wouldn't it be cool one day if our kids or grandkids are like, what is this thing with all the buttons on it? Why is there a Q and a W? What even is that? I think it would be amazing to extend how we can communicate, and get back to what is most comfortable.

Drew Conway (43:06): Sure. I'll maybe bring it, somewhat selfishly, to the practice of data science and what my team and I do on a pretty regular basis, and this would be true in almost any role. When I think about the basic building blocks of solving a problem with data, it often is bringing together multiple data sets so that we can understand how the information contained in them reflects in some other outcome, like a very basic linear-model type of construction. And in that context, we are often faced with the fact that the way an entity is represented in one data set is different from the way it's represented in another data set. And so, resolving those two things together can be quite difficult.

Drew Conway (43:47): And so, of course, the sub-discipline of named entity resolution is well known. It's something that lots of businesses put a lot of effort into. But now, as the questions that we want to pose to machine learning and data scientists become more complex and add additional dimensionality, the problem becomes orders of magnitude more difficult to solve. I'll give you, again, an example from some of the work that we do now. Suppose I wanted to tell you, based on a large number of data sets, and when I say large, I don't mean 100, even 20 different data sets, all the information contained in them that resolves to a specific building in Manhattan. That is an extremely difficult thing to do, because you not only have an X, Y coordinate, but you have a Z coordinate, and you have policy, and legal, and zoning information that goes in a fourth dimension to this. You have time, and things change over time on that block. And so, having tools that allow for those things to happen in a more automated way, and, to Cassie's point, happen in a way that you have a lot of confidence in, is something that I would love to see continue to be pursued, and obviously something that we work on a lot in my current role.

Jaclyn Rice Nelson (44:56): Yes, I agree. And I would say that's a lot of the work our teams do as well, so echo that deeply. Deepna, excited to hear what excites you.

Deepna Devkar (45:06): Yeah. So, I also will selfishly tie it back to my current role and introduce a shameless plug. I think the last, gosh, I want to say four or five years, if anything, have just proven that news is very important in our daily lives, and having a trusted source is even more important. And so, the work that my team does, personalizing the user experience and really personalizing content that caters to users while also keeping them informed about the big things that are happening in the world, might seem like a simple problem, because if you think about it from a data science or machine learning perspective, it's just finding similarity in text, looking for that content, and serving more of it.

Deepna Devkar (45:53): But if you actually think about the business problem, and what the user might want, then it's quite a complex problem, because it's not just about text similarity. It's about diversity in thought. It's about actually empowering your users with content that they might not discover if they just get trapped in this filter bubble of content. And all the while also keeping ethics in mind, making sure that we are not using AI or machine learning to actually send them down the wrong path, keeping all the content really good, and making sure none of the fake content actually shows up in their feed. It's quite an interesting problem, and I love working on it. I think there's never going to be a time when we will actually say, "Okay, we're done here," because there are always going to be ways in which we can improve. So, I'm really deeply excited about that particular problem we're working on right now.
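[Editor's note: one standard way to encode the similarity-versus-diversity tension Deepna describes is Maximal Marginal Relevance (MMR) re-ranking. The sketch below is a hypothetical illustration; the vectors, the lambda weight, and the slate size are invented, and this is not CNN's recommender.]

```python
# A hypothetical MMR sketch: rank articles by relevance to the user
# while penalizing redundancy with items already selected, so the
# slate stays relevant but diverse. All inputs are synthetic.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mmr_rank(user_vec, article_vecs, k=3, lam=0.7):
    """Pick k articles, trading relevance (weight lam) against redundancy."""
    selected, candidates = [], list(range(len(article_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(user_vec, article_vecs[i])
            redundancy = max(
                (cosine(article_vecs[i], article_vecs[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
user = rng.normal(size=8)            # stand-in user embedding
articles = rng.normal(size=(5, 8))   # stand-in article embeddings
print(mmr_rank(user, articles))      # indices of a relevant-but-diverse slate
```

Pure similarity ranking corresponds to lam = 1.0; lowering lambda pushes the slate out of the filter bubble at some cost in raw relevance.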

Jaclyn Rice Nelson (46:48): Well, as much fun as sci-fi movies are to watch, this is the kind of stuff that gets me the most excited. And so, thank you all for sharing. Thank you for joining us. I am so grateful to get to spend this time with you, and I know all of our listeners are as well. I'm hopeful that the four of us can get together in person in New York soon, since we're all physically in the same place. And in the meantime, I just want to thank you for all of your insight. Contact information will be shared, and thank you. Thank you again.

Drew Conway (47:23): Thank you.

Cassie Kozyrkov (47:24): Thank you, Jackie. It was a lot of fun.

Jaclyn Rice Nelson (47:25): Bye, everyone.

