Scale Events

Expert Panel: Combining AI and Human Insights to Accelerate AI Adoption in eCommerce

Posted Jun 30, 2021
# Converge 2021
SPEAKERS
Pranam Kolari
Sr. Director of Engineering @ Walmart Technology

Pranam is a Sr. Director of Engineering at Walmart Technology. He leads Search Algorithms for Walmart, and previously incubated and developed personalization and recommendation technology at Walmart. Pranam has over two decades of experience applying machine learning to critical problems in product and content discovery, at scale. Previously, he was at Yahoo!, and even earlier, his work was critical in tackling and driving awareness of spam in social media.

Sebastian Barrios
VP of Technology @ Mercado Libre

Sebas joined Mercado Libre in 2020 as Tech Vice President. In his role, Sebas leads all cross-technology teams including Infrastructure, Cloud & Platform, Architecture, Mobile, BI, Machine Learning, and UX, among others. Sebas developed his entire career as an entrepreneur in the world of technology, founding two companies in Mexico: Zeb Studios and Yaxi, which he sold to Cabify. His most recent experience was at Cabify, where he worked for three years as CTO at the headquarters in Spain. For Yaxi, he was recognized as "The youngest entrepreneur of the year". Sebas still holds the record as the youngest Endeavor Entrepreneur in history (22 years old at the time). In 2020, he was also recognized by Forbes magazine in the "30 under 30" ranking that highlights Europe's youngest visionary entrepreneurs. Sebas is a Systems Engineer, graduated from ITAM (Instituto Tecnológico Autónomo de México).

Jason Sleight
Group Tech Lead @ Yelp

Dr. Jason Sleight is currently a group tech lead for the Applied Machine Learning group at Yelp where he focuses on the intersection of machine learning (ML) and systems engineering. He leads several initiatives to create platforms for ad hoc computing, data ETL, and ML as well as to collaborate with stakeholders across all of Yelp to apply these platforms towards Yelp’s business goals. Prior to Yelp, Jason completed his PhD from the University of Michigan studying artificial intelligence and cooperative multiagent systems.

Aatish Nayak
Head of Catalog @ Scale AI

Aatish is the Head of Content & Language products including NLP, Speech, Cataloguing, Classification, and Search Relevance. The team focuses on empowering customers in social media, e-commerce, and the broader enterprise to get diverse human insight on content quickly and fairly. Aatish previously was an early engineer at robotics startup Shield AI, focused on building AI systems to protect service members, and at Skurt, an on-demand car rental marketplace acquired by Fair.com. In college, Aatish ran Autolab.com, a learning management startup used at CMU, Cornell, Rutgers, NYU, PKU, and others. He graduated with a B.S. in Computer Engineering from Carnegie Mellon University.

TRANSCRIPT

Aatish Nayak: Welcome, everyone. I'm Aatish Nayak, I lead our product and engineering teams for search and eCommerce. I'm really excited to welcome you to this panel today on accelerating AI adoption in eCommerce. I'm joined here by Jason from Yelp, Pranam from Walmart, and Sebastian from Mercado Libre, and really excited to talk with these AI and ML leaders today. I'll kick it off to Jason right now to quickly get started with his introduction.

Jason Sleight: Sure. So I'm Jason Sleight, I'm a group tech lead for our ML platform and applied machine learning group. So I do a lot of ML Ops things: building our ML platform for doing ad hoc computing in Jupyter Notebooks, Spark ETL feature batches, training and serving models at low latency with high robustness, and then collaborating with all the different areas around Yelp on using the platform effectively and doing ML for all of our different use cases.

Pranam Kolari: I'm Pranam. I come from Walmart, where I've primarily been focused on product discovery for our consumers. I've worked on various different problems, be it personalization, be it recommendation, be it search, and I've led teams doing each of these in the past. At this point in time, I lead the search algorithms team.

Sebastian Barrios: I'm Sebastian, I work at Mercado Libre as VP of Technology. My role encompasses all of the cross technology in the organization, with a very strong focus on machine learning and AI, mostly from the platform perspective. So how do we build the right tools for all of the data teams to get their models into production? It's a fun challenge.

Aatish Nayak: For each of your companies, I'm curious what kinds of problems AI and ML are helping solve. If you can each focus on a few specific areas, that would be perfect.

Sebastian Barrios: Perfect. That is super broad. And as I'm sure you're aware as you're in the industry, the use cases are pretty limitless. And the common ones, of course, are around search recommendations. But we also use a lot of AI and ML on the backend operations for logistics for obviously, fraud prevention, to determine the optimal routing algorithm for our logistics network to determine an approximate size or weight of a package just from images or descriptions of it. Those are some of the maybe not so obvious ones that we get a lot of value from.

Aatish Nayak: So it seems like there's a personalization component, a logistics optimization component, and everything in between. Jason, I would love to hear...

Jason Sleight: So Yelp is fundamentally a two-sided marketplace, and there are really three different parts to it. There's the consumer side, which is helping consumer users find the right businesses for what they want. There's a lot there in search ranking, in making recommendations, and a bunch of content modeling: which photos are the most beautiful, does this photo have a picture of a cheeseburger, what are the most popular dishes at these restaurants, all that kind of consumer content modeling. Then there's a bunch of business modeling: trying to do ad targeting, price prediction, trying to predict and forecast how many visitors are going to visit a store, and doing visit attribution for our ads to help performance marketing at really large-scale companies. And then there's the third one, Yelp's strategic models: trying to recommend budgets, fraud detection, abuse detection, all that stuff. Those are the three main areas we think about.

Aatish Nayak: Yeah. Pranam, I'm curious how Walmart...

Pranam Kolari: So, from a Walmart perspective, when I look at the problems we solve, on one end it's really around the discovery of products. The other side is primarily around fulfillment: once a product is discovered, how do we fulfill those items? From a discovery standpoint, AI and ML are really foundational, be it search, be it recommendations, be it personalization, be it things that cut across everything, like the foundations of catalog and content. On the other side, on the fulfillment side, I would say it's somewhat in its infancy, not just at Walmart but broadly. It's still early days, but we definitely have a number of investments in utilizing AI and ML across our fulfillment network, as well as across the rest of the enterprise.

Aatish Nayak: Yeah. So clearly there's a pretty broad application of AI, and one of the common threads here is personalization and discovery. So I'm curious, just going deeper: what are some of the actual technologies inside ML being used across your companies? Is it classic statistics, logistic regression, or is it actually getting into very intense deep learning? And I'm curious how that's matured over the last two or three years in your businesses.

Jason Sleight: Everyone who hears about deep learning says that's what we want to do. Everyone's like, "AI, deep learning, all the way." But for us, at least in the real world, we like to start very simple, because when you throw deep learning models at some crazy thing and you have no idea whether it's right, it sets you up to have really biased models that don't optimize your business objectives well. So we like to start with simple heuristics, maybe somebody designs a decision tree by hand, and see what that does, because it's really easy to convince the product managers that you should be able to roll this out when they can understand what it's doing.

Jason Sleight: And then as the model gets more mature, you can move up to logistic regression or gradient boosted decision trees. And then finally, for really critical business problems, where improving that log loss by half a percent really matters, you can put deeper neural networks on it, once you really understand things. There are obviously some exceptions with text and image processing, where neural networks are just so much better than everything else that you have to start there. But for a classical supervised problem, we like to start as low-tech as we can. And once the product is more mature and we understand all the nuances, then we increase the sophistication where it's necessary.
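The "start with a hand-designed heuristic" stage Jason describes can be sketched as a tiny, readable scoring rule that a product manager can sign off on before any learned model exists. This is a hypothetical illustration, not Yelp's code; the feature names and thresholds are invented.

```python
# A hand-written baseline ranker: a decision rule simple enough to explain,
# which can later be replaced by logistic regression or gradient boosted trees.
# All feature names and weights here are hypothetical.

def heuristic_relevance(result: dict) -> float:
    """Hand-designed baseline score: interpretable, easy to debug and sign off on."""
    score = 0.0
    if result["query_in_title"]:
        score += 2.0          # query term appears in the business/product title
    if result["rating"] >= 4.0:
        score += 1.0          # reward well-rated results
    if result["distance_km"] > 10:
        score -= 1.5          # penalize far-away results
    return score

def rank(results: list[dict]) -> list[dict]:
    """Rank results by the heuristic, highest score first."""
    return sorted(results, key=heuristic_relevance, reverse=True)

results = [
    {"name": "A", "query_in_title": False, "rating": 4.5, "distance_km": 2},
    {"name": "B", "query_in_title": True, "rating": 3.0, "distance_km": 1},
    {"name": "C", "query_in_title": True, "rating": 4.8, "distance_km": 20},
]
ranked = rank(results)  # B (2.0), then C (1.5), then A (1.0)
```

The point of this stage is that every ranking decision can be traced to a named rule; once the baseline's behavior and metrics are understood, a learned model only has to beat it.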

Pranam Kolari: I think it's similar in spirit: primarily simple first. Overall, you try to go after fairly simple solutions initially, and then build on top of them. I think ML teams are still making a gentle transition from feature engineering to training data: the transition from building features and doing more feature engineering to being relentlessly focused on training data. Our teams are making that transition as well, and not just at Walmart, but across the industry. And as more of the work moves into the training data, and the training data evolves and becomes richer, we see more complex models. So that's where we see the transition happen.

Aatish Nayak: So you're basically saying that you don't just create a transformer model out of the box? You really start simple.

Pranam Kolari: Yeah, I think there will be a point in time when even those models will be considered simple in some sense, right? Again, it's really relative. But at this point in time, there's a saying in the Valley that logistic regression is a highly effective, powerful approach. So we continue to stick with that where we can, but we do move into other areas as the training data becomes richer.

Jason Sleight: Yeah, pulling on that thread of the training data becoming richer: one thing we're starting to see is that we had done some very sophisticated feature engineering for our linear models to get nonlinear interactions in. As we started to migrate those up to more complex models, like gradient boosted trees or neural nets, we had to redo some of that feature engineering, because it was custom-tailored. And it's like, well, we don't need that anymore; let's just give the model a different representation and a different encoding of things. It's been an interesting transition.

Aatish Nayak: Yeah. And Sebastian, I'm curious if you're seeing this similar trend, where you start simple and then get more complex in implementation as you grow.

Sebastian Barrios: Absolutely. One of our core values on the engineering team, and across the entire company, is to not only start simple, but to try to keep things simple. So as much as possible, we will take a trade-off, even with hopefully a small percentage difference, for a system that we can reason about more. As Jason said, maybe you have a deep learning model that is providing some really amazing results, but you cannot predict it, and it's something that will talk to your users. We've all seen the crazy things that happen when companies release systems that learn from Twitter and are free to roam around the world. So we maybe even take it a step further, where we're willing to make a trade-off for simplicity, and for really understanding what we have in our hands, rather than squeezing out that 0.00001% improvement.

Sebastian Barrios: There are cases, as Jason was mentioning, where more complex approaches are going to be orders of magnitude better than their old-school counterparts. Or maybe calling them old school is not fair, given that neural networks were proposed a really long time ago. But when the gain is orders of magnitude, it's worth it; when it's not orders of magnitude, we really take a look at whether we can keep it simple, and not just start simple.

Aatish Nayak: And this is a good transition, because ML in eCommerce, and in marketplaces particularly, is a classic example of a long-tail data problem: you have so many different products, so many different listings, all these different attributes for those products, and all different types of global users. So I'm curious, at that global scale, with the data you have, how do you know you're serving both sides of the marketplace effectively? What considerations are you taking into account for these global markets?

Sebastian Barrios: I think you're going to like my answer, because for a lot of those long-tail or very edge-case scenarios, we actually have humans in the loop. And I think that's an interesting topic to open up now. For example, to have a proper catalog of everything that we sell, or want to sell, or that our merchants want to sell, we obviously employ a lot of automatic algorithms. But there's also a human component when maybe two things are very alike, and our model cannot determine whether something fits here or there. We have a whole system and UI, and that's not going to be news to you, where our humans help our algorithms, both with training for the future, obviously, and with making those specific decisions where the models are failing.
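The human-in-the-loop pattern Sebastian describes can be sketched as confidence-based routing: confident model decisions are applied automatically, while ambiguous ones go to a human review queue and later become training examples. The threshold, data shapes, and function names are assumptions for illustration, not Mercado Libre's actual system.

```python
# Hypothetical sketch: route low-confidence catalog classifications to humans.
REVIEW_THRESHOLD = 0.85  # assumed confidence cutoff

def route_classification(item_id: str, category: str, confidence: float,
                         auto_accepted: list, review_queue: list) -> None:
    """Apply confident predictions; queue ambiguous ones for human review."""
    if confidence >= REVIEW_THRESHOLD:
        auto_accepted.append((item_id, category))
    else:
        # A human resolves this case; the resolved label doubles as a
        # hard training example for the next model version.
        review_queue.append((item_id, category, confidence))

auto, queue = [], []
route_classification("sku-1", "headphones", 0.97, auto, queue)  # auto-accepted
route_classification("sku-2", "earbuds", 0.55, auto, queue)     # sent to review
```

The design choice worth noting is that the review queue deliberately collects the model's hardest cases, which is exactly the data that moves the next model version the most.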

Aatish Nayak: Jason, Pranam, any similar observations from your organizations?

Pranam Kolari: I think it's fairly similar. At the extreme there are these cases where humans handle the exceptions, and that directly feeds into training data, because these are the difficult examples, right? That definitely is the case. Even more in the context of a global marketplace, the biggest challenge in retail and product discovery is cold start. It's a very, very critical thing. One might argue that cold start is less of a critical problem in the context of classic web search and discovery in other areas, but in retail it's a very, very critical problem. I don't think the industry has done a good job with it, and that's one area where we need to continue to do better.

Sebastian Barrios: That's very interesting, because it's an area where the physical stores do a great job: they have a bunch of experience with people just walking into a store without knowing specifically what they're looking for, and leaving happy. That's not easy to replicate when you have an infinite shelf and infinite inventory, and sometimes you don't even have information about your customer. It helps when you do, when it's a cold start from someone you know. But because we're all growing so much, and maybe the only positive side of the pandemic has been that extra driver of growth for eCommerce, it's a very, very hard problem to solve. All these new users, no data, no information: what goes on the first page?

Jason Sleight: One thing we're investing a lot in is trying to determine the user's intent as they're doing things, because it's not like they just make a single interaction and do nothing. They execute a bunch of queries, and how do we refine their query to get them down to what they actually want, as fast as we can? The idea is to make that long tail shorter, and push people into queries and user experiences where we've put a lot of effort into making things really seamless. Obviously, the biggest challenge there is how you scale it: you only have a finite engineering workforce, and a huge number of possible things you can optimize for.

Aatish Nayak: Absolutely. And just to give a little more context to this cold start, Sebastian, could you walk us through it? What do you mean by the cold start problem? What is a scenario where it actually comes up in the context of your business?

Sebastian Barrios: Yeah. It's actually fairly simple, and I'll start with the toughest case. So you have a user who heard of our website for the first time, maybe through an ad or a friend's recommendation. They come into our store, through our app or via the web, and we have to decide what we're going to show. What are the product recommendations going to be? We have some signals: what time of day is it? There are different profiles depending on whether you're shopping during the day or at night, on weekends or during the week. So we have some signals that we try to use. And actually, the way we solve this is we make a determination of which products are maybe not the ideal ones for you, but the ones that are going to give us the most signal from what you either decide to click on or don't click on, and then we start to build the more personalized experience from there.
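A rough sketch of the cold-start tactic Sebastian outlines: for a brand-new user with no history, pick a first slate chosen not for predicted relevance but for how much a click or no-click tells you. Here "signal" is crudely approximated by spanning distinct categories, most popular first; the field names and heuristic are hypothetical, not Mercado Libre's algorithm.

```python
# Hypothetical cold-start slate: cover distinct categories so each click
# (or non-click) narrows down the new user's interests as much as possible.

def cold_start_slate(catalog: list[dict], k: int) -> list[str]:
    """Pick k items spanning distinct categories, most popular first."""
    slate, seen_categories = [], set()
    for item in sorted(catalog, key=lambda i: i["popularity"], reverse=True):
        if item["category"] not in seen_categories:
            slate.append(item["id"])
            seen_categories.add(item["category"])
        if len(slate) == k:
            break
    return slate

catalog = [
    {"id": "p1", "category": "electronics", "popularity": 90},
    {"id": "p2", "category": "electronics", "popularity": 80},  # skipped: category seen
    {"id": "p3", "category": "home", "popularity": 70},
    {"id": "p4", "category": "fashion", "popularity": 60},
]
slate = cold_start_slate(catalog, k=3)  # one item per category: p1, p3, p4
```

In a real system this exploration step would be blended with contextual signals (time of day, device, locale) and the responses fed back into the personalization model.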

Aatish Nayak: Great. These problems seem pretty real. I wanted to touch on what Sebastian mentioned around how COVID has really accelerated eCommerce. Can you talk a little bit about how COVID either accelerated or decelerated technology adoption in eCommerce, particularly in AI?

Pranam Kolari: Yeah, maybe I can go first. Again, viewing this as two pieces, discovery and fulfillment: from a discovery angle, I would definitely say it's reaffirmed the value of AI and ML, as traffic to these websites has increased and online experiences have grown. And of course, training data has increased, just from the increase in traffic. From a fulfillment standpoint, we all go back to the classic image of everyone running out of paper towels; that's the classic thing people associate with the early part of COVID. On the fulfillment side, I would say it has definitely emphasized the adoption of data across the supply chain network, the adoption of that feedback loop, and in some sense ML is part of that. We're definitely seeing more robotics in the supply chain across the industry as well. So in general, I would say it has reaffirmed the value of ML in the discovery context, and it has really catalyzed the use of data, ML, and robotics in the supply chain.

Aatish Nayak: Yeah. I imagine there's a pretty big user discovery problem there. One of the classic examples is that if you searched "face mask" prior to COVID, you'd get some beauty or makeup mask, but afterwards you'd need to rank actual face masks higher in your algorithms, right? So are there any instances like that, where your algorithms are like, what does this mean? How do you handle this type of user behavior?

Jason Sleight: I think the big thing that comes up there is just making sure that you're retraining your models frequently and pulling in new data, so that you stay up to date with the latest trends. Your example of searching for face masks is exactly right: the query recommendation for face mask before was going to be some beauty product, and now it's not. If you didn't retrain your ML model rapidly, or on a regular basis, you were never going to pick that up. And there are a lot of those kinds of trends. Certainly at Yelp, things like going to restaurants and making reservations went to near zero last April when the entire country was in lockdown, and things like delivery and takeout went to 10x what they were before, some ridiculous number. So there were a lot of big shifts, and of course that changes the kinds of things you want to show. Your ML models can tolerate this to some extent, but you really need to be retraining frequently, and have that pipeline set up to make it work efficiently on a regular basis.
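The regular-retraining idea Jason stresses is often implemented as a rolling training window: each scheduled retrain only sees recent events, so a sudden shift in what "face mask" means shows up in the next model. This is a minimal sketch under assumed data shapes; the 90-day window and field names are illustrative, not Yelp's pipeline.

```python
# Rolling-window selection for a scheduled retrain: keep only recent events
# so the model tracks shifts in query intent (e.g. "face mask" in 2020).
from datetime import date, timedelta

def training_window(events: list[dict], today: date, days: int = 90) -> list[dict]:
    """Keep only events inside the rolling window used for the next retrain."""
    cutoff = today - timedelta(days=days)
    return [e for e in events if e["day"] >= cutoff]

events = [
    {"query": "face mask", "clicked": "beauty", "day": date(2019, 11, 1)},  # dropped
    {"query": "face mask", "clicked": "ppe", "day": date(2020, 4, 1)},
    {"query": "face mask", "clicked": "ppe", "day": date(2020, 4, 20)},
]
recent = training_window(events, today=date(2020, 5, 1))
```

Run daily or weekly, this windowing means the pre-shift clicks age out automatically; the trade-off, which Jason raises later, is deciding how to treat anomalous periods that fall inside the window.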

Aatish Nayak: Sebastian, do you see similar trends in terms of rapid shifts in user behavior?

Sebastian Barrios: Yeah. And luckily, in Spanish the term is different for the beauty product and the covering for preventing transmission, so that specific case we did not see. But I would say the shift was so sudden and so quick that if you were not doing things properly before the traffic started flowing in, it would have been hard. There are teams that can move very quickly, but if you're already a big organization where it takes time to get things changed, it could really hurt. Luckily for us, we had been working on our logistics network for quite a few years, and we were prepared to deal with the surge without having to modify the operations or the technology stack in crazy ways. So I'm pretty proud of what we accomplished. And to Jason's point: if you weren't already retraining your models quickly enough by the time the traffic started flowing in, it was too late to make the change. Then again, the pandemic lasted so long that I'm sure everyone got their act together. But for us at least, it was definitely a competitive advantage to have been ready for such a shift, thanks to our execution in previous years.

Jason Sleight: I think an interesting challenge COVID has brought about, now that things are starting to return to their former baselines, is how we deal with that period of super anomalous data. For a bunch of our experiment reads we just explicitly ignored it: this data is garbage, it means nothing, throw it away, it doesn't matter. But we're having a lot of problems where we're trying to look at our historical data and ask, "Well, when was it acceptable? And how weird is it to use data from nine months ago to train our ML models now and capture longer-term trends, when it was such an extreme event?"

Pranam Kolari: Yeah. The other thing along the same lines I'd like to add: Aatish, you did bring up this thing about expanding to different geographies, countries, languages, and so on. There are certain countries, like the US, which are highly multilingual, with multiple languages being spoken. So as part of COVID, if you think of retail in an omnichannel way, folks who were walking into the store speaking a different language are now using the online experience more. So you see some of these effects: you'll see more Spanish queries, potentially, in the same interface, and so on. The simple short-term answer is primarily for the models to be more sensitive, for your features to be language-independent where they can be, and for your signals to be more sensitive to more recent data. But we saw a bunch of this as well, as part of the macro effect.

Aatish Nayak: How does that affect the actual technology in these global and multilingual markets? Do you start with multilingual BERT for query understanding? How does that go down to the granular, building-block level?

Pranam Kolari: Yeah, maybe I'll briefly touch on it. It really depends: for certain areas and certain systems we do that, and for a few other systems we don't. It really depends, and we take it case by case. That would be my take.

Sebastian Barrios: I think it also depends on how different the languages are. We're also lucky that our main markets are Portuguese-speaking and Spanish-speaking; not the same language, obviously, but there are enough similarities that we can reuse a lot of our work, while other things are built specifically for the different languages.

Aatish Nayak: Shifting gears a little bit to what you, Sebastian, mentioned earlier about building things the right way: one of the common trends and problems we see once an organization hits a critical mass of AI in production is, how do you actually ensure that data management, annotation, model deployment, and productionization are standardized, and take advantage of economies of scale? A typical path we see is moving from decentralized ML and DS teams to really centralizing the common building blocks into a cohesive ML Ops platform. I'm curious how your own experiences match or differ from that?

Sebastian Barrios: That describes very clearly what we're going through. We have a very robust internal platform called Fury, which basically acts as a platform as a service. Even more than that, the day-to-day interaction of 90% of our development team happens within our own platform and our own tools, which are then connected to a bunch of services on AWS and GCP; we rely a lot on the cloud. So we had that model for deploying code, for web serving or for batch processing. Then, when we started with machine learning, it was all very distributed: everyone had their own notebooks and their own different ways of getting models into production. So we decided to extend that platform into Fury Data Apps, basically our end-to-end ML Ops pipeline. It goes from getting the data very easily, for both notebooks and the final production training and delivery systems, through the entire pipeline of our data teams, helping them along the way at every step, even in the monitoring of the models. So we go from designing in the notebook, to putting it into production, to how you're monitoring it, and obviously not just the web metrics, CPU utilization, memory, and whatnot, but the actual performance of the model, and how to report that back to the teams. It's a fun challenge that we're going through right now.

Jason Sleight: I think that's going to be a fairly universal story at this point. If you're doing AI/ML kinds of things, you have to have an ML platform at your company now; there's really no reason you shouldn't. If you're at a large company with lots and lots of ML developers, then maybe you'll build something in-house. Otherwise, there are lots of solutions you can just go buy on AWS or GCP. We're using MLflow and Spark-based things, and the open-source technologies and the paid products are really good now. ML Ops is a vital part of doing AI at an industry level, and you have to do it.

Pranam Kolari: Yeah, same here. We've gone through those phases of the lifecycle in terms of how ML development works end to end, starting with model training all the way to monitoring systems. Very similar: at Walmart's scale, we have something internal as well to make it work.

Aatish Nayak: Yeah. And as leaders working on these platforms, how do you get a high-level measure of success, or of productivity improvements, from centralizing or building really robust ML Ops systems?

Jason Sleight: I think at the very beginning, when you go from not having an ML platform to having one, it's super easy: without an ML platform, it takes months to build a model, and with one it's like, "Oh, I added a new feature to my model, and it took me a day." It's black and white. Clearly you're saving lots and lots of developer time, measured in tens of thousands or hundreds of thousands of dollars, easily. Once you get beyond that, I think you start measuring things in terms of how frequently you're pushing code. How frequently are you pushing new models? How frequently do you have to roll them back? What are the business KPIs you're improving with those things? Are you targeting the right users and building the right models? Are you using your AI experts efficiently to solve the right problems? Things like that.

Pranam Kolari: I think it's similar in spirit, as with classic engineering and how it's evolved around code updates generally: you might start with monitoring the systems, and then figure out how to accelerate the code updates, release management, release processes, and so on. ML is still fairly early from that perspective; similarly, you monitor your way to updating the model. But with ML, the main difference is that it has a very, very strong experimentation loop: before the updates, what does experimentation look like? So experimentation becomes a first-order citizen in this primary loop with ML, I would say. And the primary way we evaluate how well our ML Ops and our ML way of working functions is around: how well are we experimenting? How many experiments are we running? Are they successful? How well are we learning from these experiments? That's a fairly big KPI in terms of how we evaluate how well our ML way of operating works.

Aatish Nayak: When you say running experiments, are you talking about running A/B experiments on traffic, or offline-evaluation kinds of experiments?

Pranam Kolari: Yeah. So there are two forms of experiments: online experiments, testing in the different forms of online experimentation, and offline experiments, right? We call those the inner loop of experimentation.

Jason Sleight: That's one thing I'm actively thinking about: how much do we need to do online A/B experiments, especially for problems that are really well established, where we know that the offline log loss is strongly correlated with the business KPI we want to move? Do we need to A/B test that? Or can we just see that the log loss is good enough, and deploy it like a code deploy kind of thing?
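For reference, the offline metric Jason keeps invoking can be computed directly from its definition; this is a sketch of binary log loss from scratch (no ML library assumed), with illustrative labels and predictions.

```python
# Binary log loss (mean negative log-likelihood): the offline metric whose
# improvement, for well-understood problems, may stand in for an A/B test.
import math

def log_loss(y_true: list[int], y_pred: list[float]) -> float:
    """Mean negative log-likelihood of binary labels under predicted probabilities."""
    eps = 1e-15  # clip probabilities to avoid log(0)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# A candidate model that is better calibrated on the same labels scores lower.
baseline = log_loss([1, 0, 1, 1], [0.7, 0.4, 0.6, 0.8])
candidate = log_loss([1, 0, 1, 1], [0.8, 0.2, 0.7, 0.9])
```

The "skip the A/B test" argument only holds once the historical correlation between this offline number and the business KPI has itself been validated, which is the transfer function Pranam mentions next.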

Pranam Kolari: Yeah, as long as a strong transfer function can be established, and that function stays well aligned, I think it would work.

Aatish Nayak: Going back to one of the earlier comments around humans in the loop and high-quality data: just for the audience, could you explain, from your perspective, why high-quality data catalogs are so important, particularly in these marketplace businesses?

Sebastian Barrios: Yeah. It's an interesting question, because my first reaction is that it seems so obvious, right? It's what we offer our users. And obviously the Walmart case is very similar to ours, and I can imagine it is for Jason as well, with the catalog of restaurants. Basically, it's where everything starts. So if you don't have good quality there, it's like a restaurant with bad food.

Jason Sleight: Yeah. It's to make sure you're optimizing for the right thing, right? There are a bunch of little, seemingly trivial things you have to think about. Like, the Google bot comes in and runs a bunch of searches on us, and you have to make sure you filter those out, because we don't care about optimizing search for the Google bot; we care about optimizing search for actual consumers. If you don't have good data, then you overlook all of these little things, and you end up optimizing for the wrong thing. You deploy your model and it's like, "Hey, why are we losing a million dollars?" Oh, because we weren't optimizing the right thing; time to roll back the model and fix the data quality and bias issues.

Pranam Kolari: Yeah, and as I've said, catalog quality is supercritical, and broadly I think it needs to improve overall. Right now, the way consumers interact with products is dominated by this textual modality, which is somewhat analogous to walking into a physical store, but only somewhat. It's a very wide-open space: how should catalog descriptions of items change, both for the existing modalities, where text is very prominent, and for future modalities? I think that's a very critical area.

Aatish Nayak: Yes. When you mention modalities, there's a lot of text content: product reviews, customer reviews, descriptions, titles, and so on. Are you also talking about how to combine images, like images of products and images of content, with that textual modality and the insights you get?

Pranam Kolari: That's right. The way customers shop will continue to change and evolve significantly going forward, so it's primarily about those other ways of describing the product. Right now, it's really textual with a set of structured attributes, plus images; that's how it's set up today. I think it will evolve significantly going forward.

Aatish Nayak: Yeah. And we're seeing multimodal AI graduating from the research labs of OpenAI and others and starting to be used in production, with CLIP and so on, so that's interesting to hear. To close out, I'd just like to get your opinions: over the next five years, where are you most excited to see AI's potential impact, outside of eCommerce and marketplaces, of course?

Jason Sleight: Speaking holistically, I'd say healthcare and transportation are probably the areas I would pick. There's a huge runway and obviously huge potential to impact and improve human lives by making AI-driven healthcare and AI-driven transportation real.

Sebastian Barrios: Yeah. I would even tell you that for us, without being as altruistic, transportation is a huge one. We usually think of self-driving cars for ride-hailing, which is actually part of my background, but most transportation is actually of things. There are a ton of improvements to be made, both in quality of life for drivers and in the economics of how shipping stuff around the world works. It will be revolutionary for sure.

Pranam Kolari: Yeah. Short term to medium term, just to echo Sebastian. Maybe towards the late medium term, discovery has significant areas where discovery of products can be improved. But broadly, long term, healthcare is a wide-open area, right? When used the right way, I think it will really do magic for humanity. Then I can rest.

Jason Sleight: Especially increasing accessibility for underserved areas that don't have access to healthcare experts. Being able to deploy AI as a triage for things is game-changing.

Pranam Kolari: Right. Yeah, I totally agree.

Aatish Nayak: What is the hardest challenge you're working on right now?

Sebastian Barrios: An interesting one for us. Discovery is definitely one. The other is actually a very classical problem, but it gets hard at the granularity we need: forecasting. Demand forecasting, supply forecasting, how you make them match up, how you move inventory from one fulfillment center to another, what you should swap around. A typical Traveling Salesman Problem combined with predicting what people are going to buy.
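The matching step Sebastian describes, deciding what to move between fulfillment centers once you have a demand forecast, can be sketched greedily. Real systems solve this as an optimization problem; all the numbers and center names below are invented:

```python
# Hypothetical sketch: given a toy demand forecast per fulfillment center and
# current stock, greedily ship surplus units to the centers with the biggest
# shortfalls. Production systems solve this as a transport optimization.
forecast = {"FC-A": 120, "FC-B": 40, "FC-C": 80}   # predicted units sold
stock = {"FC-A": 60, "FC-B": 100, "FC-C": 80}      # units on hand

def rebalance(forecast, stock):
    """Return a list of (from_center, to_center, quantity) moves."""
    surplus = {fc: stock[fc] - forecast[fc] for fc in stock}
    donors = sorted((fc for fc in surplus if surplus[fc] > 0),
                    key=lambda fc: -surplus[fc])
    takers = sorted((fc for fc in surplus if surplus[fc] < 0),
                    key=lambda fc: surplus[fc])
    moves = []
    for taker in takers:
        for donor in donors:
            if surplus[taker] == 0:
                break
            qty = min(surplus[donor], -surplus[taker])
            if qty > 0:
                moves.append((donor, taker, qty))
                surplus[donor] -= qty
                surplus[taker] += qty
    return moves

print(rebalance(forecast, stock))  # [('FC-B', 'FC-A', 60)]
```

The greedy pass ignores shipping costs and lead times; adding those turns it into exactly the forecasting-plus-routing problem described above.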

Jason Sleight: It's building up shareable features for all of our models. We have hundreds, if not thousands, of ML models running in production for all kinds of little things, and there's probably a lot of benefit in sharing huge classes of features among the different models that have been built up independently over the last decade. Cross-pollinating those ideas globally to benefit everything ends up being very challenging: you have to get all the data connected in the right way so that it's accessible from all the different real-time services.
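The shared-feature idea can be sketched as a minimal in-memory feature registry, so independently built models look features up by name instead of reimplementing them. Everything here, the registry, the decorator, and the example features, is invented for illustration:

```python
# Hypothetical sketch: a tiny feature registry so independently built models
# can reuse the same feature definitions instead of recomputing them.
# All names and example features are invented for illustration.
from typing import Callable, Dict

FEATURES: Dict[str, Callable] = {}

def register_feature(name):
    """Decorator that publishes a feature computation for any model to use."""
    def wrap(fn):
        FEATURES[name] = fn
        return fn
    return wrap

@register_feature("review_count")
def review_count(business):
    return len(business.get("reviews", []))

@register_feature("avg_rating")
def avg_rating(business):
    reviews = business.get("reviews", [])
    return sum(r["stars"] for r in reviews) / len(reviews) if reviews else 0.0

def feature_vector(business, names):
    """Any model, batch or real-time, assembles its inputs by feature name."""
    return [FEATURES[n](business) for n in names]

biz = {"reviews": [{"stars": 4}, {"stars": 5}]}
print(feature_vector(biz, ["review_count", "avg_rating"]))  # [2, 4.5]
```

A production feature store adds versioning, offline/online parity, and low-latency serving on top of this lookup-by-name core.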

Pranam Kolari: Yeah. Similar themes around feedback loops; at the extreme, one problem is cold start. More broadly, there's fairness, and other areas that come in as well, right? Those are open areas where we need to significantly increase our focus. And we will.

Aatish Nayak: Thanks so much for joining us here today to hear from Sebastian, Jason and Pranam. I'm Aatish Nayak, and this is Scale Converge, have a great rest of the day.

