Mike Harasimowicz is the Director for Artificial Intelligence Applied Research at Lockheed Martin’s AI Center, leading Cognitive Application Development and the Ethical Use of AI. Prior to his role at Lockheed, Mike served in the Department of Defense’s Joint Artificial Intelligence Center in AI Product Development and was part of the initial cadre of the JAIC’s Responsible AI Champions. Additionally, Mike was the Managing Director of Data Analytics and AI Development in the Intelligent Solutions Group at J.P. Morgan Chase. Mike retired from the U.S. Air Force in 2015 as Wing Commander of the 688th Cyberspace Wing after 25 years of innovating new cybersecurity, intelligence, and warfighting tactics and technologies.
Ms. Rachael Martin is currently the NGA Lead for Artificial Intelligence, Automation, and Augmentation (AAA) at the National Geospatial-Intelligence Agency (NGA). As the NGA Lead for AAA, Ms. Martin is the Agency’s chief proponent for implementation of AAA activities – a leading force for NGA and the greater GEOINT community on the path to AI and automation. AAA works to accelerate the speed at which NGA provides insight, to refine the precision of GEOINT assessments, and to enhance enterprise capability to meet the Director’s Intent. Ms. Martin’s efforts drive NGA’s AAA framework in coordination and alignment with the Department of Defense (DoD) and the Office of the Director of National Intelligence (ODNI). Ms. Martin’s past assignments reflect her strong belief in the power of data, advanced analytics, and AI to transform the intelligence enterprise. Her many years of diverse assignments, including mission management, program management, and oversight of all-source intelligence functions, strengthened her analytical knowledge to influence DoD and Intelligence Community (IC) Artificial Intelligence, Automation, and Augmentation applications to benefit the warfighter. Prior to her appointment as a Defense Intelligence Senior Leader (DISL) in January 2021, Ms. Martin managed the Business Process Transformation (BPT) Mission Initiative for the Joint Artificial Intelligence Center (JAIC), under the DoD’s Chief Information Officer (OSD-CIO). The BPT Mission seeks to transform DoD business processes through the application of AI and automation technologies. As BPT Mission Manager, Ms. Martin managed a range of activities related to AI capability development in support of DoD and Service functional business areas. In her previous position, Ms. Martin was the Program Director for the Integrated Maritime Domain Awareness (iMDA) Program at the Office of Naval Intelligence (ONI).
The iMDA office supports ONI’s intelligence efforts in fusion and analysis, collection management, network integration, and outreach. It is also the home of ONI’s Advanced Analytics Office (A2), which uses maritime activity-based intelligence (MABI) and innovative data analytics to integrate multiple intelligence disciplines in support of USN intelligence requirements. Earlier tours of duty include managing the Middle East and Africa Naval Analysis branch and serving as the Director of Advanced Analytics within the Nimitz Operational Intelligence Center. Ms. Martin also served as an intelligence liaison and embedded analytic support to the Intelligence Directorate (N2) at CNE-CNA-C6F in Naples, Italy. She was the senior analyst on Counter Narcotics and Maritime Security issues, as well as the team lead for Naval Forces Africa Intelligence Engagements. She began her career as a Latin America Counter Narcotics analyst, where she deployed worldwide in support of Drug Enforcement Administration operations. In her time at ONI, Ms. Martin has been fortunate to receive a DoD Meritorious Civilian Service Award, an ODNI Intelligence Community Award, and a Navy Group Meritorious Service Award. She was previously a Certified All-Source Defense Analyst (CDASA-1), and has received her Intelligence Community Advanced Analyst Program Certification (ICAAP). Ms. Martin graduated with honors from Johns Hopkins with an M.S. in Government Analytics. She also has an M.A. in National Security Studies from American Military University and completed her B.A. at the University of Pennsylvania, with distinction in International Relations.
Scale’s Head of Federal, Mark Valentine, will explore AI and ML applications for the DoD and the IC with Lockheed Martin’s Mike Harasimowicz and Rachael Martin from the National Geospatial-Intelligence Agency. The panel will discuss the problem sets the Government seeks to use AI/ML to solve, review the current state of government use of AI/ML, set a vision for the future, and examine the path to achieve it.
Mark Valentine (00:00): All right, good morning, everyone. Or good afternoon, depending on where you are, and welcome to TransformX. I'm Mark Valentine, I'm the head of Federal here at Scale AI. And I am super excited to talk to you today and do a deep dive into two of my favorite topics, and that's national security and artificial intelligence. So to help us through this conversation, we've got two experts who are going through this journey as we speak, and I'd like to introduce them right now and get started. So first we have Ms. Rachael Martin. She is a Defense Intelligence Senior Leader, and is currently the lead for the National Geospatial-Intelligence Agency's AI, Automation, and Augmentation practice, also known as AAA. In this role, she leads a large, diverse team to accelerate the speed at which NGA can provide insights, refine the precision of geospatial intelligence, and enhance the entire enterprise's ability to accomplish the mission.
Mark Valentine (01:28): Prior to this role, she led data-centric initiatives across multiple programs and initiatives at NGA and in the Office of Naval Intelligence. She has a B.A. in International Relations with distinction from the University of Pennsylvania, an M.A. in National Security Studies from American Military University, and a Master of Science in Government Analytics with honors from Johns Hopkins. So Rachael, welcome.
Rachael Martin (01:50): Thank you. Glad to be here.
Mark Valentine (01:53): Excellent. We also have Mr. Mike Harasimowicz, whom I know as Rabbi from our service together in the US Air Force. Mike is currently the Director for AI Applied Research at Lockheed Martin's AI Center. In that role, he leads Cognitive Application Development and the ethical use of AI. Prior to this role, he led product development as a member of the initial cadre of the Department of Defense's Joint Artificial Intelligence Center, which most of us here know as the JAIC. Additionally, he was the Managing Director of Data Analytics and AI Development at JP Morgan Chase. Where Mike and I knew each other best, however, was 25 years of service in the US Air Force, where he was an intelligence officer and commander and a cyber professional. So Mike, thanks so much for your time.
Mike Harasimowicz (02:39): Thanks, [inaudible 00:02:41]. It's good to see you again. [crosstalk 00:02:43]
Mark Valentine (02:46): All right. So to get us started, I'd really like to get your thoughts on this. I cannot pick up a newspaper today or read an article online about the Department of Defense or the Intelligence Community without also seeing the words "artificial intelligence" or "machine learning" attached to it. So clearly the DOD and the Intelligence Community are interested in building and integrating artificial intelligence and machine learning, but to what end? Why is AI/ML so important to the DOD and IC community?
Mike Harasimowicz (03:21): So Drifter, it's a great point. Our national security challenges are really getting to the point where they're beyond human scale. And I say that because the amount of data that is flowing in and around all of our domains is just insurmountable for a commander alone. And the level of support required to manage that at the tactical and at the strategic level really drives us to need a level of assistance. Artificial intelligence can provide that backdrop for us to fully understand, or better understand, context and really support us in that decision-making. I see this as a fundamental move for us to say, "Where we are going, we can't get there alone." And artificial intelligence serves as that critical step toward the future and managing modern warfare.
Mark Valentine (04:20): Yeah. Got it. That's great. So, Mike, are there any examples of specific problems that you see DOD or the intelligence community leaders trying to solve with AI/ML? When I look across the commercial space, I see this technology being used to target ads and things like that. So can you give us an example of perhaps what some of the DOD and IC leaders are looking at, some of the problems they're trying to solve?
Mike Harasimowicz (04:48): Absolutely. And I'm going to start deliberately at the more digestible concepts that everybody can wrap their mind around. For training, here's a chance for us to look at augmenting our training through, let's just call it, enhanced memory. I'm not going to forget the tactics that I used, I'm not going to forget how we're doing different courses of action. Assistance like that, and artificial intelligence can provide that backdrop of better understanding of context and better understanding of what decisions are available to us.
Mike Harasimowicz (05:18): I'm not saying that an artificial intelligence agent is going to come up with the Inchon Landing. That level of understanding of context and timing, those are uniquely human moments, but artificial intelligence can provide that wrapper for better contextualization. So training is an easy one for us to say, "I understand what's happening." And an intelligent agent can actually expose new ways of looking at an old problem and allow someone to have their eyes open to new possibilities. For instance, the Alpha Dogfight Trials, which you and I were involved in, this LinkedIn conversation of who's going to win, an intelligent agent or an F-16 fighter pilot, and [crosstalk 00:06:03]
Mark Valentine (06:03): [crosstalk 00:06:03] trained, by the way.
Mike Harasimowicz (06:06): Say it again.
Mark Valentine (06:06): The F-16 pilot that I have trained, by the way.
Mike Harasimowicz (06:09): Yes. So the connection was strangely coincidental, but we're watching this, and the best part about it was the commentary during the whole thing, where a pilot said, "Oh, I never thought of trying that. I'll try that during my next sortie." So here the agent is showing new ways of looking at old problems and opening up doors for new ways of training. I love that, and that's an easy way to adopt and understand and build trust. But from that you can go on: the military is spending time with AI in healthcare and medical health, or military health, excuse me. There's a level of understanding of how we can respond to COVID. There were AI applications that the military was involved in.
Mike Harasimowicz (06:51): Also, classifying malignant cells. The Department of Defense has years' worth of data that it can exploit, learn from, share with the military, but also share with the entire population of the world. And you can see it improve humanity. Those are easy to wrap our minds around: why not pursue that? We can talk about communications and logistics and security; AI can help with those as well. And then you get to the warfighting functions, for which of course there are AI applications. I don't say that to put any fear into people, but there's an essential nature to it. HyperWar requires hyper decision-making, and you need AI to augment that.
Mark Valentine (07:36): Excellent. Yeah. Thanks, Mike. Appreciate it. So, Rachael, welcome back. Sorry, we had some technical difficulties. What we were just expounding upon with the initial line of questions is that in the commercial world, it seems like AI and ML are being used to solve relatively utilitarian problems, targeting ads, things like that. What we're exploring right now is what specific problem areas our DOD and Intelligence Community leaders are actually trying to solve with AI and ML. So I'd like to get your insights on that.
Rachael Martin (08:10): For the warfighter, and I did catch the end of Rabbi's comments, I think that the challenge now is that the new warfighting domain is time. And when you're trying to act at machine speed, you need to be able to have trust and confidence that what the machine is telling you is correct. And we know that our adversaries are investing in those capabilities, and so we need to be prepared to match them in that area. So, particularly for an agency like NGA that very much relies on overhead imagery and data to feed decision-making, it's really important that we do everything we can to improve that workflow, that process, that gets data from where we get it to the warfighter. And we do think that AI and automation is really the best way of accomplishing that.
Mark Valentine (08:57): Excellent. I really liked the way you put it, that time is the critical variable in this new war fight. It hearkens back to the discussions we used to have about John Boyd [inaudible 00:09:15]. So I can instantly understand that. So thank you. Let's move on a little bit. Based on what you all have seen across the different disciplines in which you deal, are there any areas in AI and ML, in your mission space, that show more promise today than others? Or conversely, are there any areas that you think need more time to mature? And Rachael, let's start with you.
Rachael Martin (09:39): Sure. Areas that are showing promise, and I don't like to say bad things about the department, but I think we're getting to automation in a lot of places. And I think that is setting some groundwork for us to get to AI eventually. And so I think that our ability to be better at automating, to build some foundational infrastructure capabilities, to move towards a service model as opposed to older methods of delivering data or software, these are all areas, I think, where DOD is making some really great strides in improving how it does business. An area that needs more time to mature is the exploitation of unstructured data, which I think will probably continue to be a challenge for a while. We have enough issues managing structured data that we get in quite large volumes.
Rachael Martin (10:42): And I think there's a lot of hidden information in ... not hidden, but there's a lot of embedded knowledge, particularly in the intelligence community, that needs to be extracted from old flat text files or reports that may have gone out on outdated message systems. And so all that information is, I think, to some extent not accessible right now to a machine or to any kind of model that you might want to build. It takes quite a bit of work, a lot of data work, to get to where you actually have a usable example. And so I think that's an area where we can continue to invest in improving our foundational ability to manage and understand the data that we have.
Mark Valentine (11:29): Excellent. And I've heard many people say data's the new oil, and it sounds like you're definitely confirming that. Excellent. Thank you. So Rabbi, any areas in your domains that you think are showing a lot of promise today, or any areas that you think need more time to mature?
Mike Harasimowicz (11:44): Well, I want to give a shout-out to Rachael, because I watched her at the JAIC really take automation to a new level and really look at the Pentagon as a chance to refine processes. There are ways to cut through red tape and exchange data and move information and make decisions more effective by just leveraging that super level of automation. That's an entry-level start to where we want to go, but it felt like low-hanging fruit for us: let's tackle something and give our commanders the chance to say, "Hey, we're doing things differently now." Do you feel it? Do you trust it? And then let's move forward from it. Some of the things that we're spending time with is deterministic AI, so it's actually driving towards a data model.
Mike Harasimowicz (12:30): And I think delivering that level of understanding creates a question of how much explainability we can extract from there; you're building trust as you're fielding and putting those capabilities into the hands of our commanders. And then all of a sudden you can move into areas where it's not deterministic, where we're dealing with uncertainty and deception in the battlespace, a lot of concealment behavior that's meant to confuse humans as consumers of this information that's flowing, but also to confuse artificial intelligence agents. There's a level there where we're slowly moving to that level of understanding of what's in the realm of possible, fully understanding what the risks are, and then working in mitigation strategies to make it more effective.
Mike Harasimowicz (13:17): So that timeline, that journey that we're on: start with the data and make sure that it's healthy and refined to be used for visualizations or automation. And then let's get into the artificial intelligence world, where we actually can extract large volumes of decision-quality information. That really is something that we're still reaching for.
Rachael Martin (13:40): I really liked Rabbi's point about the process transformation. Obviously, I think everyone would agree, DOD business areas are ripe for that, and they are aggressively investing in that kind of capability. But I think that writ large, one of the things that we'll have to watch out for is this idea that you can just sprinkle some AI on something and it's going to work better. In fact, that's not how it works; usually, it works worse when you do that. What we're looking at really is a re-imagining and fundamental re-engineering of how we conduct business in a range of areas, whether it's intel or something as basic as accounting for DOD. And so you can't just try and drop some technology on a problem area and think it's going to work. You really need to examine the problem from a really end-to-end perspective.
Mark Valentine (14:30): Yeah. Rachael, I find this conversation fascinating because you describe it as a process, and I think you're exactly right, because the more time I spend in the AI space, the more I realize that many people think of AI as a boxed software product: "Hey, I just need to have these AIs and all my problems are solved." But I think you're exactly right. When I think back to my days in the military, I don't think anyone ever expected to send a soldier, sailor, airman, Marine, guardian, or intelligence professional to basic training and then never train them again.
Mark Valentine (15:06): We always recognized that this was going to be a recurring process throughout their career to continue training them for whatever the next mission might be. So I definitely think you're onto something there. So let's move from talking about the what and the why, and look at where we are currently. So Rabbi, if you can give me a sense of where you think the Department of Defense, the intelligence community, the national security community writ large, what is the current state of technology or technology adoption in those areas today for AI and ML?
Mike Harasimowicz (15:40): So the greatest part about the advances currently is that the availability of compute gives us an opportunity to put that data somewhere and to process it in ways where you can afford to be wrong. You can afford to really exercise that scientific method of seeing what this data really wants to tell me. So that ability, that access to high-performance compute, is really, really fantastic. You marry that up with data that we are in the process of refining, no easy task, I never want to under-emphasize how hard that can be, but you marry that up and you give us a chance to really do something remarkable. Those two things have given us the honeymoon phase. And I think we've quickly stepped out of that into the realities that these are hard problems.
Mike Harasimowicz (16:29): These take a lot of effort. And what I like about it is that it's a balance between subject matter expertise, people in the field that are doing the mission, they understand the mission, whether it be electronic warfare or undersea warfare, and you marry that up with the technical expertise of our AI/ML practitioners that are actually matching problem to solution and optimizing that. I have to reference the Einstein quote: "If I have an hour to do problem solving, 55 minutes should be on understanding the problem, and then five minutes on the solution." Now, Drifter, you know that we're in an incredibly impatient culture, so we want results fast. And as was brought up, sprinkling AI, that magic, just make it happen, it's not that simple. It takes time and it takes expertise. And it takes a combination of those two elements coming together to really bring solutions to the forefront.
Mark Valentine (17:31): Yeah. Thanks. Rachael, any thoughts on what you're seeing across the Intelligence Community or the Department of Defense and the current state of technology adoption?
Rachael Martin (17:40): Yeah. So I guess rather than focus on what we actually have from a capability perspective, what I'd like to focus on, and what is very, very heartening, is the time investment and the leadership investment in really being better at bringing AI into the mission space. Whether it's the IC or DOD, I think that the leadership of those communities really understands the importance of AI to our mission and to the future from a national security perspective. And so seeing that willingness to be part of the solution and to drive innovation has been really fantastic. I think everybody understands that AI is important and why it's important, and that kind of consensus is making it a lot easier to try and drive innovation and change more broadly within the community.
Mark Valentine (18:46): Oh, excellent. That's great. So, is there anything that you all have seen leaders within the department or the different agencies of the federal government doing that is helping speed or accelerate the adoption? Rachael, you mentioned a few things, like the focus on the mission. Any other things, in your view, that our department leaders are doing well to help us accelerate this adoption?
Rachael Martin (19:13): Yeah, actually, one thing, and this is completely unrelated to technology but so important from a DOD perspective: I think some of the best things they've done is untie our hands in terms of being able to acquire and integrate technology faster than we have in the past. It's definitely an area where I'm happy to continue to see more untying of hands, but things like the software pilot that DOD has, which allows us to be more flexible in the way we use funding to develop software and integrate it into our capabilities. I think those kinds of things that remove some of the burden of bureaucracy are almost as important as the technology itself, because it doesn't matter how great commercial tech is: if it literally takes us five years to bring it on board, it's no longer the greatest commercial tech anymore. So I think it's a precondition to ever achieving any kind of technology dominance, particularly from an AI perspective.
Mark Valentine (20:11): Once again, we're introducing the variable of time in the equation. You're right. Rabbi, anything from your specific viewpoint?
Mike Harasimowicz (20:19): I would love to highlight Deputy Secretary of Defense Hicks and her announcement about AI readiness teams, or data readiness teams, to go out to the COCOMs to investigate and evaluate where there are proper candidates for AI. I think that level of expertise ... the folks that I worked with at the JAIC will laugh at this because I said it way too many times there: call before you dig. Before you dig into your data, get some expertise to ask, what are the best ways to look at this? That kind of understanding will really set the tone and expectation for our commanders and say, "Hey, you are six months out from an AI capability that's really going to help you make decisions," or, "We've looked at your data and you're two years out, and we're really going to have to retool how you're doing business."
Mike Harasimowicz (21:07): That alone, in and of itself, is going to get people more comfortable. You're going to spread around some expertise, and you're going to learn as you go. That is a huge accelerator. And then there's also the fact that she has developed the Raider concept, which is going to actually hold back money and say, "Hey, there are some things that we want to get done, and I'm going to incentivize you with some extra budget to get moving forward." So you really have a technical approach sharing the expertise, and you have a monetary incentive, and that combination comes together in saying, "We're serious about this."
Mark Valentine (21:42): All right, I recognize this one is a bit of a Kobayashi Maru question, but what role do you think culture plays in this and within the national security community?
Mike Harasimowicz (21:53): So I'll take that one. [inaudible 00:21:57] It's one of those situations where we're comfortable, and AI fundamentally makes the case to be uncomfortable. You're expected to trust the system to a higher degree than you were before. You're also expected to share your data more than you're comfortable with. So across the services, across agencies, we've gone through this before as a nation: data sharing means all ships rise, we're smarter because of it. It also creates a surface area for security dilemmas that we face as a nation as well. So there is, again, that balancing act of how much do we share. AI with enriched data sets is going to produce better, more believable decision assistance than without. So for us to operate in comfortable cultural silos, whether it be in our service, whether it be in lines of business in the financial services area, or whether it be business areas in the commercial world, there's a level of, we have to share, we have to understand, we have to work together, to really open up the potential of artificial intelligence.
Mark Valentine (23:07): No, it makes total sense. Rachael, any thoughts on the culture question?
Rachael Martin (23:11): Actually, not all that difficult of a question. I think it's a really important one and I think culture absolutely matters. And I think that's why, particularly from my office, when we first stood up, it was stood up with the intent of being a cross-functional team. Where it was not just a group of folks in the IT shop who are trying to bring in some new capabilities and then never talk to anybody who was actually executing mission about what they wanted in the first place. And so, we built a team that included people with experiences from across the agency. We knew that we had to have people from policy. We knew we had to have people who were from the CIO's office. We knew we needed to engage across a range of different functional areas.
Rachael Martin (23:51): So that as we try to bring in capabilities, we were really doing it from an angle that would make it most ... to make it fit the best into the agency and make it really be truly something that would take off and be able to scale in the future. So, I don't think you change culture ... you have to change culture by getting people involved in the change. And that's been the approach that we've used at NGA and frankly, it's one that the JAIC used as well, when I was there. You need to have mission users involved in your development from the start, and that helps drive the cultural change.
Mike Harasimowicz (24:28): Drifter, can I add a little bit more to that?
Mark Valentine (24:30): Yeah, please do. I like that.
Mike Harasimowicz (24:32): To build on that: there's a concept of, you have to build it yourself. And that's something that we take a lot of pride in, but I think fundamentally there has to be build once, use many. There has to be that level of sharing across, where what I'm learning, you're benefiting from. I've had too many occasions where we all went away and we did our own projects. We emerged from the projects, and they all looked the same. They weren't competitive, they weren't complementary. And ultimately the CEO looks at it and says, "You wasted my time and my money. Why are you not communicating and sharing? I want you to be a little lazier. Don't work so hard."
Mike Harasimowicz (25:15): Think first, share, create a level of reusability, both in the foundations: what components am I using to store and manipulate the data? What am I using for access to the data in terms of indexing, and how do I create a normalized and serialized data set, so we can all use that? And then how do I create that level of availability of the key software packages, whether it be Python libraries or cloud service provisions? Areas where you go, I know that these are already pre-approved. The DOD, and the JAIC, is already working on this, getting pre-approved software packages that people can now consume. That's less work for everyone. It's done once, and it's done the right way, and now we can consume it.
Mike Harasimowicz (26:00): And then you can get into user access through a [inaudible 00:26:04] where all of a sudden, for what would seem to be hard to reach, a lot of the work is already done. That's the joy about this: while it is a hard problem, the components, the ingredients, are at our disposal. So the more we share, the more we think this through, the more we tend to be a little bit lazier and not just build it ourselves. The "if it's not built here, it's not worth anything" attitude, we have to get out of that mindset. There is value for us to be collaborative and really build this level of increased capability together.
Mark Valentine (26:35): I think you're exactly right, Rabbi, thank you. So Rabbi, I'm not going to let you answer this next question because you're no longer in uniform. So Rachael, this one is for you. Are there any specific topics, ideas, or things in the AI/ML space that you think industry can help push?
Rachael Martin (26:57): Sure. Our focus right now, where we know we're going to need more capability in ... well, we could use the capability now, but which we think we might get in a few years, is around the area of MLOps: machine learning operations, AI operations. We don't really do that now. And I think industry probably has the best, most practical, hands-on experience actually using AI, like you said, on a day-to-day basis for really mundane things. That should be how we're able to use AI, for mundane things, because it makes our lives easier. And we should be able to manage that as easily as it occurs in industry. And so for us, looking out a few years, we're asking, how do we do that? What do we need to change, or build, or buy, or experiment with to get to a place where we can be flexible and agile in the way we manage any kind of AI model that we might want to deploy to production?
Mark Valentine (28:00): Got it. No, that's very good. Thank you. So Rabbi, you're not completely off the hook. I'm just going to ask the question from a different point of view, but it's going to be very similar. You've now served on both sides of the fence, both in uniform and now within industry. If you could take the knowledge you have now and go back to your time in government, what would you do differently?
Mike Harasimowicz (28:23): So, the common data fabric is one of those building blocks that just didn't exist when I was there. As a commander of the cyber protection teams, we had missions going out and defending certain parts of our infrastructure, and they did it in a way that was [inaudible 00:28:41] You start here, you end here, you debrief, you package that as a mission complete, you store that data somewhere. And unfortunately, the fabric that connected all those things into a common picture was occasionally lost. I see that same thing in the ISR world, where across different sensors, that cohesion of data ... it was inconsistent; there was difficulty tying it all together. Now, I say that with the caution that the concept of bringing all the data back to a central location and having all the algorithms just churn out solutions, that is just not tenable.
Mike Harasimowicz (29:29): And here's why: the value of that data, when you have to move the data through a disrupted network, is at risk. So there has to be a level of comfort with pushing decision-making down, and this could be to an AI agent or a [inaudible 00:29:43] flight lead, pushing conditional authorities, getting comfortable with that. And that requires commanders to think through, in a new way, what kind of risk am I willing to accept? I brought up the hyper-speed; time is the new domain. You have to make decisions faster. And sometimes the better intelligence is where the on-scene commander is making the calls, and not the guy in the seat a thousand miles away. There has to be that strong connectivity between what's available out there to think through and act on, and what's consumable back at the JTF level. If I could have infused that understanding, if I could have evangelized that when I was on active duty, I would have been right up there next to [crosstalk 00:30:31]
Mark Valentine (30:30): Yeah. It sounds like almost the ultimate realization of this idea of mission command, which has actually been around for quite a while.
Mike Harasimowicz (30:37): Exactly.
Rachael Martin (30:37): So I'll add onto my last answer then, just to sort of ... because Rabbi sort of sparked this in my head. I think the other thing that we're wanting to focus on, or understand how we can do better, is AI at the edge, for the exact reasons that he just talked about. We expect that we will have to be capable of executing our mission where we may not have the ability to communicate back to headquarters. So in that kind of denied environment, how are we able to use our AI and automation at the edge, right in the hands of a warfighter, as opposed to back in the DC beltway area?
Mark Valentine (31:18): Excellent. So this is actually a great transition point to our final topic. We started with the what and the why of artificial intelligence and the subset of machine learning, and then we covered a little bit about where we are right now. So I'd like to pivot now to look at where this journey ends up, or what our near-term goal is here, at least. So Rachael, we'll start with you. What does successful adoption of AI and machine learning look like for your mission owners across the DOD and the intelligence community?
Rachael Martin (31:52): So I 100% stole this from the director of the JAIC, Lieutenant General Groen: success in the future means AI is like electricity. It's ubiquitous, it's common, it's plentiful. We don't worry about whether it's going to be there; we trust that it'll be there when we need it. And we can see how much we use, and then we know who to call if it goes away. I would hope that in the future, it'll just be part of our normal workflow. It'll be built in, much like the idea that having an email account on your computer is an assumption that every analyst has when they show up on day one. If you go back 20 years, that's not an assumption you could have made showing up in the office, and there were lots of efforts focused around enterprise email systems 20 or 30 years ago. In 20 or 30 years, hopefully nobody's really worried about what enterprise AI looks like, because it should just be part of what we do every day.
Mark Valentine (32:55): Yeah. That's great. I'm going to steal that as well; I'm trying to write that down. So Rabbi, over to you. What does this journey end up looking like? What does successful adoption of AI and ML look like for your mission stakeholders in the DOD and the IC?
Mike Harasimowicz (33:11): So, if I could get everybody their own personal Jarvis to help them with their decision-making, that'd be a nice way of moving forward. I do look at AI as just another tool in someone's toolkit: they know how it can be used, they know when it's the best fit, and they're comfortable going forward using it. For me, I want to take away a little bit of the illusion and the mythology around it, and I want to make it functional. And you can think about it as a tool that's not one size fits all. You have to make sure it's the right tool for the right problem, and people have to be comfortable with that selection criteria, and with pulling it out and actually using it in a way that's effective.
Mark Valentine (33:59): Excellent. Well said. So Rabbi, we'll start with you for this next question, which is kind of a follow-on. Are there any policy or regulatory hurdles or barriers that you see that are currently inhibiting our national security customers' adoption of AI and ML?
Mike Harasimowicz (34:15): Well, I think the acquisition process, which Rachael touched on is one that the JAIC's tackling quite well. They're being very innovative in terms of how to move capabilities out of niche companies and into the forefront, how to make sure that there is a well-rounded approach in terms of competition, and also there's a level of understanding that this is not a waterfall project that's going to last 15 years. We have to be incredibly quick in terms of adoption and understanding of what has value. So that acquisition point is a critical piece.
Mike Harasimowicz (34:48): I also think there's the understanding of using open source capabilities. There is risk associated with that, but what it affords us is, again, a quicker way to leverage the collective brains of the folks that are in the developmental world, and we can take advantage of that. The DOD is not particularly quick to adopt those, but there are ways to get through that where we can mitigate risk while also taking full advantage of what's being made available in the open source world.
Mark Valentine (35:21): Got it. No, thank you. That's very good. Rachael, anything on your end? Do you see any policy or regulatory hurdles that are, in your view, slowing you and your operation down?
Rachael Martin (35:32): I believe this was called out in the NSCAI report, and like I mentioned, the department's already, I think, trying to take some steps toward solving this problem. But this idea of splitting our money into two or three different kinds of money, operations and maintenance, or RDT&E, or military construction, is an artifact of an industrial, bureaucratic process that does not serve our country in any way, shape, or form, in terms of being able to integrate technology and then keep it updated and performing at its best long term.
Rachael Martin (36:12): The software pilot I mentioned previously is one attempt the department has made to try and start breaking down those barriers, and I think it's been very successful in doing so. But I think those kinds of programs probably need to be expanded, and there probably needs to be some understanding of how trying to limit the management of a software development program to these types of money is really just not helpful in getting us the best product at the end of the day.
Mark Valentine (36:42): Yeah, you're right. It is a little odd that we have [inaudible 00:36:46] that, and a process that was designed to build nuclear aircraft carriers and ICBMs that we use to purchase AI algorithms and software. That does seem a little clunky. [crosstalk 00:36:55]
Mike Harasimowicz (36:58): When I think about how we're actually moving through this AI world, there is a better understanding of what's really in the realm of the possible. Before, we were looking at building platforms, and now we're moving out in terms of more of a decision-centric approach to business, which does challenge the way we were doing things in the past. I like that about the [inaudible 00:37:23] two strategy: it's forcing us to work across domains with a quick understanding of how critical command and control is. And that's what I like about my approach currently, is that we see the platforms, we see the connectivity, we can share the data internally, and we can create solutions that at some point are harder for other companies to see, harder for even the government to see at times. So I like that we're breaking the barriers of platform-centric approaches to solutions and really bringing it all together.
Mark Valentine (37:56): That's great. That's really good, Rabbi. So Rachael, you mentioned the National Security Commission on AI report. In that report, the commission challenged the Department of Defense and the intel community to be, quote unquote, "AI-ready" by 2025. Now, I have to admit, I was happy to see a DOD document that didn't mention 2050 or 2060 in it, but 2025, do you think that's the right time horizon?
Rachael Martin (38:31): I think that it has to be the right time horizon. I don't think that we can really afford to be tied to our five-year acquisition cycles in integrating AI into our work and to support mission, particularly in the area of national security. So I'm glad they've challenged us to do that. And I think it's a challenge that most of us are invested in meeting because we do sincerely believe that this is something that is going to be critical to the war fighter, to our national security down the line.
Mark Valentine (39:03): Excellent. Rabbi, any thoughts from you on that?
Mike Harasimowicz (39:07): Absolutely. Our competitors, our adversaries are challenging themselves in aggressive ways, and we have to be as aggressive, if not more so. A 2025 year does put us in a position where we have to go faster. We have to find ways to solve problems in [inaudible 00:39:33] and weeks, as opposed to years and fight cycles. So there's a level of intensity that this requires that is, again, driving a better national conversation. It's driving smaller companies to pop up all over the place with ideas and concepts, and it's allowing us all to harvest those in a way that can bring together teams, like the Avengers, that can stand side by side with commercial industry and say, "Hey, here's what we can bring to the table to solve DOD problems and IC problems." For me, that's exciting, because it does allow you to pick your team and move forward in a way that can make a difference fast.
Mark Valentine (40:15): Excellent. Well, that's great. Hey, I want to thank both of you for your time, but before we go, I'm sure there are a thousand things that I forgot to ask. So I'd like to give each of you a couple of minutes for any closing thoughts you have on artificial intelligence, machine learning, the national security community, and how we're going to make all this come together. So Rachael?
Rachael Martin (40:43): So my closing thought would be, it's all well and good to start bringing AI capabilities into our mission space. In fact, I think that it's essential to us moving forward and actually accelerating AI adoption. I guess my one observation, or the last thought that I'll leave everybody with, is: as we're developing these wonderful AI solutions, let's not forget about the people who are using them and what they need in order to use them effectively. I've used this analogy before, but I'll use it again here because I like it, and I think it's pretty evocative of some of the things that we need to think about very holistically when we look at integrating AI capabilities.
Rachael Martin (41:30): You can build the best AI model in the world. You can do all the data work you need to do. You could have a super, super, highly accurate, highly precise model that you can deploy. But if you don't have anywhere that you can run it, if you don't have compute, if you haven't planned for how you're going to manage it at scale, then you have a Lamborghini driving on gravel. And it might be real nice looking, but by the time you get to the end of your journey, you're probably not going to like what it looks like, because it's going to be painful and it might give you some ... you're going to get some benefit out of it, but it won't necessarily be as good as it could have been.
Mark Valentine (42:08): Rachael, that is awesome. It reminds me of a debate I got into years ago when we were adopting a new fighter airplane into the fleet. It had some amazing capabilities, yet we were still using the exact same tactics that we had always used with the old airplanes. So in a lot of ways, we were handed this Porsche, but because we were used to driving tractors, we drove it like a tractor for a long time. So that's great. I can definitely relate to what you just said. Thank you. Rabbi, any thoughts?
Mike Harasimowicz (42:38): So I'd like to wrap up by rehashing one of your previous questions about the use of AI. In the design and manufacturing of weapons systems, there is a great capability to use AI to help with the design and to reduce the amount of time actually doing physical testing; you can do a lot of modeling and sim there. And you also can really reinforce and revitalize the maintenance and supply chain network. I think that's an important piece that has a lot of commercial value, as you look at how they're doing it in the auto industry, in commercial airlines, in everything that's machine-oriented, because there's such a great way to do that. In the same vein that we're retooling how we design hardware, we're also retooling the way we think through things. So modeling and sim can also play a role as we put ourselves through different exercises and scenarios that AI can actually generate for you.
Mike Harasimowicz (43:36): I spent some time doing some gaming. There's a level of ... it's intense and it's fun. You learn from it, you survive the next day, but you learn a lot. I think that same level of intensity can be brought to our decision-makers and our field commanders, so that they can exercise not only their physical bodies at the gym, but get into the sim and get that exercise as well. Drifter, you spent a lot of time getting sorties, getting hours, to do that type of training; you probably spent a lot of time in the simulator. We need to do that a lot at the macro scale. At the JTF level, that can't be a once-a-year exercise; we should do it more often. AI can enable that. AI can help you remember last year's results and help you condition yourself to go, "Hey, I didn't do so well yesterday, but tomorrow I'm going to get even better." I think AI can build that environment for you to operate, to succeed, and to learn.
Mark Valentine (44:34): That's awesome. Well, I tell you what, here's what I'm taking away from this conversation. By the way, thank you both; I've learned a lot. So Rabbi, you have lived up to your name, you have taught me. Rachael, I've learned a lot from you as well. And to our audience at TransformX, I hope that this has been a valuable conversation for you. Again, what I am personally taking away from this is that AI is important in augmenting human intuition within the DOD and the department ... or excuse me, the DOD and the IC, because time is of the essence. John Boyd was right. There is a [inaudible 00:45:05] and time is an important factor.
Mark Valentine (45:07): And when other machines are doing things at machine speed, humans can't compete. So let's let them do what they're good at, and let's do what we're good at. So I want to thank you both for an outstanding panel, and I know that our audience out here is going to be super impressed. They're probably going to be hitting me up on LinkedIn, so when they have questions, I'm sure that they will find you there. So again, thank you all watching here, and thanks for joining us at Scale AI and TransformX. And a final thank you to Mike "Rabbi" Harasimowicz and Rachael Martin. Have a great day.