January 24, 2022

AI on the Edge: How to Do Knowledge Tiering Right

Here’s what you need to know about effectively separating knowledge bases for edge computing, along with three emerging best practices.

David S. Linthicum

The rise of IoT and other device-centric computing continues to move much of the processing away from centralized, cloud-based artificial intelligence systems and closer to the data-gathering points.

To date, machine learning in edge computing has mostly lived at one end of the architecture or the other. Either the engines are built on the back end, typically in cloud-based AI systems, or AI processing occurs on the edge device itself, such as in a home thermostat, a vehicle’s computer, or even a kitchen appliance. Edge-based system designs tend to leverage AI either at the edge or centrally, but rarely both, due to redundancy costs and the need to avoid operational complexity.

Newer approaches move the intelligence required to run the business as close as possible to where the knowledge will be used. There are several considerations to be aware of before you do this, including where to locate the intelligence and how to do it. Here’s what you need to know.

Knowledge Tiering as Architecture Is Proven

The notion of knowledge tiering has been around as an architectural concept since the appearance of distributed computing. Indeed, you can separate an application’s logical processing and data processing to accommodate the needs of the solution. 

An older example: data entered during the day at the regional offices was held in widely distributed local databases and synced at night with a centralized database that maintained the core transactional data. This configuration worked around the era’s lack of high-speed wide area networking, using locally hosted databases to keep system performance good enough to process transactions on customers’ behalf.

This old data tiering solution solved two core problems. 

First, it provided local access to the data, and thus acceptable performance to run the business effectively. This approach supported 99% of local business processing needs, such as recording the sale of a car at a dealership.

Second, the back-end tier was available when needed for tasks such as looking up a transaction that had long since moved from the local database to the larger centralized databases that stored all combined transactions. Of course, the back-end database took 20 times longer to return the required data, but because these were occasional operational lookups rather than customer-facing transactions, it worked just fine for the business requirements.
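Here’s a minimal sketch of that two-tier pattern in Python. The store names, transaction IDs, and sync routine are all invented for illustration; the point is the local-first lookup with a slower central fallback.

```python
# Two-tier lookup: try the fast local store first, then fall back to the
# slower centralized store. All names and records here are illustrative.

LOCAL_DB = {"txn-1001": {"item": "sedan", "amount": 28500}}    # today's data
CENTRAL_DB = {"txn-0042": {"item": "coupe", "amount": 31200}}  # full history

def lookup_transaction(txn_id):
    """Serve from the local tier when possible; the central tier is
    slower but holds every transaction ever synced."""
    record = LOCAL_DB.get(txn_id)
    if record is not None:
        return record              # fast path: local tier
    return CENTRAL_DB.get(txn_id)  # slow path: central tier

def nightly_sync():
    """Push the day's local records to the central tier, then clear the
    local store for the next business day."""
    CENTRAL_DB.update(LOCAL_DB)
    LOCAL_DB.clear()

print(lookup_transaction("txn-1001"))  # answered locally
print(lookup_transaction("txn-0042"))  # answered by the central tier
```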

The Evolving Edge

With the emergence of high-speed networks, faster databases, and cloud computing in support of centralized computing, old-time data tiering is not what first comes to mind for architects who set up business systems. However, with the rise of edge computing, some of the more traditional architectural patterns may return to serve newer and more modern purposes.  

Many will appear in the form of edge-based data tiering and knowledge tiering, because both have related patterns that are applicable to more modern edge computing architectures. 

The concepts are much the same, in that you want to place the knowledge bases as close as possible to the entity that leverages them and thus reduce the latency of accessing these AI engines to answer more tactical questions. (Note: The term knowledge base has many different definitions in and outside the world of AI. Here I use the term more generically to describe anywhere intelligence is stored to be used, updated, and reused.) 

Take, for instance, a jet engine with an edge computing style of architecture, in which a low-powered device, say the size of a Raspberry Pi, automatically monitors core engine functions such as temperature, air pressure, and rpm via hundreds of sensors.

The engine’s systems can use this information (stored on its monitoring and controlling device) to deal with common problems such as the need to automatically reduce power if the engine starts to overheat. The system can do so with little or no latency, since the monitoring and controlling device is next to the engine. The issue is also reported to a back-end cloud system using air-to-ground networking, or when the airplane reenters network range. These are systems in use today. 
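A simplified sketch of that monitor-and-act loop, in Python: the sensor values, overheat threshold, and report queue are all hypothetical, but they show why the tactical decision happens at the edge while reporting can wait for connectivity.

```python
import random
from collections import deque

OVERHEAT_C = 950        # illustrative temperature limit, not a real spec
report_queue = deque()  # reports held until air-to-ground connectivity returns

def read_sensors():
    """Stand-in for the hundreds of real engine sensors."""
    return {"temp_c": random.uniform(600, 1000),
            "rpm": random.uniform(8000, 12000),
            "pressure_kpa": random.uniform(90, 110)}

def reduce_power():
    print("reducing engine power")

def control_step():
    """Tactical, low-latency decision made right next to the engine."""
    reading = read_sensors()
    if reading["temp_c"] > OVERHEAT_C:
        reduce_power()                              # act immediately, no round trip
        report_queue.append(("overheat", reading))  # sync to the cloud later

for _ in range(5):
    control_step()
```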

This example works fine with data, such as collecting data at the edge device and synchronizing it to back-end systems as needed; those are the fundamentals of edge computing. The use of AI systems at the edge, or at the “intelligent edge,” brings a whole new set of problems to edge computing, given the critical importance of having edge systems work with things such as jet engines.

The core question with an intelligent edge is: Where do the knowledge base and knowledge engine reside when using edge-based computing? Because tiering is the more effective answer, the question becomes: What are the emerging best practices for knowledge tiering in support of edge computing on the “intelligent edge”?

It Pays to Connect the ‘Intelligent Edge’

To answer those questions, you need to understand that knowledge is not data and should not be treated as such. The storage of knowledge related to the operations of the jet engine at a tactical level, such as understanding when the engine is overheating, must exist near the edge device that operates the system, which in this case is the jet’s engine. 

Thus, you have a local knowledge base that builds from the training data gathered from the engine and evolves its knowledge of how to deal with routine issues in new and better ways. An “evolved” knowledge base could, for example, avoid shutting down an overheating engine in cases where the engine can’t be restarted without causing further damage or risk to the airplane. The knowledge base derived this bit of knowledge through observation at the intelligent edge device.

The objective of the intelligent edge is to have a learning system teamed up with monitoring and data storage to determine the best way to do something, such as keeping a jet engine running at peak efficiency. This occurs with little or no human intervention.
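One way to picture how such a knowledge base might “evolve” is a rule store that updates itself from observed outcomes. This Python sketch is entirely hypothetical: the flight phases, temperatures, and update logic are invented to illustrate the idea, not a real control law.

```python
class EdgeKnowledgeBase:
    """Tactical rules plus exceptions learned from observation.
    All thresholds and phases here are illustrative."""

    def __init__(self):
        self.overheat_temp_c = 950.0
        # Learned exceptions: flight phases where observation showed that
        # a shutdown is riskier than continuing at reduced power.
        self.no_shutdown_phases = set()

    def record_outcome(self, phase, shutdown_made_things_worse):
        """Fold an observed outcome back into the knowledge base."""
        if shutdown_made_things_worse:
            self.no_shutdown_phases.add(phase)

    def decide(self, phase, temp_c):
        if temp_c <= self.overheat_temp_c:
            return "normal"
        # Overheating: check the evolved rule before the default one.
        if phase in self.no_shutdown_phases:
            return "reduce-power"  # learned: safer than shutdown here
        return "shutdown"

kb = EdgeKnowledgeBase()
kb.record_outcome("takeoff", shutdown_made_things_worse=True)
print(kb.decide("takeoff", 970.0))  # reduce-power (evolved rule)
print(kb.decide("cruise", 970.0))   # shutdown (default rule)
```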

However, if you leave the knowledge engine and the knowledge base at the edge tier only, you’ll leave features and value on the table.

Teaming up the edge-based knowledge tiers with a much larger and much more powerful back-end knowledge engine provides a few key benefits. One example is the ability to share information by centralizing knowledge collected from the edge tiers into a central tier, which in turn reteaches the edge tiers that are responsible for the same duties.

For instance, let’s say a certain model of plane deployed across airlines worldwide learned a new way to deal with an engine overheating problem. The solution discovered in one plane can be shared with all other planes in that fleet and then shared with the manufacturer for distribution to every other plane of that model throughout the world.

In other words, you have thousands of edge tiers that learn through the gathering of data over time and centralize that knowledge so the most effective best practices for dealing with engine problems are centrally known and “taught” to all affiliated edge-based knowledge engines. 
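Here’s what that share-and-reteach loop might look like in skeletal Python. The knowledge format (a mapping from issue to remedy) and the function names are assumptions made for the sketch; in practice this would involve model updates or rule syncs, not a plain dictionary merge.

```python
# Share-and-reteach: edge tiers upload locally learned knowledge, the
# central tier merges it, and the merged result is pushed back to every
# affiliated edge engine. The structures here are invented for illustration.

central_knowledge = {}  # issue -> best-known remedy, fleet-wide

def upload_from_edge(learned):
    """An edge tier contributes what it learned locally."""
    central_knowledge.update(learned)

def reteach_edges(fleet):
    """The central tier pushes merged knowledge down to every edge tier."""
    for edge_kb in fleet:
        edge_kb.update(central_knowledge)

plane_a = {"overheat@takeoff": "reduce-power"}  # discovered on one plane
plane_b = {}                                    # hasn't seen the problem yet
upload_from_edge(plane_a)
reteach_edges([plane_a, plane_b])
print(plane_b)  # now knows the remedy plane_a discovered
```

This is the same spirit as federated learning: edge nodes learn locally, and a central service aggregates and redistributes what’s learned.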

The Right Way to Do Knowledge Tiering 

The question is: What parts of the knowledge base and knowledge engine should exist at the edge, and what parts should be on more powerful centralized AI systems that perhaps run on public clouds? These are the issues around knowledge tiering that edge-based systems engineers, working with AI engineers, are trying to figure out to determine common best practices. 

Here are three best practices that are emerging right now.   

1. The Tiering of Data and Knowledge Bases Should Be Largely Decoupled 

You want knowledge to exist at the edge tier that directly deals with processes at the edge device. For instance, while there should be a knowledge base of “in-range” and “out-of-range” states for the jet engine example (tactical knowledge), knowledge that deals with long-term engine maintenance (strategic knowledge) should be at the central or back-end tier.
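As a concrete illustration, the routing between decoupled tiers can be as simple as classifying each question as tactical or strategic. The question names below are hypothetical examples for the jet engine case.

```python
# Decoupled tiers: tactical questions resolve at the edge; strategic ones
# route to the central tier. The question names are illustrative.

TACTICAL = {"is_temp_in_range", "is_pressure_in_range"}    # answered at the edge
STRATEGIC = {"maintenance_forecast", "fleet_wear_trends"}  # answered centrally

def route(question):
    if question in TACTICAL:
        return "edge tier"
    if question in STRATEGIC:
        return "central tier"
    raise ValueError("unclassified question: " + question)

print(route("is_temp_in_range"))      # edge tier
print(route("maintenance_forecast"))  # central tier
```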

2. Look for Lightweight vs. Heavyweight Processing

You usually want to tier the knowledge at a location that reduces or eliminates latency. If the edge device can’t handle the required processing (think mobile phones), the knowledge should be located centrally, somewhere with more powerful processing and quicker storage, such as on back-end cloud-based AI systems.

While there are exceptions, the more tactical AI processing is typically lightweight processing that can run on a lower-powered edge device, such as a jet engine monitoring device that deals with data that’s in range or out of range. 

A more strategic example would be a system that understands the dynamic maintenance schedules for all jet engines, including the aging-out of parts. This requires higher-powered processing and thus should be centrally located.
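Put together, the placement decision can be sketched as a simple heuristic: keep a workload at the edge only when it is latency-critical and light enough for the device. The capacity numbers below are invented; real placement decisions weigh many more factors.

```python
# Toy placement heuristic: edge for tactical, lightweight work; central for
# everything heavyweight or strategic. Thresholds are purely illustrative.

def place_workload(latency_critical, compute_needed_gflops,
                   edge_capacity_gflops=10.0):
    if latency_critical and compute_needed_gflops <= edge_capacity_gflops:
        return "edge"  # lightweight and tactical: keep it next to the device
    return "central"   # heavyweight or strategic: back-end AI systems

print(place_workload(latency_critical=True, compute_needed_gflops=2.0))     # edge
print(place_workload(latency_critical=False, compute_needed_gflops=500.0))  # central
```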

3. Knowledge Bases Are Often Mixed and Matched

While you can find the same brand of AI technology on edge devices and in the public clouds, a market seems to be emerging in which the best-of-breed edge and centralized AI solutions are often different products from different vendors, differing even in how they handle knowledge processing.

While this seems to be the best-of-breed solution, it’s important to ensure, and understand how, the different knowledge engines can share knowledge and, in a sense, train each other on an ongoing basis from tier to tier.
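One common way to make heterogeneous engines interoperate is to exchange knowledge through a neutral interchange format rather than either vendor’s internal representation. This Python sketch assumes a made-up JSON schema (knowledge-exchange/v1) purely to show the shape of the idea.

```python
import json

# Cross-vendor knowledge sharing via a neutral document format. The schema
# name and rule structure are invented for this sketch.

def export_knowledge(internal_rules):
    """Edge engine (vendor A) serializes to a vendor-neutral document."""
    return json.dumps({"schema": "knowledge-exchange/v1",
                       "rules": internal_rules})

def import_knowledge(document):
    """Central engine (vendor B) ingests the neutral document."""
    payload = json.loads(document)
    if payload.get("schema") != "knowledge-exchange/v1":
        raise ValueError("unsupported knowledge document")
    return payload["rules"]

doc = export_knowledge({"overheat@takeoff": "reduce-power"})
print(import_knowledge(doc))
```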

Nothing’s Set in Stone Yet

To say that best practices are evolving is an understatement. We learn more each day as we stand up these systems, watch the successes as well as the failures, and respond with fixes and new approaches. 

Edge computing is here to stay, so you need to figure out how to best store and use data and knowledge on these edge devices. Knowledge tiering will be a dynamic architectural problem to solve, given the fact that devices with AI engines in them need to work and play well with more centralized systems. 
