Distributed AI – Could AI processing be faster?

By Aditya Abeysinghe

Distributed AI

Distributed AI (Artificial Intelligence) is a field of AI in which data processing is distributed across multiple nodes instead of being performed at a single source. A common issue with data analysis today is that a central processing source struggles to handle the massive amounts of data generated by many different types of sources. With distributed AI, processing is spread across a system of nodes that can work in parallel and return analyzed data.

Patterns in Distributing AI Processes

In some systems, distributed AI collects data from edge devices and analyzes it in the cloud, with data moving back and forth between the devices and the cloud. AI models are trained in the cloud using the data received from edge devices, and the output is then sent back to the edge. This approach benefits from the processing power of cloud servers and the ability to store large amounts of data in cloud storage, so it suits systems that can tolerate some latency in data processing. However, it can increase costs, as cloud storage and cloud processing must be purchased.
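
The following is a minimal sketch of this first pattern, using simulated devices and a simple statistical "model" in place of real sensors and a real cloud service; names such as EdgeDevice and cloud_analyze are illustrative assumptions, not part of any specific library.

```python
# Sketch of the "send edge data to the cloud for processing" pattern.
# EdgeDevice and cloud_analyze are hypothetical names for illustration only.
import random
import statistics

class EdgeDevice:
    """Simulated edge device that collects raw readings locally."""
    def __init__(self, device_id):
        self.device_id = device_id

    def collect(self, n=20):
        # Raw sensor readings that will be shipped to the cloud unprocessed.
        return [random.gauss(25.0, 2.0) for _ in range(n)]

def cloud_analyze(all_readings):
    """'Cloud-side' processing: derive a simple model from the pooled data."""
    mean = statistics.mean(all_readings)
    stdev = statistics.stdev(all_readings)
    # The 'model' here is just an anomaly threshold built from all devices' data.
    return {"mean": mean, "threshold": mean + 3 * stdev}

if __name__ == "__main__":
    devices = [EdgeDevice(i) for i in range(3)]
    pooled = []
    for device in devices:
        pooled.extend(device.collect())       # data travels edge -> cloud
    model = cloud_analyze(pooled)             # training/analysis happens in the cloud
    print("Result sent back to edge devices:", model)  # output travels cloud -> edge
```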

Another method is to train AI models in the cloud and then deploy them to edge devices for processing. Data collected from edge devices is used to train models in the cloud, and the trained models are then deployed back to the devices for local processing. This avoids many drawbacks of relying on cloud servers, since per-request cost and latency are minimal. However, it has its own drawbacks: the same model is used on every device, and only edge devices with enough capacity to run the models can use this type of distributed AI.
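
A minimal sketch of this second pattern is shown below, assuming the trained model can be packaged as a small JSON artifact; the function names and serialization format are illustrative choices, not a prescribed implementation.

```python
# Sketch of the "train in the cloud, deploy to the edge" pattern.
# cloud_train, deploy_to_edge and edge_infer are hypothetical names.
import json
import random
import statistics

def cloud_train(training_data):
    """Cloud-side training: derive model parameters from collected edge data."""
    mean = statistics.mean(training_data)
    stdev = statistics.stdev(training_data)
    return {"mean": mean, "threshold": mean + 3 * stdev}

def deploy_to_edge(model):
    """Ship the trained model to a device, e.g. as a small JSON artifact."""
    return json.dumps(model)

def edge_infer(model_artifact, reading):
    """Edge-side inference: no round trip to the cloud per reading."""
    model = json.loads(model_artifact)
    return "anomaly" if reading > model["threshold"] else "normal"

if __name__ == "__main__":
    history = [random.gauss(25.0, 2.0) for _ in range(200)]  # data gathered earlier from devices
    artifact = deploy_to_edge(cloud_train(history))          # same model goes to every device
    for reading in (24.8, 40.1):
        print(reading, edge_infer(artifact, reading))        # processed locally on the device
```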

Some systems cannot send data to the cloud for model training because of privacy requirements and regulations; in these cases, AI models are trained on the edge devices themselves. Models first trained in the cloud are deployed to edge devices and then further trained on each device using its local data. This can improve the precision of the models, as they are updated on the edge nodes with data that reflects local conditions. However, training on edge devices adds processing overhead, so this approach is not suitable for most devices.
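
Below is a minimal sketch of this third pattern, where a cloud-trained model is fine-tuned on the device with data that never leaves it; the blending rule and all function names are simplifying assumptions for illustration.

```python
# Sketch of on-device fine-tuning: the cloud model is updated locally with
# private data that is never uploaded. All names are hypothetical.
import random
import statistics

def cloud_pretrain():
    """Generic model trained in the cloud on non-sensitive data."""
    return {"mean": 25.0, "threshold": 31.0}

def edge_finetune(model, local_readings, blend=0.5):
    """Adjust the cloud model toward this device's own data distribution."""
    local_mean = statistics.mean(local_readings)
    local_stdev = statistics.stdev(local_readings)
    # Blend cloud parameters with locally estimated ones; raw data stays on the device.
    new_mean = (1 - blend) * model["mean"] + blend * local_mean
    return {"mean": new_mean, "threshold": new_mean + 3 * local_stdev}

if __name__ == "__main__":
    base = cloud_pretrain()
    private_data = [random.gauss(28.0, 1.5) for _ in range(100)]  # never leaves the device
    personalised = edge_finetune(base, private_data)
    print("Cloud model:", base)
    print("Locally fine-tuned model:", personalised)
```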

Why Distributed AI?

One reason for distributed AI processing is the large amount of data generated by many different types of edge devices. Traditional AI data processing involves transferring data to centralized cloud or edge servers, which run models over the data and return results to the devices. With devices connected to each other in different ways and producing different types of data, centralized processing is often inefficient. A common solution is to process data within the edge devices themselves, which requires algorithms and methods for distributing AI models across these devices.

With distributed AI, algorithms that can run within edge devices are trained, and the ability to distribute processing across devices is enhanced. Devices that can allocate resources, schedule tasks, and prioritize AI model processing are also required, along with dedicated microelectronics, sensors, and storage optimized for AI workloads. In addition, the ability to evaluate AI processes in the same way as those running in clouds or remote servers, and to keep those processes secure, is important when using distributed AI.
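
As a rough illustration of prioritizing AI work on a constrained device, the sketch below uses a standard-library priority queue; the EdgeScheduler class and the example tasks are purely hypothetical.

```python
# Sketch of prioritising AI tasks on a constrained edge device.
# EdgeScheduler is a hypothetical name; tasks are plain callables.
import heapq

class EdgeScheduler:
    """Runs queued tasks in priority order (lower value = higher priority)."""
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker so equal priorities run in submission order

    def submit(self, priority, name, task):
        heapq.heappush(self._queue, (priority, self._counter, name, task))
        self._counter += 1

    def run_all(self):
        while self._queue:
            priority, _, name, task = heapq.heappop(self._queue)
            print(f"running {name} (priority {priority}) ->", task())

if __name__ == "__main__":
    scheduler = EdgeScheduler()
    scheduler.submit(2, "background model update", lambda: "model refreshed")
    scheduler.submit(1, "real-time inference", lambda: "prediction ready")
    scheduler.run_all()
```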

Image Courtesy: https://www.devteam.space/

 
