Rise of hybrid machine learning computing – By Aditya Abeysinghe

Distributing models

Distributed machine learning decentralizes computation: machine learning models run on individual nodes rather than on a single centralized server. This removes the large processing queues that form when every device must send data to a centralized model and wait for a response. Distributing models is not always viable, however, as most nodes have limited storage and computational resources.
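As a rough illustration, the NumPy sketch below (a hypothetical example with a simple linear model) splits a dataset across four simulated nodes; each node computes a gradient on its own shard, and the nodes average the gradients instead of shipping raw data to a central model:

```python
import numpy as np

def local_gradient(w, X, y):
    """Mean-squared-error gradient for a linear model on one node's data shard."""
    return 2 * X.T @ (X @ w - y) / len(y)

# Simulated dataset split across 4 worker nodes instead of one central server
rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 3)), rng.normal(size=400)
node_X, node_y = np.array_split(X, 4), np.array_split(y, 4)

w = np.zeros(3)
for step in range(100):
    # Each node computes a gradient on its own shard (in parallel in practice)
    grads = [local_gradient(w, Xi, yi) for Xi, yi in zip(node_X, node_y)]
    # Nodes average gradients instead of sending raw data to a central model
    w -= 0.1 * np.mean(grads, axis=0)
```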

Using models near a node

Centralized computing of models avoids these computation and storage limits, and models used by large volumes of users are often processed centrally. However, the time to receive an output is high, since it includes both the computation time on the server and the time taken to communicate data between the user and the server. In contrast, distributed models respond faster because they need little or no communication; yet, as explained above, the nodes that host them cannot run models with heavy computation or storage demands.

Placing models in the server closest to a user addresses the issues of both methods. With nearby model computing, data from users is sent to that server, and the models in the server send the processed results back to the user. This approach is often used in medium-scale architectures where medium to large model computation is required.
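A minimal sketch of the placement decision, assuming a hypothetical list of regional model servers and a simple TCP-handshake latency probe as the proximity measure:

```python
import socket
import time

def measure_rtt(host, port=443, timeout=2.0):
    """Rough proximity estimate: time a TCP handshake to a candidate server."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable servers are never chosen

# Hypothetical regional model servers; a real deployment would discover these
candidates = ["model-eu.example.com", "model-us.example.com", "model-ap.example.com"]

# Route the user's data to the server that answers fastest, i.e. the nearest one
nearest = min(candidates, key=measure_rtt)
print(f"Sending inference requests to {nearest}")
```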

Using a federated model

This method combines centralized and distributed model training. A centralized model stored in a server is used, and each device copies this model locally. The copy is then retrained with the device's data, without that data ever being transferred to the server, and the model is changed to improve performance and reduce errors. Every user device performs the same process on the model stored within it. Each updated model is then sent to the server, and the server's model is updated with the changes from the devices.
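The round described above can be sketched as a simplified federated-averaging loop; the linear model, learning rate, and simulated device data below are illustrative assumptions rather than details from the article:

```python
import numpy as np

def train_locally(server_w, X, y, lr=0.05, epochs=5):
    """Copy the server model and retrain it on one device's private data."""
    w = server_w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(1)
# Each device holds its own data; nothing in this list ever leaves the device
device_data = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

server_w = np.zeros(3)  # the centrally stored model
for round_ in range(10):
    # 1. Every device copies the server model and retrains it locally
    device_ws = [train_locally(server_w, X, y) for X, y in device_data]
    # 2. Only the updated weights, not the training data, go back to the server
    # 3. The server merges the device models, here by simple averaging
    server_w = np.mean(device_ws, axis=0)
```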

The advantage of this training type is enhanced data privacy, because the model is trained within each device. The data used to update the model remains on the device and is never transferred to the server. Users can therefore use a centrally stored model while updating it with the data held on their own devices.

The model in the server is updated using the models sent from devices, so the server needs less hardware for model training. However, the total time to update the model is higher, because each device sends its model to the server at a different time. The accuracy of the server's model can also degrade as updates from devices, each trained on different local data, are merged into it.
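One way to picture the server side is as a running average that folds in each device's model as it arrives at its own time; the class below is a hypothetical sketch of that bookkeeping, not a production aggregator:

```python
import numpy as np

class FederatedServer:
    """Folds device models into a running average as they arrive over time."""

    def __init__(self, dim):
        self.w = np.zeros(dim)  # current server model
        self.updates_seen = 0

    def receive(self, device_w):
        # Incremental mean: no training happens on the server, so hardware
        # usage stays low, but the model shifts with every (possibly skewed)
        # device update that lands
        self.updates_seen += 1
        self.w += (device_w - self.w) / self.updates_seen

server = FederatedServer(dim=3)
rng = np.random.default_rng(2)
for _ in range(5):  # device updates trickle in at different periods
    server.receive(rng.normal(size=3))
```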

Image courtesy: https://www.iais.fraunhofer.de/
