Is Green AI theory or is it possible for AI systems? – By Aditya Abeysinghe

Green AI

Artificial Intelligence (AI) based systems are often memory- and compute-intensive because of the multiple training cycles involved. An AI model usually cannot produce acceptable output without several cycles of training, since it must repeatedly update its parameters to minimize the errors measured in each cycle. The systems that host these models must therefore be able to withstand heavy, resource-intensive processing. Green AI is a methodology in which AI systems are designed to be environmentally friendly, using fewer resources for training.

Red AI

The cost of computation is often measured in floating-point operations. During training of an AI model, a supercomputer typically performs quadrillions of floating-point operations per second, while a personal computer performs far fewer. With the increased use of AI over the past decade, models that demand teraflops or petaflops of processing have emerged. With such large processing power required to run AI models, the energy needed to supply that processing has grown, as have the heat generated and the cooling required to keep the host systems in balance. Thus, the power consumed for computation, for cooling, and for serving requests has increased with the use of AI models. In research, such systems are known as Red AI systems.
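
As a back-of-the-envelope illustration of how quickly this compute grows, the sketch below uses the widely cited approximation of roughly 6 floating-point operations per parameter per training token, a rule of thumb from the scaling-law literature rather than a figure from this article. The model sizes, token counts, and the assumed 1 petaFLOP/s machine are all hypothetical:

```python
# Rough training-compute estimate using the common ~6 * parameters * tokens
# rule of thumb. All model sizes and token counts below are illustrative
# assumptions, not measurements.

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total floating-point operations to train a model."""
    return 6 * parameters * tokens

RATE = 1e15  # assume a machine sustaining 1 petaFLOP/s (a quadrillion ops/s)

for name, params, tokens in [
    ("small model", 100e6, 1e9),    # hypothetical 100M-parameter model
    ("large model", 100e9, 300e9),  # hypothetical 100B-parameter model
]:
    flops = training_flops(params, tokens)
    hours = flops / RATE / 3600
    print(f"{name}: ~{flops:.1e} FLOPs, ~{hours:,.1f} h at 1 PFLOP/s")
```

Even under these rough assumptions, the large model needs on the order of a hundred thousand times more compute than the small one, which is the gap the Red AI label points at.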

What causes AI to be Red AI?

Researchers who develop models focus on improving their accuracy and efficiency. Metrics such as precision, recall, and mean error are used in most projects to evaluate how a trained model will predict on test data. Researchers constantly rebuild and adjust models to improve these metrics and minimize errors, so multiple training cycles are used to train each model.
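
A minimal sketch of this cycle, using a synthetic dataset and a toy logistic-regression model (both assumptions chosen for illustration), shows how metrics are re-evaluated after every training pass; real projects repeat loops like this many times over, which is where the compute cost accumulates:

```python
import numpy as np

# Toy model retrained cycle after cycle, with precision and recall
# checked each time. The data is synthetic, generated for this sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(float)

w = np.zeros(4)
lr = 0.1
for epoch in range(1, 51):                    # repeated training cycles
    p = 1 / (1 + np.exp(-(X @ w)))            # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)          # gradient-descent update
    pred = p > 0.5
    tp = np.sum(pred & (y == 1))
    precision = tp / max(pred.sum(), 1)
    recall = tp / max((y == 1).sum(), 1)
    if epoch % 10 == 0:
        print(f"cycle {epoch}: precision={precision:.3f} recall={recall:.3f}")
```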

One of the main causes of Red AI is the sheer number of inputs used. Large AI models use billions of parameters during training. Models with such large parameter counts require many CPUs (Central Processing Units), GPUs (Graphics Processing Units), and large amounts of storage, which increase the power and cost of processing. Because these models produce highly accurate and efficient results from their large number of inputs, researchers keep improving them and applying them to complex AI systems rather than reducing their costs.
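
To see why parameter counts translate into hardware demands, the sketch below estimates the memory needed just to hold a model during training. The byte counts (4 bytes per fp32 weight, with a roughly 4x multiplier to cover gradients and Adam-style optimizer state) are common rules of thumb, not figures from this article, and the model sizes are hypothetical:

```python
# Back-of-the-envelope memory estimate for training a model.
# bytes_per_param and the optimizer multiplier are assumed rules of thumb.

def training_memory_gb(parameters: float, bytes_per_param: int = 4,
                       multiplier: int = 4) -> float:
    """Weights + gradients + optimizer state, in gigabytes."""
    return parameters * bytes_per_param * multiplier / 1e9

for params in (1e9, 10e9, 100e9):  # hypothetical model sizes
    print(f"{params:.0e} parameters -> ~{training_memory_gb(params):.0f} GB")
```

A hypothetical 100-billion-parameter model already needs on the order of a terabyte of fast memory under these assumptions, which is why such models are spread across many GPUs.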

Another reason for Red AI is the large datasets used to train AI models. Larger datasets require more storage, which in turn increases computation cost. For example, BERT, a method of pre-training language representations from Google, was trained on a corpus of roughly three billion words and has been among the top-performing natural language processing models since 2019. Some complex vision algorithms are trained on as many as three billion images.

Green AI

The cost of computing a model depends on factors such as the number of parameters used, the size of the dataset, and the hardware. Reducing these factors reduces the cost of processing a model. Green AI therefore focuses on a different set of factors from those emphasized in Red AI, balancing the accuracy of models against their cost of computation and cost of use.

The number of floating-point operations is one measure used in Green AI. Instead of focusing only on increasing the accuracy of models, this measure captures the amount of work a model performs. The objective is to monitor that work and balance a model's accuracy against the energy and computation spent. However, the measure is not exact, since other factors, such as the memory used and the work done by supporting libraries within AI models, also affect cost.
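
One way to apply this idea is to report compute alongside accuracy and compare models on accuracy gained per unit of work. The sketch below is a minimal illustration of that reporting style; both models, their accuracies, and their FLOP counts are hypothetical numbers, and "accuracy per petaFLOP" is just one possible framing of the trade-off:

```python
# Green AI-style reporting sketch: compare models by accuracy per unit
# of compute instead of accuracy alone. All figures are hypothetical.

models = {
    "compact model": {"accuracy": 0.91, "train_flops": 2e17},
    "large model":   {"accuracy": 0.93, "train_flops": 5e20},
}

for name, m in models.items():
    eff = m["accuracy"] / (m["train_flops"] / 1e15)  # accuracy per petaFLOP
    print(f"{name}: accuracy={m['accuracy']:.2f}, "
          f"compute~{m['train_flops']:.0e} FLOPs, "
          f"efficiency={eff:.2e} accuracy/petaFLOP")
```

Under these made-up numbers, the large model gains two points of accuracy at several thousand times the compute, which is exactly the kind of trade-off Green AI asks researchers to make visible.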

Image Courtesy: https://medium.com/
