Aditya Abeysinghe

AI Cloud: The next use of AI
By Aditya Abeysinghe

Artificial Intelligence (AI) algorithms are often memory and computation intensive because of the number of processing cycles involved. Even a model with only a few lines of source code can take several minutes to produce output. With the growing use of AI-based applications, a solution to this issue is needed to deliver these services quickly and with minimal lag. Using the cloud for AI processing is therefore a new method followed by many who use AI-based functions.

Benefits of using the cloud

Different types of clouds exist. I described most of these clouds in my article on ‘Data Management Strategies in Multiple Clouds’. These clouds have different benefits and are used for different tasks. With cloud servers, scaling is possible within minimal time when memory, computation and other constraints reach their limits. Scalability and elasticity can scale up or scale down these ...

Read More →
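
As a rough illustration of the offloading idea in this excerpt, the sketch below sends an inference request to a cloud-hosted model endpoint instead of running the model locally. The endpoint URL, route and payload shape are hypothetical placeholders, not a real service API.

```python
# Minimal sketch: offload AI inference to a cloud endpoint rather than
# running the model on the local machine. The endpoint and the
# request/response shapes are hypothetical placeholders.
import requests

CLOUD_ENDPOINT = "https://example-ai-cloud.invalid/v1/predict"  # hypothetical

def predict_in_cloud(features: list[float]) -> dict:
    """Send input features to a cloud-hosted model and return its output."""
    response = requests.post(CLOUD_ENDPOINT, json={"features": features}, timeout=30)
    response.raise_for_status()  # surface HTTP errors instead of failing silently
    return response.json()

if __name__ == "__main__":
    print(predict_in_cloud([0.2, 0.7, 0.1]))
```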

Theory of Mind: Are emotions needed for machines?
By Aditya Abeysinghe

Assessing the mental states of other humans is an ability that humans develop from late childhood. Emotions and social interactions are formed by understanding how others behave towards a person. Theory of mind (ToM) describes how this assessment of mental states shapes the emotions humans form towards others. In my article on Artificial General Intelligence, I described how emotions are important for machines. For Artificial General Intelligence to be truly developed, ToM is a key component.

What is ToM?

ToM, in simple terms, is the set of tasks the mind uses to reason about the mental states of others. For example, assume that two people named James and Bred go to a restaurant to have dinner. When James looks at the menu and says that he will order some type of noodles, Bred can infer that ...

Read More →
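
To make the restaurant example a little more concrete, here is a toy sketch, entirely illustrative and not from the article, of an observer mapping an observed utterance to a guessed mental state:

```python
# Toy illustration of theory-of-mind-style inference: an observer forms
# a belief about another agent's intention from what it observes. The
# names and the single inference rule are invented for illustration.

def infer_intention(observation: str) -> str:
    """Map an observed utterance to a guessed mental state (very naive)."""
    if "noodles" in observation.lower():
        return "Bred infers: James intends to order noodles"
    return "Bred has no strong belief about James's intention yet"

print(infer_intention("James says he will order some type of noodles"))
```

A real ToM model would of course maintain and update far richer belief states; this only shows the observation-to-belief step in miniature.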

Data Fabric: Where are my data sources?
By Aditya Abeysinghe

In a previous article, I explained data management in multiple cloud environments. Multiclouds, as described in that article, are used to distribute applications across multiple cloud environments. However, when applications are distributed across multiple clouds, monitoring, managing and providing a uniform flow of data is often time consuming, costly and resource intensive. Therefore, a platform through which all these data can be integrated is useful. A data fabric is a data management model in which all data endpoints of applications on multiple hosts can be integrated.

What advantages do data fabrics provide?

As I explained in a previous article on self-service integrations, they enable users in non-technical teams to integrate apps into existing systems. With self-service integrations, these teams can access data and perform the tasks they require in less time, rather than requiring technical teams ...

Read More →
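
As a loose sketch of the idea, assuming invented connector classes and dataset names rather than any real data-fabric product, a fabric can be pictured as one access layer that hides where each dataset actually lives:

```python
# Minimal sketch of the data-fabric idea: a single access layer routes
# dataset requests to whichever backend holds the data. Connectors and
# dataset names below are hypothetical stubs.

class PostgresConnector:
    def fetch(self, dataset: str) -> list[dict]:
        return [{"source": "postgres", "dataset": dataset}]  # stubbed result

class S3Connector:
    def fetch(self, dataset: str) -> list[dict]:
        return [{"source": "s3", "dataset": dataset}]  # stubbed result

class DataFabric:
    """Routes each dataset request to the backend that holds the data."""
    def __init__(self):
        self.catalog = {"orders": PostgresConnector(), "clickstream": S3Connector()}

    def read(self, dataset: str) -> list[dict]:
        return self.catalog[dataset].fetch(dataset)

fabric = DataFabric()
print(fabric.read("orders"))       # served from the relational store
print(fabric.read("clickstream"))  # served from object storage
```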

Data Management Strategies in Multiple Clouds
By Aditya Abeysinghe

The “cloud” has been one of the most trending paths in systems and application deployment during the past few years. The cloud has built-in functions for computation, analysis and networking, and it dominates almost all application-specific functions we use at present. With the growing use of cloud services, several types of clouds have been proposed. Not only single clouds but also multiple clouds have been researched. In this article, several multiple-cloud types are explained from a data and application management angle.

Public and Private clouds

Public clouds refer to cloud services provided by a cloud service provider in which computing services such as storage are shared with other users. With public clouds, multiple users may access the same resource at the same time. For example, multiple users may store data in the same file system in the same allocated storage. Therefore, privacy ...

Read More →
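
One common way to handle the privacy concern raised at the end of this excerpt, sketched here as an assumption rather than the article's own recommendation, is to encrypt data on the client before it reaches shared public storage. This uses the third-party cryptography package; the upload step is a stand-in for a real cloud SDK call.

```python
# Minimal sketch: client-side encryption before data reaches shared
# public cloud storage. Requires `pip install cryptography`; the upload
# call is a hypothetical placeholder for a cloud SDK.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a key manager
cipher = Fernet(key)

record = b"customer data that should stay private"
ciphertext = cipher.encrypt(record)

# upload_to_public_cloud(ciphertext)  # hypothetical cloud SDK call
print(cipher.decrypt(ciphertext) == record)  # True: the round trip works
```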

Decentralized Web: Consumer protector over oligarchy
By Aditya Abeysinghe

The current web is called Web 2.0 and is an improvement over the first web. This web, which we use for blogging, email, chat and even basic read-only content, changed the web into a far more user-involved medium. It also made humans more digital, with newer devices that support it. However, the main drawback of the current web is the centralized control of a few organizations, which affects the privacy of users. Thus, a newer web, Web 3.0, known as the decentralized web, is being researched to address the drawbacks of the current web.

Generations of the web

Web 1.0 was the first generation of the web. It is known as the ‘read-only web’ because content could only be viewed, not modified. For example, a website that shows products and their descriptions is a Web 1.0 website. The early users had seldom ...

Read More →

Voice generation using text: A deep-learning method
By Aditya Abeysinghe

Using text to generate speech similar to the human voice is the main function of a text-to-speech (TTS) system. The process of converting text to speech is known as speech synthesis. Recorded speech is used to generate new speech, based on the input to the TTS. Since the 1960s, several TTS systems have been developed for speech synthesis. However, these systems have several issues, which led to the use of deep learning methods to synthesize speech.

Current methods

Two main methods exist for speech synthesis in traditional systems: concatenative and parametric. In concatenation-based synthesis, the waveforms of the speech are concatenated to produce a speech stream. This type uses a waveform database to store and retrieve recorded speech. The speech appropriate for each piece of supplied text is selected and joined to the stream to produce the final speech. In ...

Read More →
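
The concatenative method described above can be sketched in a few lines; the ‘waveform database’ here is just a dict of synthetic numpy arrays standing in for recorded clips:

```python
# Minimal sketch of concatenation-based synthesis: look up a stored
# waveform for each text unit and join them into one speech stream.
# The "database" is a dict of fake sine-wave clips, not real recordings.
import numpy as np

SAMPLE_RATE = 16_000
waveform_db = {
    "hello": np.sin(np.linspace(0, 440, SAMPLE_RATE // 2)),
    "world": np.sin(np.linspace(0, 220, SAMPLE_RATE // 2)),
}

def synthesize(text: str) -> np.ndarray:
    """Concatenate the stored waveform of each known word in the text."""
    units = [waveform_db[w] for w in text.lower().split() if w in waveform_db]
    return np.concatenate(units) if units else np.zeros(0)

speech = synthesize("hello world")
print(speech.shape)  # one continuous stream built from two stored clips
```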

The next generation of computing: DNA Computing
By Aditya Abeysinghe

Silicon-based microprocessors changed the digital world. Data processing in devices from IoT (Internet of Things) gadgets to supercomputers is handled by these tiny electronic chips. Early microprocessors had limited processing speed, yet at present even the smallest devices can process billions of digital operations within seconds. However, with growing computational needs, there is a limit to the capacity these chips can provide. Therefore, a new type of processing has long been considered as a solution to computation demands.

How DNA Computing began

DNA (Deoxyribonucleic acid) computing was first described in 1994, when Leonard Adleman, a computer scientist at the University of Southern California, showed how DNA could be used to solve a version of the “travelling salesman” problem. Also referred to as the directed Hamiltonian path problem, it asks for a route between a number of cities such that each city is ...

Read More →
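
For readers unfamiliar with the problem Adleman encoded in DNA, here is a tiny brute-force version of the directed Hamiltonian path question on a made-up graph; Adleman's contribution was solving such an instance with molecules rather than a CPU:

```python
# Brute-force sketch of the directed Hamiltonian path problem: does a
# route exist that visits every city exactly once? The graph below is a
# small invented instance, not Adleman's original seven-city graph.
from itertools import permutations

cities = ["A", "B", "C", "D"]
edges = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")}  # directed edges

def hamiltonian_path_exists() -> bool:
    for order in permutations(cities):
        # check that every consecutive pair of cities is a directed edge
        if all(pair in edges for pair in zip(order, order[1:])):
            print("Found path:", " -> ".join(order))
            return True
    return False

hamiltonian_path_exists()  # prints: Found path: A -> B -> C -> D
```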

Analytics to the next step: Augmented analytics
By Aditya Abeysinghe

The traditional process of data analysis involves obtaining data from a raw source, preprocessing it, and then analyzing it to make business decisions. However, this process requires data scientists and data analysts to handle an organization's data. Many small and medium enterprises, which have less capital to invest at these two ends, lack the talent to profit from analysis and analytics of their data. Augmented analytics is a new method that uses AI (Artificial Intelligence) to analyze data and report on the results found.

What augmented analysis means for users

For example, consider an organization that has to decide on reaching a new target market. It may consider what customers who purchase from competitors look for when purchasing products; it may consider the sales or profits of competitors; or it may consider the factors that drive success in the target market, ...

Read More →
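
As a toy sketch of what “augmented” analysis automates, assuming invented data and a deliberately naive outlier rule, the tool itself scans the data and phrases findings for the user instead of an analyst writing each query:

```python
# Toy sketch of automated insight generation: scan a table and surface
# findings in plain language. The data and the one-standard-deviation
# threshold are invented for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "East", "West"],
    "revenue": [120, 95, 310, 88],
})

def auto_insights(df: pd.DataFrame) -> list[str]:
    mean, std = df["revenue"].mean(), df["revenue"].std()
    insights = []
    for _, row in df.iterrows():
        if abs(row["revenue"] - mean) > std:  # naive outlier rule
            insights.append(f"{row['region']} revenue ({row['revenue']}) is unusual")
    return insights

print(auto_insights(sales))  # flags East as the outlier region
```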

Building your own app: Self-service application integration
By Aditya Abeysinghe

In a previous article, I described composable enterprises. In summary, a composable enterprise handles business in a modular approach, where blocks of processes are added or removed based on business requirements. However, if we go a level further down, a module consists of an application, which in turn consists of components integrated together. With the growing demands on and changes in applications today, building apps and integrating them to create systems cannot be handled by developers alone. Therefore, self-service integration is seen as feasible in such scenarios.

What makes self-service integrations a hot topic?

At present, almost all digital systems use software components for various functions. From reporting to monitoring these systems, the use of software applications is ubiquitous. With continuous monitoring required, a team is needed to develop these components and troubleshoot issues. Large systems require several ...

Read More →
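
One way to picture self-service integration, sketched here with invented component names rather than any real integration platform, is a pipeline wired from a declarative description that a non-developer could edit:

```python
# Illustrative sketch: components are wired from a declarative pipeline
# description instead of hand-written glue code, so non-technical users
# can rearrange them. All component names are invented.

COMPONENTS = {
    "fetch_orders": lambda data: data + ["order-1", "order-2"],  # stub data source
    "to_report":    lambda data: f"report({len(data)} items)",   # stub sink
}

# A non-technical user could edit this list directly to change the flow.
pipeline = ["fetch_orders", "to_report"]

def run(steps: list[str]):
    data = []
    for step in steps:
        data = COMPONENTS[step](data)  # each component transforms the data
    return data

print(run(pipeline))  # -> report(2 items)
```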

Explainable AI: How do AI models provide results?
By Aditya Abeysinghe

With the increased use of ‘bot’-based programs at present, AI (Artificial Intelligence) has become an essential component of many functions. AI-based software is costly and time consuming to build because of the multiple training cycles involved. With such costly inclusions in businesses, an important question is whether the results of these AI models can be trusted. Explainable AI is a component used to explain why a model's results and inner processes can be trusted, and how it produces such results.

The main disadvantage of most AI models is the hidden nature of their inner behavior. Even the developers of AI models sometimes cannot justify how these models behave under different inputs. However, analysts analyzing results from these models need to explain properly to clients how the models produce these results under certain conditions. Therefore, a proper approach to explain how these ...

Read More →
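
As one concrete explainability technique, not necessarily the one the article goes on to describe, permutation importance measures how much a model's score drops when each input feature is shuffled, hinting at which inputs drive its results. This sketch uses scikit-learn on synthetic data:

```python
# Permutation importance: shuffle one feature at a time and measure how
# much the model's score degrades; larger drops indicate more
# influential features. Uses scikit-learn with synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")  # higher = more influential
```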