By Pragyansmita Nayak
The term AI is so prevalent today that you would have to be living under a rock not to have heard it. A large part of its use has, in fact, been misuse – either to create sensational news articles or to generate rapid interest in a given product or application – and it is often incorrectly conflated with any system that is automated or has a rules-based engine as part of its inner workings.
There is no general definition of intelligence – oddly enough, not even from the fields of psychology and philosophy, despite their centuries of deep study of human consciousness. The general theory of artificial intelligence (AI) simplifies matters by equating intelligence with human-like intelligence that is machine-implementable. Attributes of a smart system include some form of short- and long-term memory, the ability to handle a sensor system, some motor-skills coordination, and, in extreme cases, capacities for motivation, thinking, and/or consciousness. An AI solution today does not possess all of these traits, but it qualifies as an intelligent system if it exhibits one or more of them in combination – for example, if the system has a learning algorithm that improves performance over time without any external assistance.
The foundation of AI is built on three concepts: automata, context-free grammars, and the Imitation Game – the last invented by the much-revered Alan Turing and discussed in his paper “Computing Machinery and Intelligence.” In the 1950s, the concept of the “automaton,” or “self-acting” machine, emerged to describe a machine able to operate on its own based on certain rules. The Imitation Game involves a partition with a person designated as a listener on one side and a person and a robot on the other. The human and the robot speak at different times, and if the listener is unable to discern the human voice from the non-human one, the robot can be said to have passed the “Turing Test.”
Programming intelligence is not there yet – today, a smart, performant system largely involves a store of knowledge, the ability to learn from experience, and improvement over time without any manual intervention. Each of these components needs to be advanced individually, as do the interface mechanisms for seamless integration with any form of dataset (numeric vs. categorical, structured vs. unstructured, and all the other categorizations) and knowledge representation (rules, decision trees, neural networks, deep neural nets, etc.). The search is on for the panacea of a “master algorithm” as the ultimate convenience. Additionally, big data technologies that bring compute to the data and overcome network and system resource limitations will truly harness the power of data to solve some of the most challenging problems – those that benefit from automated learning and pattern detection for concept learning and fusion.
Representation learning, or feature learning, is the subdiscipline of machine learning concerned with extracting features – that is, learning a useful representation of a dataset. The related subdiscipline of transfer learning focuses on a machine learning algorithm’s ability to improve its learning on one dataset through previous exposure to a different one. These are a few examples of techniques that increase the reusability of the artifacts of a learning process and help practitioners focus on their specific problem statement.
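As a toy sketch of these two ideas (not from the article; the function names and synthetic data are hypothetical), one can learn a low-dimensional linear representation from one dataset with a PCA-style decomposition, then reuse that learned artifact to encode a different dataset instead of learning from scratch:

```python
import numpy as np

def learn_representation(X, k):
    """Representation learning: find the top-k principal
    directions of the rows of X via the SVD (PCA-style)."""
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions of the data.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return Vt[:k]                       # shape (k, n_features)

def encode(X, components):
    """Project new data onto a previously learned representation."""
    return (X - X.mean(axis=0)) @ components.T

rng = np.random.default_rng(0)
# "Source" dataset: 200 samples in 5 correlated features.
source = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
components = learn_representation(source, k=2)

# "Target" dataset in the same feature space: transfer the
# learned representation rather than relearning it.
target = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 5))
features = encode(target, components)
print(features.shape)                   # (50, 2)
```

Real transfer learning (e.g., fine-tuning a pretrained neural network) is far richer, but the reusable-artifact pattern is the same: a representation learned once is carried over to a new problem.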
An exemplary data-focused solution should focus primarily on defining the problem statement and the desired answers; finding the algorithm, representation, and methodologies to implement an automated solution is secondary. This process in turn enables step-by-step, systematic growth from descriptive analytics to diagnostic analytics, then the jump to more well-defined and advanced predictive analytics (and, from there, graduation to prescriptive and cognitive analytics). This will prove the efficacy of data in solving real-world problems and as a building block for related, higher-order problems.
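To make the first two rungs of that ladder concrete, here is a minimal sketch (with made-up monthly figures, purely for illustration): descriptive analytics summarizes what happened, while predictive analytics extrapolates what is likely to happen next.

```python
import numpy as np

# Hypothetical monthly sales figures for one year.
months = np.arange(1, 13)
sales = np.array([10, 12, 13, 15, 16, 18,
                  19, 21, 22, 24, 25, 27], dtype=float)

# Descriptive analytics: what happened?
print(f"mean={sales.mean():.1f}, total={sales.sum():.0f}")

# Predictive analytics: what is likely to happen next?
# Fit a simple linear trend and extrapolate one month ahead.
slope, intercept = np.polyfit(months, sales, deg=1)
forecast = slope * 13 + intercept
print(f"month-13 forecast={forecast:.1f}")
```

Diagnostic, prescriptive, and cognitive analytics each add further machinery (causal analysis, optimization, reasoning), but they build on exactly these simpler summaries and models.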