
The Ever-changing Definitions of AI – an Elusive Pursuit

Over the course of several decades, we have watched the landscape of Artificial Intelligence evolve from simple rule-based systems to complex learning algorithms. As our understanding of AI has transformed, so has our definition of the term itself.

Since John McCarthy's 1955 coinage of "artificial intelligence" as "the science and engineering of making intelligent machines," the field has sprawled through a succession of definitions. Today, the proverbial pendulum of this mutable identity has swung toward "machines that mimic human problem-solving" and "systems that simulate human decision-making processes," along with the distinctions between narrow, general, weak, and strong AI.

In the early days of AI, we focused on programming machines to perform clever tasks like playing chess, something that is no longer considered AI at all. Our journey through AI milestones begins in the 1950s and 1960s, with an emphasis on symbol manipulation and logical reasoning. The 1970s and 1980s ushered in knowledge-based systems like MYCIN and XCON, while the 1980s and 1990s saw the rise of connectionism and neural networks. Fast forward to the 2000s and 2010s: deep learning and reinforcement learning revolutionized AI performance across a wide range of tasks. Currently, large-scale language models like OpenAI's GPT series showcase the pinnacle of AI, but their status is precarious. Critics dismiss such systems as mere "curve fitting," questioning the depth and nature of the knowledge machine learning models acquire, even as deep learning and reinforcement learning continue to make significant strides in addressing those concerns.

I can't help but contrast the ease of defining machine learning with the ever-changing landscape of artificial intelligence (AI) definitions. Machine learning offers mathematical precision, while AI remains a moving target, with goalposts perpetually shifting.

The goal of machine learning can be defined mathematically as:

h* = argmin_{h ∈ H} (1/n) Σ_{i=1}^n L(y_i, h(x_i))

where h* is the hypothesis in the hypothesis space H that minimizes the average loss L over the n training examples (x_i, y_i). The objective is to find the hypothesis that best approximates the true relationship between input features and output variables. AI definitions, goals, and objectives, however, remain a moving target.
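To make the contrast concrete, here is a minimal sketch of that objective, empirical risk minimization, in Python. It assumes a small finite hypothesis class H and squared-error loss; the function names and the toy data are purely illustrative, not taken from any particular library.

```python
import numpy as np

def squared_loss(y_true, y_pred):
    # L(y, h(x)): squared-error loss for a single example
    return (y_true - y_pred) ** 2

def empirical_risk(h, xs, ys, loss=squared_loss):
    # (1/n) * sum of L(y_i, h(x_i)) over the dataset
    return np.mean([loss(y, h(x)) for x, y in zip(xs, ys)])

def erm(hypotheses, xs, ys, loss=squared_loss):
    # h* = argmin over H of the empirical risk
    return min(hypotheses, key=lambda h: empirical_risk(h, xs, ys, loss))

# Toy example: hypotheses are linear functions h(x) = w * x for a grid of w values.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + np.random.normal(0, 0.1, size=xs.shape)
H = [lambda x, w=w: w * x for w in np.linspace(0.0, 4.0, 41)]

h_star = erm(H, xs, ys)
print("empirical risk of h*:", empirical_risk(h_star, xs, ys))
```

The point of the sketch is the precision of the target: given a dataset, a loss, and a hypothesis class, "success" is fully specified. No comparably crisp objective exists for "intelligence."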

The AI community does employ various tests and benchmarks to evaluate AI's capabilities, such as the Turing Test, the Chinese Room thought experiment, the Winograd Schema Challenge, Raven's Progressive Matrices, and numerous competitions, and the AI Index provides a comprehensive overview of AI's progress across multiple dimensions. Despite these evaluations, however, the perfect AI definition may always reside in the uncanny valley until artificial general intelligence (AGI) is achieved. As AI keeps evolving, we must embrace the chase, ever striving toward the elusive AGI. So, as we marvel at GPT and transformer models today, we eagerly await the next leap forward in AI's ever-changing journey.

AI's elusive essence is perhaps the dilemma of the undefinable: a meaning in constant metamorphosis, from chess masters to GPT-4.
