Introduction
The term Artificial Intelligence (AI) seems to dominate our lives today, powering everything from smartphone assistants to sophisticated business analytics. For many in Malaysia, the rise of tools like ChatGPT and Generative AI feels like a sudden boom. But when exactly was AI created, and what led to its current massive popularity?
A Brief Summary of AI’s Journey
The formal field of AI was created in 1956 at the historic Dartmouth Workshop, born from early optimism. This initial ‘Golden Age’ was challenged by two periods of disappointment known as the ‘AI Winters’ when funding dried up. The field finally gained global popularity in the early 2010s, thanks to the explosion of Big Data, vastly increased computing power, and the breakthrough of Deep Learning algorithms. Today, AI is an unstoppable force, transforming industries across Malaysia and the world.
1. What is Artificial Intelligence?
At its core, Artificial Intelligence is the capability of a machine or a computer program to simulate intelligent human behaviour. This involves performing complex tasks such as learning, problem-solving, decision-making, and natural language understanding.
AI is about creating systems that can reason and act intelligently, not just follow simple pre-programmed instructions. It seeks to replicate, and eventually exceed, the cognitive abilities of the human brain.
The Three Main Types of AI: Narrow, General, and Super
AI is broadly categorised into three types based on capability:
- Narrow AI (or Weak AI): This is the only type that exists today. It is designed and trained to perform a specific task, like recommending a YouTube video, translating languages, or detecting spam.
- General AI (or Strong AI): This is a hypothetical machine with the ability to understand, learn, and apply its intelligence to any problem, much like a human being. We are not there yet.
- Super AI: This is a hypothetical intelligence that surpasses human intelligence in virtually every field, including scientific creativity and social skills.
Machine Learning Vs Artificial Intelligence
It is important to differentiate between the terms as they are often used interchangeably:
- Artificial Intelligence (AI) is the overarching goal or discipline of building intelligent machines.
- Machine Learning (ML) is a subset and technique of AI. ML systems use algorithms that allow computers to learn directly from data without being explicitly programmed.
It was ML that truly powered AI’s modern popularity, enabling systems to automatically improve their performance through experience.
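The distinction can be made concrete with a small sketch. Rather than hard-coding the rule y = 2x + 1, the program below estimates it from example data using gradient descent, the same 'learn from experience' principle that underpins modern ML (a minimal illustration in plain Python, not any particular library's API):

```python
# "Learning from data": instead of hard-coding the rule y = 2x + 1,
# the program estimates it from examples.
def fit_line(points, lr=0.01, steps=5000):
    """Fit y = w*x + b to (x, y) pairs by gradient descent."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Samples of the hidden rule the program was never told about.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = fit_line(data)
print(round(w, 2), round(b, 2))  # prints: 2.0 1.0
```

The program was never given the rule explicitly; it improved its estimate of w and b purely from the examples, which is the essence of what separates ML from traditional programming.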
2. The Exact Moment of Creation: The Dartmouth Workshop of 1956
The intellectual groundwork for AI was laid earlier by figures like Alan Turing, who posed the fundamental question: “Can machines think?” in 1950. However, the formal discipline needed a name and a dedicated effort to bring its concepts to life.
The main driving force was John McCarthy, a young mathematics professor, who, along with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, became the key architects of the field.
The Summer Project’s Revolutionary Proposal
In 1955, McCarthy proposed a summer research project based on a bold conjecture:
“That every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
This optimistic view set the incredibly ambitious tone for the entire field of AI research.
Why the Dartmouth Event is Called the Birthplace of AI
The Dartmouth Summer Research Project on Artificial Intelligence was held in 1956 at Dartmouth College in New Hampshire, USA.
During this workshop, John McCarthy coined and introduced the term “Artificial Intelligence” to define the new area of study. This single, seminal event brought together the key thinkers and laid the foundational principles and aspirations, officially marking the birth of AI as a dedicated academic field.
3. High Hopes and First Successes: The Golden Age of Early AI
Following Dartmouth, the early years were filled with breakthroughs that fuelled optimism:
- The Logic Theorist (created in 1955–56 by Newell, Shaw, and Simon) was a program that could prove mathematical theorems. It is widely considered the first true AI program.
- ELIZA (1966) was developed by Joseph Weizenbaum. It was one of the first chatbots, able to simulate conversation by reflecting user input, and it showed how readily humans could attribute intelligence to a machine.
Early Optimism and Overpromises
The initial successes led to incredible optimism among researchers. Predictions were grandiose, with some believing that a machine with general human-level intelligence was only a decade away.
This period received significant funding, primarily from government defence agencies like the US Department of Defense, keen on the potential military applications.
The Dominance of Symbolic AI
The research during this age was dominated by Symbolic AI, sometimes referred to as ‘Good Old-Fashioned AI’ (GOFAI).
This approach assumed that human intelligence could be reduced to the manipulation of symbols and rules. While successful for specific, well-defined problems like chess, it proved fragile and ineffective when dealing with the ambiguity and complexity of the real world. This reliance on rigid rules would eventually lead to major issues.
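To see why rule-based systems are fragile, consider a toy forward-chaining reasoner in the GOFAI spirit (a hypothetical sketch for illustration, not a reconstruction of any historical system). All of its 'knowledge' lives in hand-written if-then rules over symbols:

```python
# A toy symbolic (GOFAI) reasoner: knowledge is a set of hand-written
# if-then rules over symbols, with nothing learned from data.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def infer(initial_facts):
    """Forward-chain: keep applying rules until no new symbols appear."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = infer({"has_feathers", "lays_eggs", "can_fly"})
print(sorted(facts))
```

The system happily derives "is_bird" and "can_migrate" for this input, but a bat, which flies and migrates, matches no rule at all, so the system concludes nothing about it. Every real-world exception demands yet another hand-written rule, which is exactly how this approach became unmanageable outside narrow, well-defined problems.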
4. The Challenges and the Quiet Years: The AI Winters
The early optimism crashed in the mid-1970s. The programs struggled with simple real-world tasks and required massive amounts of data and processing power that simply did not exist.
A pivotal moment was the 1973 Lighthill Report in the UK, which heavily criticised the lack of real-world progress in AI research. This report led to a dramatic reduction in government funding, triggering the first period of disillusionment and reduced investment known as the first AI Winter.
Limitations of Early Computing Power
A major stumbling block was the sheer inadequacy of the hardware at the time.
- Early computers were slow and limited in memory.
- Tasks that seem trivial today, like image recognition, demanded computational resources that were simply unavailable at any reasonable cost.
This lack of computing muscle severely restricted the complexity of algorithms that could be effectively developed and tested.
The Collapse of the Early Expert Systems Market
A brief resurgence in the 1980s came with Expert Systems, which were rule-based programs designed to mimic the knowledge of a human expert in a narrow domain.
However, these systems proved commercially unsustainable:
- They were extremely costly to build and maintain.
- They were rigid and couldn’t learn outside their programmed rules.
The eventual failure of this commercial market led to the second AI Winter in the late 1980s and early 1990s, forcing AI out of the public spotlight for nearly two decades.
5. How Big Data and Computing Power Changed Everything

Even during the ‘winters’, dedicated researchers continued to refine neural networks, computational models loosely inspired by the human brain. The popularisation of the backpropagation algorithm in 1986 was a key moment, allowing these networks to be trained far more efficiently by correcting their internal errors layer by layer.
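The idea behind backpropagation can be sketched in a few dozen lines: the error at the output is propagated backwards so that the hidden layer's weights are corrected too. The example below trains a tiny two-layer network on XOR, a problem a single-layer network famously cannot solve (a pedagogical sketch in plain Python, not a production implementation):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: output 1 only when exactly one input is 1.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: output error first, then the share of that
        # error assigned to each hidden unit via the output weights.
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w2[j] -= lr * d_y * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
        b2 -= lr * d_y
after = loss()
print(f"loss before training: {before:.3f}, after: {after:.3f}")
```

The key step is `d_h`: without backpropagation there is no principled way to decide how much each hidden weight contributed to the output error, which is why multi-layer networks were so hard to train before this technique spread.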
The Importance of Big Data and Massive Storage
The true turning point that ended the AI winters was the simultaneous rise of two things in the 2000s:
- Big Data: The internet, smartphones, and digital devices started generating unprecedented volumes of information. Neural networks thrive on massive datasets.
- Affordable Computing: The arrival of powerful, off-the-shelf Graphics Processing Units (GPUs), originally designed for video games, provided the cheap, parallel processing power needed to train complex AI models quickly.
The Breakthrough of Deep Blue Vs Garry Kasparov
A significant milestone that captured global attention was IBM’s Deep Blue defeating the reigning World Chess Champion, Garry Kasparov, in 1997.
Though Deep Blue was a highly specific, brute-force AI, its victory had a huge psychological impact. It demonstrated that a machine could finally beat humanity’s best in a complex game of intellect, renewing both commercial and academic interest in AI research.
6. The Popularity Boom: Deep Learning and Mainstream Adoption
The 2010s ushered in the modern AI boom. Researchers started building neural networks with many more layers, a technique called Deep Learning. This approach allowed systems to automatically learn complex features from raw, unstructured data.
A defining moment was the 2012 ImageNet competition, where a Deep Learning model (AlexNet) dramatically outperformed all rivals in image recognition. This victory proved the immense power of the paradigm shift.
The Rise of Image Recognition and Natural Language Processing
Following the ImageNet success, AI rapidly advanced in two critical areas:
- Computer Vision: Image and video recognition became accurate enough to be used in autonomous vehicles and advanced security systems.
- Natural Language Processing (NLP): Advanced NLP gave rise to consumer-facing tools like Amazon Alexa and Apple Siri, finally bringing practical AI directly into millions of homes, including those in Malaysia.
Generative AI’s Impact: ChatGPT and Beyond
The latest and most explosive wave of popularity began in the early 2020s with the widespread emergence of Generative AI and Large Language Models (LLMs) such as ChatGPT and Bard.
This new generation of AI can create entirely new text, images, and code that are often indistinguishable from human-created work. This leap in creativity and accessibility has transformed how businesses operate and how individuals work, firmly cementing AI’s place in the global public consciousness.
7. Key Milestones in AI History
To better understand the journey from concept to consumer product, here is a comparison of key moments that shaped the field. The table highlights how the pace of progress has accelerated, especially in the last decade.
| Year | Key Milestone | Significance for AI’s Evolution |
| --- | --- | --- |
| 1950 | Turing Test Proposed | Provided a philosophical benchmark for machine intelligence. |
| 1956 | Dartmouth Workshop | Formal Birth of Artificial Intelligence as a field. |
| 1974 | First AI Winter | First major setback due to unrealistic expectations and funding cuts. |
| 1997 | IBM Deep Blue vs Kasparov | First machine to beat a world chess champion, signalling AI’s power in specific domains. |
| 2012 | AlexNet Wins ImageNet | Breakthrough in Deep Learning, kickstarting the modern AI boom. |
| 2016 | AlphaGo Beats Go Champion | Showcased the power of Deep Reinforcement Learning for highly complex, non-linear problems. |
| 2022 | ChatGPT Public Launch | Ushered in the era of Generative AI, marking the field’s highest level of public popularity and mainstream commercial use. |
8. The Future of Artificial Intelligence: A Look Ahead for Malaysia

As Malaysia accelerates its push towards a high-income digital economy under initiatives like the MyDIGITAL framework, understanding the evolution of AI is crucial. It helps us appreciate not just the current capabilities, but also the perseverance that brought us here, providing a vital perspective on how we adopt and govern these powerful technologies for our national benefit.
AI’s Role in Driving Malaysia’s Digital Economy
AI is not just a global trend; it is a vital catalyst for Malaysia’s future economic growth.
The Malaysian government is actively promoting AI adoption across sectors like manufacturing, healthcare, and finance to boost productivity and global competitiveness. For instance, AI in manufacturing enables predictive maintenance, while in finance, it enhances fraud detection and customer service, aligning with national digital goals.
Ethical Considerations and Governance in Southeast Asia
As AI becomes more powerful, ethical implementation is critical. For Malaysia, this includes several key areas:
- Ensuring the technology is deployed responsibly.
- Addressing concerns around job displacement through targeted upskilling initiatives for the workforce.
- Cultivating local talent and developing clear national policies to ensure fairness, transparency, and accountability in AI-driven systems.
The Next Steps in AI Innovation
The future points toward more sophisticated systems known as Agentic AI: autonomous systems capable of completing complex, long-term tasks without constant human intervention.
The integration of AI with networks of connected physical devices, known as the Internet of Things (IoT), will also create smarter cities and infrastructure. This seamless blend of digital and physical will further Malaysia’s journey toward becoming a truly digital nation.
Conclusion
Artificial Intelligence was formally conceived in 1956 but achieved its current level of global popularity in the 2010s, driven by deep learning and the unprecedented availability of data and computing power. Its history is a testament to resilience, moving through periods of both spectacular triumph and profound disappointment. Today, AI is no longer a futuristic concept, but a powerful technology actively shaping industries and driving Malaysia’s path towards digital excellence.