We know what intelligence is, yet it’s hard to agree on an all-encompassing definition. By extension, Artificial General Intelligence (AGI) is similarly hard to define. It represents the next jump in AI systems: something with problem-solving capabilities similar to those of a human. The definition is already evolving, with eyes now set on “superintelligence” to represent the all-capable machine, a term that currently seems more linked to a contractual clause than anything else. As Connie Loizos at TechCrunch reports:
“…a reported clause in OpenAI’s contract […] cuts off Microsoft’s access to OpenAI’s tech if the latter develops so-called artificial general intelligence (AGI)”
Whatever the term ends up being for this next jump, I think we’ll know it when we see it. It was once thought that a computer could not beat a human at chess; then it did, and the benchmark changed: the goalposts moved further away. This has repeated throughout history with other games (e.g. Go), art generation, and natural language understanding. The same is possible with AGI: we will reach a level and collectively agree that it’s not quite there, that it falls short on newly defined parameters. We measure many factors, such as quality, speed, sustainability, and price, and continue to grade intelligence and systems on a curve.
Striving to match human intelligence makes sense today; after all, there are plenty of tasks requiring human input that I would rather not do. But what happens when AI goes beyond human intelligence? Christian Edwards and Katie Hunt at CNN quote Geoffrey Hinton, who won the 2024 Nobel Prize in Physics for foundational work in machine learning:
“It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us”
— Geoffrey Hinton
The intellectual revolution that Hinton alludes to would be the introduction of a system capable of thought beyond human comprehension, which is difficult to conceptualise. Any attempt to measure such a system quickly falls back on existing human metrics: speed, efficiency, sustainability, all of which may be irrelevant. It could see us humans as having a mere subset of its skills, turning our perceived pinnacle of intelligence into a human limitation in a digital intelligence era. We wouldn’t know where the goalposts are anymore.
We don’t yet have an all-encompassing system that can solve our problems and tie our shoes, but today’s large language models (LLMs) are capable enough to change and improve workflows, at least if you can formulate a prompt that gets you what you’re after. As the goalposts continue to move, improve your positioning by adapting to the fast rate of change. Li Jiang at Stanford AIRE outlines three pillars for adapting to the world we are now in:
- Understand how AI systems work today, remove any trepidation you have, and learn how systems like ChatGPT formulate responses.
- Understand the difference between humans and AI systems: know where machines excel and where humans are better.
- Apply the first two points to use AI systems in your workflow efficiently: do the work that requires human input while pushing the rest to AI systems.
I recommend watching the full 7-minute video, Survival Strategies in the Era of AI, to learn more.
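To make that third pillar concrete, here is a minimal sketch of pushing a routine task to an LLM through a structured prompt, using the OpenAI Python SDK. The model name and the editing task are illustrative assumptions, not a recommendation of a specific setup:

```python
# A minimal sketch: delegating a routine editing task to an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model and task are placeholders, substitute whatever you use.
from openai import OpenAI

client = OpenAI()

# Separating role, constraints, and task tends to get you what you're after
# more reliably than a single unstructured request.
prompt = (
    "You are a copy editor.\n"
    "Constraints: keep the author's voice, British English, under 100 words.\n"
    "Task: tighten the following paragraph without changing its meaning.\n\n"
    "Paragraph: <paste paragraph here>"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The human input here is the brief itself: deciding the role, the constraints, and what “done” looks like, which is exactly the kind of judgement the second pillar says still belongs to us.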
For most people, there seems little use in dwelling on the definition of AGI or superintelligence, whereas learning about existing systems and how to adapt to them has seemingly only upsides. Simon Willison ends his 2024 LLM review with advice:
“Those of us who understand this stuff have a duty to help everyone else figure it out.”
Here is my input whilst we wait for the next big jump (and for the goalposts to move).