2023 was a breakout year. Over the past year, generative AI has exploded from a conceptually interesting idea into a mass-adoption technology. Its impact on sectors like advertising is already immense. Somewhere between 75% and 85% of companies now use some form of AI in their customer interactions, primarily chatbots and generative AI.
Generative AI is becoming a crucial tool for creators, creatives, and anyone trying to convey ideas or to reduce the work involved in making those ideas reality. The way generative AI is used, both at mass scale and within corporations, is evolving as rapidly as the technology itself, and it is truly fascinating. With equal rapidity, it is becoming part of our professional and personal lives.
Observations from a ServiceNow Customer Event
I attended the ServiceNow customer event in Toronto earlier this month, and in many ways, it seemed to exemplify the current state of AI: part novelty, part industry-changing functionality. Novelty: attendees were asked to choose a mood on a mood wheel, and the music in the room would shift accordingly. Unfortunately, the shifts were barely discernible, and even if they had been bigger, the exercise was disappointingly devoid of substance. We learned nothing. There was no delight. One of generative AI’s best selling points is its capacity to delight, so it’s disappointing that the execution wasn’t better; the potential for conference use is immense.
Deep Advancements in Customer Service
On the other hand, it is clear what an impact AI is having in core parts of the ServiceNow software suites. The new AI-enabled customer service data capabilities are impressive. One of the most striking demonstrations was how the application of AI to the data liberates users from the confines of inflexible software. There are now so many different ways to interact with and drill down into data, and the positive impact on teams using that software will be huge. (I’d love to see that data.)
We watched an interaction where a customer sought a refund after purchasing coffee with her loyalty points. Such transactions, as will be familiar to those in process design, can be complex. Multiple databases. Different access permissions. Approvals. However, with the entire customer service dataset loaded into a proprietary LLM, AI could access multiple datasets to provide the customer with a near-instant refund experience — a significant advancement. The ServiceNow demo had other impressive features that I’ll discuss in a future article. These demonstrated tangible, high-value AI outcomes. And a little bit of corporate delight!
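The refund demo above spans multiple datasets, access permissions, and approvals. As a purely hypothetical sketch (the data, names, and permission model here are all invented, and the real ServiceNow implementation is certainly far more involved), the orchestration an LLM-backed service agent performs might look something like this:

```python
# Hypothetical sketch of the orchestration an AI service agent might
# perform: read records from several datasets, check the agent's own
# permissions, and either issue the refund or escalate to a human.
# All names and data are invented for illustration.

LOYALTY_DB = {"cust-42": {"points": 120, "history": [("coffee", 50)]}}
PERMISSIONS_DB = {"service-agent": {"loyalty": "read", "refunds": "write"}}

def refund_loyalty_purchase(agent: str, customer: str, item: str) -> dict:
    """Refund a loyalty-point purchase if the agent may read loyalty
    data and write refunds; otherwise escalate for human approval."""
    perms = PERMISSIONS_DB.get(agent, {})
    if perms.get("loyalty") != "read" or perms.get("refunds") != "write":
        return {"status": "escalated", "reason": "insufficient permissions"}
    account = LOYALTY_DB[customer]
    for purchased_item, points in account["history"]:
        if purchased_item == item:
            account["points"] += points  # return the spent points
            return {"status": "refunded", "points_returned": points}
    return {"status": "not_found"}
```

The point of the sketch is the shape of the win: what used to require a human to hop between systems collapses into a single automated pass, with escalation as the fallback when permissions don't line up.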
Skepticism and Challenges: A Shift in Tone
With this and many other stories like it, it would seem all is quite rosy in the world of AI. Widespread adoption, billions in revenue, customer delight, explosive interest, great tech – what could be better? Yet, discussions with those at the forefront of AI reveal a different tone lately: one of confusion, skepticism, and growing frustration about AI’s trajectory. Why is this shift happening? There are three primary reasons, or what I will call ceilings, that AI is hitting up against: hallucinations, resources, and breakthroughs. All three of these ceilings must be removed or AI will fail to achieve its full potential. Can they be? We don’t know yet.
It’s the Hallucinations, Man
First, the issue of AI producing hallucinations or inaccurate outputs is more than just a minor annoyance.
Someone I spoke with recently described an AI system’s output as “95% accurate but still useless.” These glitches can be attributed to the incomplete or misdirected training of the AI system, which in turn affects the integrity of its output. An AI system’s reliability is only as good as the dataset it was trained on, but hallucinations suggest even a perfect dataset may not be enough. With hallucination management where it is, AI is not reliable enough to produce a trusted medical analysis or diagnosis, for example. A new Chinese technique, Woodpecker, bills itself as a hallucination killer and will be on the market soon.
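The “95% accurate but still useless” complaint has a simple arithmetic core: per-answer reliability compounds across a workflow. A toy calculation (the 95% figure and the 20-step workflow are illustrative numbers, not from any real system) shows why:

```python
# Toy illustration: if each AI answer is independently 95% reliable,
# the chance that an entire multi-step workflow contains no error
# shrinks geometrically with the number of steps.

def workflow_reliability(per_step: float, steps: int) -> float:
    """Probability that every step in the workflow is correct,
    assuming each step errs independently."""
    return per_step ** steps

# A 20-step process built on 95%-reliable answers succeeds
# end-to-end only about 36% of the time.
print(round(workflow_reliability(0.95, 20), 2))  # → 0.36
```

This is why "95% accurate" can genuinely mean "useless" for any task where a single wrong answer poisons the whole result.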
Rethinking AI’s “Glitches”: AI Reliability and Dataset Integrity
The hallucinations AI produces, while concerning data integrity, are fascinating from an intelligence perspective. No one knows why they happen or what they mean. Though many see them as mere glitches in the LLM training models, what if they aren’t? Most research on these hallucinations aims to prevent them, but perhaps we should also be trying to learn from them if there is learning to be gained. Regardless, they must be better understood and either harnessed for creative applications or eliminated entirely.
Linearity isn’t Where It’s At
The second ceiling is a lack of acceleration in functionality. As AI evolves, its growth has been purely linear: the more servers and chips you add, the better and faster your results, in strict proportion. Performance hinges entirely on computational power. We’ve yet to see that “aha” moment where linear computational scaling transitions to something different: an increasing efficiency — call it concentration. For now, this “concentration” remains elusive. If AI were a sauce, it’s still too watery. This apparent stagnation is shaping how people envision the future of AI. We’ve grown accustomed to a streamlined and predictable tech development trajectory, but AI defies such neat categorization. It remains a frontier of the almost entirely unknown.
Power and Consumption
The third ceiling challenge is resources. The power consumption required for AI is vast. If it continues on its current trajectory, it will soon be not just unsustainable but insupportable.
The issue of resources is similar to the one that plagues blockchain mining, where the technology’s utility is severely impacted by its resource-intensive nature. AI differs fundamentally from blockchain, however, in value, maturity and return on investment.
But it is still a huge problem. With these three ceilings, an even more fundamental question looms over many projects right now: when will we reach a point where AI evolves beyond just processing existing data, becoming something more transformative? What might that evolution look like?
Exploring the Unknown: The Future of AI
Every expert in the field grapples with these questions. It’s a quantum leap, an undefined space that we’re venturing into. Some are beginning to question whether the next evolutionary breakthrough in AI even exists or whether it’s attainable within meaningful timeframes.
Currently, we see well-funded LLMs striving for the next benchmark, the next significant innovation. But the expected advancements have not materialized. Gemini is still not out. Google seems to be drifting. OpenAI recently shelved Arrakis, a model once hailed as the next evolution of AI. Reports suggest this was because technological advancements were not keeping pace with the expected rate of investment.
The Path Toward Artificial General Intelligence (AGI)
Generative AI, while impressive, is still in its infancy. If AI is going to change the world, artificial general intelligence (AGI) must be achieved: an autonomous, operating intelligence. While rudimentary forms of this exist, they’re highly rule-based and hardly qualify as “intelligence.”
An example of a step toward a more generalized intelligence was presented by ServiceNow. However, the realization of corporate data managed by an AGI is still a distant dream, with no clear roadmap. In order for AI to fulfill its potential and perform the function generative AI is already demonstrating it can play so naturally, a technology that manages repetitive, low-skill yet necessary work is invaluable. But can it run an office? Could it cure cancer? Address climate change? Eliminate poverty? Prevent disinformation? The dreams get big when artificial general intelligence is involved.
The Need for Collaboration Over Competition
Perhaps it’s time to pivot. Instead of competing, we should collaborate. Historically, capitalism hasn’t excelled at collaboration, but our current challenges demand it. A unified approach is crucial. Resources are limited, and demand is high.
If the world’s leading AI developers are scrapping major initiatives due to insufficient results, what does this imply about the state and future of AI development? Is the industry experiencing a creative block? Why aren’t we getting anywhere with all this potential? We can and should aim for impactful, human-centric AI applications.
The AI Moonshot: Grand Visions for the Future
Let’s not merely speculate about the next AI breakthrough. Let’s envision it, design it thoughtfully, and construct it carefully, aiming to address humanity’s most pressing challenges. Isn’t that AI’s ultimate purpose — to solve problems beyond our current capabilities?
Let’s set ambitious goals: a global healthcare AI system by 2029, with comprehensive data on all conditions, diagnoses, treatments, outcomes, and real-time care improvements. Or an integrated system encompassing all libraries, academic institutions, publishers, and news outlets, powered by AI, to validate information accuracy.
Far from letting the bloom come off the rose, we need to propel this technology forward. With intention. What grand vision will move humanity?
What is our AI moonshot?