By Jurgita Lapienytė, Editor-in-Chief at Cybernews
Businesses are letting AI run unsupervised, like giving an intern big decisions instead of simple tasks – though they’re not smart enough yet.
Imagine passing by a crowded train somewhere in Bangladesh or India during peak hour. You might not know where it’s going, but since so many people are aboard, you assume they must be headed somewhere worthwhile – so you jump on too.
Does that make sense? Not really. Yet it’s no less illogical than local businesses rushing to implement some AI agent just because other companies are doing it. But do they actually know what an AI agent does, or how to decide if the benefits outweigh the risk?
The definition of an AI agent itself might give you shivers. According to IBM, an AI agent is a system that can perform tasks autonomously. AI agents provide personalized and comprehensive responses, and learn to adapt to their users over time. But to do this, they need to know the ins and outs of your organization – and that’s where the whole problem starts.
AI technology is still very nascent, and we’re not very familiar with how it works. As users, we’re suspicious of it, and the backlash against AI is growing as it violates copyright laws, puts people out of jobs, and exaggerates existing societal biases.
For businesses, it jerks the door wide open to new risks.
Here’s one curious example. A new study by researchers at Princeton University and Sentient, an AI firm, has found that AI agents are vulnerable to memory injection attacks that can then manipulate their behavior. In a nutshell, it means that an attacker can plant a fake memory into an AI agent, which it then uses to make future decisions.
The paper urged addressing the threat, as such an attack could lead to persistent, cross-platform security breaches. Along with these breaches come the loss of user trust, system integrity, and operational safety.
Malicious hackers are pursuing every opportunity to exploit our increasing reliance on AI. For example, in one well documented case, Google’s Gemini was shown to be vulnerable to long-term memory corruption.
Hackers are working hard to mess with AI agents. Given you most likely trusted an AI agent with your internal data and processes, it can become your organization’s Achilles’ heel. This issue is constantly being addressed, with security researchers relentlessly working to patch various vulnerabilities.
But there’s something much harder to patch – it’s our biases. According to this Forbes story from 2021, AI bias caused 80% of black mortgage applicants to be denied. Years go by but the problem in the industry persists: last year, researchers from Lehigh University found that LLM training data likely reflects persistent societal biases.
Many of us mistakenly believe AI is objective and unbiased because it’s just math, software, or a machine. We couldn’t be more wrong. First of all, all LLMs are taught using flawed databases, collected and organized by humans over centuries. As a result, algorithms know more about white males than people of color, women, or other historically underrepresented groups in various fields and archives.
Would you be comfortable tasking an AI agent with giving out or denying loans? What about who gets promoted? Would you trust an AI agent to screen job applicants or decide who gets an interview? Would you use AI agents to recommend bail or sentencing decisions? And if you would, would you be 100% transparent about how you are using AI?
When your robotic intern fails, will you be able to explain the mistake and the reasoning behind it? I loved how the Harvard Business Review put it – ethical nightmares multiply with AI advances.
Don’t get me wrong. I’m all for delegating tasks and leaving the most mundane and time-consuming jobs to a piece of code. But with so many organizations implementing AI at breakneck speed, I’m just worried there isn’t enough discussion around the most pressing issues.
(To digress a little, employees actually complain about their employers being too slow to approve some AI tools. As a result, they are using some algorithms secretly, which is even worse.)
On top of mounting ethical problems and security issues, the use of agentic AI inflates costs while delivering questionable value to businesses.
Gartner, an American research and advisory firm, predicts that by the end of 2027, more than 40% of agentic AI projects will be canceled. Why? Because of huge costs, unclear value, and inadequate risk controls. Experts believe most agentic AI projects are driven by hype and often misapplied. They advise pursuing agentic AI only when it offers clear value or a solid return on investment.
Be smarter than those jumping on the bandwagon out of simple fear of missing out. We introverts like to joke about the fear of being included. Your company might be humming the “fail fast” mantra, but since AI is dealing with private data, such a failure won’t be painless.
You might fail fast, but you will likely fail big, too. So, take your time, think critically, and prioritize safety over speed. Before diving in, consider these practical steps:
- Conduct thorough risk assessments to identify potential vulnerabilities and impacts.
- Start with small pilot projects to test AI agents in controlled environments before scaling up.
- Implement strong data governance policies to protect sensitive information and ensure compliance.
- Ensure transparency in AI decision-making by documenting how decisions are made and making processes auditable.
- Invest in ongoing monitoring and auditing to catch issues early and continuously improve your systems.
- Plenty of companies end up calling in human experts to clean up after AI messes which usually costs more than just hiring an expert from the start. Save yourself the headache (and extra bills) by getting a real person to double-check your AI’s work.
Remember that a lot of fuss around AI agents is nothing more than a smoke screen to disguise the fact that we are dealing with a beast we don’t know much about. So maybe we shouldn’t let it decide on its own just yet.
ABOUT THE AUTHOR
Jurgita Lapienytė is the Editor-in-Chief at Cybernews, where she leads a team of journalists and security experts that uncover cyber threats through research, testing, and data-driven reporting. With a career spanning over 15 years, she has reported on major global events, including the 2008 financial crisis and the 2015 Paris terror attacks, and has driven transparency through investigative journalism. A passionate advocate for cybersecurity awareness and women in tech, Jurgita has interviewed leading cybersecurity figures and amplifies underrepresented voices in the industry. She’s recognized as the Cybersecurity Journalist of the Year and featured in Top Cyber News Magazine’s 40 Under 40 in Cybersecurity. Jurgita has been quoted internationally – by the BBC, Metro UK, The Epoch Times, Extra Bladet, Computer Bild, and more.
ABOUT CYBERNEWS
Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber by research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, Cybernews research team identifies and safely discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinions by industry insiders with complete independence.
Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:
- Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
- Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
- Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia.
- The team analyzed the new Pixel 9 Pro XL smartphone’s web traffic, and found that Google’s latest flagship smartphone frequently transmits private user data to the tech giant before any app is installed.
- The team revealed that a massive data leak at MC2 Data, a background check firm, affects one-third of the US population.
- The Cybernews security research team discovered that 50 most popular Android apps require 11 dangerous permissions on average.
- They revealed that two online PDF makers leaked tens of thousands of user documents, including passports, driving licenses, certificates, and other personal information uploaded by users.
- An analysis by Cybernews research discovered over a million publicly exposed secrets from over 58 thousand websites’ exposed environment (.env) files.
- The team revealed that Australia’s football governing body, Football Australia, has leaked secret keys potentially opening access to 127 buckets of data, including ticket buyers’ personal data and players’ contracts and documents.
- The Cybernews research team, in collaboration with cybersecurity researcher Bob Dyachenko, discovered a massive data leak containing information from numerous past breaches, comprising 12 terabytes of data and spanning over 26 billion records.
- The team analyzed NASA’s website, and discovered an open redirect vulnerability plaguing NASA’s Astrobiology website.
- The team investigated 30,000 Android Apps, and discovered that over half of them are leaking secrets that could have huge repercussions for both app developers and their customers.