
Web Summit Vancouver: “It’s Going to Get Bad”

Security, Privacy and Identity in the AI Era, Part Two: Lessons from Industry

The current digital infrastructure in the United States reveals a sobering reality: most of the information needed to steal someone's identity already exists online. Public records, land registries, and social media profiles create a treasure trove of data that malicious actors can exploit. With AI's ability to synthesize and manipulate information, the threat landscape has expanded exponentially. And current legislative efforts show little capacity, or interest, when it comes to protecting consumers. What can we learn from other industries and initiatives about attempting to regulate the unregulatable, and secure the unsecurable?

Beyond Human Oversight: A Flawed Security Paradigm

A common, and concerning, refrain at industry conferences is that “human oversight” will solve AI security challenges. This assumption is fundamentally flawed. Human oversight alone cannot address the scale and sophistication of modern AI-enabled threats.

Some solutions can be found in how individual industries are handling the threats specific to them. Current policy approaches assume that well-meaning companies will follow established rules. Content provenance initiatives like C2PA, backed by Adobe and Microsoft, represent industry efforts to mark AI-generated content. But these voluntary standards are meaningless when malicious actors, whether state-sponsored groups or criminal organizations, simply ignore them.

Financial Services

At the recent Web Summit conference in Vancouver, examples from the financial services sector offered valuable lessons for AI companies. The concept of power of attorney, for example, provides a legal framework that could be adapted for digital agents: if we accept that a Wealthfront algorithm can trade stocks on someone's behalf, why should AI agents operating in other domains be treated differently? Tokenization offers another model. It has significantly reduced credit card fraud by replacing sensitive card details with unique, random tokens during transactions. Because the tokens have no inherent value and cannot be reverse-engineered to reveal the original card number, intercepting or stealing them gains a fraudster nothing.
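
A minimal sketch of the idea in Python, using an in-memory vault and an invented test card number purely for illustration (real deployments rely on hardened, PCI DSS-compliant token vaults, not application memory):

```python
import secrets

class TokenVault:
    """Illustrative token vault: maps random tokens to card numbers."""

    def __init__(self):
        # The mapping lives only inside the vault, never with merchants.
        self._vault = {}

    def tokenize(self, card_number: str) -> str:
        # The token is random, with no mathematical relationship to the
        # card number, so it cannot be reverse-engineered.
        token = secrets.token_urlsafe(16)
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault operator can resolve a token back to a card.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")  # standard test card number
# Merchants store and transmit only the token; a stolen token is
# worthless outside the vault that issued it.
print(token)
```

The design point is that the sensitive mapping exists in exactly one guarded place; everything outside it handles tokens that are useless if stolen.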

The Healthcare Model: Liability and Protection

Healthcare regulations in the United States demonstrate how proper incentive structures can drive security adoption. Medical record storage requirements come with significant liability, but organizations that follow established frameworks receive legal protection. This model suggests that AI companies need similar regulatory structures that balance accountability with protection for compliant organizations.

The healthcare industry’s approach shows that when companies face real consequences for security failures and clear benefits for proper implementation, they invest in robust protection measures.

The Authentication Gap

The technology for secure authorization already exists. Granular data permissions, cryptographic signatures, and robust authorization frameworks have been available since platforms like Facebook's Instant Personalization program in 2010. The challenge isn't technological capability; it's the lack of incentive for companies to implement comprehensive security measures.

Organizations maintain sophisticated security protocols for their IT infrastructure, using TLS encryption, SSL certificates, and signed communications. Yet these same organizations routinely accept unsigned documents, PDFs, and other materials from individuals without verifying authenticity or detecting alterations.
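
Closing that gap requires no exotic technology. A minimal sketch using the widely available `cryptography` Python library (the document text here is invented for illustration) shows how an issuer could sign a document once so that any recipient can detect even a one-byte alteration:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The issuer generates a keypair and signs the document bytes.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"Statement of account, 2025-05-31"
signature = private_key.sign(document)

# A recipient verifying a tampered copy gets a hard failure.
tampered = b"Statement of account, 2025-05-30"
try:
    public_key.verify(signature, tampered)
    print("Document verified")
except InvalidSignature:
    print("Document was altered or is not from the claimed issuer")
```

The same primitives that already protect machine-to-machine traffic could just as easily attest to the provenance of the documents organizations accept from individuals.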

This authentication gap represents a critical vulnerability in the AI era. As artificial intelligence makes it easier to generate convincing fake documents, audio, and video content, the absence of verification systems becomes increasingly dangerous.

Changing the “Move Fast, Break Things” Startup Culture

The Silicon Valley mantra of rapid iteration often relegates security to an afterthought. While this approach may have been acceptable for early-stage consumer applications, it’s inadequate for AI systems that handle sensitive data or make consequential decisions.

The financial services industry has long accepted fraud as “a cost of doing business,” maintaining acceptable baseline loss rates. However, the exponential growth of AI-enabled fraud across all channels—from account recovery to job applications—is making this approach unsustainable.

Recent incidents involving fraudsters posing as job applicants to obtain temporary employment highlight how AI is being weaponized against traditional security measures. These evolving threats are forcing fraud prevention teams to work more closely with authentication and threat detection specialists, creating opportunities for more integrated security approaches.

The Regulatory Landscape

Congressional discussions about AI have evolved significantly over the past two years. While strategic pauses in AI development were once debated, current bipartisan consensus focuses on maintaining American competitiveness against international rivals, particularly China.

Policy discussions now center on enabling infrastructure: electricity access, grid upgrades, and permitting processes that support AI development. However, critical issues like workforce reskilling and security frameworks receive less attention, creating potential blind spots in national AI strategies and in those adopted by bodies like the EU. Legislation now before Congress as part of Trump's "big beautiful bill" proposes a two-year hold on any restrictions on AI development, ostensibly to ensure US development can keep pace with competitors in China, Russia, and elsewhere. At the moment, no regulatory bill or legislation limiting the use of AI has any teeth, as evidenced most starkly by the widespread use of AI against civilians in Palestine, unchecked and nearly unnoticed.

The reality: AI is going to rewrite virtually every data-driven social function, including sophisticated crime. Historically, criminals' technical sophistication has outrun law enforcement, which rarely has the resources to invest in the technology needed to outwit fraudsters. Will AI mark a change in this pattern? According to the brightest minds at Web Summit, the change will enmesh us deeper in criminal capability until and unless we can agree on a set of standards, an eventuality that shows no signs of manifesting anytime soon. Until we decide to focus on purposeful AI for the broader good of humanity, a lumpy kind of development chaos driven primarily by self-interest will continue to dominate. It is impossible to foresee where that could take us.

Jennifer Evans
http://www.b2bnn.com
Principal, @patternpulseai. Author, THE CEO GUIDE TO INDUSTRY AI. Former chair @technationCA, founder @b2bnewsnetwork. #basicincome activist. Machine learning since 2009.