Monday, June 16, 2025

Web Summit Vancouver: “It’s going to get bad. We’re in trouble.”

Security, Privacy and Identity in the AI Era (Part One)

(image from the Entrust 2025 Fraud Report)

Maybe you’ve seen the TikTok of the teenager trying to get into his mom’s Uber account. He can’t authenticate with her password (more on password security later), so he first holds the phone up to her sleeping face, which doesn’t work. He finds a picture of her and tries that to authenticate. That doesn’t work either, until he asks an unnamed AI platform to create a video of her from a photo of her face on his phone. And he’s in.

Kids have been pulling stunts like this since time immemorial, but the difference now is the sophistication and ease of use of tools that, not long ago, would have been out of reach for even the most sophisticated criminal networks. In essence, everyone now has a fraud engine in their pocket.

In other words, identity theft is not hard, and it is getting easier. At the Web Summit conference last week in Vancouver, a pervasive theme was the ease with which digital fraud can now be practiced, thanks in part to the strides AI has made in creating voice and face replicas and more.

“We do things like digitizing the real estate closing, retirement transactions, wealth management, so really important life decisions about health care, your assets, and we’re seeing people use these tools, you know, kids pretending to be their parents, you know, pretending to be their grandparents, stealing their neighbor’s identity,” said Pat Kinsel, CEO of Proof (formerly Notarize) on the MainStage on Wednesday.

“We’re seeing basically brute force identity attacks, where people might receive 50,000 fraudulent applications from completely fake people.”

How is this impacting crime rates? Complex social engineering crimes that used to require collaboration can now be executed by a couple of teenagers. According to the Entrust 2025 Fraud Report, a new deepfake attempt happens every five minutes. “Traditional forms of fraud have given way to more complex and innovative techniques, including synthetic fraud, fraud as a service, and deepfake innovation,” the Entrust report says.
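A first line of defense against the brute-force application attacks Kinsel describes is velocity checking: counting how many applications arrive from the same device or network in a short window. Below is a minimal, hypothetical sketch in Python; the field names (`device_fingerprint`, `source_ip`) and thresholds are illustrative assumptions, not anything Proof or Entrust has published.

```python
from collections import defaultdict, deque
import time

# Hypothetical velocity check: flag bursts of applications sharing a
# device fingerprint or source IP. Thresholds are illustrative only.
WINDOW_SECONDS = 3600       # look-back window
MAX_APPS_PER_KEY = 5        # more than this per key per window is a burst

_recent = defaultdict(deque)  # key -> timestamps of recent applications

def is_suspicious(application: dict, now: float | None = None) -> bool:
    """Return True if this application arrives in a suspicious burst."""
    now = now if now is not None else time.time()
    for key in (application.get("device_fingerprint"),
                application.get("source_ip")):
        if not key:
            continue
        window = _recent[key]
        # Evict timestamps that have aged out of the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        window.append(now)
        if len(window) > MAX_APPS_PER_KEY:
            return True
    return False
```

Real fraud stacks layer many more signals on top of this (document checks, liveness, consortium data), but simple rate logic like this is often what catches a 50,000-fake-application pattern first.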

And the major organized fraudsters are seeing their options and capabilities expand as well. While companies are attempting to impose identification technology, large parts of the world are ignoring the effort, rendering it largely useless. Take watermarking.

“So there’s a standard called C2PA, if people are familiar, and this is from Adobe, Microsoft, Meta. The big companies have come together, and they’ve agreed how we’re going to watermark content. Well, the Russians and the Chinese – you know, they don’t give a shit, right? And they’re not going to watermark content. And so really, it’s a security posture,” said Kinsel.

“And what we’re seeing is that that is no longer acceptable: the rate of growth of all types of fraud, the fact that people are attacking every channel, you know, from account recovery and password reset and support to forms and authors, everything.

“If you think about your IT stack, and you know the way your system would communicate with someone’s service, you would never allow unsigned instructions to enter your organization and your tech stack. You’ve got TLS, you’ve got SSL, you’ve got encryption. And yet, when you interact with people, you allow them to submit PDFs, bullshit documents, to you. You have no concept of who submitted them or whether they’ve been altered, and that is a huge problem we have to deal with.”
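What “signed instructions” means in practice is ordinary public-key cryptography. The sketch below uses Ed25519 signatures from Python’s cryptography library to show the idea: the receiver can confirm both who submitted a document and that it has not been altered. The document bytes are a stand-in; extending this to real PDFs and real identities is the provenance problem standards like C2PA are trying to formalize.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The submitter signs the document bytes with their private key...
private_key = Ed25519PrivateKey.generate()
document = b"stand-in for the bytes of a submitted PDF"
signature = private_key.sign(document)

# ...and the receiver verifies with the matching public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, document)
    print("Intact, and from the key holder.")
except InvalidSignature:
    print("Reject: unknown submitter or altered document.")

# Any tampering breaks verification.
try:
    public_key.verify(signature, document + b" (altered)")
except InvalidSignature:
    print("Tampered copy rejected.")
```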

“AI is forcing the fraud teams to finally work together or to think about authentication, to think about, you know, threat detection and monitoring, and so I really see there’s this convergence on the horizon.”

Consider this scenario: using publicly available data from county records and LinkedIn profiles, someone could generate a complete fraudulent financial and employment history, potentially stealing property through forged documents. The legal framework often lacks mechanisms to verify legitimate ownership, creating vulnerabilities that AI can exploit at scale.

“The obligation to store medical records comes with a lot of liability,” said Kinsel. Is it time we rethought our approaches to records retention based on these vulnerabilities?

“In the United States, most of the information you need to steal someone’s identity is already on the internet, right? And so if you think about public records, land records, it’s entirely possible to go crawl a county, figure out who owns someone’s house, find them on LinkedIn, find their picture, generate a complete fraudulent financial history, employment history, and steal that house out from underneath them. And in the United States, like a county recorder, by law, is not even allowed to assess if you are the legitimate person who’s submitting a quitclaim deed or a transfer of assets. And so the legal construct of the United States is fundamentally flawed to solve this problem.”
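There is no systemic fix yet, but homeowners can at least watch the record. Here is a purely hypothetical sketch of title monitoring: poll a county recorder’s feed and alert on any filing against your parcel that you don’t recognize. The endpoint, response fields, and parcel format are invented for illustration; real counties expose very different interfaces, and many expose none.

```python
import json
import urllib.request

# Hypothetical recorder endpoint; every county differs, many have no API.
RECORDER_URL = "https://records.example-county.gov/api/filings?parcel={parcel}"
KNOWN_DOC_IDS = {"2019-001234"}  # filings you already know about

def new_filings(parcel_id: str) -> list[dict]:
    """Return filings on this parcel that we have not seen before."""
    with urllib.request.urlopen(RECORDER_URL.format(parcel=parcel_id)) as resp:
        filings = json.load(resp)
    return [f for f in filings if f["doc_id"] not in KNOWN_DOC_IDS]

for filing in new_filings("123-456-789"):
    print(f"ALERT: new {filing['type']} recorded on {filing['date']}")
```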

“These are all solvable problems, but it’s gonna, it’s gonna get bad. I mean, you know, you hear the stories; that’s coming for all of us. But there are people out there right now thinking, you’re gonna slow things down, the Chinese are gonna pass us, the Russians are gonna pass us. If we worry too much about security, we’re not going to advance as quickly as we should. There are going to be some hits.”

“Two years ago, the discussion was whether or not there would be a strategic pause in artificial intelligence in the United States. And at the time, both parties very clearly said, ‘No, that would be un-American.’ I was in DC recently, and both parties again, the talk was, we can’t lose this race to China. And so most of the discussion in DC has changed to be about the enabling or blocking issues. So things like access to electricity, you know, upgrading systems for electricity, electricity transference, permitting. You know, how do we actually, you know, modify our entire economy to take advantage of these things? There isn’t a lot of discussion about these other issues, right?”

“But I think we can all agree that it’s not a good thing to be able to perfectly pretend to be grandma, and to be able to do it in less than three minutes, or now 30 seconds. And that certainly has resonated, and I believe it’s someone’s fundamental right to have control over your personhood, and that people should not be able to instantly copy you. You see a lot of legislation in DC around selective enforcement of deepfakes, for non-consensual pornography, for elections. My attitude has been—give me one legitimate reason why someone should be able to pretend to be me without my consent, and without my permission.”

“And I also believe that the answer is right in front of us. It’s the credit card networks that have solved this problem. When you transact, you don’t give people all of your financial information. You give them a tokenized representation of your payment information. There’s a signal sharing network—you can revoke that without exposing the PII associated with it. So I think the model’s in front of us and people need to work together.”
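The pattern Kinsel is pointing to looks roughly like this: a vault issues opaque, revocable tokens, the relying party only ever sees the token, and the underlying PII never leaves the vault. This is a conceptual sketch of tokenization, not any card network’s actual protocol.

```python
import secrets

class TokenVault:
    """Conceptual sketch: opaque, revocable handles instead of raw PII."""

    def __init__(self):
        self._vault = {}      # token -> sensitive record (never exposed)
        self._revoked = set()

    def issue(self, pii: dict) -> str:
        token = secrets.token_urlsafe(16)
        self._vault[token] = pii
        return token          # safe to hand to a merchant or verifier

    def verify(self, token: str) -> bool:
        return token in self._vault and token not in self._revoked

    def revoke(self, token: str) -> None:
        self._revoked.add(token)  # token dies; the PII was never shared

vault = TokenVault()
token = vault.issue({"name": "Alice", "card": "4111..."})
assert vault.verify(token)
vault.revoke(token)           # revocation without exposing anything
assert not vault.verify(token)
```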

How cautious should we be? Consider getting rid of your landline, and don’t answer it unless you know exactly who is calling. New social engineering tricks trade on panic: “your son called me, he’s stuck at a truck stop”; “your daughter asked me to call you, she’s been in a car accident.” If an editor at the New Yorker can be convinced to hand off a bag containing $50,000 in cash to an unknown person in a car in the middle of the day, we are all vulnerable. Use an anonymous email address for anything public-facing, and never respond to texts from unknown numbers. Bottom line: attempts at social engineering will start with phone calls, texts, emails, and chats. Treat these as unsafe but necessary channels, minimize their use, and immediately report as spam and block any number or address that is unfamiliar or suspicious.

Modern AI-Enabled Threats

Synthetic Fraud represents a sophisticated evolution in identity theft where criminals create entirely fictitious identities using a combination of real and fabricated information. Unlike traditional identity theft, which relies on stealing an existing person’s credentials, synthetic fraud involves generating new identities by combining legitimate social security numbers with fake names, addresses, and biographical details. These synthetic identities are then used to establish credit histories, open accounts, and conduct financial transactions over extended periods. The artificial nature of these identities makes detection very difficult, as traditional verification methods often fail to identify inconsistencies in fabricated personas that have been carefully constructed to appear legitimate.
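Because nothing was stolen from a real person, detection tends to look for internal inconsistencies rather than stolen-credential signals. The sketch below is purely illustrative; the signals and weights are assumptions made for the example, not any vendor’s model.

```python
def synthetic_risk_score(app: dict) -> float:
    """Score an application for classic synthetic-identity patterns (0-1)."""
    score = 0.0
    # "Thin file, old number": a credit file only months old is suspicious.
    if app.get("credit_file_age_months", 999) < 12:
        score += 0.4
    # The same SSN surfacing under multiple names across applications.
    if app.get("names_seen_for_ssn", 1) > 1:
        score += 0.4
    # No trace of the applicant at the claimed address before applying.
    if not app.get("address_history_matches", True):
        score += 0.2
    return min(score, 1.0)

app = {"credit_file_age_months": 3, "names_seen_for_ssn": 2,
       "address_history_matches": False}
print(synthetic_risk_score(app))  # 1.0 -> refer for manual review
```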

Fraud as a Service (FaaS) has emerged as a criminal business model that democratizes sophisticated fraud operations by offering specialized tools, services, and expertise to lower-skilled criminals. This ecosystem includes everything from stolen identity databases and document forgery services to custom malware development and money laundering networks. Criminal organizations now operate like legitimate businesses, providing customer support, service level agreements, and even user reviews for their illegal offerings. This commoditization of fraud capabilities has dramatically lowered the barrier to entry for cybercrime, enabling less technically sophisticated actors to execute complex fraud schemes that would have previously required specialized knowledge and resources.

Deepfake Technology employs artificial intelligence to create convincing audio, video, and image content that appears to show real people saying or doing things they never actually did. Using machine learning algorithms trained on extensive datasets of a person’s likeness, deepfakes can generate realistic impersonations that are increasingly difficult to distinguish from authentic content. While the technology has legitimate applications in entertainment and education, it poses significant security risks when used for impersonation fraud, social engineering attacks, or to create false evidence. The rapid advancement of deepfake tools has made this technology accessible to non-experts, creating new vectors for identity fraud, corporate espionage, and disinformation campaigns.

Part 2 to be posted Friday

Jennifer Evans, http://www.b2bnn.com
principal, @patternpulseai. author, THE CEO GUIDE TO INDUSTRY AI. former chair @technationCA, founder @b2bnewsnetwork #basicincome activist. Machine learning since 2009.