Last updated on June 3rd, 2025 at 11:40 pm
AI is now used in every enterprise, but with widely varying maturity. Few organizations have AI in use across multiple applications, let alone in mission-critical production systems. ADP is an exception: it is deploying AI in applications ranging from coding to salary data benchmarking, and it has successfully reduced hallucinations in compensation, an area with low tolerance for error, by training exclusively on its own data. I spoke with Chief Digital Officer Amin Venjara at Web Summit Vancouver this week.
JE: So you have an interesting role (as Chief Digital Officer at ADP.)
AV: It’s fun.
JE: I bet. How long have you been in the position?
AV: So I’ve been at ADP for 10 years, and I’ve been in this role for about 18 months. So relatively new.
JE: And what does it encompass?
AV: It encompasses all things digital and AI. The three key things that we do within the group today are: number one is creating a central data platform for the entire company so that we can ingest client data sources, internal data sources like our service tickets and finance data sources, and external sources, and put them all together. Number two is doing the data—
JE: Can I just stop you there for one second? When you say “put them all together,” are you talking about combining data sources into a data lake that can then be used by AI, or is it separate from AI?
AV: We’re creating a central data lake that really has all those data sources in one place that can then be leveraged for machine learning, AI, for traditional analytics, for a number of things. We’re really leveraging the fact that we pay over 4 million workers across the globe, across over a million clients. When you think about just the US alone, that’s nearly 20% of the working population.
JE: The power of that data—oh my gosh.
AV: We have many more products, so if you were to just take any one individual product, it’s still really meaningful. The power really comes with combining the data. You can really get a pulse on what’s happening with employment. We do a national employment report. Our ADP Research Group publishes that, and it helps to get the pulse of what’s happening because of these data sets. You can see what’s happening for small businesses—mom and pop shops across the country in the US—but also mid-size and large enterprises. We count almost 80% of the Fortune 500 as part of our client base as well. So we can see that really good mix that happens across industries and sizes, and it gives you a great pulse of what’s going on. Not only can you look at things like what’s happening with jobs, but recently you can actually see what’s happening with sentiment. You can combine both the countable data and what we call “radar.”

JE: And so where does sentiment come from?
AV: It’s a survey.
JE: Oh, it’s a survey.
AV: So that all comes from the data set that comes from our products, and then we’ve launched a survey that also comes out. We have survey products that we launch as well, but this is a specifically designed survey about how people are feeling about engagement, how people are feeling about the workforce. We design questions that help us rate those over time.
JE: That’s super interesting, and I’d like to get into that a little bit more. But I’m fascinated by this data lake that you have. So are you using it to train AI, or is AI taking a lead in helping you identify new sources of data or new interesting data points, for example?
AV: Yeah. So the way you think about it is—we have to be very permission-based in the way that we use this data, right? So we’re very clear about it. Whenever you establish the data lake, we have to have clear controls that allow us not only to understand the lineage of where the data came from and understand data quality, but also permissions, security, and privacy. Privacy is huge, so we understand each use case, where it comes from, what table somebody would need access to, and then what they would be allowed to access.
Now, when you think about how AI operates in this context, I’ll give you an example. Whenever somebody is running a payroll, there are hundreds of potential error conditions that could happen. For example, I could have somebody who’s getting paid but has negative hours. I could have somebody who’s getting overtime, but they’re not in an overtime-eligible role. There are all kinds of error conditions that can creep into how payroll is done, and we’ve seen millions of these happen. So you can actually train AI models to identify these error conditions and notify employers. We say, “Hey, as you’re running this, we’ve identified this condition. Here’s how to resolve it. Do you want us to resolve it for you?” And we can take that action. All of these processes use this data that makes that possible. Those models are being trained across this because you have to put together a lot of different data sets to be able to identify those patterns, and then that gets deployed, and the value comes to our clients through those kinds of examples.
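Venjara's two examples (negative hours, and overtime paid to a role that isn't overtime-eligible) can be sketched as simple checks. The data model, field names, and rules below are purely illustrative, not ADP's actual system; as he describes it, ADP trains models across millions of historical error conditions rather than hard-coding rules like these.

```python
from dataclasses import dataclass

@dataclass
class PayrollEntry:
    # Hypothetical fields for illustration only.
    employee_id: str
    hours: float
    overtime_hours: float
    overtime_eligible: bool

def find_error_conditions(entry: PayrollEntry) -> list[str]:
    """Flag the two error conditions named in the interview."""
    errors = []
    if entry.hours < 0:
        errors.append("negative hours")
    if entry.overtime_hours > 0 and not entry.overtime_eligible:
        errors.append("overtime paid to a non-eligible role")
    return errors

# A learned model would surface patterns like these (and hundreds more)
# from historical payroll runs; this sketch hard-codes just the two examples.
print(find_error_conditions(
    PayrollEntry("E42", hours=-3.0, overtime_hours=5.0, overtime_eligible=False)))
# → ['negative hours', 'overtime paid to a non-eligible role']
```

In the flow Venjara describes, each flagged condition would then be surfaced to the employer with a suggested resolution rather than silently corrected.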
JE: Do you have any stats on error reduction?
AV: So one thing that we’re seeing is the kinds of engagement rates we’re getting right now on these notifications. Through what we call ADP Assist, we’re seeing engagement rates—when we give these notifications of errors—north of 50, 60%. When clients are getting them, they’re engaging with them, addressing them, fixing them, and really taking action on what we’re seeing. We’re still in the early days of doing this, especially the autonomous side of it. There’s behavior change and everything that needs to happen because our clients and payroll practitioners are extremely careful. When it’s people’s money, you can’t accept a 95% accurate payroll. I don’t want it. You don’t want it. Nobody wants it. We hear this from our clients all the time. Nobody says anything when everything goes right, but the moment something goes wrong, that’s when somebody speaks up. They know that, and we know that because we’re so closely connected to them. That’s always the mindset that we have to take in how we’re delivering the product. AI is powerful, but you have to understand the use case that you’re working with.
JE: And again, I’m sorry to keep interrupting, but this is a really interesting vein. There’s been so much conjecture about hallucinations, and one of the things that we’ve heard and seen is that when you’re training an AI on your own data, typically the hallucination rate is reduced significantly. Is that what you’re seeing? And are you concerned about hallucination? I mean, you must be concerned about it, but you must be dealing with it.
AV: I think that’s 100% correct. Hallucinations are absolutely a concern, and this is one of the reasons why, when people start doing HR-related work that touches compliance regulations, just using open, publicly available AI and GenAI tools is risky: the hallucinations and the decisions the tools guide you towards could be wrong, because they’re not trained on the domain-specific things that really matter.
I’ll give you an example of one of the ways we’ve controlled for this already, in the early days, and I’ll give you some stats that connect to it too. We asked, “How do you control for hallucination?” We said we’re going to ensure we have a strong human-in-the-loop principle, but still leverage GenAI. As part of the ADP Assist co-pilot capability we have within our platforms, it makes the search process better and enables natural language chat. Within that search process, we’ve seen a reduction of over 200,000 contacts from our clients.
JE: Wow. That must be an enormous cost savings.
AV: Yes, but more than that, it’s a better client experience. Why is that a better client experience? How often do you type in something in a search bar and you’re like, “It doesn’t understand what I’m saying.”
JE: And the results are not useful. Universal experience. So how is GenAI helping improve that process?
AV: Well, we can use our central data platform. We can take all the different searches—the millions, the tens of millions of searches that we see every year—and we can semantically understand what those searches are. GenAI helps us do that in a really powerful way. Then you can actually match that with all the help and support articles that we already have and the workflows of where things should go. Now you do a pairing between those, and you have a validation process where humans say, “This is the question. This is the proposed answer. Is this right?” Once that gets validated by one of our experts, now that gets into a certified, accurate category. This helps us ensure that when that next search comes in, we’re able to say this is what it semantically means, we can map it to an authoritative source, and this is accurate. So it eliminates that hallucination risk from the delivery. That’s been tremendous.
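The certified-answer flow described above (semantically match an incoming search to a human-validated question/answer pair, and answer only when the match is confident) can be sketched roughly as follows. The word-count "embedding" here is a deliberately crude stand-in for a real semantic model, and the questions, article IDs, and threshold are invented for illustration; none of this reflects ADP's actual implementation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a learned semantic embedding: a production system
    # would map text to dense vectors, not raw word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Question/answer pairs already validated by a human expert
# (the "certified, accurate category" in the interview).
certified = {
    "how do i update an employee's direct deposit": "article-112",
    "where can i download a w-2 form": "article-587",
}

def answer(query: str, threshold: float = 0.6) -> str:
    best, best_score = None, 0.0
    for question, article in certified.items():
        score = cosine(embed(query), embed(question))
        if score > best_score:
            best, best_score = article, score
    # Below the confidence threshold, route to a human expert instead of
    # guessing; only certified answers are ever delivered automatically.
    return best if best_score >= threshold else "escalate-to-expert"
```

Serving only expert-validated answers, and escalating everything else, is what removes the hallucination risk from the delivery path.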
JE: The expression that I hear used frequently is “human at both ends, AI in the middle.” Is that reflective of how you’re using it?
AV: I think that’s a really good way of putting it. You have to have the human to help set this up first, right? So the architecture of how you’re designing it, the knowledge sources—because that’s the other thing too. There’s a lot of literature about this. Any data set is going to have bias, and so you have to be thinking about that in the nature of what you’re selecting. How do you account for that? So then you use AI, but then you have to have a human in the loop at the end to make sure that’s validated and comes out right. You put that process together, and you can see tremendous gains.
Now for us, it’s not just about that. We’ve seen 200,000 contacts eliminated by using GenAI and creating our ADP Assist capabilities. But then even when a contact comes in, a big part of our solution is connecting with our experts that have deep domain expertise. So when the contact happens—whether it’s a chat or call—we’re actually seeing that the handle time is down by over a minute. We get a huge volume of these calls, and we want to be able to connect with our clients. But what we’re seeing from our associates is that the time they spent on the logistics and administrative side—because of the tools we’ve deployed internally—has dramatically decreased. Things like taking notes, filling out case notes, filling out follow-ups or action items—because we can just take the voice, summarize it, and populate the notes for them, all that time is coming down. They can focus on interacting with the client, and they can really put their attention there. That’s also generating a better client experience and a better associate experience.
JE: That’s amazing. What you were describing sounds like something that I think has been overdue in the enterprise for a very long time. You’re describing a revolution in how the enterprise operates. And thank God, because this is still very new, and I’m sure there are still a lot of organizations that work the old way. But going into one database to get one answer and then having to switch into another one—it hasn’t been a great experience for a lot of people working in enterprise. How do you see this transforming your business, both short and long term, both from an employee satisfaction perspective and from a customer satisfaction perspective?
AV: Yeah, so I think it’s a great question, and that’s really the North Star for us—thinking about what’s going to drive our client experience. Because at the end of the day, all these tools are great; there’s a new tool in the toolbox. The problems that our clients are trying to solve are still the same, and what they’re depending on us for is: How do I attract top talent? How do I maximize retention? How do I control my costs? How do I ensure my compliance? They’re depending on ADP to help them achieve those outcomes.
(Part Two)