JE: So, this might have privacy issues all over it, but are you able to provide competitive compensation data to your customer base?
AV: Yeah, we actually generate salary benchmarks from the data that we have. And so again, this is a permission-driven process. We have a way in which clients can allow us to be able to use that data—aggregated, anonymized data that is then used to help train and build up the salary benchmarks. But it takes a lot of steps in order to make that happen. You have to actually build a taxonomy of jobs.
JE: Right.
AV: You then have to be able to aggregate the data, because these salary benchmarks, if you look at what’s happening generally in the industry, they’re updated every year—
JE: Which is a long time.
AV: —and that is a survey process. It’s manual, and when you talk to the people who fill out these surveys—HR departments, who are still our clients as well—they’ll tell you they only do 50% of the mapping of the jobs, so it’s incomplete. It’s once a year, and we’re making salary decisions off of it. What we have comes directly off of payroll. We use AI to do the mapping, validated by a human in the loop, because our clients are validating the mappings that we’re doing. But we do the updates monthly because we want to account for weekly, bi-weekly, and monthly pay cycles. So we do it on a monthly grain, which gives us the biggest population. We’ve been doing that for almost seven years—generating salary benchmarks—and we have nearly 30,000 clients that let us use their data.
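The pipeline described here—map raw payroll job titles onto a standardized taxonomy, then aggregate on a monthly grain—can be sketched in a few lines. This is a hypothetical illustration, not ADP’s actual system: the function names, the anonymity threshold, and the dictionary standing in for the AI-driven, human-validated title mapper are all assumptions for the sake of the example.

```python
from statistics import quantiles

# Illustrative anonymity threshold: suppress any benchmark built from
# too few data points (the real cutoff is an assumption here).
MIN_POPULATION = 5

def monthly_benchmarks(payroll_rows, taxonomy):
    """payroll_rows: (raw_job_title, annual_salary) tuples from opted-in clients.
    taxonomy: raw title -> standardized job code (stand-in for the AI-mapped,
    human-validated step described in the interview)."""
    by_job = {}
    for raw_title, salary in payroll_rows:
        code = taxonomy.get(raw_title)  # unmapped titles are skipped, not guessed
        if code is not None:
            by_job.setdefault(code, []).append(salary)

    benchmarks = {}
    for code, salaries in by_job.items():
        if len(salaries) >= MIN_POPULATION:  # anonymity: enough population to aggregate
            p25, p50, p75 = quantiles(salaries, n=4)
            benchmarks[code] = {"p25": p25, "median": p50, "p75": p75, "n": len(salaries)}
    return benchmarks
```

Running this monthly over the freshest payroll data is what replaces the once-a-year survey cycle the interview contrasts it with.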
JE: The image I have in my head, and you can tell me if this is like fantasy land or not, is dynamic pricing, like at Kroger, where they can (God help us) adjust for many factors including the attributes of the person who’s standing in front of the product. Are we going to get to the point where you’re going to put a job offer in front of somebody and that salary is going to be partly dynamically generated based on the data? I mean, not entirely, obviously—it’s got to meet expectations, etc. But do you think we’ll ever get there?
AV: Perhaps. I think we also have to be really careful about it, because, for example, today, this has come up for us a lot, which is we obviously have data with salary, but also we know the attributes of the person—say, their gender, their race and ethnicity—
JE: Right.
AV: And, you know, we could be producing benchmarks that allow people to cut along those dimensions. Is that the right thing to do? Probably not, because it might perpetuate discrepancies that already exist.
JE: Right. Ideally, it would build that in and factor in things like cost of living differences for different areas. You also need to factor expectations in, and that’s very complex training when you’re trying to avoid discrimination.
AV: That’s why—and I think in talent work, you have to be incredibly careful when you’re dealing with that. There are a lot of these types of things. Remember, every data point that we’re dealing with is a person, and we take that very seriously. So when there’s a lot of excitement to push into use cases and leverage AI, we review everything that comes through—it goes through a POC (proof of concept) pilot process to help us understand how AI is used and to apply the right checks—and we step back and ask: is this doing right by our clients, and are they doing right by their employees?
People might have all the right intentions, but it could be used badly. So there are many things that we try to account for, like the search process, looking up analytics, salary benchmarks at the job level—these are really good use cases. I think there are many more, like correcting for payroll errors, doing those things autonomously.
I do think that we have been on this wave for many, many years of trying to drive more digitization in the process. Now we want this to be autonomous. Okay, is the problem we’re trying to solve the same problem? In many cases, yes, we just have a new name and a better technology. I agree—it’s a better technology. But you also have to remember, this technology has a different flavor to it.
JE: And behavior.
AV: Yeah, because this is not deterministic. When we were doing RPA (robotic process automation), it was a very deterministic thing: I give you conditions x, y, z, behave this way, and it’s repeatable every time. Here you have a probabilistic process, and it’s much more flexible—
JE: And fluid.
AV: —and fluid. And the way the reasoning models are now operating, you can really break down that chain of thought. In the early days of gen AI, you had to break down that chain of thought yourself. Now you can see the reasoning model doing a chain of thought on its own and doing self-critique, right? So the LLM-as-judge style structure, and all of those things you’re seeing start to pop up, which will only continue to increase. But at the end of the day, that’s still relying on a probabilistic notion, right? And it comes back to the point that every data point we have is a person. So how do you manifest that in terms of maximizing value and also minimizing harm?
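The deterministic-versus-probabilistic distinction drawn here can be made concrete. A sketch, with invented function names (neither side represents any specific product): an RPA-style rule is a pure function of its inputs, while a generative step samples from a distribution and can legitimately vary between runs.

```python
import random

def rpa_rule(invoice_amount, approved_limit):
    """Deterministic: conditions in, the same action out, every single time."""
    return "auto_approve" if invoice_amount <= approved_limit else "escalate"

def generative_step(candidates, weights, rng):
    """Probabilistic: the same prompt can yield different (all plausible) outputs,
    sampled according to the model's learned distribution (stubbed as weights)."""
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Repeat the rule a thousand times and you get the same answer; repeat the sampler and you get a distribution—which is exactly why the interview stresses validation and human review for the latter.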
JE: And the technology that you’re using right now for all of this—is it LLM-based? Are you using agentic technology at all? How do you see that applying to your environment?
AV: We’re using pretty much all of the above, but again, we like to start with the question of what problem we’re trying to solve. Because even in this example, our clients need to answer questions like: what’s my headcount, what’s my turnover? This is a really great use case that we’ve seen GenAI handle, because instead of having to write and run reports, you can parse those queries and simply get answers to the questions. Even there, breaking it down, when we did the architecture for this, we’re using foundational models to help us parse the natural language. But for some of it, we actually built an in-house model—
JE: Really? Actually, that makes a lot of sense for you, given your data and its lineage.
AV: —because the nature of the problem made it sensible for us to build a portion of that model ourselves, while for other portions that are more about natural language and broader understanding, we can use foundational models. So there’s really a deliberate approach to the architecture: What is the problem you’re trying to solve? What’s the right way to engineer these use cases? It’s going to vary. That’s how we go about it.
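The hybrid architecture described—a broad foundational model parsing free-form language, with a narrow in-house component resolving domain-specific terms—might be sketched as a two-stage pipeline. Everything here is a stand-in: both "models" are stubbed as simple lookups, and the metric names and definitions are invented for illustration.

```python
# Hypothetical vetted domain definitions (the in-house side of the split).
DOMAIN_METRICS = {
    "headcount": "COUNT(employee_id)",
    "turnover": "terminations / avg_headcount",
}

def parse_intent(question):
    """Stand-in for the foundational-model step: pull the asked-for metric
    out of free-form natural language."""
    q = question.lower()
    for metric in DOMAIN_METRICS:
        if metric in q:
            return {"metric": metric}
    return {"metric": None}  # not a question this pipeline can answer

def resolve_metric(intent):
    """Stand-in for the in-house model: map the parsed intent onto a
    vetted, domain-specific definition rather than letting the LLM improvise."""
    return DOMAIN_METRICS.get(intent["metric"])

def answer_pipeline(question):
    return resolve_metric(parse_intent(question))
```

The design point is the split itself: the general model handles language, while the domain model owns the definitions—so "what's my turnover?" always resolves to one agreed-upon formula.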
JE: I’m going to ask you sort of a broader, future-looking question. So I was at Collision a few years ago, and the President of AWS got up and said, “SaaS is dead. SaaS is dead. AI is going to suck all the data out of SaaS and repatriate it back to the companies that actually own it.” And single-purpose SaaS companies are going to have a really hard time competing with that, especially with the repatriation, because what everybody wants is control over their own data. And, you know, with something like 240 different SaaSes in most enterprises, that’s an enormous shift of how people work and what they use. Are you seeing any of that? Is that in your plans, is that in your forecasting? And do you think that’s actually going to happen? Like, do you think Salesforce is going to disappear?
AV: Well, look, I think there’s a lot of different viewpoints on what’s going to happen. I would say that right now we’re not seeing any of that. We also play in the HR and the human capital management domain, and there’s a lot of expertise along with that. We’re not seeing a slowdown in how clients are thinking about this.
JE: Oh, I think you’re in a really advantageous position. You have a data set that is produced from managing critical functions, and as you continue to build, it will feed insight on workplace and compensation trends for the planet. But if you’re a marketing SaaS company and all you have is somebody else’s data, it’s a different equation.
AV: And I think that’s where companies have to really position themselves in terms of what problem they’re solving, because if everything is just based on somebody else’s data, that can be a challenge. Almost every SaaS company—that’s what they’re based on. But that’s when you have to combine multiple things, because AI won’t solve everything. At the end of the day, there are still things that will need usability—how you navigate the experience to make it useful.
So for example, if you think about a few years ago, where we are today, UI and workflow are still the primary way in which enterprise software works, right? Point and click. Now, AI is helping to reduce that, and so, okay, I can take some of the stuff I point and click, and I can just do that through natural language.
JE: ServiceNow uses the expression “single pane of glass.”
AV: And so now I can do less of that, but there are still situations where actually seeing this is really helpful. People say, “Oh, maybe an agent can do that.” But it still has to bring it back to me, still has to present the view to me—can it actually generate a view that makes it easy for me to see, something curated to address this problem? I think we’re going to find the balance between those things. But then on top of that, at least in our space, there’s a service level that comes along with this too, and a human dimension that comes along with it.
And so just data with a very thin layer of value—those companies are going to have a challenge. But you can compete if you are able to, one, be really clear about which hard problems are going to need to be solved, and two, have a human layer of expertise that allows you to address those things as well. The other thing is the data itself: when you hold the data, you can actually look at the holistic picture, right?
JE: And is it the AI?
AV: If the agent has to reproduce that in each interaction, that’s pretty expensive, because now I have to go across all the data for the agent. So there’s always been this traditional dilemma—take salary benchmarking, or anything where I want to understand how a market operates—you need all the data together. Well, the data is not together, and somehow AI is going to do it? How would the AI have access to all that data at one time? So how are you going to have a holistic picture?
JE: I think we’re probably starting to get into more of an AGI type of thing, which I think is extremely far out still—very far out. But that would be what something like that would essentially undertake, am I right?
AV: Yeah. But then, so again, think about it. Examples of whatever vertical you want to choose—there’s a lot of proprietary data of like, what happens in my service calls, right? What I see in my service calls, and relative to other service calls in my industry, what needs to be happening. How do you have access to that data set? In our example of salary benchmarks, is AGI going to actually have access to everybody’s pay stub? Is that a level of data representation that we feel comfortable with? Those are the kinds of questions that we’re going to have to address.
So I think that, look, these are definitely questions that we’re going to have to consider. Do I think that SaaS is going to evaporate tomorrow? No. But I think that the value creation for customers who use SaaS solutions is going to have to be transformed so that the value proposition becomes clear as a lot of automation comes into play.
JE: Yeah, I think the transformation of the public sector is probably a bigger factor in some ways for this. So how are we for time? Okay, great. You were going to share, I think, another example before I rudely cut you off earlier.
AV: We have also seen success in other areas, for example, our development process.
JE: Oh, okay, so your technical development?
AV: Yeah. When you think about how AI is working in our enterprise, we think about not only reimagining our product but also the way that we work. And so for GenAI, we think about our development teams as we build software—both what we build and how we get better at building it.
JE: So are you using GenAI to code?
AV: 50% of our development teams are using AI to code today. Absolutely. But I think “code” is a little bit too reductive; it’s much broader than that. If you look across the entire software development lifecycle, you’re seeing—for example, I was just looking at something the other day where the team said, “Look, here’s this mockup of this front-end screen we want to create.” We just put that mockup in and generated all the code necessary to produce that front-end screen. So that’s generating the code. But now we need tests for how that code is going to run—build the tests and the use cases, and it does a very good job at that. Then write the documentation about how this code works.
JE: The time-consuming, lower-value, but critical administrative content and functions. AI taking on documentation alone is significant value for an enterprise.
AV: And the other thing we’re doing now is to say: generating the documentation is one thing, but can you validate that the documentation is actually effective? Can you run an agent over the documentation, have it understand the code base, and make a change to the code base that is then validated by a human—so that you know the documentation created by GenAI was actually effective?
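The validation loop described here has a simple shape: docs count as effective only if an agent, given just the docs, can act on the code base in a way a human then signs off on. A minimal sketch, with the agent and reviewer stubbed as callables (the real versions would be an LLM agent and a human reviewer; all names are invented):

```python
def docs_are_effective(docs, codebase, agent, human_review):
    """Docs pass only if an agent working from them can propose a code change
    that a human-in-the-loop then validates."""
    proposed_change = agent(docs, codebase)  # agent reads docs, edits the code
    if proposed_change is None:
        return False                         # agent couldn't act on the docs at all
    return human_review(proposed_change)     # human sign-off closes the loop
```

The point of the structure is that the human stays at the end of the chain: the agent exercises the documentation, but a person renders the verdict.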
JE: That is blowing my mind.
AV: That loop is—because you might generate something, it might do the thing, but it always comes back to this accuracy question. How can you validate that whatever it produces is actually any good? And that’s your point about human-AI-human—you have to always get back to the human at the end to then validate that this works. Those are the kinds of things that we’re starting to see accelerate. Half of the development teams at ADP are using GenAI today.
We have a maturity model of how we’re tracking teams throughout that, and we’re starting to see acceleration in how teams are not only doing some of these initial things of code generation, tests, and use cases and documentation, but really accelerating that development process to increase our velocity.
JE: Do you think AI is going to finally replace mainframes in the banking environment?
AV: That’s a good question. Mainframes have always been something we’ve kind of struggled with, especially in the move to the cloud. Everybody’s trying to get to the cloud and dealing with this. Look, there’s a lot of effort going on today, but you also have to remember that mainframes, in a number of these applications, do the job incredibly well.
JE: And they’re in highly regulated environments.
AV: And some companies may not want to get off of them, because they perform—like think about the scale of transactions they’re performing—incredibly well. They are still fantastic.
Now again, maybe just one counterweight as I think about this: the code base, right? One way to think about the problem is to say, “Now we can finally accelerate the transition, because we can rewrite all this code.” Right?
JE: Technical debt will not even really be a thing anymore.
AV: Another way to think about it would be to say, “We now maybe don’t have to worry so much about the skill set, because we actually have a way to produce the COBOL or whatever that’s needed to be able to maintain the mainframes.”
So I’m not saying one path or the other, but these new technologies give us the ability to think about what’s the right decision for a company, given its position, its systems, and its use cases. If you bring it back to the customer, you can make the right decision; it just opens up more optionality for how to make those best decisions.
JE: Thank you so much. That was really, really fascinating.