
Leading Transformation of the Enterprise through Generative AI: Jeremy Barnes and ServiceNow

Last updated on May 15th, 2024 at 04:26 pm

Jeremy Barnes was coding machine learning algorithms when the rest of us were playing with Flash. Twenty years later his company, Element AI, was acquired by service workflow giant ServiceNow, and together they are rewriting the unwieldy world of enterprise tech. This technology and others like it could shift the Canadian government’s productivity landscape and help address the enormous technical debt across so much of federal and provincial infrastructure: generative AI helps expose what is there and rewrite it using Now Assist and ServiceNow workflows. (At the moment ServiceNow appears to be significantly ahead of its competitors in value creation and deployability, thanks to its model.)

ServiceNow has real, operational solutions to cut through that technical debt and usher government tech straight into 2024, in no small part because of what Jeremy and his teams have accomplished. I got a chance to catch up with him at Knowledge24 and talk about LLMs’ lifespan, The Simpsons, and customer value in AI. (Disclosure: ServiceNow paid for my flights, accommodation, meals and conference pass to Knowledge24.)

JE: Tell me how you got started in AI.

Jeremy Barnes: Okay, I founded a startup (in Australia) in 2000, just when the NASDAQ was starting to boom. We were looking for funding, and it turned out that there were excellent tax credits here.

Jen Evans: Excellent tax credits in Montreal for sure.

JB: I found out only a week before I moved, and I’m ashamed to say that when I moved to Montreal, I thought it was an island off the west coast of Canada. I was in for a surprise, because I didn’t speak French.

JE: Oh my gosh, that must have been such a shock.

JB: But yeah, I rolled with it for a year and I haven’t regretted it. That first startup was part of one of the first waves of startups actually using this new thing called machine learning in practice: instead of creating algorithms by hand, you take data and learn the algorithm from the data directly. It was almost a computational linguistics startup; we were trying to go after search at that time. That’s how I moved to Montreal.

I basically did a bunch of startups after that, each growing more than the one before, and eventually ended up founding a company that was acquired by Element AI very early on; the acquisition roughly doubled Element AI from eight to 15 people. I became chief architect, and afterward CTO, of Element AI.

JE: And then ServiceNow acquired [Element AI] in 2020. And from everything I’ve heard today, from everyone I’ve spoken to, that acquisition was really the pivotal moment for the evolution of ServiceNow. Did you realize that at the time? Did you realize how impactful it was going to be? Sorry, impactful is a terrible word.

JB: Yes, it is a terrible word and yes, to some extent, yes. Because you’re from Canada, you probably understand that Element AI had a major impact, or at least everyone had an opinion about it. When we were building Element AI, our goal was to take this technology of deep learning and bring it into the enterprise. What we realized is that the technology on its own is never going to be reliable enough unless it’s built on top of some kind of structured workflow platform that can give the intelligence guardrails and a substrate on which it can actually take action in a unified and consistent way. So we found ourselves building this entire workflow pipeline.
So when we started talking to ServiceNow, it was like: they have that ready, they have thousands and thousands of customers and millions and millions of users, and we wouldn’t have to build all of that ourselves. That was pretty amazing, and so, yeah, that was the thesis. It was really clear that it made sense from the ServiceNow perspective too. ServiceNow has been doing AI for an entire decade. It had already made a bunch of acquisitions, and the way those acquisitions were integrated kept the platform unified, so they weren’t just bolt-ons; it really was the way to create a really strong, single technology platform. When we joined, people were starting to get the idea that AI was important, but it was still framed as: we want AI to be inexpensive, and we’ll take whatever results we can get.

So we started building; we integrated some of our products into the ServiceNow platform and started working with customers. The interesting thing was that afterward, chatting with them, they would say, ‘Ah, I don’t care so much that it’s inexpensive anymore, because I think you can actually solve my problems.’ That was the moment of truth we had been anticipating since the early days of Element AI, and we were fortunate enough to already be embedded in a platform where we had all the prerequisites.
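To make the “guardrails and substrate” idea concrete, here is a minimal, hypothetical sketch of the pattern Barnes describes, not ServiceNow’s actual implementation. The action names and schema are invented: the model may only propose actions from an allow-list, and anything it cannot express in that schema gets escalated rather than executed.

```python
import json

# Hypothetical allow-list: the only actions the workflow will ever execute.
ALLOWED_ACTIONS = {"reset_password", "create_ticket", "escalate_to_human"}

def propose_action(llm_output: str) -> dict:
    """Parse the model's JSON proposal; anything malformed or off-script becomes an escalation."""
    try:
        proposal = json.loads(llm_output)
    except json.JSONDecodeError:
        return {"action": "escalate_to_human", "reason": "unparseable model output"}
    if proposal.get("action") not in ALLOWED_ACTIONS:
        return {"action": "escalate_to_human", "reason": "action not in allow-list"}
    return proposal

def run_workflow(proposal: dict) -> str:
    """The 'substrate': a deterministic workflow executes only vetted proposals."""
    if proposal["action"] == "escalate_to_human":
        return f"Routed to an agent ({proposal['reason']})."
    return f"Executed '{proposal['action']}' through the standard workflow."

# Example: the model suggests something off-script, so the guardrails catch it.
print(run_workflow(propose_action('{"action": "delete_all_records"}')))
```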

JE: As you know, many of these technical acquisitions do not go so well. What do you credit the success of this one to? Was it just a perfect match? Serendipity, or was there something specific that you think made it successful?

JB: I think there was. ServiceNow is a big company, but there are a lot of people there thinking quite deeply about where work is going and about the long-term future of the platform. We had the right champions; we had people who drove the acquisition, including CJ Desai, who had an inkling that there was a lot more to this technology than maybe we knew. So we had the right set of people within the company who understood that this could be a major strategic lever, and that’s what brought us here.

JE: Jeremy, you haven’t been with ServiceNow for very long but the combination has obviously been company-changing at a very fundamental level. These types of acquisitions can go very slowly. How was it so productive so quickly?

JB: People have the impression that we went from zero to 60 in no time at all. We were already at 50; we just hadn’t been showing the world everything we were doing. I think that was a lot of it. There was some serendipity, as there always is, but to be honest, a lot of it was foresight, both by the Element AI people and particularly by the ServiceNow people, who understood how important this could be, that it was necessary to get a handle on it, and that it belonged as part of the platform.

JE: That’s really interesting, and probably a model for a lot of organizations, because it doesn’t always work that way. So I want to talk a little bit about generative AI now and where it’s at. I have a colleague who was at the event in Silicon Valley where they first demoed GPT, which, you know, was electric for the world. He came back to Toronto and was telling me about it; he was like, writing is dead. I talked to him again about six months later, and he said, you know, there’s nothing but upside here, we can’t see anything but potential. I talked to him about six months after that, and he said, okay, we’re starting to see the ceiling now, we’re starting to see that there’s going to be a limit on how far this can go, whether it’s processing power or just the way that AI works. And when I talked to him recently, he said, you know, we now think there’s maybe 20 to 30 percent of ongoing growth left for generative AI, and then it’s going to start to top out. Do you think that’s an accurate assessment? Is that how you look at the landscape?

JB: I think you’ve got to ask: growth in what?

Because if we are looking at growth in customer value, we are at the very, very beginning. We’re already seeing very, very strong value for the customers who have deployed it.

And as we work with them, we see all the things that are limiting the value, and they are often very simple to fix. You’re not going to know what they are until you actually observe them, but once you do, you can fix them. So in terms of value, I think the growth is going to continue to be spectacular and explosive.

Now, we’re not working on the very largest models, and we’re not building nuclear power plants to power data centers, all that kind of stuff where, if you project it out, it’s going to consume more energy than the whole world. We’re just not playing there.

We’re playing in the heart of the market, where generative AI has an application to almost anything we do as part of the world of work. There’s usually an angle where generative AI can help make that work more effective, get better outcomes, and be more productive, and we’re just working through those things.

So we don’t chase the biggest scale; we don’t produce the biggest model. Those models are going to be interesting for a couple of ultra-high-value use cases, but mostly you need the right model for the job. The way we’ve thought about it is: we already have this platform, so as we discover new things generative AI can do for our users and our customers, let’s make sure they’re accessible. If the right LLM doesn’t exist, we’ll build it; if it exists already, great, we’ll integrate it into the platform.

And just for perspective, if you pick the right model for the job, it’s often not that big. Because of that, these models are often faster, they give a better experience, they don’t go off the rails, and their economics are better. That’s been the approach we’ve taken, and we’re very early on. So growth in value is one thing; growth in data centers and financing is another. But with today’s technology, and what is feasible with it, we are just at the very, very beginning of what is possible.
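As a rough illustration of “the right model for the job,” here is a small sketch with invented model names and a deliberately naive routing rule; a real system would route on much richer signals than the task label.

```python
# Hypothetical model registry: smaller, domain-tuned models for routine tasks,
# with a large general model reserved for rare open-ended requests.
MODELS = {
    "summarize_ticket": "small-summarizer-1b",   # fast, cheap, fits on one GPU
    "generate_code":    "code-model-3b",
    "open_ended":       "general-llm-70b",       # expensive; used sparingly
}

def route(task: str) -> str:
    """Pick the smallest model believed adequate for the task type."""
    return MODELS.get(task, MODELS["open_ended"])

for task in ("summarize_ticket", "generate_code", "legal_analysis"):
    print(task, "->", route(task))
```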

JE: That’s such an important distinction. The growth in utility is still fairly, if not entirely, linear: if you have to basically add another box to increase processing power, you’re not really seeing a tremendous gain in efficiency. If I’m expressing this correctly, you basically need another Nvidia box if you want to expand your footprint, and there is no real exponential growth in efficiency right now, which some expected. Correct me if I’m wrong.

JB: You’d have to talk to Nvidia about where their business is going; I’m just using them as an example. But yeah, I think you’re right, they’re more constrained by manufacturing than by the potential market size right now. You need to separate things out, though. There are three main parts to doing large language models. The first is the research, where you build these big supercomputers and run a whole bunch of experiments to try an algorithm or a new approach, and people publish scientific papers. Then you say, okay, this thing seems to work; it seems to be the best way to do it.

Then you have to train the models themselves, and that’s where a huge amount of the work is, because you need tens of thousands of GPUs, or, for the biggest models, clusters of 100,000 or even a million GPUs.

JE: The $7 trillion ask.

JB: Right. And if you project that out, we’d naturally keep doing exactly the same thing. You watch The Simpsons, yes? (Nods) There’s that line: “I predict that in the future, computers will be twice as powerful, and so expensive and so big that you’d have to build a football stadium to house them.”

For me, that’s scaling up the wrong thing. Sure, you can project the timeline out like that.

The third part is how you deploy and run the models. And just to give a really concrete example here:

Last year, just before Knowledge, in conjunction with Hugging Face we announced StarCoder, a language model built by our AI research team as part of the BigCode project, and that model was the best-performing code model in the world. Then, a few weeks ago, we announced StarCoder2, which was built by those same two teams, with NVIDIA as a partner as well. The performance of the 3 billion parameter model is as good as the original 15 billion parameter model. So to get the same accuracy and the same outcome, you need roughly five times fewer GPUs to run it.
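For readers who want to try the smaller model Barnes mentions, here is a minimal sketch using the Hugging Face transformers library and the publicly released bigcode/starcoder2-3b checkpoint. The generation settings are illustrative, and a GPU with roughly 8 GB of memory is assumed for bfloat16 inference.

```python
# pip install transformers torch accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-3b"  # the 3B model discussed above
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# Ask the model to complete a function body.
prompt = "def quicksort(items):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```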

So what we’re seeing is that that’s good enough, especially if you have a human in the loop. What a lot of companies fail to think through is that unless the generative AI system is so perfect that it’s never wrong, the entire value is not in what it does when it’s right, but in how effective the system is when the AI is wrong. To set your system up so it can still deliver value even when the prediction is not perfect, it comes down to everything else: the experiences, the data, the guardrails, the workflows that bring humans into the loop. It’s the platform. For us, because we have a platform, we don’t need to make the AI perfect. The AI will get better and it will generate a bit more value, but our customers will get value even when it’s not.

So for that reason, we don’t need these models to get drastically better, and yet they are getting drastically more efficient. That’s what our research team is working on, and we’re very, very happy to take those gains and just bring them to our customers.
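Barnes’ point about value “when the AI is wrong” boils down to a familiar design pattern: treat every model output as a draft that either clears a confidence bar or lands in a human queue. Here is a hypothetical sketch; the threshold, scoring, and queue are stand-ins rather than anything ServiceNow-specific.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed value; tuned per workflow in practice

@dataclass
class Suggestion:
    text: str
    confidence: float  # however the system scores its own output

human_review_queue: list[Suggestion] = []

def handle(suggestion: Suggestion) -> str:
    """Auto-apply confident suggestions; queue the rest for an agent.
    Either way the workflow moves forward, and human effort is focused
    on the cases where the model is most likely to be wrong."""
    if suggestion.confidence >= CONFIDENCE_THRESHOLD:
        return f"Applied automatically: {suggestion.text}"
    human_review_queue.append(suggestion)
    return f"Queued for human review ({len(human_review_queue)} pending)."

print(handle(Suggestion("Reset the user's VPN token.", 0.93)))
print(handle(Suggestion("Wipe and reimage the laptop.", 0.41)))
```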

Jennifer Evans
http://www.b2bnn.com
principal, @patternpulseai. author, THE CEO GUIDE TO INDUSTRY AI. former chair @technationCA, founder @b2bnewsnetwork. #basicincome activist. Machine learning since 2009.