Each week Emerj interviews top experts regarding how AI is disrupting the financial services industry, offering a unique perspective on each show.
Troy Pospisil, founder and CEO of Ontra, and Eric Hawkins, SVP of Engineering, explain the specific benefits of new emerging generative AI tools on document processing, legal workflows, and the private equity space more broadly. Together, they provide financial leaders with keen insight into how to manage these business activities going forward as generative AI tools become more commonplace throughout the global economy.
Read the conversation below:
Matthew DeMello (0:04)
Welcome, everyone, to the AI in Financial Services Podcast. I’m Matthew DeMello, senior editor here at Emerj Technology Research. Today’s guests are Troy Pospisil, founder and CEO of Ontra, and Eric Hawkins, SVP of Engineering.
Ontra is an AI-driven tech company that develops contract management and other legal solutions for organizations across financial services with a strong presence in private markets. Troy returns to the program with Eric to explain the specific benefits of new emerging generative AI tools on document processing, legal workflows, and the private equity space more broadly.
Together, they provide financial leaders with keen insight into how to manage these business activities going forward as generative AI tools become more commonplace throughout the global economy. Without further ado, here’s our conversation.
Eric and Troy, thanks so much for being with us on the program today.
Eric Hawkins (1:07)
Thanks, Matt. Great to be here.
Troy Pospisil (1:09)
Thanks, man. Good to be here. Appreciate you having us.
Matthew DeMello (1:11)
Absolutely. Even pulling from our first episode with Troy, which laid some of the groundwork on the general concerns in this space, I really want to dive into private markets today: investment banks, hedge funds. In terms of that area, where are we seeing opportunities for generative AI and large language models (LLMs) to streamline workflows?
Eric, let’s start with you.
Eric Hawkins (1:34)
If you just think about so many of the different concerns in private markets, it all boils down to making sense of a lot of information really quickly. Whether this is on the deal side, and you’re sort of evaluating opportunities, and putting all the data together as quickly as you can to make really good decisions, or more on the operational side and in the space where we sit, being able to accelerate the pace of deal-making.
There are so many details in the private markets, so many regulations, so many sort of complex legal agreements that everybody has to handle. And the tools that are emerging, I think, especially with generative AI, LLMs, and some of these technologies, are just incredibly powerful. They can make sense of vast amounts of data and vast amounts of natural language very quickly.
It’s sort of like giving the people in these markets superpowers if they embrace these tools and these technologies, right? They can make better decisions, make them faster, and turn around deals a lot quicker than they could with sort of old-fashioned manual processes.
Troy Pospisil (2:47)
I would just sort of put a capstone on that and say what we’re really trying to accomplish with our technologies for our customers is the ability to handle an increasing volume of work. Private market professionals are being really overwhelmed by the volume of work, especially administrative work. They’re raising more funds, those funds are getting larger, they have more and more LPs, and the documentation is getting more complicated. For example, the number of side letters per fund and per LP has gone up dramatically. The length and complexity of those side letters have increased dramatically.
So you layer all of that on top of each other, and you really just have an exponential growth in the amount of work. And then what these firms are increasingly looking to do is, well, what they’ve always been hoping to do, is have a very high level of precision in the work they do.
I think the holy grail for AI is helping firms address the volume with the same or fewer resources while also increasing the accuracy and precision of their work, which is what we hope to achieve with our tools. And generative AI is a big piece of making that happen.
Matthew DeMello (4:04)
For sure. Now, I’m just coming at this from the angle of sales. That’s where I’ve experienced the true deal-making technology in those systems. So pardon that bias. And definitely, the whole point of this question is going to be — correct me if I’m wrong — I’m wondering about the substance of the generative AI in the deal process.
So large language models, I take it, are in the mix. Is it kind of a co-pilot telling you, maybe we want to move the deal in this way? Or is it more about you leveraging generative AI to say, hey, can you bring me up some data? Can you bring me up some information that might enlighten or help me predict the trajectory of this deal?
What’s the technology, and what’s the workflow? We’ll start with Eric.
Eric Hawkins (4:50)
Yeah, I think it’s a lot more of the second there. The capabilities of generative AI at this point aren’t really ready to make recommendations about what you should do. So if you’re a private markets professional, and you’re thinking, you’re working on some deal, these technologies are not yet capable of you asking them, what should I do given this set of information?
They’re much better at making sense of things. So if you have a whole bunch of information, maybe it’s a very complicated agreement with your LPs, for example. You can run that information through LLMs, summarize it, and make sense of it a lot faster than you could manually.
The way I think of it is that generative AI and LLMs, at this point, just sort of summarize and speed up the work of making sense of a bunch of data. But they’re not going to give you superpowers. They don’t sort of replace the experts. They’re just sort of tools that augment the capabilities of the experts.
Matthew DeMello (6:00)
Yeah, and I think also there, just in terms of a lot of the conversations about where co-pilots are going, there are a lot of workflows where the copilot might be conversational right now, but it seems like the next phase is prescriptive. But it sounds like, at least in private markets, in these workflows, it’s much more about the data, like you’re saying, or at least they’re a little bit behind in that development.
Or I don’t know, maybe the ultimate fruition here, or the end game of what the technology is going to look like here isn’t quite a co-pilot. Let me open that possibility. What do you think of that?
Eric Hawkins (6:35)
Well, I think it’s possible. And I think maybe it’s most useful to think through some concrete examples, right?
Let’s say you needed to draft a new LP agreement or something like that, right? You could, at this point, leverage generative AI and say, write me an LP agreement for the California Public Employees’ Retirement System, for this sort of investment type at this scale, and it would be capable of coming up with some boilerplate information, right?
The point here is that it’s probably going to be pretty low quality right now. And you’re still, as a private market expert, you’re going to review this language. You’re gonna edit it. So it is kind of a copilot in the sense of bootstrapping the process or speeding things up. But it’s not something that you can rely upon on its own.
We talk a lot about humans in the loop. This is a very critical aspect of how you leverage these generative AI technologies right now. They’re not at the level of accuracy, and they can’t handle the levels of complexity, where you can totally take humans out of the loop. But they can come up with a draft that the expert can quickly revise, these types of things. This is how it accelerates the work.
Matthew DeMello (7:57)
Basically, giving you a starting point. I’ll say this as a managing editor: I’ve just launched into ChatGPT. Full disclosure for our clients, I have never used it to write an entire white paper. But when I hit writer’s block, and I kind of need just a paragraph about something very small, that’s where I’m using ChatGPT.
All that said, for what you’re seeing right here: it sounds like it’s not client-facing. And something else, just getting into the problems with LLMs, the hallucinations, the accuracy. I know the answer to this that we’re getting from the industry is that, especially in FinServ workflows and healthcare workflows, we need expert feedback into those systems to help train them on very, very specific contexts and tones before they’re really customer-facing. But that’s if you’re getting into more bespoke use cases. This still seems like it’s going to be internal, it’s going to be in the team.
And I’m wondering there, you’re touching on it a bit in your last answer. If you can give some more color, at least to the expert feedback in those systems, even for your product. I think that would be exemplary, at least for this audience.
Eric Hawkins (9:08)
There are a couple of ways that this shows up. One is if you just have expert humans utilizing these technologies in sort of a co-pilot fashion, as you mentioned. They’re looking at the outputs, and they’re revising things, and they’re adjusting. And they can do that because they’re the experts, right? The human in the loop needs to be the expert right now because they know what good looks like. They’re looking for a high-quality answer. They can filter out nonsensical responses, these types of things, right?
When it comes to the next jump here, it would be to trust these technologies enough that you can just use their outputs directly. You can imagine some world in the future where a contract negotiation is actually being driven by AI or by LLMs. One AI is drafting things, and the other AI is sort of commenting on things. That’s a very scary thing to do right now, in sort of an open loop, humans-out-of-the-picture fashion, because neither of these sides really has a good read on what good looks like. They’ve been trained on vast amounts of data. They sort of understand how to make sense of that data. But they don’t have that qualitative piece of this is a good contract or a bad contract. This is a good deal or a bad deal. There’s no way for it to evaluate that right now.
So the way that you wield these technologies is by steering them and guiding them. This can be sort of retraining of the models themselves. That is one way. You’ll hear people talking about fine-tuning and bringing industry-specific, proprietary data into the picture so that they can fine-tune these LLMs to do a better job on these sort of nuanced predictions that you’re asking them to make. That’s one way.
But another technique, which goes more into prompt engineering and how you interface with the LLMs themselves, is having experts in the loop. You can see how the humans are interacting with the outputs of generative AI, and then you adjust the prompts and the way that you’re interfacing with the LLM in the future. What you’re doing there is finessing things into an ever-improving answer space.
These LLMs are vast. They can answer questions about anything. You’re trying to steer them into this particular area where they’re going to give you the highest quality responses possible. Steering them and guiding them without actually fine tuning the models is a very prevalent and emerging technique and something that we’re working a lot on right now.
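To make the experts-in-the-loop steering Eric describes a little more concrete, here is a minimal sketch in Python. The names (`PromptSteerer`, `call_llm`) and the side-letter example are illustrative assumptions, not Ontra’s actual implementation; the point is only that expert corrections get folded back into future prompts as few-shot examples, rather than being used to retrain the model.

```python
# Minimal sketch of "experts in the loop" prompt steering: expert corrections become
# few-shot examples in future prompts, instead of fine-tuning the model itself.
# The names here (PromptSteerer, call_llm) are illustrative only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ExpertExample:
    draft: str      # the model's original draft
    revision: str   # the expert's corrected version


@dataclass
class PromptSteerer:
    base_instructions: str
    examples: List[ExpertExample] = field(default_factory=list)

    def record_feedback(self, draft: str, revision: str) -> None:
        # Every expert correction becomes steering material for later prompts.
        self.examples.append(ExpertExample(draft, revision))

    def build_prompt(self, task: str) -> str:
        # Keep the prompt small: include only the most recent few corrections.
        shots = "\n\n".join(
            f"Draft:\n{e.draft}\nExpert revision:\n{e.revision}"
            for e in self.examples[-5:]
        )
        return f"{self.base_instructions}\n\n{shots}\n\nTask:\n{task}"


def call_llm(prompt: str) -> str:
    # Placeholder for whatever chat-completion client is actually in use.
    return "<model output>"


steerer = PromptSteerer("You draft clauses for private fund side letters.")
steerer.record_feedback(
    draft="The LP may transfer its interest at any time.",
    revision="The LP may transfer its interest only with the GP's prior written consent.",
)
print(call_llm(steerer.build_prompt("Draft a most-favored-nation clause.")))
```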
Matthew DeMello (11:58)
Something I appreciated in your last answer that leads really well into my next question is how you described the risks involved if we had two AI systems working off each other on a workflow, and that we’re just not there yet with the technology. I think that kind of speaks to the moment we’re at with generative AI overall. I’ve made this comparison before, and the audience is gonna go, oh, here he goes again. But we are at this moment with this technology that’s a lot like the early internet.
You’ve probably heard this narrative out there in the media. It’s a lot like 1998. Everybody’s very impressed that their website uploads in 30 seconds. By 2004, that’s ridiculous. That is so slow that by today’s standards, there’s definitely something wrong with your internet.
I’m wondering if we’re at this moment with generative AI where that’s expecting too much, where that’s a little too dangerous. I’m wondering how you look at where this technology will be in the next five years. Or, more to the point, whether we may see the same thing with hallucinations and misinformation on these platforms in the next few years as we saw with slow uploading speeds on the old modems of the late ’90s and early aughts.
But once these technologies become more commonplace among private market players, what do you think that’s going to mean for this area of financial services?
Eric Hawkins (13:26)
The first thing that I would say is it’s not going to be five years. It’s going to be like five months, because the innovation here on these things is really spectacular. The breadth of these models is expanding so rapidly, as are the types of things that they’re capable of analyzing or helping with. At this time last year, you couldn’t really trust ChatGPT to make sense of a legal contract, right? Today, you can.
So how fast these technologies are evolving, especially with respect to how useful they are for things like private markets and more bespoke cases, is really shocking. So I think it is kind of analogous to the internet in that way. In the ’90s, there was very little that you could even do on the internet, right? Not only was it slow, but it just didn’t really have much for you. Now it has everything, and it’s fast. I think that’s very much going to be the same with all of this.
Another thing I think about: today, how you interact with generative AI is a pretty limited and very constrained interface. You can only give it so much data at a time, right? So if you’ve played with ChatGPT, it’s a pretty small text window that you have to work with. And you kind of have to get creative with how you’re giving instructions. You’re pretty limited. And that’s not going to be the case. In a short amount of time, you’ll be able to, and even today, with some of the announcements from OpenAI in the last few weeks, you can just give it an entire document. So you can be like, here’s a PDF, make sense of this. You don’t have to jump through all the hoops of giving it tiny amounts of text at a time.
So I think it’ll be much less constrained. It’ll be much faster. And it’ll cover so much more in terms of breadth of understanding.
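For a rough picture of the “here’s a PDF, make sense of this” workflow Eric mentions, here is a small sketch: extract a document’s text locally and hand it to a model in one shot. The `summarize_contract` prompt and the `call_llm` stub are assumptions standing in for whichever chat-completion API is in use; the only concrete dependency is the pypdf library for text extraction.

```python
# Rough sketch of whole-document summarization: pull the text out of a PDF and pass
# it to a language model in a single prompt, rather than pasting small chunks by hand.
# call_llm() is a stand-in for any chat-completion client; pypdf handles extraction.

from pypdf import PdfReader


def extract_text(path: str) -> str:
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def call_llm(prompt: str) -> str:
    # Placeholder for your chat-completion client of choice.
    return "<summary>"


def summarize_contract(path: str) -> str:
    text = extract_text(path)
    prompt = (
        "Summarize the key obligations, fees, and transfer restrictions "
        "in the following agreement:\n\n" + text
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(summarize_contract("lp_agreement.pdf"))
```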
Troy Pospisil (15:30)
I would just put a finer point on Eric’s comment about the breadth of the models: there’s no doubt that the breadth of the models and the amount of data supporting the large language models are increasing dramatically. And they can recall that information near instantaneously.
The other interesting trend we’re gonna see is the depth of these models within specific areas. So as we talk about private markets, you’ll not only have the foundation of an LLM that has a massive breadth, but you’ll start to see players like us, who are complementing that with a depth of data and knowledge from a very particular use case. These models are probabilistic, and so there’s an opportunity to weigh the probabilistic outcomes with the more industry-specific knowledge, such that you’re getting answers similar to what you might expect from someone who has spent their career, or many decades, in that specific area; they’re going to overweight the knowledge from that space when answering questions within that space.
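One way to read Troy’s point about weighing probabilistic outcomes with industry-specific knowledge is as a re-ranking step: sample several candidate answers from a broad model, then score each against a domain-specific signal and prefer the one the domain layer trusts most. The sketch below is only an illustration under that reading; `call_llm`, the candidate count, and the keyword-based scorer are assumptions, not Ontra’s actual method, and a real system would use a proper domain model or retrieval layer instead.

```python
# Illustrative sketch: re-rank candidate answers from a general-purpose model using a
# simple domain-specific score, so industry knowledge "overweights" the final result.
# The keyword-overlap scorer is a stand-in for a real domain model; call_llm() is a
# placeholder client that would normally sample several completions.

from typing import List

DOMAIN_TERMS = {
    "side letter", "limited partner", "general partner", "carried interest",
    "most favored nation", "capital call", "management fee", "clawback",
}


def call_llm(prompt: str) -> List[str]:
    # Placeholder: imagine sampling several completions from a chat-completion API.
    return ["<candidate 1>", "<candidate 2>", "<candidate 3>"]


def domain_score(text: str) -> int:
    # Crude proxy for industry-specific knowledge: count known private-markets terms.
    lowered = text.lower()
    return sum(term in lowered for term in DOMAIN_TERMS)


def answer(prompt: str) -> str:
    candidates = call_llm(prompt)
    # Overweight the candidate the domain layer trusts most.
    return max(candidates, key=domain_score)


print(answer("Explain the typical fee offsets in a fund's LPA."))
```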
Matthew DeMello (16:39)
Absolutely. And just given the household name brand among LLMs, you know, ChatGPT: so much of what went into the development of that system was just drinking in an already flourishing internet. I think that speaks to everybody’s point about just the big difference between the dawn of the internet and what we’re seeing in generative AI today, as much as myself and a lot of members of the media love to make that comparison.
Just as a final question, given that backdrop, how should financial services professionals and their legal counsels think of where to focus their skills, given these changes?
Eric Hawkins (17:14)
I think the main thing here is that private market professionals and legal professionals will have more time to spend on their truly differentiated activities. So how are you sourcing good deals? What are your customer relationships with your LPs like? All those types of things are still going to matter, and actually will be even more important.
If you think about the way the industry works today, the vast majority of private market professionals’ time is consumed by these repetitive tasks, like analyzing legal contracts, commenting on legal contracts, making sense of data so that they can make decisions about deals. These types of things, right?
A lot of those sort of routine, somewhat manual, very time-consuming things are where AI is going to help with the amount of work. The amount of time they spend on those things is going to go down. The sort of differentiated activities, like, how do my LPs feel about our relationship? Why do they trust me? How do I sort of curate that relationship? That’s going to become even more important.
And I think, on the legal side, it’s very reasonable to say that legal teams will be differentiated by how quickly they can enable the commercial teams that they support, right? The legal teams are there to ensure that the firm is operating safely and that they’re protected. But at the end of the day, they need to be unblocking the commercial teams. The commercial teams are trying to get deals done as fast as they can. Legal teams need to embrace these technologies so that they can move faster and so that they can unblock the deal flow.
Matthew DeMello (19:02)
Absolutely, and part of the reason I asked this is, I think, even the culture is kind of past the talking point of AI is here to kill your jobs. Although I do think that, you know, inevitably, this means just everybody is rethinking how they’re approaching their skills, how they’re approaching the future. Everybody wants their skills to complement this technology. That’s at the heart of the question here. Anything to add?
Troy Pospisil (19:26)
I agree wholeheartedly with everything Eric said. Ultimately, what these technologies are going to do is allow you to get a higher return on your time and your attention, which is also true for other efficiency-enhancing technologies, like traditional workflow software tools.
I think the returns we’re going to see, and the power of these tools, are going to be something new and different from what folks have experienced before. But the analogy to even word processing software holds true. Before, people had to type everything out on a typewriter, and then you had word processing software. People take for granted things like copy and paste. But that was a massive efficiency improvement within the industry of law and for anyone doing business that involves contracts.
And this is a continuation on that continuum, but at hyperspeed. It is going to eliminate a lot of the more mundane, repetitive tasks. As people think about their career and their skill sets, things like judgment and influence are going to be very important to a thriving career. And for the folks who want to just hide in their office with their face lit up by a screen, cranking away on repetitive tasks and thinking that they can make a career out of that, I think that’s going to be harder to do in the future.
Matthew DeMello (20:47)
Of course, of course. It’s so funny that you actually bring up copying and pasting. I was thinking about this the other day, because I distinctly remember, in the late ’90s, my mom taking me to a computer internet class. I learned how to copy and paste, and just the other day, I was thinking, in 2023 I probably copy and paste more than I even type. And if you made a chart of that, it would be a gently sloping climb up from 1997, of course, over 25 years.
Eric and Troy, thank you so much for this trip down memory lane and a vision into the future. Can’t wait to have you back on the show. Thanks.
Eric Hawkins (21:19)
That was fun.
Troy Pospisil (21:20)
Thank you, Matt.
Matthew DeMello (21:25)
Before we wrap up today’s show, a small comment. I think we’ve done a bit on capital markets, private equity, and the areas of financial services we covered on today’s show in the past, but we haven’t quite covered the generative AI angle here. I think it’s quite revelatory to be talking about a space where generative AI has only just arrived and where the workflows still need to be worked out for where the technology is most applicable to handling the problem.
As we’ve said many times on the podcast — even our sister mainstream podcast, the AI in Business Podcast, also available on the Emerj podcast platform — it’s not about having a fancy hammer and finding the right nail to hammer with that hammer. It’s about knowing your nails, knowing your problems, and building a hammer with these incredible new technologies that is most appropriate to hit that nail, to solve that problem.
And we’re seeing this happen in live action, slightly behind the other parts of the industry, at least when it comes to the kind of copilot vision I was prescribing about midway through the podcast. I think we’re learning a lot from seeing these adoptions in action and watching them unfold from enterprise to enterprise.
And I really appreciate Eric and Troy helping us peer into this process, at least in terms of the solution they’re offering in financial services. On behalf of Daniel and the entire team here at Emerj, thanks so much for joining us today, and we’ll catch you next time on the AI in Financial Services Podcast.