MarketBrains recently highlighted moves within the financial industry, led by OpenFin, to create a universal language for financial applications, an initiative that could help financial AI applications make delegated decisions.
Paul Walker is an independent board member at OpenFin and the former co-head of Technology at Goldman Sachs; among other achievements, he has a background in gravitational physics.
MarketBrains interviewed Walker about the recent FDC3 open-source initiative for financial applications, its implications for the use of artificial intelligence in finance, and the emergence of the data-conversant quant.
MarketBrains: Can you tell me about your involvement with FDC3?
Paul Walker: I am involved with it primarily because I am the independent board member at OpenFin and in that role I help out as they design strategy around open source for product offerings.
One of the things that we concluded was that OpenFin, which is an incredibly valuable platform for application development and deployment, becomes even more valuable if those applications can interact with each other, and the first cases we thought about were the cases of user actions begetting other user actions.
We know this from the web: when we click a link, it opens another page. On a phone, if you click on a phone number it opens your dialler, and if you click on a location it opens a maps application.
There is all this interoperability that doesn’t go back across the internet to the server but allows applications to talk to other applications, and it is built in to the operating systems.
That’s useful in capital markets as well, but it needs a standard.
MB: What might that look like in practice?
PW: If you are looking at an order in your order blotter and you want to open up research on that stock, you don’t need the order blotter application to write the research application; you just need to know there is a research application you can send a ticker to, and it will open the page of research.
For that user-driven interaction, you need an underlying standard in both the order blotter implementation and the research application implementation.
The person who writes the order blotter has to write a piece of software that says: when you click on the ticker, send this message. And the person who writes the research application has to say: when you receive this message, open this piece of research.
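The click-sends-a-message, listener-reacts pattern Walker describes can be sketched as a tiny in-process message bus. The names and message shape below are illustrative stand-ins, not the actual FDC3 API, which defines its own context types and broadcast mechanics.

```python
# Minimal sketch of the interop pattern: the blotter broadcasts a
# context message, and any listening application reacts to it.
# Class and field names here are illustrative, not the FDC3 spec.

class MessageBus:
    """Toy in-process stand-in for the desktop message bus."""
    def __init__(self):
        self.listeners = []

    def add_context_listener(self, handler):
        self.listeners.append(handler)

    def broadcast(self, context):
        for handler in self.listeners:
            handler(context)

bus = MessageBus()
opened = []

# The research app registers what to do when it receives an instrument.
def research_app(context):
    if context.get("type") == "instrument":
        opened.append(f"research page for {context['ticker']}")

bus.add_context_listener(research_app)

# The order blotter: when the user clicks a ticker, send the message.
def on_ticker_click(ticker):
    bus.broadcast({"type": "instrument", "ticker": ticker})

on_ticker_click("AAPL")
print(opened)  # ['research page for AAPL']
```

Note that neither side knows the other exists; both only agree on the message shape, which is exactly the role a standard plays.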
Those mechanics are at an “operating system” level, and we realized that there are a few important parts of software becoming standard that we needed to lean on to do that in capital markets as well.
The second thing we decided to pull on was open source, because closed standards essentially don’t work on the internet anymore.
“We are going to be using AI to greatly increase the abilities of people, but still allow those people’s discretion to be at the centre of the activity”
MB: And how does AI play into that?
PW: AI is secondary to FDC3. But you could easily imagine a program that uses AI to, for example, say: here are the six things you need to look at, and you click on one of them and it interacts with other tools. Or even an AI program that says: I am going to open what I think is interesting today.
It’s definitely the case that some of the applications in capital markets have predictive and cluster-based data analytics. Users could make decisions based on AI in their desktop applications, or perhaps allow those agents some degree of delegated decision-making.
For instance, take a look at how we do order management now for equity trading. If I decide I want to sell a large number of shares and I put that in an algorithm, I don’t decide each of those child orders; I’ve decided the overarching intent, and the algorithm figures out those child orders.
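The parent-intent/child-order split can be illustrated with a deliberately naive slicer: the trader states the total quantity, and the program decides the individual child orders. `slice_parent_order` is a hypothetical helper; real execution algorithms also weigh volume, timing, and market impact.

```python
# Illustrative sketch of "parent intent, algorithm-decided children":
# a naive TWAP-style slicer that splits a parent order into equal
# child orders across time buckets.

def slice_parent_order(total_shares, n_slices):
    base, remainder = divmod(total_shares, n_slices)
    # Spread any remainder over the first slices so the sum is exact.
    return [base + (1 if i < remainder else 0) for i in range(n_slices)]

children = slice_parent_order(100_000, 8)
print(children)       # eight equal slices of 12500
print(sum(children))  # 100000
```

The trader's only decision was "sell 100,000 shares over 8 buckets"; everything below that line of intent is delegated.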
We are going to be using AI to greatly increase the abilities of people, but still allow those people’s discretion to be at the centre of the activity. AI applications in FDC3 will be interacting in that way.
But the primary message of FDC3 is: let’s allow applications to interact with each other no matter the source of the data, whether the source is AI or order management, or Twitter or just me typing in reminders to myself.
You can imagine any application, and with it an open, community-based, well-led, well-defined standard to do that interoperability in the infrastructure of capital markets.
“It’s a different way of thinking, it’s a different way of programming, and as a community we are still getting used to what that means”
MB: Some quants tell us that one of the major problems in allowing AI to make decisions, even delegated ones, is explaining to investors what happened in the case of losses. What are your observations on AI development in this way?
PW: The big change that has happened with the collection of technologies that we call AI, or machine learning, or deep learning, is that it changed the way we program. It used to be when you wrote a computer program you would write down a series of instructions: if the price ticks up x times, buy, for example.
Turns out that’s a horribly unsophisticated and ineffective algorithm, but it’s something that we can understand.
And so that fits our model of computation that we have in our head, because the model of computation that we have in our head is: we tell the computer what we want it to do and it executes it flawlessly and at scale.
With the advent of AI, that is not exactly how we are programming computers anymore. Instead what we are doing is programming computers by saying: we have a problem of data transformation, or data prediction or data clustering.
That now looks like writing a computer program that has a very large number of parameters, maybe millions rather than three. Too many parameters for a person to say what they are.
Imagine we are doing red-green image identification, for example. You don’t say what all those million parameters are; you write a separate program that takes a training set, runs it through the network, and finds the values of those parameters that make sure most of the red images are classified as red and most of the green images as green.
Programming has changed from: write my set of steps to say it’s red or green, to: come up with a program that I think would predict red or green without knowing what the values of all those parameters are, and then write a second program to find all those parameters.
MB: What are the implications of these changes in trading?
PW: If a trader decides to buy a bond, why did they decide that? They saw the price on their screen and thought the price was good. So why did they think the price was good? If you ask “why” all the way down the stack of a person’s decision, you realize that we don’t know.
MB: Right, because the black box is inside your head.
PW: What we’ve ended up doing is writing these programs that have these structures that we think, with good reason, are able to find sets of parameters that act as if they have learned, and then deploy those on data that we haven’t trained them on, and find they actually work.
It’s a different way of thinking, it’s a different way of programming, and as a community we are still getting used to what that means.
“There is now a new set of skills that a data-conversant quant needs to be aware of…”
MB: What are some of the pitfalls there?
PW: You’ve seen all the stories I am sure about facial recognition programs that don’t recognize people of colour, or sentiment analysis that views statements about men as more positive than statements about women.
It’s not like there’s a programmer out there saying, let’s write a racist face recognizer, or, I think men are better than women; what they are doing is choosing a training set for their sentiment analysis that has that cultural bias inside it.
So, how do you make sure that you are not introducing bias into your training set?
And then in terms of finance: how do you interact with the regulatory backdrop, how do you interact with fair and equitable treatment of your customers, how do you interact with the explanatory power of why you made a particular decision?
I think those are all questions that the community is working on, has good answers to, but has to stay conscious of.
There is now a new set of skills that a data-conversant quant needs to be aware of, and those skills are not just stochastic calculus or analysis of an algorithm; it’s also: how do we think about data science, about data validity, about training bias?
MB: Any advice on discussing those realities with investors?
PW: What I tell investors to look for is: has this AI embedded itself in a business process in a way that makes that business better, and has it used a dataset to configure itself that we believe is representative of what that business process has done?
And: is this human replacement or human augmentation? The risk in those two is very different.
In capital markets, for a long time, we will see many of these AIs not as human replacement but as human augmentation. People will be smarter and better around managing, finding and using data to make decisions. But those decisions will still lie with the person.
We have a cultural belief that people make better decisions than computers. You can argue that either way, but for now let’s assume it is true in some cases.
But also, I don’t think the AI techniques are good enough to have all the data and for the algorithm to do all the decision processing for something as complex as capital markets, quite yet.
If I were writing an AI program now that did something as high-stakes as trading, I would put some sanity checks on the back of it, just like we do with more traditional algorithms.
I’d code it: I want you to go buy stock, but please don’t put in an order for 4% of Apple’s ADV. If that happens, something has gone wrong with my algorithm.
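A guardrail of the kind Walker describes might look like the following sketch: reject any order larger than a fixed fraction of the stock's average daily volume (ADV). The threshold, the ADV figure, and the helper names are made up for illustration.

```python
# Sanity check on the back of a trading algorithm: block any order
# that exceeds a fraction of average daily volume, since such an
# order more likely signals a bug than genuine intent.

ADV = {"AAPL": 60_000_000}   # shares per day, illustrative figure
MAX_ADV_FRACTION = 0.04      # the 4% guardrail from the text

def check_order(ticker, shares):
    limit = MAX_ADV_FRACTION * ADV[ticker]
    if shares > limit:
        raise ValueError(
            f"order for {shares} {ticker} exceeds {MAX_ADV_FRACTION:.0%} "
            f"of ADV ({limit:.0f} shares): likely an algorithm bug")
    return True

print(check_order("AAPL", 1_000_000))  # True: within the guardrail
```

The check is deliberately dumb; its value is precisely that it does not share whatever blind spot caused the algorithm to misbehave.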
This interview has been edited and condensed.