The UK is in a strong position to be a world leader in the development of artificial intelligence (AI) and, along with the wider adoption of AI, could deliver a major boost to the economy for years to come, according to a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able?
“The UK’s strengths in law, research, financial services and civic institutions, mean it is well placed to help shape the ethical development of artificial intelligence and to do so on the global stage,” said the report.
But it also presented evidence from across the financial services industry that there are major challenges to overcome.
James Wise, a partner at Balderton Capital, an investment manager, highlighted the specific challenges for AI-focused companies.
“The most challenging area of finance in this field is for spin-outs from academic research between launching the company and getting to a first product,” he told the Committee.
AI start-ups have a longer development period, Wise explained, owing to the complexity of the software involved and the need for huge amounts of data; the resulting shortfall in funding before product launch creates a “Valley of Death” for start-ups.
This was reinforced by comments from the team at Prowler.io, a company the Committee visited.
Founded in January 2016 and led by co-founder and CEO Vishal Chatrath, Prowler closed its first round of seed funding in August 2016 and has since grown to over 50 staff of 24 different nationalities, including 24 PhDs.
The founders explained that they had started the company after observing that most AI start-ups were focused on using AI for visual recognition and classification, a problem they believed to be largely solved.
Prowler set out to develop technology which could reliably make the millions of ‘micro decisions’ found in complex systems operating in dynamic environments, often under high degrees of uncertainty.
In particular, they were focusing on financial services, transport and the games industry.
Lots of data in a black box
Prowler’s team identified two issues with conventional machine learning approaches: a reliance on very large quantities of data and the “impenetrable” nature of deep learning systems, which is especially problematic in decision-making applications.
This latter problem was not only about ethical principles, but also more mundane issues, such as the ability to acquire liability insurance for their products, a crucial consideration for real-world deployment.
They were keen to move beyond the machine learning systems used today by combining three widely used approaches (probabilistic modelling, multi-agent systems and reinforcement learning).
The aim was to build an approach to AI which would be observable, interpretable and controllable.
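The article does not describe Prowler’s actual system, but the combination it names can be illustrated in miniature. The sketch below pairs a probabilistic model (a Beta posterior per action) with a reinforcement-learning loop (Thompson sampling), a standard textbook technique chosen here only because its internal state is directly inspectable, in the spirit of “observable, interpretable and controllable”. All names and numbers are illustrative assumptions, not anything from the report.

```python
import random

class ThompsonAgent:
    """Illustrative decision-maker: a Beta posterior per action
    (probabilistic modelling) updated from observed rewards
    (reinforcement learning). The posterior is the model's entire
    state, so its beliefs can be read off at any time."""

    def __init__(self, n_actions):
        # Beta(1, 1) prior: one (successes, failures) pair per action.
        self.params = [[1, 1] for _ in range(n_actions)]

    def choose(self):
        # Sample a plausible success rate from each posterior; act greedily
        # on the samples. Uncertain actions still get tried occasionally.
        samples = [random.betavariate(a, b) for a, b in self.params]
        return max(range(len(samples)), key=lambda i: samples[i])

    def update(self, action, reward):
        # reward is 1 (success) or 0 (failure).
        self.params[action][0] += reward
        self.params[action][1] += 1 - reward

    def posterior_mean(self, action):
        a, b = self.params[action]
        return a / (a + b)

random.seed(0)
agent = ThompsonAgent(2)
true_rates = [0.3, 0.7]  # hidden success rate of each action (made up)
for _ in range(2000):
    a = agent.choose()
    agent.update(a, 1 if random.random() < true_rates[a] else 0)
```

Unlike a deep network, the agent’s “reasoning” here is just the two Beta posteriors, which converge towards the hidden rates and can be audited directly.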
Another general observation made by Prowler’s team was that the industry placed too much emphasis on data itself, and not enough on the processes by which that data is interpreted and actually understood.
As one of their team members put it, “we need big knowledge, not big data”.
The impenetrable nature of deep learning systems, sometimes called the “black box” problem, was also discussed by many witnesses in the context of restricting its use in certain domains, among them financial products and services such as personal loans and insurance.
Experts from the University of Edinburgh emphasised the human element, arguing that given the “completely unintelligible” nature of decisions made via deep learning, it would be more feasible to focus on outcomes, and “only license critical AI systems that satisfy a set of standardized tests, irrespective of the mechanism used by the AI component”.
Private sector squeezes out government and academia
The Committee reported that high private sector demand for machine learning expertise risked eroding training pipelines, as the academics needed to train the next generation of talent were being attracted away from universities into private companies.
The Future of Humanity Institute at the University of Oxford warned that high salaries, “as well as other benefits of working in industry (such as proximity to other talented researchers and access to large amounts of data and computing power) present a formidable obstacle to the UK Government (and academia) in recruiting AI experts”.
Witnesses suggested that lessons might be learnt from “other domains, such as finance and law, where competition for talent with the private sector has been fierce”, and universities could “consider novel initiatives such as special authority for a department to pay higher than usual salaries”.
Open banking as data privacy guide
One of the main findings in the report is that individuals need greater personal control over their data and the way in which it is used.
The Committee pointed to the Open Banking initiative, launched in January 2018, as a demonstration of how individual control of personal financial data can work in practice.
Open Banking refers to a series of reforms relating to the handling of financial information by banks.
From 13 January 2018, UK-regulated banks have had to let their customers share their financial data with third parties (such as budgeting apps, or other banks).
Banks are sharing customer data in the form of open APIs (application programming interfaces) which are used to provide integrated digital services.
The intent of these reforms is to encourage competition and innovation, and to lead to more, and better, products for money management.
Importantly, personal information can only be shared if the data subject (the person whose information it is) gives their express permission. The Competition and Markets Authority said “the principles underlying Open Banking are similar to the new portability principle in GDPR—and there is a lot of potential in the portability principle to help get data working for consumers.”
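The consent model described above can be sketched in a few lines: a third party gets data through the bank’s API only against a consent the customer has expressly granted. This is a toy illustration of the principle, not the real Open Banking API, and every class, method, and scope name here is a hypothetical assumption.

```python
class BankApiClient:
    """Toy model of consent-gated data sharing: account data is
    released only against a consent the customer has explicitly
    granted to a named third party, scoped to specific data types."""

    def __init__(self):
        self._consents = {}  # consent token -> set of permitted scopes
        self._accounts = {"alice": {"balance": 1250.00}}

    def grant_consent(self, customer, third_party, scopes):
        # In a real Open Banking flow the customer authorises this step
        # with their bank; here we just record the granted scopes.
        token = f"{customer}:{third_party}"
        self._consents[token] = set(scopes)
        return token

    def read_balance(self, token, customer):
        # Requests lacking an explicit 'balances' consent are refused.
        if "balances" not in self._consents.get(token, set()):
            raise PermissionError("no customer consent for balances")
        return self._accounts[customer]["balance"]

bank = BankApiClient()
token = bank.grant_consent("alice", "budget-app", ["balances"])
balance = bank.read_balance(token, "alice")
```

A budgeting app holding a valid consent can read the balance; any caller without one is refused, mirroring the express-permission requirement the Committee highlighted.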