The Potential of Generative AI to Transform and Disrupt Trading

What are the applications and use cases for generative AI in trading? How can firms successfully integrate large language models into their trading workflows and address challenges around data quality, model interpretability and potential biases? What are the regulatory and ethical concerns around AI systems and how can these be alleviated so that firms can realise the full potential and the benefits that generative AI can offer?

These were the questions that were discussed during a lively and engaging panel session at the A-Team Group’s recent TradingTech Briefing New York, entitled ‘The Potential of Generative AI to Transform and Disrupt Trading’, moderated by Eric Karpman, SME in Trading & Investment Management Technology, and featuring distinguished speakers Vijay Bhandari, Technology Principal & Innovation Lead for AI and ML – Innovation Network at Deutsche Bank, Nataliya Bershova, Head of Execution Research & Managing Director, AB Bernstein, and Keyvan Azami, Enterprise AI Engineering Lead at Google.

AI’s evolution

The discussion started with some background on AI. Alan Turing’s 1950 proposal of the Turing Test marked the inception of the field. The 1980s saw the introduction of expert systems and Geoffrey Hinton’s work on neural networks. A significant milestone came in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov.

The growth of the Internet in the 2000s resulted in an explosion of data, which facilitated deep learning. The advent of hyperscale computing, led by providers such as Amazon’s AWS and Microsoft’s Azure, subsequently made computation more affordable, further boosting deep learning.

The 2020s ushered in GPT-3, followed by ChatGPT in November 2022. Historically, interaction with machines was conducted in languages specifically designed for them, such as Java, C++, and assembler. Now, machines increasingly operate in natural human language, broadening access to advanced technology. This suggests we are now at a significant inflection point in the evolution of AI.

Large language models and generative AI

The conversation then turned to large language models (LLMs) and generative AI. Panelists explained how LLMs, despite being termed ‘language models’, exhibit multi-modal capabilities: they can process diverse data types, such as visual, written, and audio data, and project them into a shared high-dimensional vector space for comparison and similarity analysis. This ability enables text-to-image transformation (as seen in applications such as DALL-E and Midjourney), text-to-music conversion, and various other manifestations using models trained on a variety of data content.
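
To illustrate the vector-space idea, the minimal sketch below compares hypothetical embedding vectors using cosine similarity; the hard-coded vectors stand in for the output of whatever multi-modal embedding model a firm might use:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Measure how 'close' two embeddings sit in the shared vector space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: in practice these would come from a multi-modal
# model that maps text, images, or audio into the same vector space.
news_headline = np.array([0.12, 0.85, -0.33, 0.40])
earnings_audio = np.array([0.10, 0.80, -0.30, 0.45])
unrelated_item = np.array([-0.70, 0.05, 0.60, -0.20])

print(cosine_similarity(news_headline, earnings_audio))  # high -> similar content
print(cosine_similarity(news_headline, unrelated_item))  # low  -> dissimilar
```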

The significant advantage of these models lies not only in their capacity to generate content but also in their ability to lower the barriers to entry and experimentation, enabling idea testing through prompt engineering. Furthermore, ‘few-shot learning’ allows these models to grasp intentions quickly from a small set of examples. Techniques like ‘chain-of-thought prompting’ enable users to walk the model through a problem-solving process, much as one might explain a task to a child, after which the model can address specific queries. It’s crucial to note that this process does not involve teaching the model; rather, it prompts the model to apply its pre-existing knowledge to the context at hand.
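
As an illustration of these prompting techniques, the sketch below assembles a hypothetical few-shot, chain-of-thought prompt for classifying headline sentiment; the example headlines and the overall structure are illustrative assumptions, not a prescribed format:

```python
def build_prompt(query: str) -> str:
    """Assemble a few-shot, chain-of-thought prompt for sentiment classification."""
    # Two worked examples give the model the pattern to follow (few-shot).
    few_shot_examples = (
        "Headline: 'ACME beats earnings estimates by 12%.'\n"
        "Reasoning: Beating estimates is typically received positively.\n"
        "Sentiment: positive\n\n"
        "Headline: 'ACME CFO resigns amid accounting probe.'\n"
        "Reasoning: Executive departures tied to probes signal risk.\n"
        "Sentiment: negative\n\n"
    )
    # The trailing cue nudges the model to show its reasoning chain.
    return (
        few_shot_examples
        + f"Headline: '{query}'\n"
        + "Reasoning: Let's think step by step.\n"
    )

print(build_prompt("ACME announces share buyback programme."))
```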

Use cases in trading technology

The discussion then focused on use cases within the trading and trading technology space. As data volumes grow rapidly, efficient summarisation becomes essential so that humans can make informed decisions from diverse information sources such as articles, videos, earnings calls, and analyst reports. LLMs could potentially augment the work of high-touch traders by helping them gather, condense, and present important facts in a clear and brief response. This capability could enhance customer service and allow traders to uncover insights more efficiently and promptly. However, the use of this technology still depends on human judgment about how to use the summaries, what information to share with clients, and so on. The trader’s expertise and experience remain crucial here, particularly concerning investment and trading choices.
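
A minimal sketch of such a summarisation workflow appears below; `llm_call` is a hypothetical stand-in for whatever LLM service a firm might wrap, and the human trader remains responsible for what is done with the output:

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. 'earnings call', 'analyst report'
    text: str

def summarise_for_trader(docs: list[Document], llm_call) -> str:
    """Condense multiple information sources into one brief for a high-touch trader.

    `llm_call` is a hypothetical callable wrapping an LLM service; the trader
    still decides what to act on or share with clients.
    """
    combined = "\n\n".join(f"[{d.source}]\n{d.text}" for d in docs)
    prompt = (
        "Summarise the key facts below in five bullet points for a trader. "
        "Flag anything uncertain or unverified.\n\n" + combined
    )
    return llm_call(prompt)

# Usage with a dummy LLM call, just to show the shape of the workflow:
fake_llm = lambda prompt: "- ACME raised guidance\n- Margins flat (unverified)"
docs = [Document("earnings call", "Management raised full-year guidance...")]
print(summarise_for_trader(docs, fake_llm))
```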

On the other hand, electronic trading has mostly been guided by systematic strategies, where decisions are made by algorithms using numerical data, including market data, fills, cancellations, and various quant signals. Methods already exist for incorporating news into systematic trading, typically by converting text into numerical scores that trading strategies can consume. This additional layer introduces potential challenges, however: without thorough back-testing of how these text-derived scores interact with the numerical inputs used in trading strategies, firms could expose themselves to major risks.
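
The sketch below illustrates one hedged way such a text-derived layer might be blended with a numeric signal and sanity-checked against returns; the normalisation assumptions and the 20% weight are illustrative, not a recommendation:

```python
import numpy as np

def blended_signal(quant_signal: np.ndarray,
                   news_scores: np.ndarray,
                   news_weight: float = 0.2) -> np.ndarray:
    """Blend a numeric quant signal with text-derived sentiment scores.

    Both inputs are assumed to be normalised to [-1, 1]; news_weight keeps
    the unproven text layer small until back-testing justifies more.
    """
    return (1 - news_weight) * quant_signal + news_weight * news_scores

# Toy back-test check: does the blended signal correlate with next-day returns?
quant = np.array([0.3, -0.1, 0.5, -0.4])
news = np.array([0.6, -0.2, 0.1, -0.8])   # scores distilled from headlines
returns = np.array([0.01, -0.005, 0.02, -0.015])
signal = blended_signal(quant, news)
print(np.corrcoef(signal, returns)[0, 1])
```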

Panelists felt that strong use cases for LLMs in systematic trading are unlikely to materialise in the near term, as firms would need to establish specific rules, controls, and alerts, as well as provide assurance to regulators that appropriate policies are in place. This cautious approach is necessary given the inherent risks associated with trading.

Risks

An audience poll was conducted during the session, asking attendees: ‘What do you consider to be the biggest risk around adopting generative AI and large language models?’ 58% of respondents cited ‘potential misuse/risk of misinformation’ as their biggest concern, with ‘lack of explainability’ and ‘potential bias’ each scoring 16%. Panelists went on to discuss these risks in more depth.

One major risk with large language models is that they can generate false information, a problem known as ‘hallucination’. Given the ongoing ‘fake news’ problem, these models could end up blending real and fabricated information. There are ways to mitigate these risks, but panelists agreed that hallucination still poses a significant problem.

Privacy is also a concern, largely due to the nature of LLMs. While these models offer the advantage of reducing the need for large compute power and extensive training data, the flip side is a potential risk to privacy. If a service provider aggregates individual training examples, the model could inadvertently learn private or sensitive information. Therefore, it’s crucial to consider the model’s ownership, the controls that have been implemented, and privacy-preserving techniques.

Data quality

Panelists then turned to the subject of data quality, which is key to the functionality of LLMs. It was agreed that accessing raw data directly from its source helps minimise errors that can occur during the enrichment and validation processes in a typical enterprise data pipeline. Although it requires more curation, using source data increases the chance of attaining high-quality inputs for the models.

The effectiveness of LLMs can also benefit from a strong culture of governance, stewardship, and provenance. Clear understanding of data’s lineage and responsible handling through all stages from source to distribution provides a solid foundation for creating reliable models.

Inadequate data can lead to significant financial losses, especially in trading. To mitigate this risk, firms should establish controls and alerts for timely anomaly detection in AI systems. Notwithstanding the advancements in automation, panelists agreed that there is a continued need for human oversight in high-risk areas to prevent reliance solely on fully automated systems.
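
A minimal sketch of such an alerting control is shown below, using a simple rolling z-score check; the threshold and the escalation behaviour are illustrative assumptions, and a production control would be considerably richer:

```python
import statistics

def zscore_alert(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag the latest AI-system output if it deviates sharply from recent history.

    A simple rolling z-score check; real controls would add rate limits,
    kill switches, and escalation to a human reviewer.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

recent_outputs = [0.9, 1.1, 1.0, 0.95, 1.05]
print(zscore_alert(recent_outputs, 1.02))  # False: within normal range
print(zscore_alert(recent_outputs, 4.50))  # True: anomalous, alert a human
```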

Reducing Bias

One consequence of generative AI’s ability to generate data not only for itself but also for other models is the potential for inherited biases to be perpetuated and amplified across systems. Panelists stressed that it is essential to address bias at every level, including data collection methods and participant inclusivity. Given inherent human biases, vigilance must be exercised throughout the process to minimise the propagation of bias.

Panelists also discussed why it is vital to have a human in the loop. Framing problems appropriately for AI models requires an understanding of the users’ journey and workflows, which should be considered from the start rather than backloading the UX and engineering design.

In the AI marketplace, firms like Bloomberg and Broadridge provide specialised pre-trained models, which have potential advantages over more generic models because they are trained on domain-specific data such as financial or legal information. Open source continues to move at a rapid pace and is a valuable resource for many firms. Tech giants like Google, Amazon, OpenAI, and Microsoft are well-positioned to provide foundational models, but the real value comes from combining these models with proprietary data.

Conclusion

As the session drew to a close, panelists agreed that we are at a pivotal moment in the trading industry. The potential of generative AI is undeniable, yet it prompts a significant question: how do we achieve ‘responsible AI’? While the answer remains elusive, a combination of addressing bias, privacy, access, and diversity will form part of the solution.

One panelist suggested that rather than awaiting external solutions, firms should focus on their local domains and use cases to apply AI responsibly. Another highlighted that developing protocols for bias mitigation and ensuring privacy safeguards can make a significant difference. A third recommended that companies should shoulder the responsibility of data management, from cleaning and preparing data to normalisation, backup, and eventual commercialisation of the product. Employing quantitative techniques and extensive back-testing can further ensure data quality.

All panelists agreed that the importance of data quality cannot be overstated. Ensuring data comes from trustworthy sources, understanding its origination, and applying procedures for outlier removal are paramount.
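
As a final illustration, the sketch below shows one common outlier-removal procedure, Tukey’s interquartile-range fences; the bad-tick example and the fence multiplier are illustrative assumptions:

```python
def remove_outliers_iqr(values: list[float], k: float = 1.5) -> list[float]:
    """Drop points outside the interquartile-range fences, a common
    pre-modelling cleaning step (k=1.5 is the conventional Tukey fence)."""
    ordered = sorted(values)
    n = len(ordered)
    q1 = ordered[n // 4]
    q3 = ordered[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

prices = [101.2, 100.8, 101.0, 100.9, 250.0, 101.1]  # 250.0 is a bad tick
print(remove_outliers_iqr(prices))  # the bad tick is filtered out
```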
