Resonanz Capital's data scientist Chang Liu and software engineer Dinko Georgiev share insights on the chatbot they developed for hedge fund investments.

How would you explain a chatbot to an intelligent 8-year-old child?

The chatbot we developed is based on a Large Language Model (LLM), which teaches computers to interact with users in ordinary human language. When a user asks a question in natural language, we search a vast database of hedge fund documents, extract the most relevant text, and use it to answer the question.

 

What is a foundation model and what is a Large Language Model?

Foundation models are large-scale, general-purpose neural networks trained on vast amounts of text data to understand and generate human-like language. They serve as starting points for developing more specific and specialized AI models. Large Language Models are a subset of foundation models. For example, a foundation model could be used to create a chatbot, translate languages, or write creative content. An LLM, on the other hand, is typically used for one or two specific tasks, such as generating text or translating languages. For instance, an LLM can be trained to scan and extract receipts from a dataset of financial transactions.

 

How are computers trained to interact with natural language instead of binary code?

The bottom line is that natural language is still converted into binary code. Training computers to interact with natural language, as opposed to binary code, is a complex process that involves multiple stages, models, and techniques. The field dedicated to this endeavor is called Natural Language Processing (NLP).
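
As a minimal sketch of that first stage, tokenization, the snippet below uses the Hugging Face transformers library (the model name is purely illustrative, not the one behind our chatbot): text is broken into sub-word tokens, each token is mapped to an integer ID, and those integers are what the machine ultimately stores as binary.

```python
# A minimal sketch of the first NLP step: turning text into numbers.
# The model name is an illustrative choice, not the one used in our chatbot.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "What are the fund's redemption terms?"
token_ids = tokenizer.encode(text)

print(tokenizer.tokenize(text))   # sub-word pieces, e.g. ['what', 'are', ...]
print(token_ids)                  # the integer IDs the model actually consumes
print(format(token_ids[1], "b"))  # and ultimately everything is binary
```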

 

Why did Resonanz Capital develop a chatbot?

We had two main reasons. First, over the years, we've been collecting vast amounts of hedge fund data for our discretionary investment and advisory business. An LLM can efficiently consolidate, extract, and analyze this huge volume of unstructured documents. Second, it can make the hedge fund research process easier by allowing users to combine structured and unstructured data for fund analysis without having to go through thousands of individual fund documents.

 

How long does it take to develop a chatbot?

It took a month to research and develop the prototype, and another two months to develop the first production version. However, there were numerous software and hardware development stages. A chatbot should provide quality responses in a reasonable time. Some of the services we tried early on, such as semantic search, had potential but didn't work as well as we needed for text mapping or for providing context for the data. There were also varying levels of complexity and capability that had to be sorted out, but also positive surprises, such as the deployment of certain open-source LLMs, which proved less complex than expected.

 

What was the greatest challenge in developing a chatbot?

In addition to the hardware and software steps we just mentioned, a good way to think of a chatbot is as a brilliant and creative child who has absolutely zero context for the vast universe of data in its head. It can be incredibly dynamic if you provide the correct context and train it to respond to your precise needs.

For example, if you just asked a chatbot to draw you a map of Manhattan, you might get a beautifully coloured map of all the boroughs, or one granular enough to help you get from Times Square to a meeting downtown in rush hour. Both responses are valid; one is more creative, the other more granular and practical. However, you need to tell the model exactly what you need. It's not alive.

The devil is in the details. One test involved scraping and assembling relevant redemption terms from a universe of hedge fund prospectuses. You may have to guide the chatbot to retrieve language only from a particular section or a single paragraph; otherwise, it might formulate a plausible-sounding, but incorrect, answer by taking words from across the document.
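
As a hypothetical illustration of that kind of guidance, the sketch below restricts the context handed to the model to a single prospectus section before asking the question; the chunk structure and section names are invented for the example and are not our actual schema.

```python
# Hypothetical sketch: restrict the model's context to one prospectus section,
# so it cannot stitch an answer together from unrelated parts of the document.
chunks = [
    {"section": "Redemption Terms", "text": "Redemptions are permitted quarterly..."},
    {"section": "Risk Factors",     "text": "Investors may lose all capital..."},
]

def context_for(section: str) -> str:
    """Build the prompt context only from chunks of the requested section."""
    return "\n".join(c["text"] for c in chunks if c["section"] == section)

prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context_for('Redemption Terms')}\n\n"
    "Question: What are the redemption terms?"
)
```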

 

How does a chatbot get control of the universe of data required?

A key technique for bringing a user's own data into the chatbot is running semantic embedding on the fund documents. The documents are split into overlapping strings, converted into multi-dimensional vectors, and saved into a vector store database. When a user enters a question, the question is also converted into a vector and compared against the vector store. The similarity search returns the passages of text that are most similar to the question.
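
The sketch below illustrates this flow using the open-source sentence-transformers library; the model name and document snippets are illustrative, and a production system would persist the vectors in a dedicated vector store rather than keeping them in memory.

```python
# Sketch of the embedding-and-retrieval flow described above. The model name
# and documents are illustrative, not our production setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# In practice, fund documents are first split into overlapping chunks.
chunks = [
    "Redemptions are permitted quarterly with 90 days' notice.",
    "The fund pursues a global macro strategy across rates and FX.",
    "The management fee is 2% per annum, payable monthly in arrears.",
]
chunk_vectors = model.encode(chunks)      # one vector per chunk

question = "How often can investors redeem?"
question_vector = model.encode(question)  # the question becomes a vector too

# Cosine similarity against every stored chunk; the highest score wins.
scores = util.cos_sim(question_vector, chunk_vectors)[0]
best = int(scores.argmax())
print(chunks[best])  # -> the redemption-terms chunk
```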

 

How does one get a chatbot to say it doesn’t know the answer?

As models have advanced, handling situations where the chatbot doesn't know the answer has become easier. However, it's not as simple as training the model to say "I don't know." This is where the art of prompt engineering comes into play. Instead of creating a completely new version of the model, user queries are guided through a set of customized instructions designed to meet our specific business requirements. Of course, there are challenges involved in this process. For instance, when we asked our model if it knew any hedge fund jokes, it responded appropriately with a "no." But when we asked it to come up with a hedge fund joke, surprisingly, it didn't disappoint.
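
A minimal sketch of that kind of guardrail follows; the wording is purely illustrative of the idea, not our production prompts.

```python
# Illustrative prompt-engineering sketch: rather than retraining the model,
# every query is wrapped in instructions that tell it when to admit ignorance.
SYSTEM_PROMPT = (
    "You are a hedge fund research assistant. Answer questions using only "
    "the context provided. If the context does not contain the answer, "
    "reply exactly: 'I don't know based on the available documents.'"
)

def build_messages(context: str, question: str) -> list[dict]:
    """Wrap a user question in the guardrail instructions before sending it
    to the LLM, following the common system/user chat-message convention."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
```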

 

How fast do the ecosystem and the ability to connect to Large Language Models evolve?

The evolution of Large Language Models (LLMs) may not occur rapidly due to the high cost of training them. However, the ecosystem surrounding LLMs is constantly evolving and dynamic. An example of this is LangChain, a popular workflow library for LLMs, which underwent several new releases during the development of our chatbot over the span of a few months.

 

Are there data privacy and security issues to consider?

Data privacy is certainly an issue if you are using models deployed on external servers. In that case, you need to trust the big tech companies you send the data to, just as when you use other online services from these companies. Alternatively, you can choose an open-source model and deploy it on your own server. In that case, you can be certain that you have full control of your data.
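
As an illustrative sketch of the self-hosted route, the snippet below runs an open-source model locally with the Hugging Face transformers library, so fund documents never leave your own server; the model name is an example, not a recommendation.

```python
# Illustrative sketch: running an open-source model on your own hardware,
# so sensitive documents are never sent to an external provider.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-source model
)

answer = generator(
    "Summarize the key redemption terms: redemptions quarterly, 90 days' notice.",
    max_new_tokens=100,
)
print(answer[0]["generated_text"])
```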

 

What are the implications of an LLM for our business?

LLMs have tremendous potential to increase the depth and breadth of our research process. Additionally, they can support customer service through chatbots, providing on-demand and bespoke research insights. Ultimately, these models are just one piece of the puzzle in our toolkit, and their seamless integration should be carefully orchestrated to meet the specific needs and preferences of our clients.

 

Tell us what you think