The New Technological Age of Large Language Models (LLMs)

In a recent note, published in an electronic newsletter of the leading Silicon Valley venture capital firm Andreessen Horowitz, Matt Bornstein and Rajko Radovanovic write:

“Large language models are a powerful new primitive for building software. But since they are so new—and behave so differently from normal computing resources— it’s not always obvious how to use them.”

The idea that pre-trained AI models like ChatGPT, Bard, etc., can be a powerful force for building new software and new technologies has gained considerable momentum in the last few months. LLMs were, of course, originally designed to generate text, and the initial excitement over their capabilities was focused on text-centric applications, such as writing essays, summarising documents, copy-editing advertisements and so on. These text-processing applications will, indeed, serve many important use cases. However, in the past two or three months, an idea has emerged that text generation may not be the most important application of LLMs, and that their most significant uses may not be in text processing at all.

This change in focus ultimately arises from a curious phenomenon observed in large pre-trained models. These models display a property called emergence: as they grow in scale, they suddenly seem to gain abilities that smaller models do not have. A blog post published by Google researchers last year describes the phenomenon as follows:

“In ‘Emergent Abilities of Large Language Models’, recently published in the Transactions on Machine Learning Research (TMLR), we discuss the phenomena of emergent abilities, which we define as abilities that are not present in small models but are present in larger models. More specifically, we study emergence by analysing the performance of language models as a function of language model scale, as measured by total floating point operations (FLOPs), or how much compute was used to train the language model. However, we also explore emergence as a function of other variables, such as dataset size or number of model parameters (see the paper for full details). Overall, we present dozens of examples of emergent abilities that result from scaling up language models. The existence of such emergent abilities raises the question of whether additional scaling could potentially further expand the range of capabilities of language models.”

One very interesting behaviour that emerges in large pre-trained LLMs is the ability to carry out complex reasoning tasks. It was already observed in early 2022 that prompts which follow a chain of thought, i.e., a chain that spells out a series of intermediate reasoning steps, significantly improve the ability of LLMs to perform complex reasoning. Several researchers have since demonstrated that well-constructed prompt chains allow LLMs not only to reason in this way but also to plan a sequence of tasks and execute them to reach a pre-set goal. It is this emergent ability to reason, plan, and execute that has raised the possibility that LLMs may completely change the software development process.
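To make the idea of chain-of-thought prompting concrete, here is a minimal sketch using the OpenAI Python client. The model name, the sample word problem, and the ask() helper are illustrative assumptions, not taken from the article or from Bornstein and Radovanovic; the point is simply that the same question is asked twice, once directly and once with a request to reason step by step.

```python
# A minimal sketch of chain-of-thought prompting, assuming the openai
# package (version 1.x interface) is installed and OPENAI_API_KEY is set.
# The model name and the example problem are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",      # any chat-capable model would do here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,              # deterministic output for comparison
    )
    return response.choices[0].message.content

problem = ("A library has 3 shelves with 24 books each. "
           "If 17 books are checked out, how many books remain?")

# Direct prompt: the model jumps straight to an answer.
direct = ask(problem)

# Chain-of-thought prompt: the model is asked to lay out intermediate
# reasoning steps before stating its final answer.
step_by_step = ask(problem + "\nLet's think step by step, then give the final answer.")

print("Direct answer:\n", direct)
print("\nStep-by-step answer:\n", step_by_step)
```

On simple arithmetic like this, both prompts usually succeed; the gap reported in the research literature appears on longer, multi-step problems, where asking for intermediate steps measurably improves accuracy.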

We have known for more than a year that LLMs can write code. Even GPT versions earlier than GPT-3.5 could write simple Python code, and ChatGPT is, of course, quite good at writing code snippets and even complete small projects. What has changed since those early days is the appearance of several frameworks that can generate much longer codebases with more elaborate structure.

One such framework is described by Bornstein and Radovanovic. The build starts with contextual data sourced from an enterprise's private archives: transaction data, email, call recordings, training records, manuals, and so on. This data is embedded into a vector space and stored in a vector database. It then serves as input to an orchestration framework, such as LangChain or OpenAI functions, which uses agents to design, plan, and carry out the project. These agents can also be used to develop the front end and orchestrate the web deployment.
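As a rough illustration of that pipeline, the sketch below embeds a handful of in-memory documents into a FAISS vector store and wires it to a LangChain retrieval chain. The sample snippets and the question are invented placeholders, and the import paths follow the classic LangChain layout, which may differ in newer releases; treat it as a sketch of the embed-store-retrieve pattern rather than the architecture Bornstein and Radovanovic specify.

```python
# A rough sketch of the contextual-data pipeline described above, assuming
# the langchain, openai, and faiss-cpu packages and an OPENAI_API_KEY in the
# environment. The sample documents are invented stand-ins for enterprise data.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# 1. Contextual data drawn from the enterprise's private archives.
documents = [
    "Refund requests above $500 must be approved by a regional manager.",
    "Call recordings are retained for 18 months under policy DR-7.",
    "The onboarding manual requires security training within 30 days of hire.",
]

# 2. Embed the documents into a vector space and store them in a vector database.
vector_store = FAISS.from_texts(documents, OpenAIEmbeddings())

# 3. Hand the store to an orchestration layer: here, a simple retrieval chain
#    that fetches the most relevant snippets and lets the LLM answer with them.
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=vector_store.as_retriever(search_kwargs={"k": 2}),
)

print(qa_chain.run("Who has to approve a $750 refund request?"))
```

A full agent-based build of the kind described in the note would add planning and tool-use layers on top of this retrieval step, but the embed-store-retrieve core is the same.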

Although LLM agents are likely to carry out much of the development, it is unlikely that the process can be fully automated any time soon. A few humans in the loop will remain essential for the foreseeable future, because even with contextual data, LLMs are still prone to confabulation, and the integrity of the codebase will therefore require human review. It is likely, however, that major projects will in future be carried out by teams of just 2-3 people working with a large collection of LLM agents.

As Bornstein and Radovanovic write:

“Pre-trained AI models represent the most important architectural change in software since the internet. They make it possible for individual developers to build incredible AI apps in a matter of days that surpass supervised machine learning projects that took big teams months to build.”

Some observers have speculated that this trend will lead to reduced demand for software developers. The truth may be just the opposite. The market for software today is limited by the scarcity of programmers, and there is a huge pent-up demand for technological solutions, using AI and other technologies, that can improve our lives. Once we learn how to use LLMs effectively in the development process, the demand for such systems is likely to explode, and we will need many more people who can develop these solutions efficiently and economically using a team of LLM agents.

About the Author

Dr Debashis Guha is an Associate Professor and Director of the Master of Artificial Intelligence in Business at SP Jain School of Global Management. He has worked extensively in the fields of Data Science, Artificial Intelligence, and Machine Learning in the US and India, and has founded a Bengaluru-based data science company. Dr Guha has taught at several US universities, consulted for major multinational corporations, central banks and governments worldwide, and published research papers in top-rated peer-reviewed journals.
