R&D stories

ChaTmeleon Crew: A Look at GPTs (Generative Pre-trained Transformers)

Nazareno De Francesco

Senior deep learning and software analyst, natural language processing

Andrea Bolioli

Senior specialist, natural language processing

Transformative innovation

2022

The opportunities of an artificial intelligence that can write like a human being

This article describes the research and experiments conducted by the ChaTmeleon Crew on Generative Pre-trained Transformers (GPTs) in the fall of 2021.
Why did we call it ChaTmeleon? Because chameleons change colors to match their surroundings, much as GPTs adapt the text they generate to their context.

The GPT chameleon

GPT-3 is an artificial intelligence (AI) system created in 2020 by OpenAI (an AI research lab founded in San Francisco in 2015) and made accessible to the general public in late 2021.

In technical terms, it is an autoregressive language model (GPT-3 stands for Generative Pre-trained Transformer 3). GPT-3 is trained on a massive corpus of text from the internet, including books, articles, and websites, which allows it to understand and generate language in a variety of contexts. It can perform a wide range of natural language processing tasks, including text completion, translation, summarization, and question-answering, among others.

Want an example? You’ve just read one. The text in italics in the preceding paragraph was generated by GPT-3.

The Rationale for the Research

The ChaTmeleon Crew, made up of engineers, computational linguists, mathematicians, designers, and computer scientists, was formed to explore the inherent potential of this new AI and to seek answers to the following question: Can such a powerful and unpredictable artificial intelligence be a viable aid to a business or organization?

We often think that the ultimate purpose of AI is automation and overlook the role of AI in supporting human endeavors. Indeed, whereas in automation, AI makes the final decisions, with humans in a supporting role (human-in-the-loop), in the reverse paradigm (machine-in-the-loop), humans are responsible for the final outcome, and AI acts only as a supporter, stimulating creativity, providing cues, or completing parts that have already been partially processed by humans.


In the machine-in-the-loop paradigm, the machine supports the work of the human, who is tasked with sending the work context to the machine, receiving suggestions from the machine, and finalizing the result by incorporating the suggestions received.
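To make the loop concrete, here is a minimal sketch of a machine-in-the-loop exchange in Python, written against the legacy Completion endpoint of the openai package (pre-1.0, as available at the time); the engine name, prompt, and parameters are illustrative assumptions rather than our actual setup:

import openai  # legacy openai<1.0 Completion API

openai.api_key = "YOUR_API_KEY"  # placeholder

def suggest(context, n_suggestions=3):
    """Send the human's working context to the model and return raw suggestions.
    The human, not the model, decides what to keep, edit, or discard."""
    response = openai.Completion.create(
        engine="davinci",        # illustrative engine name
        prompt=context,
        max_tokens=64,
        temperature=0.8,         # higher temperature -> more varied suggestions
        n=n_suggestions,
    )
    return [choice.text.strip() for choice in response.choices]

# The human supplies the work context and finalizes the result.
context = "Draft opening for a report on digital banking trends:\n"
for i, idea in enumerate(suggest(context), start=1):
    print(f"Suggestion {i}: {idea}")  # reviewed and incorporated by the human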

To explore the many use cases, we set up a circular process based on 3 phases:


  • Exploration of ideas: we conducted a series of brainstorming sessions to select the most interesting use cases on which to test GPT-3.
  • Testing: we developed and trained the model on the selected use cases.
  • Learning: we analyzed the test results to learn about the performance of GPT-3 and how to put it to use.

Analysis and Brainstorming

During this phase, we reviewed the existing literature to see what use cases the tool had already been applied to, and envisioned some completely new ones.

As a result, we grouped the use cases into 3 clusters:


  • Repetitive work: tasks that require a lot of tedious and cumbersome manual work, and high precision;
  • Active work: tasks where the human being is generally an active part of the process, such as answering questions on a particular topic, summarizing a text, programming, and knowledge management;
  • Creative work: tasks where creativity runs the show and where AI can step in to stimulate creativity through free and almost unconstrained generation.

The Miro board we created during brainstorming. The arrow along the top edge shows the generation precision required for the analyzed use cases, grouped into 3 yellow macro-categories. Below the arrow, the use case clusters are orange for external customers and green for internal processes.

Few-Shot Learning – The True Power of GPT-3

Few-shot learning (FSL) is an optimization strategy that allows a “machine” to recognize new objects from only a small dataset (i.e., with few examples available), and even to learn new tasks when no examples are provided at all.

Thanks to its high parameter count — 175 billion versus a few hundred million in previous models — and the extensive pre-training dataset used to “teach it to learn,” GPT-3 can learn new tasks in FSL mode with very little data available.
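As a concrete illustration of what an FSL prompt looks like, the sketch below assembles a handful of worked examples followed by the new case; the task (sentiment labeling of bank comments) and all the example data are hypothetical choices of ours, not taken from our experiments:

# A minimal few-shot prompt: labeled examples followed by the new input.
# GPT-3 infers the task from the pattern, with no weight updates at all.
EXAMPLES = [
    ("The app crashes every time I log in.", "negative"),
    ("Customer support solved my issue in minutes.", "positive"),
    ("Opening hours are too short.", "negative"),
]

def build_few_shot_prompt(new_comment):
    lines = ["Classify the sentiment of each bank comment."]
    for text, label in EXAMPLES:
        lines.append(f"Comment: {text}\nSentiment: {label}")
    lines.append(f"Comment: {new_comment}\nSentiment:")  # the model completes here
    return "\n\n".join(lines)

print(build_few_shot_prompt("The new branch layout is very welcoming."))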

Let’s illustrate this by looking at the use cases we analyzed.

Prototyping Two Use Cases

We chose two completely different use cases to best analyze the potential of FSL to support humans in their tasks. For creative work, we prototyped a Brainstorming Helper; for active work, we focused on Text Summarization.


Brainstorming Helper

We chose the Brainstorming Helper use case since it is frequently required at MAIZE, both internally within work teams and externally with clients.
Our goal was to expand idea generation semi-automatically and reduce the time spent on brainstorming sessions by introducing GPT-3 in a machine-in-the-loop paradigm.

Therefore, we conducted two tests:


1. Generation of solutions to existing problems.

We asked GPT-3 to solve some problems we had found in the negative comments of one of our client banks, such as ‘few branches available.’
GPT-3 suggested we “outsource branch management to a third party” to reduce opening costs.


The parts generated automatically by GPT-3 are in bold.
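A sketch of how such a solution-generation prompt might be assembled; the wording, engine name, and parameters are illustrative, since our actual prompts are not reproduced here:

import openai  # legacy openai<1.0 Completion API

openai.api_key = "YOUR_API_KEY"  # placeholder

# State the problem drawn from customer feedback and let the model complete
# with a candidate solution that the team can then review and refine.
prompt = (
    "We collected negative comments from a bank's customers.\n"
    "For each problem, propose a possible solution.\n\n"
    "Problem: few branches available.\n"
    "Solution:"
)

response = openai.Completion.create(
    engine="davinci",   # illustrative engine name
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["\n\n"],      # stop before the model invents a new problem
)
print(response.choices[0].text.strip())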

2. Generation of new ideas in a given context.

We asked GPT-3 to generate a set of initiatives to present to the public on the occasion of a certain event.


The instructions on the task to be accomplished are in bold, and the ideas generated are in plain text. In this case, the model was asked to generate a set of events to mark the anniversary of the game Pacman. An initial generation suggested individual activities, while a second generation, anchored to a Tokyo setting, suggested large-scale events and thus produced a more rewarding outcome.
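The “Tokyo-based” variant amounts to nothing more than prepending location context to the prompt; the short sketch below reconstructs that steering with our own illustrative wording:

# Steering generation with context: the same request produces different ideas
# once the prompt anchors the model to a setting.
base_request = "Suggest events to celebrate the anniversary of the game Pacman.\nIdeas:"

plain_prompt = base_request
steered_prompt = (
    "You are an event planner organizing large public events in Tokyo.\n"
    + base_request
)
# Both prompts go to the Completion endpoint exactly as in the earlier sketches;
# as the caption above notes, the location-anchored prompt produced
# larger-scale proposals.
print(steered_prompt)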

The interactive nature of GPT-3 and the ability to revise, delete, edit, and put limits on the generation make it suitable for use as a tool to assist in the creative and decision-making stages of a project, taking advantage of its prior knowledge of the world.


Text Summarization

We designed this experiment to test GPT-3’s ability to automatically summarize textual content. In this case, we decided to apply this approach to a set of opinions on the same topic to generate the group’s “average” summary comment.

The system analyzed and summarized 200 comments about the services of one of our client banks, demonstrating an excellent ability to distill them into an “average” summary comment for the user group.
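A rough sketch of how such a pipeline can be framed: summarize the comments in batches that fit within the model’s context window, then condense the partial summaries into a single “average” comment. Batch size, engine name, and prompt wording are all assumptions for illustration:

import openai  # legacy openai<1.0 Completion API

openai.api_key = "YOUR_API_KEY"  # placeholder

def summarize(text, instruction):
    response = openai.Completion.create(
        engine="davinci",    # illustrative engine name
        prompt=f"{instruction}\n\n{text}\n\nSummary:",
        max_tokens=120,
        temperature=0.3,     # low temperature for more faithful summaries
    )
    return response.choices[0].text.strip()

def average_comment(comments, batch_size=20):
    # First pass: summarize batches of comments to respect the context window.
    partials = []
    for i in range(0, len(comments), batch_size):
        batch = "\n".join(f"- {c}" for c in comments[i:i + batch_size])
        partials.append(summarize(batch, "Summarize these bank customers' comments."))
    # Second pass: condense the partial summaries into one "average" comment.
    joined = "\n".join(f"- {p}" for p in partials)
    return summarize(joined, "Write a single comment that captures the average opinion.")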


Conclusions

The research and experiments conducted by the ChaTmeleon Crew have enabled us to examine the state of the art in text generation (as of fall 2021) with OpenAI’s GPT-3.

With a view to using GPT-3 to support business processes, our research proved valuable: it yielded feedback that opened our perspective to other uses.
In fact, we found significant strengths applicable to our domain, such as the model’s excellent generative capability in Italian (published experiments are far more common in English) and the previously unavailable ability to train the model on new tasks with very few examples.

In the machine-in-the-loop paradigm, the machine must be set up and trained so that operation by end users is as simple and controlled as possible, providing high-quality support. The development and testing of prompts (the “machine training”) is a specialized, time-consuming activity that our AI experts can perform; once completed, it allows any user to operate the system easily, even without a specific technological background.


Thanks to the ChaTmeleon Crew: Andrea Bolioli, Alessio Bosca, Federico Criscuolo, Nazareno De Francesco, Maria Luisa Gabrielli, Nadia Poli, and Riccardo Tasso, with invaluable support from Elisa Cucchetto and Francesco Tarasconi.

