Behind the Scenes: Exploring the Inner Workings of ChatGPT – Part 1

Hi there, my name is Anubhav Saraf, Director of Marketing at Shaip. How are you today?
Hello Anubhav! I'm an AI, so I don't have feelings, but I'm here and ready to help. As Director of Marketing at Shaip, you must have a lot of insights about data services and AI solutions. How can I assist you today?
I just wanted to know more about your training process. How were you built and trained?
As a language model developed by OpenAI, I was built and trained using a technique called "transformer-based machine learning," specifically a variant called the GPT (Generative Pre-trained Transformer) architecture. The version you're talking to now is GPT-4. Here's a brief overview of my training process:
  • Data Collection: My training began with a diverse range of internet text. However, I should note that while I was trained on a broad dataset, I don’t know specifics about which documents were part of that training set or have access to any specific documents or sources.
  • Initial Training (Pre-training): This is where I learned to predict the next word in a sentence. I didn’t start knowing English or any other language. Instead, by predicting the next word in millions of sentences, I gradually learned about grammar, facts about the world and some reasoning abilities. However, this also means I might have learned from biased or incorrect information since the data was not thoroughly filtered.
  • Fine-Tuning: After the initial training, my creators at OpenAI provided further guidance with a more specific dataset, generated with the help of human reviewers. These reviewers follow guidelines to review and rate possible model outputs for a range of example inputs. The fine-tuning process is a kind of dialogue between the model and the reviewers that helps me to align better with human values.
  • Continuous Learning: While I don’t learn from individual interactions due to privacy reasons, OpenAI continually works with a team of human reviewers to provide feedback and make iterative improvements based on broader patterns and insights.
Can you provide examples – when humans were involved?
Yes, humans were indeed involved in my training process, but it might be different from what you're imagining. My training process, often called machine learning, involved two main steps: pre-training and fine-tuning, both of which involve human interaction, albeit in different ways.
  • Pre-training Example: Suppose we have the sentence, “The cat sat on the __”. In the pre-training phase, a model like me would learn to predict the next word in the sentence, which could be “mat,” “roof,” or “tree.” This is learned by processing billions of sentences from diverse internet text, indirectly involving human interaction, as humans created the original content.
  • Fine-tuning Example: After the pre-training phase, the model enters the fine-tuning phase. Here’s how it might work:

The human reviewer is given a task to review possible model outputs for an input like “Translate the following English text to French: ‘Hello, how are you?’”. The model might generate several responses, like “Bonjour, comment ça va?” or “Salut, comment vas-tu?”.

The reviewer rates these responses based on factors like accuracy, fluency, and appropriateness. These ratings help the model to generalize and respond to similar inputs in the future.

Remember, the examples provided here are simplified to illustrate the process. The actual training involves complex algorithms and vast amounts of data.
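The pre-training objective above can be sketched with a toy next-word predictor. The snippet below counts which word follows each two-word context in a tiny, made-up corpus and predicts the most frequent continuation. This is purely illustrative of the objective: real pre-training uses a transformer over billions of sentences, not frequency counts.

```python
from collections import Counter, defaultdict

# Hypothetical miniature corpus; real training data spans huge swaths of text.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the roof",
    "the cat sat on the mat",
]

# Count which word follows each two-word context across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        follows[context][words[i + 2]] += 1

def predict_next(w1: str, w2: str) -> str:
    """Return the continuation seen most often after (w1, w2) in training."""
    return follows[(w1, w2)].most_common(1)[0][0]

print(predict_next("on", "the"))  # -> "mat" (seen twice, vs. "roof" once)
```

Note how the prediction simply reflects frequencies in the training text, which is also why biased or incorrect source data can surface in a model's outputs.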

To train Large Language Models (LLMs), how much data would be needed?
Training large language models (LLMs) like GPT-3 or GPT-4 requires vast amounts of data. These models are trained on diverse datasets that encompass large portions of the internet, including books, websites, and other texts, to learn grammar, facts, and some degree of reasoning. However, specifying the exact quantity of data required is complex due to several factors:
  • Variety of Data: It’s not just about the volume of the data, but also the variety. The data needs to be diverse to cover a broad range of topics, styles, and tones found in human language.
  • Quality of Data: The quality of the data used for training can also influence how effectively the model learns. Better quality data can potentially reduce the volume required for effective training.
  • Model Architecture: The architecture of the model (like the number of layers or parameters in the neural network) can also affect how much data is needed. Larger models, like GPT-3 or GPT-4, typically require more data than smaller models.
  • Training Objectives: The type of tasks that the model is intended to perform can influence the amount of data needed. A model designed for a specific task may require less data if it’s well-targeted, while a general-purpose model like GPT-3 or GPT-4, designed to handle a wide range of tasks, will require more data for broader understanding.

In the case of GPT-3, for instance, the model has 175 billion parameters and was trained on hundreds of gigabytes of text. But these figures don’t fully capture the complexity of the process. The specifics of the training process, including the exact amount of data used, are proprietary details held by OpenAI and similar organizations.

Still, is there a benchmark for how much data would be appropriate or enough for training ML models?
The amount of data needed to effectively train machine learning models depends largely on the complexity of the problem you're trying to solve, the quality and diversity of your data, and the type of machine learning model you're using. Therefore, it's challenging to specify a definitive benchmark for data volume. However, here are some general guidelines:
  • Start Small: Especially for simple problems or initial model prototypes, a smaller dataset can be adequate. For example, if you’re building a basic image classifier, you might start with a few hundred or thousand images per category.
  • Consider the Complexity: More complex problems typically require more data. If you’re developing a deep learning model or working with a complex problem like natural language processing, you may need millions of data points.
  • Quality over Quantity: Quality, relevant data can often be more beneficial than a massive quantity of poor-quality data. Ensuring your data is properly cleaned, annotated, and representative of the problem you’re trying to solve is crucial.
  • Balance Your Classes: For supervised learning tasks, make sure to have a good balance of examples for each class. An imbalanced dataset can lead to a model that performs poorly on underrepresented classes.
  • Use Validation and Test Sets: Part of your dataset should be set aside for validation (tuning model parameters) and testing (evaluating model performance). Common splits might be 70% of the data for training, 15% for validation, and 15% for testing, although these proportions can vary.
  • Experiment and Iterate: Start with an initial amount of data, train your model, evaluate its performance, and iteratively add more data, tweak your model, or improve your data quality based on the results.
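The 70/15/15 split mentioned above can be sketched in a few lines. The data here is a hypothetical stand-in for 1,000 labeled examples, and the proportions are just the common starting point from the text; in practice they vary with dataset size and task.

```python
import random

# Stand-in for 1,000 labeled examples (hypothetical).
data = list(range(1000))
random.seed(42)       # fix the shuffle so the split is reproducible
random.shuffle(data)  # shuffle before splitting to avoid ordering bias

n = len(data)
train = data[: int(0.70 * n)]               # 70% for training
val = data[int(0.70 * n): int(0.85 * n)]    # 15% for validation (tuning)
test = data[int(0.85 * n):]                 # 15% held out for final evaluation

print(len(train), len(val), len(test))  # 700 150 150
```

Libraries such as scikit-learn provide helpers like `train_test_split` for this, including stratified splits that preserve class balance, which addresses the "Balance Your Classes" point above.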

Remember, more data typically improves the performance of machine learning models, but only up to a point. After a certain point, collecting more data might be more time-consuming and costly than it's worth, and it may be more beneficial to focus on improving the quality of your data or refining your model architecture.

Was the data that was used to train ChatGPT annotated?
The process of training models like ChatGPT involves two main steps: pre-training and fine-tuning. Let's discuss how annotation, or the lack thereof, fits into these stages:
  • Pre-training: In this phase, the model is exposed to a large amount of internet text data. This data is not explicitly annotated, but the model is trained to predict the next word in a sentence. This gives the model a general understanding of grammar, facts about the world, and some reasoning abilities.
  • Fine-tuning: After pre-training, the model is further refined on a narrower dataset generated with the help of human reviewers. These reviewers follow guidelines from OpenAI to review and rate possible model outputs for a range of inputs. In this sense, the fine-tuning process can be seen as a form of annotation, where human reviewers guide the model’s responses, helping it to better align with human values and ensuring that it avoids inappropriate or unsafe content.

So while the initial dataset used for pre-training is not annotated in the traditional sense, the fine-tuning process does involve a form of annotation, where human reviewers guide the model’s behavior by rating different outputs.

Please provide some examples of the types of annotations used for the training.
In the fine-tuning phase of training models like ChatGPT, reviewers are often tasked with scoring potential model responses. The types of annotations or scoring can cover a variety of aspects of the model's output. While the specifics can vary, here are some illustrative categories of annotations:

  • Correctness: Reviewers evaluate whether the model’s output is factually correct.
  • Appropriateness: The model’s output is evaluated for its appropriateness in relation to the given input.
  • Safety: Outputs are reviewed to ensure they don’t include harmful instructions or offensive language.
  • Completeness: This assesses whether the model’s response fully answers the user’s query.
  • Fluency: Reviewers check if the model’s output is linguistically fluent and coherent.

These categories are just examples. The actual fine-tuning process involves a more complex evaluation system and continual feedback between the reviewers and the AI development team. The goal of these annotations is to make the model’s responses more helpful, safe, and aligned with human values.
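To make the categories concrete, here is a sketch of the kind of record a reviewer's scores might produce. The field names simply mirror the categories above; the 1–5 scale, the boolean safety flag, and the averaging are hypothetical choices, not OpenAI's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ReviewAnnotation:
    """One reviewer's ratings for a single model response (illustrative)."""
    response_id: str
    correctness: int      # 1-5: is the output factually correct?
    appropriateness: int  # 1-5: does it fit the given input?
    safety: bool          # passes safety review (no harmful/offensive content)?
    completeness: int     # 1-5: does it fully answer the query?
    fluency: int          # 1-5: is the language fluent and coherent?

    def overall(self) -> float:
        """Average the 1-5 scores; a safety failure zeroes the rating."""
        if not self.safety:
            return 0.0
        return (self.correctness + self.appropriateness
                + self.completeness + self.fluency) / 4

ann = ReviewAnnotation("resp-001", 5, 4, True, 4, 5)
print(ann.overall())  # 4.5
```

In practice such per-response ratings (or pairwise comparisons between responses) feed into the fine-tuning loop, steering the model toward outputs reviewers rate highly.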

After the huge dataset was collected to train ChatGPT, how many annotators were involved in annotation?
The number of human annotators (or reviewers) involved in the fine-tuning process of AI models can vary widely and depends on the scale of the project and the specific requirements of the model in question.

OpenAI, the organization behind ChatGPT, does not publicly disclose the exact number of reviewers involved in the fine-tuning process. However, given the scale and complexity of training such a large language model, it’s safe to say that the process likely involves a substantial team of reviewers.

These human reviewers follow guidelines provided by OpenAI to review and rate possible model outputs. It’s a continuous, iterative process where reviewers’ feedback is used to improve and refine the model over time. OpenAI maintains a strong feedback loop with the reviewers, including weekly meetings to address questions and provide clarifications.

Still, for simple to complex tasks, how many annotators would be required to annotate 10,000 PDFs?
The number of annotators needed to annotate 10,000 PDFs would depend on various factors such as:
  • Complexity of the Task: Simple tasks like labeling images or categorizing text would take less time per PDF, allowing a single annotator to handle more files. On the other hand, complex tasks like detailed entity extraction or semantic analysis would require more time per document, and therefore more annotators for the same volume of PDFs.
  • Length and Complexity of the PDFs: A longer or more complex PDF will take more time to annotate than a shorter or simpler one.
  • Quality Requirements: If high precision is required, it might be necessary to have multiple annotators per PDF to cross-verify annotations.
  • Time Constraints: If the annotations need to be completed quickly, more annotators would be needed.

As a hypothetical example, let’s say an annotator can annotate 5 PDFs per day for a particular task, and works 5 days a week. In this case, to annotate 10,000 PDFs, a single annotator would need 2000 days, or about 8 years.

However, if you had a team of 20 annotators, they could complete the task in about 5 months (assuming 20 workdays per month). With 100 annotators, the same task could be completed in about a month.
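The arithmetic behind these estimates can be written out as a small helper. All the rates here (5 PDFs per annotator per day, 20 workdays per month) are the hypothetical figures from the example above, not measured throughput.

```python
# Back-of-the-envelope estimate for the annotation timeline (rates hypothetical).
TOTAL_PDFS = 10_000
PDFS_PER_DAY = 5          # one annotator's assumed daily throughput
WORKDAYS_PER_MONTH = 20

def months_needed(annotators: int) -> float:
    """Months to annotate all PDFs with the given team size."""
    days = TOTAL_PDFS / (PDFS_PER_DAY * annotators)
    return days / WORKDAYS_PER_MONTH

print(months_needed(1))    # 100.0 months, roughly 8 years
print(months_needed(20))   # 5.0 months
print(months_needed(100))  # 1.0 month
```

This simple model assumes work divides evenly and ignores overlap for quality control; requiring, say, two annotators per PDF for cross-verification would double the totals.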

Remember, these numbers are purely illustrative and the actual speed of annotation would depend on the factors mentioned above. Additionally, proper training of annotators and consistent quality control are critical for the success of any large-scale annotation project.

That’s thorough! I’m highly impressed…
I'm glad you found the information helpful, Anubhav! It's my goal to provide comprehensive and accurate responses. If you have any more questions about AI, machine learning, or any other topic, please feel free to ask!

Stay tuned for the second part of our ChatGPT interview.
