OpenAI Introduces ChatGPT API for Developers

Expect to see ChatGPT, the hugely popular AI chatbot, integrated into a growing number of your favorite apps and tools. Chatbot developer OpenAI has unveiled a new API that gives developers access to the power of ChatGPT.

OpenAI on Wednesday also unveiled an API for Whisper, a speech-to-text model that the company itself open-sourced in September 2022.

A number of large companies are already using the ChatGPT API, including Snapchat, Instacart, and Shopify, according to OpenAI.

Instacart will use the conversational AI technology to help customers build shopping lists from open-ended questions like “What’s a healthy lunch for my kids?” Meanwhile, Shopify is integrating ChatGPT technology into Shop, its consumer app that shoppers use to search across a variety of products and brands. The Quizlet learning platform also uses the ChatGPT API for AI tutoring.


The ChatGPT model family we’re releasing today, gpt-3.5-turbo, is the same model used in the ChatGPT product. It is priced at $0.002 per 1,000 tokens, which is 10 times cheaper than our existing GPT-3.5 models. It’s also our best model for many non-chat use cases: we’ve seen early testers migrate from text-davinci-003 to gpt-3.5-turbo with only a small amount of adjustment needed to their prompts.
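At that per-token rate, usage costs are easy to estimate. A quick back-of-the-envelope sketch in Python (just the arithmetic implied by the quoted price, not an official billing tool):

```python
PRICE_PER_1K_TOKENS = 0.002  # USD, the gpt-3.5-turbo rate quoted above

def estimate_cost(total_tokens: int) -> float:
    """Estimate the USD cost of a request from its total token count."""
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# A 1,000,000-token workload would run about $2.00 at this rate.
print(f"${estimate_cost(1_000_000):.2f}")  # $2.00
```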

Traditionally, GPT models consume unstructured text, presented to the model as a sequence of “tokens”. ChatGPT models instead consume a sequence of messages together with metadata. (For the curious: under the hood, the input is still rendered to the model as a sequence of tokens; the raw format the model uses is a new one called Chat Markup Language, or “ChatML”.)
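Concretely, a conversation is passed as a list of role-tagged messages. A minimal sketch of that structure in Python (the roles shown, system/user/assistant, are the ones the chat format defines):

```python
# The chat format: a sequence of messages, each with a role and content.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the OpenAI mission?"},
]

# As the conversation grows, the assistant's reply is appended as
# {"role": "assistant", "content": "..."} before the next user turn.
assert all(m["role"] in {"system", "user", "assistant"} for m in messages)
```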

We’ve created a new endpoint to interact with our ChatGPT models:


curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "What is the OpenAI mission?"}]
  }'

The endpoint returns a chat completion object:

{
  "id": "chatcmpl-6p5FEv1JHictSSnDZsGU4KvbuBsbu",
  "object": "chat.completion",
  "created": 1677693600,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity."
      }
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 18,
    "total_tokens": 38
  }
}

Python bindings

import openai

completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[{"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."}]
)

print(completion.choices[0].message.content)

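Because the endpoint is stateless, a client carries a conversation forward by resending the full message history on each call. A small sketch of that pattern (the `add_turn` helper is ours, not part of the library; the API call is shown commented out so the history-building logic stands alone):

```python
# Assumes the v0.27-era `openai` package and OPENAI_API_KEY in the environment.

def add_turn(history, role, content):
    """Return a new history with one more role-tagged message appended."""
    return history + [{"role": role, "content": content}]

history = add_turn([], "user", "What is the OpenAI mission?")
# import openai
# completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
# history = add_turn(history, "assistant", completion.choices[0].message.content)
```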

The ChatGPT API provides access to gpt-3.5-turbo, the same model used in the ChatGPT product.

“Users of the ChatGPT API can look forward to continuous model improvement and the ability to choose dedicated capacity for greater control over models,” reads an OpenAI blog post.

OpenAI also clarified that data submitted via the API will no longer be used to train or improve OpenAI models unless the organization using the API opts in to providing it. This should reassure consumers concerned that their data might be used to power AI models without their consent.

ChatGPT Upgrades

We are constantly improving our ChatGPT models and want to make these improvements available to developers as well. Developers using the gpt-3.5-turbo model will always get our recommended stable model, while still being able to pin a specific model version. For example, today we are releasing gpt-3.5-turbo-0301, which will be supported through at least June 1st, and in April we will update gpt-3.5-turbo to a new stable version. The switchover schedule will be posted on the models page.
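Pinning comes down to which model string you pass in the request. A hypothetical sketch of that choice (the constant names and the `model_for` helper are ours, for illustration):

```python
ROLLING = "gpt-3.5-turbo"       # alias that tracks the recommended stable model
PINNED = "gpt-3.5-turbo-0301"   # frozen snapshot, supported through at least June 1

def model_for(pin_snapshot: bool) -> str:
    """Pick the string to pass as `model=` in the API request."""
    return PINNED if pin_snapshot else ROLLING
```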

Dedicated Instances

We now also offer dedicated instances for users who want more control over a specific model version and over system performance. By default, requests run on compute infrastructure shared with other users, who pay per request. Our API runs on Azure; with dedicated instances, developers instead pay by time period for compute infrastructure reserved to serve their requests.

Developers have full control over the instance’s load (higher load improves throughput but makes each request slower), the option to enable features such as longer context limits, and the ability to pin a model snapshot.

Dedicated instances can make economic sense for developers running more than roughly 450M tokens per day. They also make it possible to optimize a workload directly against the hardware, which can significantly reduce costs compared to shared infrastructure. For questions about dedicated instances, please contact us.
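At the shared-infrastructure price quoted earlier ($0.002 per 1k tokens), the 450M-token threshold translates into a concrete daily spend. A rough sketch of that arithmetic (it ignores dedicated-instance pricing itself, which is negotiated individually):

```python
SHARED_PRICE_PER_1K = 0.002           # USD per 1k tokens on shared infrastructure
BREAKEVEN_TOKENS_PER_DAY = 450_000_000

# Daily spend on shared infrastructure at the breakeven volume:
daily_usd = BREAKEVEN_TOKENS_PER_DAY / 1000 * SHARED_PRICE_PER_1K
print(daily_usd)  # 900.0 -> roughly $900/day before dedicated capacity pays off
```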

OpenAI also applies a 30-day data retention policy by default for API users, with stricter retention options available depending on user needs.

Finally, the company said on Wednesday that its top engineering priority at the moment is to improve the stability of its products.

“Over the past two months, our uptime has not met our or our users’ expectations,” the company said.

Frequently Asked Questions

Is Fine-Tuning Available for GPT-3.5-Turbo?

No. As of Mar 1, 2023, you can only fine-tune base GPT-3 models. See the fine-tuning guide for more details on how to use fine-tuned models.

Do You Store the Data That is Passed into the API?

As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy.
