OpenAI And GPT-3 VS NLP Cloud

How does OpenAI compare with NLP Cloud? Both platforms offer advanced AI models for text understanding and text generation, but there are several important differences in terms of features, pricing, and terms of service.

In this article, we will make an in-depth comparison between OpenAI and NLP Cloud.

Usage Guidelines And Application Review

Before GPT-3, OpenAI used to release open-source AI models. GPT and GPT-2 were both open-source models that anyone could deploy and use as they pleased. Hence the word "Open" in "OpenAI". But when they created GPT-3, OpenAI decided to keep it as a black box, only available through their paid API, officially for ethical reasons.

Since then, open-source equivalents such as GPT-J and GPT-NeoX have been released, and you can install and use them as you please.

OpenAI are extremely restrictive about the kind of applications they allow. You can't integrate their API in production without submitting your application for validation first, and they enforce very strict "usage guidelines". Here is an overview of their validation process.

Some applications are simply not allowed by default, like applications based on "unscientific" premises, paraphrasing and rewriting applications (considered "plagiarism"), multi-level marketing, and more. Here is a more detailed list from OpenAI's usage guidelines:

[Screenshot: OpenAI's disallowed applications]

Additionally, many AI applications that you might have in mind are very likely to be rejected by OpenAI. For example, you can't generate long content, which means that you can't use GPT-3 to write a whole blog article for you. Many chatbot use cases are rejected too. For example, you can't build a chatbot that acts as a companion, or a chatbot that uses insults or adult words. Your application is also very likely to be rejected if it is related to social media, healthcare, coaching, legal, and much more. Here are some extracts of OpenAI's guidelines about "high-stakes" domains (applications considered very sensitive and very likely to be rejected) and about text length:

[Screenshot: OpenAI's restrictions on high-stakes domains]

[Screenshot: OpenAI's restrictions on generation length]

OpenAI ask you to implement a "user identifier" that individually identifies each end user of your application. Rate limiting is then applied on a per-user basis: an end user can't make more than 60 requests per minute.
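
In practice, this means passing a unique identifier with every API call. Here is a minimal sketch using OpenAI's Python client (the engine name, prompt, and identifier below are only examples):

    import openai

    openai.api_key = "YOUR_OPENAI_API_KEY"

    # The "user" field is the end-user identifier OpenAI asks for, so that
    # rate limiting and abuse monitoring can be applied per end user.
    response = openai.Completion.create(
        engine="text-curie-001",  # example engine
        prompt="Write a product description for a solar-powered lamp.",
        max_tokens=100,
        user="end-user-42",       # your own unique ID for this end user
    )
    print(response.choices[0].text)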

Many projects are simply abandoned because of these strict limitations.

None of these restrictions apply to NLP Cloud. You can use NLP Cloud for any kind of application without restrictions, and you can make as many requests per end user as you want, without rate limiting (as long as you select the right plan, of course).

Pricing Differences

OpenAI and NLP Cloud both offer pay-as-you-go pricing. This means that you pay after the fact, only for the requests or tokens you actually consumed.

NLP Cloud also offers standard plans paid upfront. These plans give you access to a specific number of requests per minute, and they are more cost-effective than pay-as-you-go if you have a large volume of requests to make.

See NLP Cloud's pricing here. See OpenAI's pricing below:

[Screenshot: OpenAI's pricing]

Let's make a simple simulation. GPT-J is roughly equivalent to GPT-3 Curie, so we are going to compare the price of both.

On NLP Cloud, making 10 requests per minute on GPT-J, each using 800 tokens, will cost you $199/month (Full GPU plan).

On OpenAI, with GPT-3 Curie priced at $0.006 per 1,000 tokens, the same volume will cost you 0.006 × 0.8 × 10 × 60 × 24 × 31 ≈ $2,143/month.
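
If you want to double-check this back-of-the-envelope calculation, here is a small sketch (the Curie price per 1,000 tokens is the one assumed above):

    # Estimate of the monthly OpenAI cost in the simulation above.
    # Assumes GPT-3 Curie at $0.006 per 1,000 tokens and a 31-day month.
    price_per_1k_tokens = 0.006
    tokens_per_request = 800
    requests_per_minute = 10
    minutes_per_month = 60 * 24 * 31

    monthly_cost = (
        price_per_1k_tokens * (tokens_per_request / 1000)
        * requests_per_minute * minutes_per_month
    )
    print(f"${monthly_cost:,.2f} / month")  # -> $2,142.72 / month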

The price difference is very significant, and it is actually even bigger when comparing fine-tuning and embeddings plans!

Features And Models Available

OpenAI and NLP Cloud adopted two very different strategies: OpenAI focus on one single in-house model (GPT-3), while NLP Cloud assembles the best open-source AI models on the same platform.

It means that on NLP Cloud you can of course use GPT-3 competitors like GPT-J and GPT-NeoX, but you can also use many other models like Bart, T5, DistilBERT, NLLB 200, spaCy, etc.

Using smaller specialized models is often much more cost-effective and much faster than using a huge GPT model (however versatile it may be).

Some use cases just cannot be covered by GPT-3, GPT-J, or GPT-NeoX. That's the case with multilingual machine translation, for example. For such a use case, you will need a dedicated model, like Facebook's M2M100.

Leveraging specialized models is also a good way to decrease complexity. For example, performing summarization with GPT-3 requires some advanced prompt engineering, while you can very simply get advanced results with dedicated fine-tuned models like Facebook's Bart Large CNN.
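
To illustrate, here is roughly what summarization looks like with NLP Cloud's Python client and the Bart Large CNN model (a minimal sketch: the API token and input text are placeholders, and the exact response format may vary slightly depending on the client version):

    import nlpcloud

    # Bart Large CNN is fine-tuned specifically for summarization, so no
    # prompt engineering is needed: just send the text to summarize.
    client = nlpcloud.Client("bart-large-cnn", "YOUR_NLP_CLOUD_API_TOKEN")
    result = client.summarization("The text you want to summarize...")
    print(result["summary_text"])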

Data Privacy

There is a significant difference between OpenAI and NLP Cloud when it comes to data privacy.

NLP Cloud's privacy policy is dead simple: no user data sent to the API is stored on NLP Cloud's servers, and no one has access to this data.

OpenAI, on the other hand, do many things with their customers' data. Users' data is processed by some internal software and sometimes reviewed by OpenAI's employees. More importantly, this data is stored for an unlimited period of time, and it is used to train and improve some of OpenAI's AI models, like semantic search and classification models. See an extract from OpenAI's privacy policy below:

[Screenshot: extract from OpenAI's privacy policy]

These privacy considerations can be critical for many businesses, especially those operating in data-sensitive industries like healthcare, legal, and finance.

Conclusion

Many customers are looking for alternatives to OpenAI, mainly because of their use case restrictions, data privacy policy, and prohibitive pricing.

We do believe that the NLP Cloud API is a very good alternative to OpenAI!

At NLP Cloud, we are proud to provide high-quality support to all our customers, and we are constantly adding more cutting-edge AI models so that our customers can deliver their AI projects in no time.

Would you like to have a try? Test NLP Cloud here!

Juliette
Marketing manager at NLP Cloud