Advanced AI Platform

Use the best AI engines without sacrificing data privacy.

NLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data. This platform is focused on data privacy by design so you can safely use AI in your business without compromising confidentiality, and even deploy our AI models on-premise / at the edge. We offer both small specific AI engines and large cutting-edge generative AI engines so you can easily integrate the most advanced AI features into your application at an affordable cost.

Why Build With NLP Cloud?

High Performance

Fast and accurate AI models suited for production. Highly-available inference API leveraging the most advanced hardware.

Data Privacy And Security

NLP Cloud is HIPAA / GDPR / CCPA compliant, and working on the SOC 2 certification. We cannot see your data, we do not store your data, and we do not use your data to train our own AI models.

On-Premise / Edge AI

For critical security and privacy needs, or for performance reasons, you can deploy our models in-house on your own isolated servers. Our expert team is here to assist.

Multilingual AI

Use all of NLP Cloud's AI models in 200 languages, thanks to our multilingual models and our multilingual add-on.

No Complexity

Do not worry about DevOps or API programming and focus on text processing only. Deliver your AI project in no time.

Custom Models

Fine-tune your own models or upload your in-house custom models, and deploy them easily to production.

NLP Cloud Is An NVIDIA Partner

NLP Cloud collaborates closely with NVIDIA to deliver state-of-the-art performance. Our generative AI engines run on the most advanced NVIDIA GPUs to guarantee low latency and affordable costs. You can also deploy our AI engines on your own on-premise NVIDIA GPUs.

Built For Developers

NLP Cloud provides you with a simple and robust API.

Scalability and high availability are managed seamlessly by the platform.

Not sure how to correctly use generative AI and large language models? Our support team is here to advise!


See our client libraries on Github:

Python
Ruby
Go
Node.js
PHP

More details in the documentation.
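For example, here is a minimal sketch using the Python client library listed above, assuming you have installed it with pip install nlpcloud and replaced the placeholder token with your own API key; the en_core_web_lg model matches the entity extraction example shown below:

import nlpcloud

# Minimal sketch: extract named entities with the en_core_web_lg model.
# Replace "<your_api_token>" with the API key from your NLP Cloud dashboard.
client = nlpcloud.Client("en_core_web_lg", "<your_api_token>")
print(client.entities("John Doe is a Go Developer at Google"))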

curl https://api.nlpcloud.io/v1/en_core_web_lg/entities \
  -X POST -d '{"text":"John Doe is a Go Developer at Google"}'

[
  { "end": 8, "start": 0, "text": "John Doe", "type": "PERSON" },
  { "end": 25, "start": 13, "text": "Go Developer", "type": "POSITION" },
  { "end": 35, "start": 30, "text": "Google", "type": "ORG" }
]

curl https://api.nlpcloud.io/v1/bart-large-mnli-yahoo-answers/classification \
  -X POST -d '{
    "text":"John Doe is a Go Developer at Google. He has been working there for 10 years and has been awarded employee of the year.",
    "labels":["job", "nature", "space"],
    "multi_class": true
  }'

{
  "labels":["job", "space", "nature"],
  "scores":[0.9258800745010376, 0.1938474327325821, 0.010988450609147549]
}

curl https://api.nlpcloud.io/v1/roberta-base-squad2/question \
  -X POST -d '{
    "context":"French president Emmanuel Macron said the country was at war with an invisible, elusive enemy, and the measures were unprecedented, but circumstances demanded them.",
    "question":"Who is the French president?"
  }'

{
  "answer":"Emmanuel Macron",
  "score":0.9595934152603149,
  "start":17,
  "end":32
}

curl https://api.nlpcloud.io/v1/distilbert-finetuned-sst-2-english/sentiment \
  -X POST -d '{"context":"NLP Cloud proposes an amazing service!"}'

{
  "scored_labels":[
    { "label":"POSITIVE", "score":0.9996881484985352 }
  ]
}

curl https://api.nlpcloud.io/v1/bart-large-cnn/summarization \
  -X POST -d '{"text":"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."}'

{
  "summary_text":"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world."
}

curl https://api.nlpcloud.io/v1/gpu/dolphin/generation \
  -X POST -d '{
    "text":"Dolphin is a powerful NLP model",
    "min_length":10,
    "max_length":30
  }'

{
  "generated_text":"Dolphin is a powerful NLP model for text generation. This is the open-source version of GPT-4 by OpenAI. It is the most advanced NLP model created as of today."
}

curl https://api.nlpcloud.io/v1/opus-mt-en-fr/translation \
  -X POST -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'

{
  "translation_text": "John Doe travaille pour Microsoft à Seattle depuis 1999."
}

curl https://api.nlpcloud.io/v1/python-langdetect/langdetection \
  -X POST -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999. Il parle aussi un peu français."}'

{
  "languages": [
    { "en": 0.7142834369645996 },
    { "fr": 0.28571521669868466 }
  ]
}


Customers

BBVA
Johnson & Johnson
Zapier
GSK
Generali
Schneider
General Electric
Dell
Zoom
PWC
Lufthansa
Deloitte

"We spent a lot of energy fine-tuning our machine learning models, but we clearly underestimated the go-live process. NLP Cloud saved us a lot of time, and prices are really affordable."

Patrick, CTO at MatchMaker

"We use NLP Cloud's ChatDolphin model. It is very impressive and on par with OpenAI ChatGPT. Great thing is that it can be deployed on-premise, which is something we might consider in the future for privacy and compliance reasons."

Marc, Software Engineer

"We had developed a working API deployed with Docker for our model, but we quickly faced performance and scalability issues. After spending weeks on this we eventually went for this cloud solution and we haven't regretted it so far!"

Maria, CSO at CybelAI

"We eventually gave up on fine-tuning LLaMA 3... We are now exclusively fine-tuning and deploying Dolphin on NLP Cloud and we are happy like this."

Whalid, Lead Dev at Direct IT

A Medical Business Case

LAO (Laboratoire d'appareillage occulaire) is a French industrial laboratory making innovative lenses to treat specific eye diseases such as Lyell's syndrome.

LAO uses NLP Cloud's classification API for automatic support ticket triage.
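As a minimal sketch of how such a triage call could look with the Python client, the example below classifies an incoming ticket; the ticket text and the category labels are invented for illustration, and the bart-large-mnli-yahoo-answers model is the classification model shown in the API examples above:

import nlpcloud

# Hypothetical support ticket triage: classify a ticket against a few invented categories.
# Replace "<your_api_token>" with your own API key.
client = nlpcloud.Client("bart-large-mnli-yahoo-answers", "<your_api_token>")
result = client.classification(
    "My left lens arrived scratched and I need a replacement before my appointment.",
    labels=["defective product", "billing", "delivery delay"],
    multi_class=True,
)
print(result["labels"], result["scores"])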


"Our collaboration with NLP Cloud has tremendously helped us increase our productivity and our patients satisfaction. We had the intuition that AI could help us but we had no idea how to implement it. NLP Cloud's expertise has been crucial."

Frédéric Baëchelé, CEO at LAO


Learn more here.

Use Cases

Use Case / Model Used
Automatic Speech Recognition (speech to text): extract text from an audio or video file, with automatic language detection, automatic punctuation, and word-level timestamps, in 100 languages. We use OpenAI's Whisper Large model. Playground >>
Classification: send a piece of text, and let the AI apply the right categories to your text, in many languages. As an option, you can suggest the potential categories you want to assess. We use LLaMA 3.1 405B and an in-house NLP Cloud model called Fine-tuned LLaMA 3 70B. We also use the Bart Large MNLI Yahoo Answers and XLM Roberta Large XNLI by Joe Davison. Playground >>
Chatbot/Conversational AI: discuss fluently with an AI and get relevant answers, in many languages. We use LLaMA 3.1 405B and in-house NLP Cloud models called ChatDolphin, and Fine-tuned LLaMA 3 70B. We also use Dolphin Yi 34B by Eric Hartford, and Dolphin Mixtral 8x7B by Eric Hartford. Playground >>
Code generation: generate source code out of a simple instruction, in any programming language. We use LLaMA 3.1 405B and in-house NLP Cloud models called ChatDolphin, and Fine-tuned LLaMA 3 70B. We also use Dolphin Yi 34B by Eric Hartford, and Dolphin Mixtral 8x7B by Eric Hartford. Playground >>
Dialogue Summarization: summarize a conversation, in many languages. We use Bart Large CNN SamSum by Philipp Schmid. Playground >>
Embeddings: calculate embeddings in more than 50 languages. We use several Sentence Transformers models like Paraphrase Multilingual Mpnet Base V2.
Grammar and spelling correction: send a block of text and let the AI correct the mistakes for you, in many languages. We use LLaMA 3.1 405B and in-house NLP Cloud models called ChatDolphin, and Fine-tuned LLaMA 3 70B. We also use Dolphin Yi 34B by Eric Hartford, and Dolphin Mixtral 8x7B by Eric Hartford. Playground >>
Headline generation: send a text, and get a very short summary suited for headlines, in many languages. We use T5 Base EN Generate Headline by Michal Pleban. Playground >>
Image Generation/Text To Image: generate an image out of a simple text instruction. We use Stability AI's Stable Diffusion model. It is a powerful alternative to OpenAI DALL-E and MidJourney. Playground >>
Intent Classification: understand the intent from a piece of text, in many languages. We use LLaMA 3.1 405B and in-house NLP Cloud models called ChatDolphin, and Fine-tuned LLaMA 3 70B. We also use Dolphin Yi 34B by Eric Hartford, and Dolphin Mixtral 8x7B by Eric Hartford. Playground >>
Keywords and keyphrases extraction: extract the main keywords from a piece of text, in many languages. We use LLaMA 3.1 405B and an in-house NLP Cloud model called Fine-tuned LLaMA 3 70B. Playground >>
Language Detection: detect one or several languages from a text. We use Python's LangDetect library. Playground >>
Lemmatization: extract lemmas from a text, in many languages. All the large spaCy models are available.
Named Entity Recognition (NER): extract structured information from an unstructured text, like names, companies, countries, job titles... in many languages. We use LLaMA 3.1 405B and an in-house NLP Cloud model called Fine-tuned LLaMA 3 70B. We also use all the large spaCy models. Playground >>
Noun Chunks: extract noun chunks from a text, in many languages. All the large spaCy models are available.
Paraphrasing and rewriting: generate a similar content with the same meaning, in many languages. We use LLaMA 3.1 405B and an in-house NLP Cloud model called Fine-tuned LLaMA 3 70B. Playground >>
Part-Of-Speech (POS) tagging: assign parts of speech to each word of your text, in many languages. All the large spaCy models are available.
Question answering: ask questions about anything, in many languages. As an option you can give a context so the AI uses this context to answer your question. We use LLaMA 3.1 405B and in-house NLP Cloud models called ChatDolphin, and Fine-tuned LLaMA 3 70B. We also use Roberta Base Squad 2 by Deepset, Dolphin Yi 34B by Eric Hartford, and Dolphin Mixtral 8x7B by Eric Hartford. Playground >>
Semantic Search: search your own data, in more than 50 languages. Create your own semantic search / RAG model out of your own domain knowledge (internal documentation, contracts...) and ask semantic questions on it. Playground >>
Semantic Similarity: detect whether 2 pieces of text have the same meaning or not, in more than 50 languages. We use Paraphrase Multilingual Mpnet Base V2. Playground >>
Sentiment and emotion analysis: determine sentiments and emotions from a text (positive, negative, fear, joy...), in many languages. We also have an AI for financial sentiment analysis. We use DistilBERT Base Uncased Finetuned SST-2, DistilBERT Base Uncased Emotion, and Finbert by Prosus AI. Playground >>
Speech Synthesis (Text-To-Speech): convert text to audio. We use Speech T5 by Microsoft. Playground >>
Summarization: send a text, and get a smaller text keeping essential information only, in many languages. We use LLaMA 3.1 405B and in-house NLP Cloud models called ChatDolphin, and Fine-tuned LLaMA 3 70B. We also use Bart Large CNN by Meta, Dolphin Yi 34B by Eric Hartford, and Dolphin Mixtral 8x7B by Eric Hartford. Playground >>
Text generation: achieve all the most advanced AI use cases by either making requests in natural language ("instruct" requests) or using few-shot learning. We use LLaMA 3.1 405B and in-house NLP Cloud models called ChatDolphin, and Fine-tuned LLaMA 3 70B. We also use Dolphin Yi 34B by Eric Hartford, and Dolphin Mixtral 8x7B by Eric Hartford. You can also fine-tune your own text generation model for even better results. Playground >>
Tokenization: extract tokens from a text, in many languages. All the large spaCy models are available.
Translation: translate text in 200 languages with automatic input language detection. We use NLLB 200 3.3B by Meta for translation in 200 languages. Playground >>

Looking for a specific use case or AI model that is not in the list above? Please let us know!

Edge AI / On-Premise

Most of our AI models can be deployed on your own servers.


This is the best solution for critical applications that require a high level of privacy, such as medical or financial applications. Our models do not require an internet connection.

It is also useful for applications requiring low latency, since you can make sure that your AI model is as close as possible to your end users.


Provisioning your own AI infrastructure can be challenging. That is why our engineers can assist you during the deployment process if needed.


You can also fine-tune your own models on NLP Cloud, and then deploy them on your own servers.

Train Your Own Models

Train/Fine-Tune your own AI models with your own business data, and use them straight away in production without worrying about deployment considerations like GPU availability, memory usage, high-availability, scalability... You can upload and deploy as many models as you want into production.

Support

Already have an account? Send us a message from your dashboard.


Otherwise, send an email to [email protected].


We also provide advanced expertise around AI (consultancy, training, integration...). Feel free to tell us more about your project.

Security At NLP Cloud

NLP Cloud treats the security of your data and your privacy as a top priority. To keep the platform and your data safe, we continuously invest in our security practices and infrastructure. Below is only a portion of the security measures we use. If you'd like to discuss how NLP Cloud can meet your compliance requirements, please contact us!

Physical Security

NLP Cloud production data is processed and stored in highly reliable cloud services and corporate data centers.

Data Storage

Data stored for long-term use is protected with encryption.

System Security

All NLP Cloud servers and databases are protected by firewalls and hardened system configurations. All of our production servers run Linux.

Password Encryption

NLP Cloud only stores a hashed version of your password, following the PBKDF2 algorithm with a SHA256 hash.
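For reference, here is a minimal illustration of the PBKDF2 scheme with a SHA256 hash in Python; the salt size and iteration count are typical values chosen for the example, not NLP Cloud's actual settings:

import hashlib
import os

# Illustrative only: derive a password hash with PBKDF2-SHA256.
# Only the salt, the work factor, and the derived hash are stored, never the password itself.
password = b"correct horse battery staple"
salt = os.urandom(16)          # random per-user salt
iterations = 600_000           # example work factor
derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
stored_record = salt.hex() + "$" + str(iterations) + "$" + derived.hex()
print(stored_record)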

Internal Policies

NLP Cloud maintains extensive security policies covering multiple areas. These policies are regularly updated and shared with all collaborators.

Collaborators Access

Every employee understands security protocols and regulations and participates in frequent training programs. Only a limited set of system administrators can access the NLP Cloud servers.

Disaster Recovery

NLP Cloud maintains regular backups of information and regularly assesses its ability to restore the data in the event of a major issue.

Change Control

NLP Cloud implements strong guidelines to strike a balance between control and speed when changing system configurations.

Penetration Tests

We use outside security specialists to conduct thorough examinations of the NLP Cloud system.

Frequently Asked Questions

What is a token?

A token is a unique entity that can either be a small word, part of a word, or punctuation. On average, 1 token is made up of 4 characters, and 100 tokens are roughly equivalent to 75 words. Natural Language Processing models need to turn your text into tokens in order to process it.
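As a rough illustration of these ratios, the sketch below estimates a token count from raw text; real tokenizers differ between models, so treat this only as an approximation:

# Rough estimate based on the ratios above: 1 token is about 4 characters,
# and 100 tokens are roughly 75 words. Real model tokenizers will differ.
def estimate_tokens(text: str) -> int:
    by_characters = len(text) / 4
    by_words = len(text.split()) / 0.75
    return round((by_characters + by_words) / 2)

print(estimate_tokens("John Doe is a Go Developer at Google"))  # roughly 10 tokens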

Can I try NLP Cloud for free?

Yes. All the AI models can be tested for free thanks to the Free plan, without a credit card, but the throughput on this plan is very limited. The pay-as-you-go plan is the best way to easily test all the features without restrictions. A credit card is needed for this plan, but you automatically get an initial $15 credit for your tests.

Can I monitor my pay-as-you-go consumption?

Yes, there is a "Monthly Usage" section in your dashboard that lets you monitor the number of requests you made during the month and the number of tokens you generated. It is updated in real time.

Can I set up a maximum limit for my pay-as-you-go consumption?

Yes, in your dashboard you can set up both soft limits (to simply receive alerts) and hard limits (to completely stop consuming).

What does fine-tuning mean?

Fine-tuning means creating ("training") your own AI with your own data. The idea is that you give the AI model many examples (in a "dataset") so it learns from you and then excels at addressing your use case. This is the best way to achieve state-of-the-art results in machine learning. You don't necessarily need to spend too much time on your fine-tuning dataset, as modern AI models can be fine-tuned with few examples. For example, you can reach great results with only 500 examples. You can fine-tune Dolphin, Yi 34B, and Mixtral 8x7B on NLP Cloud.
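To give a concrete idea of what "many examples in a dataset" means, here is a hypothetical sketch that writes a few input/output pairs to a JSONL file; the exact file format expected by NLP Cloud's fine-tuning feature is described in the documentation, so treat this layout as an illustration only:

import json

# Hypothetical fine-tuning examples: each line pairs an input with the output
# you want the model to learn. The real upload format is defined in the
# NLP Cloud fine-tuning documentation; this only illustrates the idea.
examples = [
    {"input": "Classify this ticket: My invoice is wrong.", "output": "billing"},
    {"input": "Classify this ticket: The app crashes on startup.", "output": "technical issue"},
]

with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")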

Can I deploy your models on my own servers?

Yes. Most of our AI models are available at the edge / on-premise. Our engineers are here to help, so please don't hesitate to contact us for more questions about privacy and low-latency.

Do I need to use a GPU?

It depends. Most of our AI models work very well without a GPU. But the most advanced generative models like ChatDolphin, LLaMA 3.1 405B, Yi 34B, and Mixtral 8x7B need a GPU in order to address bigger inputs and outputs, and to respond promptly. More generally, a GPU is recommended for production use for most of our models as it considerably improves the throughput and the response time.

How do you compare to GPT-4, GPT-3.5, and ChatGPT?

GPT-4, GPT-3.5, and ChatGPT are advanced AI models created by OpenAI. But they do not focus on data privacy, and these models impose many use case restrictions. At NLP Cloud we want to offset this monopoly by proposing great open-source and in-house alternatives to GPT-4, GPT-3.5, and ChatGPT, like Fine-tuned LLaMA 3 70B, ChatDolphin, Mixtral 8x7B, Yi 34B, and more!

How do you compare to OpenAI Whisper?

OpenAI Whisper is an open-source speech-to-text model. As it is open-source, we offer a fast Whisper API on NLP Cloud!

How do you compare to OpenAI?

NLP Cloud is a small and extremely dynamic tech company that proposes all the best open-source AI models at a fair price. We also propose our own in-house models. Not only is NLP Cloud much more privacy-focused, and less expensive than OpenAI, but we are also much less restrictive in terms of usage, and we offer many features and models that OpenAI doesn't offer. For example you can deploy our models on-premise, we are HIPAA / GDPR compliant, we propose many API endpoints dedicated to specific use cases, and much more!

I need a specific use-case or model that is not yet supported, can you support it?

Yes! We are very reactive and flexible. Most of our current models and features exist because our customers asked for them, so please let us know what you need.