Question Answering
What is Question Answering?
Question answering lets the AI automatically answer a question. Optionally, you can provide a context and let the AI use it to answer the question.
For example, imagine you want to ask the following question:
How to bake some bread?
The AI might answer something like this:
1. Grease a clean, dry bread pan with butter.
2. Stir 1 envelope dry active yeast to 1/4 cup warm water in a large bowl. Let the yeast bloom for 10 minutes, or until dissolved, then add 1 teaspoon salt, 1 teaspoon honey, and 1/2 cup unsalted butter.
3. Add 3 1/2 cups strong flour and mix well, then wait to process your dough for 3 minutes.
4. Add an ounce of water and mix well. If the water was not already warm enough to make a dough (i.e., it was not warm to the touch) then add a warm to hot bowl of water to the machine. Stir continuously for 3 minutes.
5. Turn your dough out onto a clean sheet of plastic wrap and fold over.
6. Cover with another piece of plastic wrap and place in a warm corner of your kitchen.
Now imagine you have specific data you want to give the AI and ask a question about (this data is known as the "context"):
All NLP Cloud plans can be stopped anytime. You only pay for the time you used the service. In case of a downgrade, you will get a discount on your next invoice.
You might want to ask the following question:
When can plans be stopped?
And the answer would be:
Anytime
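Here is a minimal sketch of such a request, assuming the NLP Cloud Python client (the nlpcloud package); the model slug and parameter names should be double-checked against your client version, and the API token is a placeholder.

```python
# A minimal sketch, assuming the NLP Cloud Python client (pip install nlpcloud).
# "<your_api_token>" is a placeholder: replace it with your own API key.
import nlpcloud

client = nlpcloud.Client("roberta-base-squad2", "<your_api_token>")

response = client.question(
    "When can plans be stopped?",
    context=(
        "All NLP Cloud plans can be stopped anytime. You only pay for the time "
        "you used the service. In case of a downgrade, you will get a discount "
        "on your next invoice."
    ),
)

print(response["answer"])  # "Anytime"
```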
Why Use Question Answering?
Question answering has many useful "real world" applications. Here are a couple of examples.
Contracts Questions
Chatbots are used more and more every day, both to answer questions from customers and from internal collaborators. Imagine that a customer asks a legal question about their contract. A question answering model is a perfect fit here: simply pass the contract as the context.
Product Questions
Here is another chatbot-related example. Imagine that a collaborator has a technical question about a product. Why not provide them with a natural-language interface and make their life easier?
Use GPU
Control whether you want to run the model on a GPU. Machine learning models run much faster on GPUs.
Context
The block of text that the model will use in order to find an answer to your question. The maximum is 25,000 tokens for Roberta Base Squad 2 and 1,024 tokens for Fast GPT-J. The context is optional if you're using Fast GPT-J.
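Here is a sketch of how these two parameters fit together, assuming the NLP Cloud Python client; the "fast-gpt-j" model slug and the gpu flag mirror this page, but verify the exact names against your client version.

```python
# A minimal sketch, assuming the NLP Cloud Python client (pip install nlpcloud).
# The "fast-gpt-j" model slug and the gpu flag are taken from this page;
# verify them against your client version. "<your_api_token>" is a placeholder.
import nlpcloud

# gpu=True requests the GPU-backed version of the model (faster inference).
client = nlpcloud.Client("fast-gpt-j", "<your_api_token>", gpu=True)

# With Fast GPT-J the context is optional, so the question can stand alone.
response = client.question("How to bake some bread?")

print(response["answer"])
```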
Language
NLP has a critical weakness: most models don't work well with non-English languages.
We do our best to add non-English models whenever possible. See for example Fine-tuned GPT-NeoX 20B, XLM Roberta Large XNLI, Paraphrase Multilingual Mpnet Base V2, or spaCy. Unfortunately, few such models are available, so it's not possible to cover all NLP use cases this way.
In order to solve this challenge, we developed a multilingual AI that automatically translates your input into English, performs the actual NLP operation, and then translates the result back to your original language. It makes your requests a bit slower but returns impressive results.
Simply select your language in the list, and from then on you can write your input text in your own language!
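For example, here is a sketch of a French request, assuming the NLP Cloud Python client; the lang parameter name is an assumption based on this page.

```python
# A minimal sketch of the multilingual add-on, assuming the NLP Cloud Python
# client; the lang parameter name is an assumption. The French input is
# translated to English, the model runs, and the answer is translated back.
import nlpcloud

client = nlpcloud.Client("roberta-base-squad2", "<your_api_token>", lang="fr")

# French for "When can plans be stopped?" with the plan description from
# earlier on this page as the context.
response = client.question(
    "Quand les abonnements peuvent-ils être arrêtés ?",
    context=(
        "Tous les abonnements NLP Cloud peuvent être arrêtés à tout moment. "
        "Vous ne payez que pour la durée d'utilisation du service."
    ),
)

print(response["answer"])
```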
This multilingual add-on is a paid feature. It is included by default in the pay-as-you-go plan. If you are on another plan, please contact the support team so they can upgrade your plan.