Text classification is a ubiquitous capability with a wealth of use cases, including sentiment analysis, topic assignment, document identification, article recommendation, and more. But collecting enough annotated examples to train traditional classifiers can be quite costly. Instead, we take a look at a classic technique that can perform text classification with few or even zero training examples! We're talking about text embeddings, of course. Recent advances have significantly increased the quality of document embeddings, and in our newest report, Few-Shot Text Classification, we cover:
- how to use them for topic classification,
- best practices for applying them,
- and potential limitations.
Follow the links in the report to find code snippets you can try for yourself, and build your own demo to see the method in action!
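To give a flavor of the core idea, here is a minimal sketch of zero-shot topic classification with embeddings: embed both the documents and the candidate label names into the same vector space, then assign each document the label it is most similar to. This is an illustrative example, not the code from the report; it assumes the `sentence-transformers` and `scikit-learn` packages, and the model name, labels, and documents are all made up for demonstration.

```python
# Zero-shot topic classification via embedding similarity (illustrative sketch).
# Assumes: pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model choice

# Candidate topic labels stand in for training examples -- no annotated data required.
labels = ["sports", "politics", "technology"]
documents = [
    "The quarterback threw for 300 yards in last night's game.",
    "The senate passed the new budget bill after a lengthy debate.",
]

# Embed labels and documents into the same vector space.
label_embeddings = model.encode(labels)
doc_embeddings = model.encode(documents)

# Assign each document the label whose embedding is most similar to it.
similarities = cosine_similarity(doc_embeddings, label_embeddings)
for doc, sims in zip(documents, similarities):
    print(f"{labels[sims.argmax()]}: {doc}")
```

Adding even a handful of labeled examples per class (the few-shot setting) lets you refine this further, for instance by comparing documents to the average embedding of each class's examples instead of the bare label name.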