Detoxify endpoints
Detoxify is an easy-to-use, open-source Python library that detects hateful or offensive language, helping researchers and practitioners identify potentially toxic comments.
Detoxify models are trained to predict toxic comments on three Jigsaw challenges: Toxic Comment Classification, Unintended Bias in Toxicity Classification, and Multilingual Toxic Comment Classification.
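For reference, the underlying open-source library can also be run locally. A minimal sketch, assuming the detoxify package is installed (the model names "original" and "multilingual" come from the library itself):

```python
# Minimal local usage of the open-source Detoxify library (pip install detoxify).
from detoxify import Detoxify

# Load the multilingual checkpoint and score a sample comment.
scores = Detoxify("multilingual").predict("You are a wonderful person.")

# `scores` maps each category (toxicity, severe_toxicity, obscene, ...) to a probability.
for category, probability in scores.items():
    print(f"{category}: {probability:.4f}")
```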
Detoxify is offered as a set of two standalone text-only API endpoints so that you can avoid the hassle of hosting open-source models.
Any requests you send should meet the requirements listed on Features and requirements.
This endpoint predicts the Detoxify category scores for the text you submit; a request sketch follows below.
multilingual: an enumeration value that selects the multilingual Detoxify model.
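A minimal request sketch in Python. The endpoint URL, authorization header, and JSON field names below are assumptions for illustration, not the documented API; substitute the values from the endpoint reference and the Features and requirements page.

```python
# Hypothetical sketch of calling a Detoxify text endpoint over HTTP.
# The URL, header, and request/response field names are assumptions.
import requests

API_URL = "https://api.example.com/v1/detoxify/multilingual"  # assumed URL
API_KEY = "YOUR_API_KEY"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
    json={"text": "You are a wonderful person."},    # assumed request field
    timeout=30,
)
response.raise_for_status()

# Expected shape (assumed): a mapping of Detoxify categories to scores,
# e.g. {"toxicity": 0.001, "insult": 0.0005, ...}
print(response.json())
```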