Detoxify endpoints
Detoxify is an easy-to-use, open-source Python library that detects hateful or offensive language, helping researchers and practitioners identify potentially toxic comments.
Detoxify models are trained on three Jigsaw challenges: Toxic Comment Classification, Unintended Bias in Toxicity Classification, and Multilingual Toxic Comment Classification.
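The three challenges above correspond to the three model variants shipped with the detoxify library. A minimal sketch of that mapping (the variant names come from the library itself; per the endpoint reference below, multilingual is the variant these endpoints use):

```python
# Mapping of detoxify library model variants to the Jigsaw challenge
# each one was trained on. The variant names (original, unbiased,
# multilingual) are the library's own; this API's model parameter
# uses "multilingual".
JIGSAW_MODELS = {
    "original": "Toxic Comment Classification",
    "unbiased": "Unintended Bias in Toxicity Classification",
    "multilingual": "Multilingual Toxic Comment Classification",
}

print(JIGSAW_MODELS["multilingual"])
```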
Detoxify endpoints
Detoxify is offered as a pair of standalone, text-only API endpoints, so you can avoid the hassle of hosting the open-source models yourself.
Any request you send must meet the requirements listed under Features and requirements.
POST endpoint reference
This endpoint queues a job that predicts the Detoxify category scores for a piece of text.
The model body parameter is an enumeration; multilingual is a supported value.
POST /detoxify/text/ HTTP/1.1
Host:
Authorization: Bearer JWT
Content-Type: application/x-www-form-urlencoded
Accept: */*
Content-Length: 87
callback_url=https%3A%2F%2Fexample.com&text=text&model=multilingual&correlation_id=text
{
  "job_id": "text"
}
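The form-encoded body above can be built programmatically. A minimal sketch, assuming the standard library's urlencode; the field values are placeholders, and the commented-out request shows how the body would be sent (host and token are hypothetical):

```python
from urllib.parse import urlencode

# Build the application/x-www-form-urlencoded body for POST /detoxify/text/.
# Field names come from the endpoint reference above; values are placeholders.
payload = {
    "callback_url": "https://example.com",
    "text": "text",
    "model": "multilingual",
    "correlation_id": "text",
}
body = urlencode(payload)  # percent-encodes values, e.g. https%3A%2F%2F...
print(body)

# To actually submit the job (sketch, assuming a host and a bearer JWT):
#   import requests
#   resp = requests.post(f"https://{host}/detoxify/text/",
#                        data=payload,
#                        headers={"Authorization": f"Bearer {jwt}"})
#   job_id = resp.json()["job_id"]
```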
GET endpoint reference
This endpoint returns the results of a previously queued classification job.
GET /detoxify/text/{job_id} HTTP/1.1
Host:
Authorization: Bearer JWT
Accept: */*
{
  "status": "done",
  "results": {
    "toxicity": 1,
    "severe_toxicity": 1,
    "obscene": 1,
    "threat": 1,
    "insult": 1,
    "identity_attack": 1,
    "sexual_explicit": 1
  },
  "msg": "text"
}
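Once the job status is done, the results object maps each Detoxify category to a score. A minimal sketch of parsing such a response, using a hypothetical example payload (the field names match the reference above; the scores here are illustrative, and the 0.5 threshold is an arbitrary choice for this example):

```python
import json

# Hypothetical example response from GET /detoxify/text/{job_id};
# field names follow the endpoint reference above.
raw = """
{
  "status": "done",
  "results": {"toxicity": 0.98, "severe_toxicity": 0.12, "obscene": 0.85,
              "threat": 0.02, "insult": 0.77, "identity_attack": 0.05,
              "sexual_explicit": 0.01},
  "msg": "ok"
}
"""

job = json.loads(raw)
flagged = {}
if job["status"] == "done":
    # Keep only categories whose score crosses an example threshold.
    flagged = {k: v for k, v in job["results"].items() if v >= 0.5}
print(flagged)
```

In practice you would poll this endpoint with the job_id returned by the POST request until status is done before reading the scores.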