

Flag toxic content
Detoxify is a simple, easy-to-use Python library for detecting hateful or offensive language. It was built to help researchers and practitioners identify potentially toxic comments.
Detoxify's open-sourced models and code are trained to predict toxic comments on three Jigsaw challenges: Toxic Comment Classification, Unintended Bias in Toxicity Classification, and Multilingual Toxic Comment Classification. We surface Detoxify endpoints in our API.
You can read Detoxify's full documentation in the Detoxify repository on GitHub.
Copyright © Unitary Ltd