Items and Characteristics

Automatically check any videos and images for harmful content
The ITEMS_CHARACTERISTICS endpoints surface items and characteristics detected in an image (object recognition) or video (including its audio). Each item or characteristic has a prevalence score that represents the extent to which it is present in a given piece of content. We can also analyse any text shared alongside the image or video.
You can use these scores in mappings or models to implement safety policies, for example by setting a numerical threshold for automated flagging of content or by using them as features in a linear regression model (read more in our guide for selecting thresholds).
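As a minimal sketch of the thresholding approach, the snippet below flags content when any prevalence score meets or exceeds a per-item threshold. The item names come from the list below; the threshold values themselves are purely illustrative, not recommendations.

```python
# Illustrative per-item thresholds -- tune these for your own safety policy.
THRESHOLDS = {
    "violence": 0.5,
    "firearm": 0.4,
    "adult_content": 0.6,
}

def flag_content(scores: dict) -> list:
    """Return the items whose prevalence score meets or exceeds its threshold."""
    return [
        item
        for item, threshold in THRESHOLDS.items()
        if scores.get(item, 0.0) >= threshold
    ]

# Example: scores as returned in the "results" section of a response.
flagged = flag_content({"violence": 0.72, "firearm": 0.1, "adult_content": 0.61})
# flagged -> ["violence", "adult_content"]
```

The same scores could instead be fed into a regression or other model if a single threshold per item is too coarse for your policy.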
The following 50+ items and characteristics are available:
  • violence
  • firearm
  • knife - knives in all contexts, e.g. includes kitchen knives
  • violent_knife - knives in violent contexts, including hunting knives
  • alcohol - alcoholic drinks in bottles, cans or glasses
  • drink - drinks of any kind
  • smoking_and_tobacco
  • pills
  • marijuana
  • recreational_pills
Hate symbols:
  • confederate_flag
  • pepe_frog
  • nazi_swastika
NSFW:
  • adult_content
  • suggestive
  • adult_toys
  • medical - NSFW content in a medical context, e.g. partial nudity in a breast examination
  • over_18
  • exposed_anus
  • exposed_armpits
  • exposed_belly
  • covered_belly
  • covered_buttocks
  • exposed_buttocks
  • covered_feet
  • exposed_feet
  • covered_breast_f
  • exposed_breast_f
  • covered_genitalia_f
  • exposed_genitalia_f
  • exposed_breast_m
  • exposed_genitalia_m
Visual content characteristics:
  • artistic
  • comic
  • meme
  • photo
  • screenshot
  • map
  • poster_cover
  • game_screenshot
  • face_filter
  • promo_info_graphic
Audio language toxicity:
  • toxic
  • severe_toxic
  • obscene
  • insult
  • identity_hate
  • threat
  • sexual_explicit
OCR language toxicity:
  • toxic
  • severe_toxic
  • obscene
  • insult
  • identity_hate
  • threat
  • sexual_explicit
Caption language toxicity:
  • toxic
  • severe_toxic
  • obscene
  • insult
  • identity_hate
  • threat
  • sexual_explicit
Other:
  • middle_finger_gesture
  • child
  • toy
  • face_f
  • face_m
  • gambling_machine
🙋🏽‍♀️ Note: content containing these items and characteristics can still be detected with Custom policies and with our Brand Safety Framework. This is because these products use inputs that represent all aspects of the content - not just Items & Characteristics. For example, our GARM product flags harmful White Supremacy content as Hate Speech through learnt patterns in the content, even though this isn’t an Item/Characteristic.

Add-on features

In early October 2023, we'll be including two add-on features in our Items & Characteristics response. Please let us know at [email protected] if you'd be interested in having these:
  1. Optical Character Recognition (OCR). OCR covers any text that appears in the image or video. Examples include captions for translations, words displayed on a T-shirt or handwritten content.
  2. Speech audio transcriptions. A literal transcription of the speech detected in a video. This is currently only available in English; more languages are coming soon, starting with Spanish. Please be aware that other sounds in the video may interfere with the audio transcription.
Below is an example of how OCR and audio transcriptions appear in the Items & Characteristics API response. They are included in the "ocr_texts" and "audio_texts" fields.
"status": "done",
"results": {
"violence": {
"violence": 0.0263,
"firearm": 0.0673,
"knife": 0.102,
"violent_knife": 0.0774
"other": {
"child": 0.5896,
"middle_finger_gesture": 0.0291,
"toy": 0.0278,
"gambling_machine": 0.0595
"url": "",
"ocr_texts": [
"I'm very happy",
"metadata": {
"width": 576,
"height": 1024,
"fps": 30.0,
"duration": 10.134,
"seconds_processed": 10.1
"audio_texts": [
"I've been thinking about it and I'm happy"
"msg": null
Copyright © Unitary Ltd