Content Moderation

Task Description

The Content Moderation endpoint analyzes transcript text for sensitive or inappropriate content. It flags issues such as sensitive topics (for example, medical information) and personal information (for example, contact details).

Inputs

  • Transcript text: The text content to be analyzed for moderation purposes
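
A minimal request sketch in Python, assuming a REST endpoint at /v1/content-moderation that accepts the transcript in a JSON body with a text field and a bearer-token header. The URL, field name, and authentication scheme are illustrative assumptions, not part of this documentation.

import requests

# Hypothetical endpoint and credentials; replace with the actual values.
API_URL = "https://api.example.com/v1/content-moderation"
API_KEY = "your-api-key"

# The "text" field name is an assumption; consult the API reference.
payload = {"text": "Patient reported chest pain; call me at 555-0100."}
headers = {"Authorization": f"Bearer {API_KEY}"}

response = requests.post(API_URL, json=payload, headers=headers)
response.raise_for_status()
result = response.json()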

Output

Description

Returns a JSON object containing the moderation results: a list of identified issues and corresponding explanations of the detected content.

Example

{
  "moderation": {
    "issues": ["sensitive_topic", "personal_info"],
    "details": ["Contains medical information", "Mentions personal contact details"]
  }
}
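
A short sketch of consuming the response above. The moderation, issues, and details keys come from the example; pairing each issue with the detail at the same index is an assumption about how the two lists correspond.

# Assumes `result` holds the parsed JSON response from the request sketch.
moderation = result.get("moderation", {})
for issue, detail in zip(moderation.get("issues", []), moderation.get("details", [])):
    print(f"{issue}: {detail}")

With the example response, this prints one line per issue, e.g. "sensitive_topic: Contains medical information".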