Content Moderation
Task Description
The Content Moderation endpoint analyzes transcript text to detect sensitive or inappropriate content. It flags potential issues such as sensitive topics, personal information, and other material that may require review.
Inputs
- Transcript text: the text content to be analyzed for moderation
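As a minimal sketch of how a client might prepare this input, the snippet below builds a JSON request body. The field name `transcript_text` is an assumption for illustration; this document does not specify the actual request schema.

```python
import json

def build_moderation_request(transcript_text):
    """Build a JSON request body for the content moderation endpoint.

    The "transcript_text" field name is a hypothetical placeholder;
    consult the actual API schema for the real field name.
    """
    return json.dumps({"transcript_text": transcript_text})

body = build_moderation_request("Call me at 555-0100 about the payment.")
print(body)
```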
Output
Description
Returns a JSON object containing the moderation results: each issue identified in the transcript, with a detailed explanation of the sensitive content detected.
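To show how such a response might be consumed, here is a hedged sketch that parses a hypothetical result object. The field names (`issues`, `category`, `explanation`, `severity`) are illustrative assumptions, not the documented schema.

```python
import json

# Hypothetical response shape; all field names below are assumptions.
sample_response = json.dumps({
    "issues": [
        {
            "category": "personal_information",
            "explanation": "A phone number appears in the transcript.",
            "severity": 0.8,
        }
    ]
})

def summarize_issues(response_json):
    """Return (category, severity) pairs from a moderation response."""
    data = json.loads(response_json)
    return [(i["category"], i.get("severity")) for i in data.get("issues", [])]

print(summarize_issues(sample_response))  # → [('personal_information', 0.8)]
```

A caller could use the severity values to decide which issues to surface for human review, but the scoring scale shown here is assumed, not documented.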