Digital Health Assistant

Overview

As online platforms scale, maintaining the integrity of user-generated content becomes increasingly complex. To address this, we built a GenAI-powered assistant that automatically categorizes posts into three buckets: Acceptable, Not Acceptable, or Needs Human Review.

Our goal was to assist moderation teams by reducing the volume of content requiring manual review, improving response times, and maintaining consistency across decisions. Using a custom fine-tuned model and prompt engineering strategies, we embedded this assistant into our content workflow to deliver contextual, explainable decisions that support real-world moderation policies.

This assistant doesn’t replace human moderators; it augments them. By handling clear-cut cases on its own and flagging edge cases for review, it frees humans to focus on nuanced decisions where judgment, empathy, or cultural understanding is essential.
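
The actual prompt and model are not shown in this write-up, so the following is a minimal sketch of the classification call, assuming the Amazon Bedrock Converse API. The model ID, prompt wording, and "LABEL | rationale" output format are illustrative assumptions, not the production setup.

    import boto3

    # Illustrative model ID; the production system uses a custom fine-tuned model.
    MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"
    LABELS = {"ACCEPTABLE", "NOT_ACCEPTABLE", "NEEDS_HUMAN_REVIEW"}

    bedrock = boto3.client("bedrock-runtime")

    def classify_post(post_text: str) -> dict:
        """Ask the model for one of the three moderation labels plus a rationale."""
        response = bedrock.converse(
            modelId=MODEL_ID,
            system=[{"text": (
                "You are a content-moderation assistant. Classify the post as "
                "ACCEPTABLE, NOT_ACCEPTABLE, or NEEDS_HUMAN_REVIEW and give a "
                "one-sentence rationale. Reply exactly as: LABEL | rationale"
            )}],
            messages=[{"role": "user", "content": [{"text": post_text}]}],
            inferenceConfig={"temperature": 0},  # favor stable, repeatable labels
        )
        text = response["output"]["message"]["content"][0]["text"]
        label, _, rationale = text.partition("|")
        label = label.strip().upper()
        if label not in LABELS:
            # Anything unparseable is routed to a human, never auto-actioned.
            label, rationale = "NEEDS_HUMAN_REVIEW", text
        return {"label": label, "rationale": rationale.strip()}

Returning a rationale alongside the label is what makes each decision explainable to the moderation team.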

Challenges

  • Ambiguity in Language: Natural language is rarely black and white. Sarcasm, coded language, or regionally specific expressions can distort meaning and mislead AI models. For instance, the phrase “nice job, genius” can be genuine or deeply sarcastic, depending on context.

  • Evolving Standards: Community guidelines aren’t static; they shift with social context, legal requirements, and brand reputation. The model had to keep pace without frequent retraining, so we layered in a prompt-based rules engine that absorbs guideline changes dynamically (a sketch follows this list).
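
The rules engine itself isn’t detailed above; one plausible shape, assuming the current guidelines live in a DynamoDB table and are spliced into the system prompt on every request, is sketched below. The table name and rule_text attribute are invented for illustration.

    import boto3

    # Hypothetical table: one item per active guideline, edited by policy owners.
    rules_table = boto3.resource("dynamodb").Table("moderation-guidelines")

    def build_system_prompt() -> str:
        """Fold the current guidelines into the system prompt at request time,
        so a policy change takes effect immediately, with no model retraining."""
        items = rules_table.scan().get("Items", [])
        rules = "\n".join(f"- {item['rule_text']}" for item in items)
        return (
            "You are a content-moderation assistant. Apply ONLY the current "
            "guidelines below when classifying posts:\n" + rules
        )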

AWS services used

Amazon Bedrock, AWS Lambda, Amazon API Gateway, Amazon DynamoDB.
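
The write-up doesn’t describe how these services are wired together; one plausible arrangement, assuming API Gateway invokes a Lambda handler that classifies the post and writes an audit record to DynamoDB, is sketched below. The event shape follows API Gateway’s Lambda proxy integration; the table name is invented, and classify_post reuses the Overview sketch.

    import json
    import uuid
    import boto3

    # Hypothetical audit table keyed by a generated decision ID.
    decisions_table = boto3.resource("dynamodb").Table("moderation-decisions")

    def lambda_handler(event, context):
        """API Gateway (proxy integration) -> classify -> persist -> respond."""
        body = json.loads(event.get("body") or "{}")
        post_text = body.get("post_text", "")

        decision = classify_post(post_text)  # sketch from the Overview above

        # Keep an auditable record of every automated decision.
        decisions_table.put_item(Item={
            "decision_id": str(uuid.uuid4()),
            "post_text": post_text,
            "label": decision["label"],
            "rationale": decision["rationale"],
        })
        return {"statusCode": 200, "body": json.dumps(decision)}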

Customer details

  • Industry: Healthtech

  • Company Type: Mid-sized Enterprise

  • Location: United States of America

  • Project Timeline: 4 weeks
