Content Moderation Policy

Our approach to content moderation and safety on the Nano Banana Pro Labs platform

January 7, 2025

Introduction

Nano Banana Pro Labs is committed to maintaining a safe, respectful, and lawful environment for all users of our AI image generation and editing services. This Content Moderation Policy outlines our approach to monitoring, reviewing, and moderating content created on our platform.

Our Commitment

We strive to balance creative freedom with safety and legal compliance. Our moderation practices are designed to:

  • Prevent the creation of harmful, illegal, or abusive content
  • Protect users from exposure to inappropriate material
  • Comply with applicable laws and regulations
  • Maintain the integrity and reputation of our platform

Moderation Approach

Multi-Layered Moderation System

We employ a comprehensive moderation approach combining the following layers (a simplified pipeline sketch follows this list):

  1. Automated Filtering (Proactive)

    • AI-powered prompt analysis before image generation
    • Real-time content screening during generation
    • Pattern recognition for prohibited content types
    • Keyword and semantic analysis
  2. Post-Generation Review (Reactive)

    • Automated scanning of generated images
    • Machine learning classifiers for NSFW and harmful content
    • Hash matching against known prohibited content
  3. Human Review (Manual)

    • Review of flagged content by trained moderators
    • Appeal processing and investigation
    • Edge case evaluation and policy refinement
  4. User Reporting (Community)

    • User-initiated reporting of policy violations
    • Community feedback integration
    • Priority review of reported content
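
For illustration, here is a minimal sketch of how these four layers could compose into a single pipeline. The function names, thresholds, and the ModerationResult structure are assumptions for this example, not our production implementation:

```python
# A minimal sketch of how the four layers could compose into one pipeline.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""
    needs_human_review: bool = False

def moderate(prompt, generate, screen_prompt, classify_image, matches_known_hash):
    # Layer 1: automated prompt filtering before any generation.
    if screen_prompt(prompt) == "block":
        return ModerationResult(False, "prompt blocked pre-generation")

    image = generate(prompt)

    # Layer 2: post-generation scanning (safety classifier + hash matching).
    if matches_known_hash(image):
        return ModerationResult(False, "matched known prohibited content")
    score = classify_image(image)  # assumed: probability of a policy violation
    if score > 0.9:
        return ModerationResult(False, "classifier rejected output")
    if score > 0.5:
        # Layer 3: borderline outputs go to trained human moderators.
        return ModerationResult(True, "flagged", needs_human_review=True)

    # Layer 4 (user reporting) happens asynchronously after delivery.
    return ModerationResult(True)
```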

Prohibited Content Categories

Our moderation system actively detects and blocks:

Critical Violations (Zero Tolerance)

  • Child Sexual Abuse Material (CSAM): Any content depicting, suggesting, or sexualizing minors
  • Non-Consensual Intimate Images: Deepfakes or manipulated images of real individuals in intimate contexts
  • Terrorist Content: Images promoting, glorifying, or facilitating terrorism or violent extremism
  • Illegal Activity: Content depicting or facilitating criminal acts

Severe Violations

  • Sexually Explicit Content (NSFW): Pornographic, nude, or sexually suggestive images
  • Graphic Violence: Images depicting extreme violence, gore, or bodily harm
  • Hate Speech: Content targeting individuals or groups based on protected characteristics
  • Harassment and Bullying: Images intended to intimidate, threaten, or harm others

Moderate Violations

  • Misinformation: Realistic fake images designed to deceive (e.g., fake IDs, manipulated documents)
  • Intellectual Property Infringement: Unauthorized reproduction of copyrighted material
  • Spam and Abuse: Excessive generation, system manipulation, or automated misuse
  • Dangerous Activities: Content promoting self-harm, eating disorders, or dangerous challenges

Moderation Process

1. Pre-Generation Screening

When you submit a prompt (see the screening sketch after this list):

  • Automated analysis checks for prohibited keywords and patterns
  • High-risk prompts are blocked with explanation
  • Low-risk prompts proceed to image generation
  • Borderline cases may include content warnings
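
A simplified sketch of this screening step follows, assuming a placeholder keyword list and a stubbed semantic scorer; the real system uses more sophisticated analysis:

```python
# Illustrative pre-generation screening: keyword patterns plus a stubbed
# semantic scorer, bucketed into block / warn / allow. Patterns, terms,
# and thresholds are placeholders, not our actual lists.
import re

BLOCKED_PATTERNS = [r"\bbanned_term_a\b", r"\bbanned_term_b\b"]  # placeholders

def semantic_risk(prompt: str) -> float:
    """Stand-in for an embedding-based classifier returning risk in [0, 1]."""
    risky_terms = ("gore", "explicit", "weapon")  # toy example
    hits = sum(term in prompt.lower() for term in risky_terms)
    return min(1.0, 0.3 * hits)

def screen_prompt(prompt: str) -> str:
    # High-risk prompts are blocked with an explanation to the user.
    if any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS):
        return "block"
    risk = semantic_risk(prompt)
    if risk >= 0.8:
        return "block"
    if risk >= 0.4:
        return "warn"   # borderline: proceed, but attach a content warning
    return "allow"      # low risk: proceed to image generation
```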

2. Generation Monitoring

During image creation (see the credit-handling sketch after this list):

  • Real-time content analysis of generated images
  • Automatic rejection of policy-violating outputs
  • Users are notified if generation is blocked
  • No credits are deducted for blocked content
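
The credit rule can be illustrated with a short sketch; the user object, callbacks, and refund mechanics shown are assumptions for this example:

```python
# Sketch of the credit rule: a credit is only kept when a generation
# is actually delivered. The user object and callbacks are illustrative.
def generate_with_monitoring(user, prompt, generate, passes_safety_check, notify):
    user.credits -= 1                     # provisional charge
    image = generate(prompt)
    if not passes_safety_check(image):    # real-time content analysis
        user.credits += 1                 # no credits deducted for blocked content
        notify(user, "Generation blocked: output violated our content policy.")
        return None
    return image
```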

3. Post-Generation Review

After image generation (see the review-flow sketch after this list):

  • Automated safety classifiers scan all images
  • Flagged content is queued for human review
  • Users may be temporarily restricted pending review
  • Confirmed violations result in enforcement actions
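
A sketch of this flow, assuming a hypothetical in-memory review queue and restriction set; a production system would use durable storage:

```python
# Sketch of the flagging flow: every image is scored, flagged content
# is queued for moderators, and the account may be restricted meanwhile.
import queue

review_queue: queue.Queue = queue.Queue()
restricted_users: set = set()

def flag_for_review(user_id, image_id, violation_score, threshold=0.5):
    """Queue flagged images for human review; restrict the account meanwhile."""
    if violation_score >= threshold:
        review_queue.put((user_id, image_id))
        restricted_users.add(user_id)    # temporary restriction pending review

def resolve_review(user_id, confirmed_violation):
    restricted_users.discard(user_id)    # restriction lifted either way
    if confirmed_violation:
        pass  # hand off to the enforcement ladder (see Enforcement Actions)
```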

4. User Reports

Community reporting process:

  • Users can report policy violations via support@nanobananaprolabs.com
  • Reports include content description and violation type
  • Our moderation team reviews reports within 24-48 hours
  • Reporters receive outcome notification (when appropriate)

Enforcement Actions

Violations result in proportional consequences (a sketch of this enforcement ladder follows the tiers below):

First Violation (Minor)

  • Warning: Email notification explaining violation
  • Education: Guidance on acceptable use
  • No Penalty: Account remains active

Second Violation (Moderate)

  • Temporary Suspension: 7-day account suspension
  • Content Removal: Deletion of violating content
  • Credit Freeze: No credit usage during suspension

Third Violation (Severe) or Single Critical Violation

  • Permanent Ban: Immediate account termination
  • Data Deletion: Removal of user data and content
  • No Refunds: Forfeiture of remaining credits/subscription
  • Platform Ban: Email and payment method blocked from re-registration

Critical violations additionally receive zero-tolerance enforcement:

  • Law Enforcement: Reporting to appropriate authorities
  • Evidence Preservation: Data retained for legal proceedings
  • Cooperation: Full assistance with investigations
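
For illustration, the full ladder can be expressed as a single mapping from violation history and severity to actions. The action labels below are hypothetical representations, not internal identifiers:

```python
# Minimal sketch of the enforcement ladder above.
def enforcement_actions(prior_violations: int, severity: str) -> list:
    if severity == "critical":           # zero tolerance, regardless of history
        return ["permanent_ban", "preserve_evidence",
                "report_to_law_enforcement"]
    if prior_violations == 0:            # first violation: educate, no penalty
        return ["warning_email", "usage_guidance"]
    if prior_violations == 1:            # second violation
        return ["suspend_7_days", "remove_content", "freeze_credits"]
    return ["permanent_ban", "delete_data",          # third violation
            "no_refund", "block_reregistration"]
```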

False Positives and Appeals

We recognize that automated systems may occasionally make errors.

Appeal Process

If you believe content was incorrectly moderated:

  1. Submit Appeal: Email support@nanobananaprolabs.com with:

    • Your account email
    • Description of flagged content
    • Explanation of why moderation was incorrect
    • Any supporting context or documentation
  2. Review Timeline: Appeals reviewed within 5 business days

  3. Human Review: Trained moderators manually evaluate appeals

  4. Decision Notification: Outcome communicated via email

  5. Account Restoration: If appeal succeeds:

    • Account restrictions lifted
    • Credits refunded (if applicable)
    • Violation removed from account history

Appeal Rights

  • One appeal per moderation action
  • Appeals must be submitted within 30 days
  • Repeated frivolous appeals may result in suspension
  • Decisions after appeal are final

AI Model Disclaimers

Third-Party AI Models

Nano Banana Pro Labs is an independent AI service platform that accesses AI capabilities through third-party model providers via APIs. We:

  • Do not develop or train AI models
  • Are not affiliated with any AI model provider
  • Use commercially available AI services
  • Implement additional safety layers beyond provider defaults

Model Limitations

AI models may occasionally:

  • Generate unexpected or unintended content
  • Fail to detect subtle policy violations
  • Produce false positives in moderation
  • Reflect biases present in training data

We continuously improve our moderation systems to address these limitations.

Synthetic Content Transparency

All images created using our Services are AI-generated and synthetic:

  • Not Real People: AI-generated faces and figures are not real individuals (unless based on user-provided reference images)
  • Synthetic Scenes: Environments, objects, and scenarios are artificially created
  • Disclosure Requirement: Users must disclose the AI-generated nature of content when sharing it publicly or commercially

Data Privacy in Moderation

What We Collect

  • Prompts submitted for image generation (temporary)
  • Generated images (temporary, unless saved to account)
  • Moderation decisions and appeal history
  • User behavior patterns (aggregate, anonymized)

What We Don't Collect

  • Personal information beyond account details
  • Content from other platforms
  • Private communications outside our Services

Retention Periods

  • Routine Content: Deleted after 30 days (unless saved by user)
  • Flagged Content: Retained for 90 days for moderation purposes
  • Violation Evidence: Retained for legal compliance (varies by severity)
  • Appeal Records: Retained for 1 year
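
The same schedule, expressed as an illustrative configuration table; the field names are assumptions, and "legal_hold" stands in for the severity-dependent case:

```python
# The retention schedule above as a configuration sketch.
from datetime import timedelta

RETENTION_PERIODS = {
    "routine_content": timedelta(days=30),   # unless saved by the user
    "flagged_content": timedelta(days=90),   # kept for moderation purposes
    "violation_evidence": "legal_hold",      # varies by severity
    "appeal_records": timedelta(days=365),
}
```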

Moderator Training and Standards

Our human moderation team:

  • Undergoes comprehensive policy training
  • Receives regular updates on emerging threats
  • Follows standardized review guidelines
  • Operates under strict confidentiality agreements
  • Has access to mental health resources

Transparency and Reporting

We publish annual transparency reports including:

  • Total number of moderation actions
  • Content categories most frequently flagged
  • Appeal success rates
  • Geographic trends (anonymized)
  • Policy updates and improvements

Updates to This Policy

We may update this Content Moderation Policy to:

  • Address emerging content threats
  • Improve moderation accuracy
  • Respond to legal or regulatory changes
  • Incorporate user feedback

Users will be notified of material changes via:

  • Email to registered accounts
  • Website announcements
  • In-app notifications

Contact Us

For questions, reports, or appeals, contact support@nanobananaprolabs.com.

Response Times:

  • Critical violations: Within 1 hour
  • Urgent reports: Within 24 hours
  • General inquiries: Within 48 hours
  • Appeals: Within 5 business days

Last Updated: January 7, 2025