Child Safety Standards
Overview
Verda has zero tolerance for child sexual abuse and exploitation (CSAE) content. We are committed to keeping our platform safe and to working with law enforcement and child safety organizations to protect children.
Our standards
- We prohibit any content that depicts, promotes, or facilitates child sexual abuse or exploitation (CSAM/CSAE) in any form.
- Accounts that upload, share, or are associated with CSAM/CSAE content are immediately suspended and permanently banned.
- We use a combination of automated detection and human review to identify and remove violating content.
- All users must complete identity verification (KYC) before they can publish content, which deters anonymous misuse.
How to report
If you encounter any content on Verda that you believe involves the sexual abuse or exploitation of a child, please report it immediately:
- In the app: Tap the Report button on any post or profile to flag content for review.
- By email: Contact us at support@verda.ai with details of the content, and we will take immediate action.
All reports are treated as urgent and confidential.
Content moderation
- All content uploaded to Verda is subject to review under our content moderation policies.
- We act swiftly to remove violating content and disable offending accounts.
- We regularly review and update our safety practices to stay aligned with evolving laws, platform standards, and best practices in child safety.
Contact
For questions about our child safety policies or to report a concern, contact us at support@verda.ai.