Missing People UK & IR

Safety & Moderation Policy

Last updated: 01/03/2026

Created by
Missing People UK & IR • 10/01/2026
Last edited by
Josh Storer • 22/01/2026
Review by
01/07/2026
Emergency: If you believe someone is in immediate danger, call 999. Do not rely on this website for urgent response.

This Safety & Moderation Policy explains how we keep people safe, how we moderate content, how we restrict abusive accounts, and when safeguarding escalation may occur. Our goal is to reduce harm, prevent misuse, and support responsible reporting.


1. Safety principles

Protect people first
We prioritise safeguarding, child safety, protection of vulnerable people, and preventing harm. We may take immediate action where risk is identified.
Reduce misuse and risk
We design the platform to deter hoax reports, harassment, doxing, stalking, and exploitation. We may restrict accounts, remove content, or apply additional controls.
Accountability and audit trails
Where possible, moderation actions should be logged and attributable to staff roles, to support governance and training.
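The principle above calls for moderation actions to be logged and attributable to staff roles. As a minimal sketch of what such an audit trail could look like (the class names, roles, and action strings here are illustrative assumptions, not the platform's actual implementation):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditEntry:
    """One logged moderation action, attributable to a staff role."""
    staff_role: str  # illustrative, e.g. "moderator", "safeguarding_lead"
    action: str      # illustrative, e.g. "remove_comment", "suspend_account"
    target: str      # identifier of the content or account acted on
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class AuditLog:
    """Append-only record of moderation actions for governance and training."""

    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, staff_role: str, action: str, target: str) -> AuditEntry:
        """Log an action; entries are never edited or deleted."""
        entry = AuditEntry(staff_role, action, target)
        self._entries.append(entry)
        return entry

    def by_role(self, staff_role: str) -> list[AuditEntry]:
        """Retrieve all actions taken under a given staff role for review."""
        return [e for e in self._entries if e.staff_role == staff_role]
```

An append-only structure is one common design choice here: because entries are only ever added, the log supports after-the-fact governance review without risk of actions being silently rewritten.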

2. Content rules (what’s not allowed)

Content must not be abusive, violent, false, illegal, or harmful. You must not submit or post:
  • Harassment, threats, hate speech, discrimination, or intimidation.
  • Doxing (private addresses, private phone numbers, personal identifiers).
  • Sexual content, exploitation content, or any sexual content involving minors.
  • Content that encourages self-harm, suicide, violence, or provides instructions to harm.
  • Hoax, false, misleading, or malicious reports (including impersonation).
  • Unverified allegations that could defame, inflame, or put someone at risk.
  • Vigilante behaviour, calls to confront people, or real-time “hunt” content.
  • Spam, scams, advertising, or repeated off-topic posting.
  • Malware, hacking attempts, or abuse of platform features.

Where a case involves a child or vulnerable person, we may remove additional details to reduce risk, including redacting live locations or sensitive information.

3. Reporting content and abuse

How to report
Use in-platform reporting tools where available, or contact the team via our contact page. Please include links, screenshots (if safe), and a clear description of the issue.
Urgent safeguarding
If the issue involves immediate danger, self-harm threats, a child at risk, or violence, call 999 and then contact us so we can act quickly on the platform.

4. Moderation actions and enforcement

Content changes
We may remove, hide, restrict, redact, blur, archive, or edit content to protect people, comply with law, or prevent misuse. This can include removing comments, removing photos, or changing visibility settings.
Account controls
We may warn, restrict, suspend, or ban accounts. We may also limit certain features (such as commenting) while an issue is reviewed.
Escalation and internal review
Complex issues may be escalated to senior staff or safeguarding leads. Where appropriate, actions may be logged for governance, training, and accountability.

5. 24h / 72h warnings and restrictions

Warning system
If a user breaches rules, we may apply time-based restrictions and warnings. Common steps include a 24-hour warning and/or a 72-hour warning.
Comment restrictions
During a warning period, a user may be restricted from adding comments to case pages. If the user attempts to submit a comment, they may see a notice explaining they are restricted due to a safeguarding breach.

6. Bans and access limitations

After repeated warnings
After repeated warnings, a user may be banned either from the whole site or from adding comments to case pages. Restrictions may be time-based or indefinite depending on severity and risk.
Serious breaches
For serious breaches (e.g., threats, exploitation, hate speech, doxing, encouragement of self-harm), we may act immediately without prior warnings, including permanent bans and escalation to relevant authorities.

7. Exception: your own reported cases

Own-case exception
In some restriction modes, a user may still be allowed to add comments/tips to case pages they have reported, while remaining restricted from commenting elsewhere. This helps reporters provide updates without allowing harmful engagement across the platform.
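The own-case exception above reduces to one access decision. A minimal sketch, assuming each restriction record carries a flag indicating whether the exception applies (all identifiers and parameter names are hypothetical):

```python
def may_comment(user_id: str, case_reporter_id: str,
                comment_restricted: bool, own_case_exception: bool) -> bool:
    """Decide whether a user may comment on a given case page.

    A user under a comment restriction is normally blocked everywhere,
    but in some restriction modes the platform still accepts comments/tips
    on cases that same user reported (the "own-case exception").
    """
    if not comment_restricted:
        return True  # unrestricted users may comment anywhere
    # Restricted users may comment only under the exception, and only
    # on case pages they reported themselves.
    return own_case_exception and user_id == case_reporter_id
```

Keeping the decision in one pure function like this makes the rule easy to test and keeps the exception narrow: it never widens a restricted user's access beyond their own reported cases.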

8. Safeguarding escalation and third parties

Self-harm and life-threatening content
If content indicating self-harm, suicide threats, violence, exploitation, abuse, or other life-threatening risk is submitted (including in comments, uploads, reports, or messages), we may need to take action to protect life and safety.
Emergency escalation
Safeguarding information may be escalated to emergency services or other safeguarding pathways where necessary. If immediate danger is identified, call 999.
Third-party visibility / sharing
In safeguarding or emergency situations, information may be viewed by or shared with relevant third parties (for example, emergency services, law enforcement, safeguarding partners, or service providers involved in protecting platform safety). We only do this where reasonably necessary for safety, legal compliance, or to prevent harm.

9. Legal basis for moderation and safeguarding

Legitimate interests
We process moderation and security data to operate a safe platform, prevent abuse, and protect users. This is carried out under legitimate interests, balanced against user rights.
Legal obligations
Some processing and disclosures may be required to comply with law, respond to lawful requests, or meet safeguarding and reporting duties where applicable.
Vital interests (protecting life)
Where there is an immediate risk to life or serious harm, we may act to protect vital interests, including escalating information to relevant parties.
Contract / service provision
Where users create accounts and use features, certain processing is necessary to deliver the service and maintain platform integrity.

More detail on personal data processing is available in our Privacy Policy.

10. FOI and accountability

You may request details about moderation processes, policy enforcement, and platform accountability by submitting a Freedom of Information request.

FOI Page

11. Contact us

If you believe content is unsafe, abusive, or incorrect, or you want to report a safeguarding concern, contact us.