Content Moderation & Media Review System

User-generated content moderation pipeline — pre-publication review, community reporting, admin tools, and server-side enforcement.


1. Overview

All user-uploaded media (photos attached to wildlife sightings) passes through a mandatory moderation pipeline before becoming visible to the community. No public media is displayed without explicit admin approval.

Core principle: Every piece of public user-generated content is reviewed by a human moderator before it appears in any community-facing feed.


2. Pre-Publication Review

Every public media upload enters the system in a pending state (ReviewStatus = 0). Media is not visible to any other user until an admin explicitly approves it.

Visibility rule (enforced in client code and backend database security rules):

visible = (ReviewStatus == approved) AND (Private == 0) AND (Deleted == 0)
  • Users see their own uploads immediately (local device only) with upload progress indicators.
  • Other users never see pending, rejected, or removed content.
  • Private media (e.g. personal trip check-in photos) bypasses the review queue entirely and is only visible to the owner.
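Under the field names used in this document (ReviewStatus, Private, Deleted, UserID), the visibility rule can be sketched as a pure predicate. The function names are illustrative, not taken from the codebase:

```typescript
// Review status codes as defined in section 3.
enum ReviewStatus {
  Pending = 0,
  Approved = 1,
  Rejected = 2,
  UnderReview = 3,
  PermanentlyRemoved = 4,
}

interface MediaDoc {
  ReviewStatus: ReviewStatus;
  Private: 0 | 1;
  Deleted: 0 | 1;
  UserID: string;
}

// Community-feed visibility: approved, public, and not soft-deleted.
function isVisibleToCommunity(media: MediaDoc): boolean {
  return (
    media.ReviewStatus === ReviewStatus.Approved &&
    media.Private === 0 &&
    media.Deleted === 0
  );
}

// Owners always see their own uploads, regardless of review state.
function isVisibleTo(media: MediaDoc, viewerId: string): boolean {
  return media.UserID === viewerId || isVisibleToCommunity(media);
}
```

The same predicate is what the backend security rules must enforce; the client-side copy only shapes the UI.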

3. Review Status Lifecycle

Media progresses through a strict state machine with five states:

Status               Value  Description
Pending              0      Initial state for all new public uploads. Awaiting first admin review.
Approved             1      Passed admin review. Visible in community feed.
Rejected             2      Failed admin review. Not visible in feed.
Under Review         3      Auto-escalated for re-review after community reports exceed threshold.
Permanently Removed  4      Terminal state. Cannot re-enter the review queue.

State transitions:

Upload created
    |
    v
PENDING (0)
    |
    +---> APPROVED (1) -----> [community reports] ----> UNDER REVIEW (3)
    |                                                        |
    +---> REJECTED (2)                         +-------------+-------------+
                                               |                           |
                                          RE-APPROVED (1)       PERMANENTLY REMOVED (4)
                                                                    [terminal]
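The diagram above can be captured as a transition table. A minimal sketch (names assumed) that rejects anything not drawn, which makes Permanently Removed terminal by construction:

```typescript
enum ReviewStatus {
  Pending = 0,
  Approved = 1,
  Rejected = 2,
  UnderReview = 3,
  PermanentlyRemoved = 4,
}

// Legal transitions from the state diagram; anything not listed is rejected.
const LEGAL_TRANSITIONS: Record<ReviewStatus, ReviewStatus[]> = {
  [ReviewStatus.Pending]: [ReviewStatus.Approved, ReviewStatus.Rejected],
  [ReviewStatus.Approved]: [ReviewStatus.UnderReview],
  [ReviewStatus.Rejected]: [],
  [ReviewStatus.UnderReview]: [ReviewStatus.Approved, ReviewStatus.PermanentlyRemoved],
  [ReviewStatus.PermanentlyRemoved]: [], // terminal
};

function canTransition(from: ReviewStatus, to: ReviewStatus): boolean {
  return LEGAL_TRANSITIONS[from].includes(to);
}
```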

4. Admin Review Dashboard

Administrators access a dedicated moderation interface with four tabs:

4.1 New Submissions

  • Displays all public media with ReviewStatus = pending (0) whose upload has completed.
  • Sorted newest-first, limited to 50 items per page.
  • Actions: Approve, Approve as Graphic, Reject, Delete.

4.2 Reported Content

  • Displays approved media that has received one or more community reports (ReportCount > 0).
  • Sorted by report count (highest first) to prioritise the most-reported items.
  • Shows a report summary with category breakdown (Graphic / Irrelevant / Offensive).
  • Actions: Flag as Graphic, Remove, Dismiss Reports.

4.3 Re-Review Queue

  • Displays media automatically escalated to ReviewStatus = Under Review (3) when report counts exceed a threshold.
  • Actions: Re-approve, Re-approve as Graphic, Permanently Remove.
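The auto-escalation rule feeding this queue can be sketched as follows. This document does not state the actual threshold, so the value below is a placeholder assumption:

```typescript
// Placeholder value; the real threshold is configuration, not stated here.
const REPORT_ESCALATION_THRESHOLD = 3;

const APPROVED = 1;
const UNDER_REVIEW = 3;

// Approved media whose report count reaches the threshold is auto-escalated
// to Under Review; all other statuses are left unchanged.
function nextStatusAfterReport(reviewStatus: number, reportCount: number): number {
  return reviewStatus === APPROVED && reportCount >= REPORT_ESCALATION_THRESHOLD
    ? UNDER_REVIEW
    : reviewStatus;
}
```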

4.4 Moderation Summary

  • Aggregated moderation statistics per user.
  • Shows users with the most reported media and users who have filed the most reports.
  • Supports sorting to identify patterns of misuse.

5. Community Reporting

Any authenticated user can report media they find objectionable.

Report Categories

Category         Value  Description
Graphic Content  0      Wildlife imagery that may be disturbing (e.g. predation)
Irrelevant       1      Content unrelated to wildlife sightings
Offensive        2      Content that violates community standards

Safeguards

  • One report per user per media item: Enforced by the Firestore document ID pattern ({reporterUserId}_{mediaId}). Duplicate reports from the same user are structurally impossible.
  • Admin-only category: A fourth category (adminAction, value 3) exists for internal audit entries. Community users cannot create reports with this category — enforced by Firestore security rules.
  • Optional description: Users may provide up to 200 characters of additional context.
  • Non-blocking UX: Report submission is fire-and-forget. The form closes immediately with a confirmation toast, ensuring the reporting mechanism is frictionless while preventing abuse through the one-report-per-user constraint.
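These safeguards can be sketched client-side. The document ID pattern and the 200-character limit come from this section; the helper names and the exact shape of the report document are assumptions:

```typescript
enum ReportCategory {
  GraphicContent = 0,
  Irrelevant = 1,
  Offensive = 2,
  AdminAction = 3, // reserved for admin audit entries
}

const MAX_REPORT_DESCRIPTION = 200;

// Deterministic ID: a second report from the same user on the same media
// targets the same document, so duplicates are structurally impossible.
function reportDocId(reporterUserId: string, mediaId: string): string {
  return `${reporterUserId}_${mediaId}`;
}

function buildReport(
  reporterUserId: string,
  mediaId: string,
  category: ReportCategory,
  description = "",
) {
  if (category === ReportCategory.AdminAction) {
    throw new Error("adminAction reports are admin-only");
  }
  if (description.length > MAX_REPORT_DESCRIPTION) {
    throw new Error("description exceeds 200 characters");
  }
  return {
    id: reportDocId(reporterUserId, mediaId),
    ReporterUserID: reporterUserId,
    MediaID: mediaId,
    Category: category,
    Description: description,
  };
}
```

The client-side checks are a convenience only; the adminAction restriction is ultimately enforced by Firestore security rules.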

6. Graphic Content Handling

Rather than removing content that is graphic but legitimate (e.g. wildlife predation scenes), the system supports a graphic content flag:

  • When an admin flags media as graphic (Graphic = 1), the content remains in the feed but is displayed with a blur overlay.
  • The overlay shows a warning label and a "Tap to view" prompt.
  • Revealing the image is a client-side UI state change only — no data is written to the server, and the content re-blurs when the user navigates away.
  • This approach allows nature photography that some users may find sensitive to remain available without being displayed unexpectedly.

7. Server-Side Enforcement (Firestore Security Rules)

Moderation integrity is enforced at the database level, not just in client code.

Media Documents

  • No self-approval: Users cannot set their own media to ReviewStatus = approved (1) or permanentlyRemoved (4).
  • Immutable moderation fields: Owners cannot modify Graphic, ReportCount, or LikeCount — these are admin/Cloud Function-managed.
  • Immutable identity fields: UserID, MediaID, ParentType, ParentID, ParkID cannot be changed after creation.
  • Initial state enforcement: New media must start as ReviewStatus = pending (0) or be marked private. Cannot be created as approved. ReportCount and Graphic must be 0.
  • Soft-delete only: Users can set Deleted = 1 but cannot reverse it. Hard deletes are never permitted via client.
  • Upload status progression: Upload status can progress forward (queued → uploading → uploaded) or fail, but cannot revert from uploaded.
  • Admin full access: Admins (verified via request.auth.token.admin == true) have update access to moderation fields.
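A hypothetical client-side pre-check mirroring the create-time rules above can fail fast before a write is attempted; the Firestore rules remain the authoritative enforcement:

```typescript
interface NewMediaDoc {
  ReviewStatus: number;
  Private: 0 | 1;
  ReportCount: number;
  Graphic: 0 | 1;
  Deleted: 0 | 1;
  UserID: string;
}

// Mirrors the initial-state rules: new media starts pending (or is private),
// moderation counters are zeroed, and the owner matches the signed-in user.
function isValidNewMedia(doc: NewMediaDoc, authUid: string): boolean {
  const startsPendingOrPrivate = doc.ReviewStatus === 0 || doc.Private === 1;
  return (
    startsPendingOrPrivate &&
    doc.ReportCount === 0 &&
    doc.Graphic === 0 &&
    doc.Deleted === 0 &&
    doc.UserID === authUid
  );
}
```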

Media Reports

  • Create: Any authenticated user. ReporterUserID must match own UID. Non-admins cannot use the adminAction category.
  • Read: Admins or the original reporter only.
  • Update: Admins only (for setting outcome fields).
  • Delete: Never permitted.

Firebase Storage

  • File size: Maximum 5 MB per upload.
  • File type: image/* only — no other content types accepted.
  • Path separation: Public pending uploads go to media/public/pending/{userId}/. Approved media is moved to media/public/approved/{userId}/ by Cloud Functions. Private media is stored in media/private/{userId}/.
  • Write restrictions: Users can only write to their own pending and private paths. The approved path is Cloud Function-only (no direct client writes).
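The Storage constraints can be mirrored client-side before attempting an upload. A sketch with assumed helper names (the 5 MB cap, image/* restriction, and path layout come from the rules above):

```typescript
const MAX_UPLOAD_BYTES = 5 * 1024 * 1024; // 5 MB cap from the Storage rules

// Client-writable destinations only; the approved path is written
// exclusively by Cloud Functions, so it is never constructed here.
function uploadPath(userId: string, fileName: string, isPrivate: boolean): string {
  return isPrivate
    ? `media/private/${userId}/${fileName}`
    : `media/public/pending/${userId}/${fileName}`;
}

function isAllowedUpload(contentType: string, sizeBytes: number): boolean {
  return contentType.startsWith("image/") && sizeBytes <= MAX_UPLOAD_BYTES;
}
```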

8. Admin Audit Trail

Every admin action on media creates a permanent audit record:

  • Document type: A MediaReport with category = adminAction (3).
  • Document ID pattern: {adminUserId}_{mediaId}_{timestamp} — allows multiple admin actions on the same media item to be recorded.
  • Fields recorded: Admin user ID, timestamp, outcome (re-approved / removed / graphic-flagged), and an optional admin note.
  • Immutability: Community users cannot create adminAction reports (enforced by security rules). Existing reports are never deleted.

When an admin resolves reported content, all pending community reports for that media item are batch-updated with the admin's outcome, user ID, and timestamp.


9. Report Resolution Flow

When an admin acts on reported content, the following occurs in a single batch operation:

  1. The media item's ReviewStatus and/or Graphic flag is updated.
  2. All pending community reports for that media are updated with:
    • AdminOutcome (re-approved / removed / graphic-flagged)
    • AdminUserID (which admin acted)
    • AdminTimestamp (when the action was taken)
  3. An audit trail entry (adminAction report) is created.
  4. Local caches are invalidated so the feed reflects the decision immediately.
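Steps 1-3 can be sketched as a pure function that computes the writes for the batch. The field names follow sections 8 and 9; the exact media update per outcome is an assumption consistent with the lifecycle in section 3:

```typescript
type AdminOutcome = "re-approved" | "removed" | "graphic-flagged";

interface ResolutionWrites {
  mediaUpdate: { ReviewStatus: number; Graphic?: number };
  reportUpdate: { AdminOutcome: AdminOutcome; AdminUserID: string; AdminTimestamp: number };
  auditDocId: string; // {adminUserId}_{mediaId}_{timestamp}
}

// Computes the writes for one resolution; the caller applies them in a
// single batch so the media update, report updates, and audit entry commit atomically.
function buildResolution(
  outcome: AdminOutcome,
  adminUserId: string,
  mediaId: string,
  now: number,
): ResolutionWrites {
  const mediaUpdate =
    outcome === "removed"
      ? { ReviewStatus: 4 } // Permanently Removed (terminal)
      : outcome === "graphic-flagged"
        ? { ReviewStatus: 1, Graphic: 1 } // stays in feed, blurred
        : { ReviewStatus: 1 }; // re-approved
  return {
    mediaUpdate,
    reportUpdate: { AdminOutcome: outcome, AdminUserID: adminUserId, AdminTimestamp: now },
    auditDocId: `${adminUserId}_${mediaId}_${now}`,
  };
}
```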

10. Summary of Moderation Safeguards

  • Pre-publication review: All public media starts as pending; not visible until admin-approved.
  • Community reporting: Three report categories; one report per user per item.
  • Report escalation: Auto-escalation to re-review queue when threshold exceeded.
  • Graphic content overlay: Blurred display with opt-in reveal for sensitive wildlife content.
  • Server-side enforcement: Firestore security rules prevent self-approval and field tampering.
  • Storage restrictions: 5 MB limit, image-only, path-based access control.
  • Audit trail: Every admin action permanently recorded with user, timestamp, outcome.
  • Soft-delete only: No client-side hard deletes; permanent removal is admin-only and terminal.
  • Admin accountability: Admin actions attributed by user ID and timestamped.