Content Moderation & Media Review System
User-generated content moderation pipeline — pre-publication review, community reporting, admin tools, and server-side enforcement.
1. Overview
All user-uploaded media (photos attached to wildlife sightings) passes through a mandatory moderation pipeline before becoming visible to the community. No public media is displayed without explicit admin approval.
Core principle: Every piece of public user-generated content is reviewed by a human moderator before it appears in any community-facing feed.
2. Pre-Publication Review
Every public media upload enters the system in a pending state (ReviewStatus = 0). Media is not visible to any other user until an admin explicitly approves it.
Visibility rule (enforced in client code and backend database security rules):
```
visible = (ReviewStatus == approved) AND (Private == 0) AND (Deleted == 0)
```
- Users see their own uploads immediately (local device only) with upload progress indicators.
- Other users never see pending, rejected, or removed content.
- Private media (e.g. personal trip check-in photos) bypasses the review queue entirely and is only visible to the owner.
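The visibility rule above can be expressed as a pure predicate. A minimal sketch, assuming a flattened document shape with the fields named in the rule (the exact schema and field casing are assumptions):

```typescript
// Review states as stored on the media document (values from the lifecycle table).
enum ReviewStatus {
  Pending = 0,
  Approved = 1,
  Rejected = 2,
  UnderReview = 3,
  PermanentlyRemoved = 4,
}

interface MediaDoc {
  reviewStatus: ReviewStatus;
  private: 0 | 1; // 1 = owner-only media; bypasses the review queue
  deleted: 0 | 1; // 1 = soft-deleted by the owner
  userId: string; // uploader
}

// Community visibility: approved AND public AND not soft-deleted.
function isVisibleToCommunity(media: MediaDoc): boolean {
  return (
    media.reviewStatus === ReviewStatus.Approved &&
    media.private === 0 &&
    media.deleted === 0
  );
}

// Owner visibility: owners always see their own uploads
// (pending and private included); everyone else gets the community rule.
function isVisibleToUser(media: MediaDoc, viewerId: string): boolean {
  return media.userId === viewerId || isVisibleToCommunity(media);
}
```

The same predicate is mirrored in the Firestore security rules (section 7), so a client that skips this check still cannot read non-visible documents.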
3. Review Status Lifecycle
Media progresses through a strict state machine with five states:
| Status | Value | Description |
|---|---|---|
| Pending | 0 | Initial state for all new public uploads. Awaiting first admin review. |
| Approved | 1 | Passed admin review. Visible in community feed. |
| Rejected | 2 | Failed admin review. Not visible in feed. |
| Under Review | 3 | Automatically escalated for re-review when community reports exceed a threshold. |
| Permanently Removed | 4 | Terminal state. Cannot re-enter the review queue. |
State transitions:
```
Upload created
      |
      v
  PENDING (0)
      |
      +---> APPROVED (1) --[community reports]--> UNDER REVIEW (3)
      |                                                 |
      +---> REJECTED (2)                  +-------------+-------------+
                                          |                           |
                                  RE-APPROVED (1)       PERMANENTLY REMOVED (4)
                                                              [terminal]
```
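The transition set can be encoded as a small lookup table and checked before any status write. A sketch, with the edges taken from the diagram above (only Permanently Removed is documented as terminal; Rejected has no documented outgoing edges, so none are allowed here):

```typescript
enum ReviewStatus {
  Pending = 0,
  Approved = 1,
  Rejected = 2,
  UnderReview = 3,
  PermanentlyRemoved = 4,
}

// Allowed outgoing edges per state, per the lifecycle diagram.
const ALLOWED: ReadonlyMap<ReviewStatus, readonly ReviewStatus[]> = new Map([
  [ReviewStatus.Pending, [ReviewStatus.Approved, ReviewStatus.Rejected]],
  [ReviewStatus.Approved, [ReviewStatus.UnderReview]],
  [ReviewStatus.Rejected, []],
  [ReviewStatus.UnderReview, [ReviewStatus.Approved, ReviewStatus.PermanentlyRemoved]],
  [ReviewStatus.PermanentlyRemoved, []], // terminal: no outgoing edges
]);

function canTransition(from: ReviewStatus, to: ReviewStatus): boolean {
  return (ALLOWED.get(from) ?? []).includes(to);
}
```

Centralising the table this way keeps admin tooling and Cloud Functions in agreement about which moves are legal.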
4. Admin Review Dashboard
Administrators access a dedicated moderation interface with four tabs:
4.1 New Submissions
- Displays all public uploads with `ReviewStatus = Pending (0)`, awaiting first review.
- Sorted newest-first, limited to 50 items per page.
- Actions: Approve, Approve as Graphic, Reject, Delete.
4.2 Reported Content
- Displays approved media that has received one or more community reports (`ReportCount > 0`).
- Sorted by report count (highest first) to prioritise the most-reported items.
- Shows a report summary with category breakdown (Graphic / Irrelevant / Offensive).
- Actions: Flag as Graphic, Remove, Dismiss Reports.
4.3 Re-Review Queue
- Displays media automatically escalated to `ReviewStatus = Under Review (3)` when report counts exceed a threshold.
- Actions: Re-approve, Re-approve as Graphic, Permanently Remove.
4.4 Moderation Summary
- Aggregated moderation statistics per user.
- Shows users with the most reported media and users who have filed the most reports.
- Supports sorting to identify patterns of misuse.
5. Community Reporting
Any authenticated user can report media they find objectionable.
Report Categories
| Category | Value | Description |
|---|---|---|
| Graphic Content | 0 | Wildlife imagery that may be disturbing (e.g. predation) |
| Irrelevant | 1 | Content unrelated to wildlife sightings |
| Offensive | 2 | Content that violates community standards |
Safeguards
- One report per user per media item: enforced by the Firestore document ID pattern `{reporterUserId}_{mediaId}`. Duplicate reports from the same user are structurally impossible.
- Admin-only category: a fourth category (`adminAction`, value 3) exists for internal audit entries. Community users cannot create reports with this category; this is enforced by Firestore security rules.
- Optional description: users may provide up to 200 characters of additional context.
- Non-blocking UX: Report submission is fire-and-forget. The form closes immediately with a confirmation toast, ensuring the reporting mechanism is frictionless while preventing abuse through the one-report-per-user constraint.
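The one-report-per-user guarantee falls out of the document ID alone, with no read-before-write. A sketch modelling Firestore's create-only semantics with an in-memory store (the class and validation helpers are illustrative, not the real client code):

```typescript
enum ReportCategory {
  Graphic = 0,
  Irrelevant = 1,
  Offensive = 2,
  AdminAction = 3, // admin-only; rejected for community reporters
}

interface MediaReport {
  reporterUserId: string;
  mediaId: string;
  category: ReportCategory;
  description?: string; // optional context, capped at 200 characters
}

// Deterministic ID: exactly one document slot per (reporter, media) pair.
function reportDocId(report: MediaReport): string {
  return `${report.reporterUserId}_${report.mediaId}`;
}

// Create-only store: a second report from the same user targets the
// same ID and is rejected, mirroring a Firestore create on a fixed ID.
class ReportStore {
  private docs = new Map<string, MediaReport>();

  create(report: MediaReport, isAdmin = false): boolean {
    if (report.category === ReportCategory.AdminAction && !isAdmin) return false;
    if ((report.description ?? "").length > 200) return false;
    const id = reportDocId(report);
    if (this.docs.has(id)) return false; // duplicate: structurally impossible
    this.docs.set(id, report);
    return true;
  }
}
```

Admin audit entries avoid colliding with this pattern by appending a timestamp to their IDs (section 8).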
6. Graphic Content Handling
Rather than removing content that is graphic but legitimate (e.g. wildlife predation scenes), the system supports a graphic content flag:
- When an admin flags media as graphic (`Graphic = 1`), the content remains in the feed but is displayed with a blur overlay.
- The overlay shows a warning label and a "Tap to view" prompt.
- Revealing the image is a client-side UI state change only — no data is written to the server, and the content re-blurs when the user navigates away.
- This approach allows nature photography that some users may find sensitive to remain available without being displayed unexpectedly.
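Because revealing is client-only, the overlay reduces to transient view state that is never persisted. A minimal sketch, assuming a per-screen state object that is discarded on navigation (the class shape is an assumption):

```typescript
// Transient, client-only reveal state for graphic media.
// Nothing here is written to the server; discarding the instance on
// navigation is what makes the image re-blur on the next visit.
class GraphicOverlayState {
  private revealed = new Set<string>(); // mediaIds revealed this session

  isBlurred(mediaId: string, isGraphic: boolean): boolean {
    return isGraphic && !this.revealed.has(mediaId);
  }

  // "Tap to view": a pure UI state change, no server write.
  reveal(mediaId: string): void {
    this.revealed.add(mediaId);
  }

  // Called when the user navigates away from the feed.
  reset(): void {
    this.revealed.clear();
  }
}
```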
7. Server-Side Enforcement (Firestore Security Rules)
Moderation integrity is enforced at the database level, not just in client code.
Media Documents
| Rule | Detail |
|---|---|
| No self-approval | Users cannot set their own media to ReviewStatus = approved (1) or permanentlyRemoved (4). |
| Immutable moderation fields | Owners cannot modify Graphic, ReportCount, or LikeCount — these are admin/Cloud Function-managed. |
| Immutable identity fields | UserID, MediaID, ParentType, ParentID, ParkID cannot be changed after creation. |
| Initial state enforcement | New media must start as ReviewStatus = pending (0) or be marked private. Cannot be created as approved. ReportCount and Graphic must be 0. |
| Soft-delete only | Users can set Deleted = 1 but cannot reverse it. Hard deletes are never permitted via client. |
| Upload status progression | Upload status can progress forward (queued → uploading → uploaded) or fail, but cannot revert from uploaded. |
| Admin full access | Admins (verified via request.auth.token.admin == true) have update access to moderation fields. |
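The owner-side update rules in the table reduce to a pure check over a (before, after) document pair, which is also how Firestore rules see an update (`resource.data` vs `request.resource.data`). A sketch under the field names used in this document (the flattened document shape is an assumption):

```typescript
interface MediaDoc {
  userId: string;
  mediaId: string;
  parentType: string;
  parentId: string;
  parkId: string;
  reviewStatus: number; // 0..4, see the lifecycle table
  graphic: 0 | 1;
  reportCount: number;
  likeCount: number;
  deleted: 0 | 1;
}

// Frozen after creation.
const IMMUTABLE_IDENTITY = ["userId", "mediaId", "parentType", "parentId", "parkId"] as const;
// Admin/Cloud Function-managed; owners may never touch these.
const ADMIN_MANAGED = ["graphic", "reportCount", "likeCount"] as const;

// Would this owner-initiated update pass the security rules?
function ownerUpdateAllowed(before: MediaDoc, after: MediaDoc): boolean {
  for (const f of IMMUTABLE_IDENTITY) if (before[f] !== after[f]) return false;
  for (const f of ADMIN_MANAGED) if (before[f] !== after[f]) return false;
  // No self-approval (1) and no self-removal (4).
  if (
    after.reviewStatus !== before.reviewStatus &&
    (after.reviewStatus === 1 || after.reviewStatus === 4)
  ) return false;
  // Soft delete is one-way: once Deleted = 1, owners cannot reverse it.
  if (before.deleted === 1 && after.deleted === 0) return false;
  return true;
}
```

Admins bypass this path entirely; their updates are gated by `request.auth.token.admin == true` instead.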
Media Reports
| Rule | Detail |
|---|---|
| Create | Any authenticated user. ReporterUserID must match own UID. Non-admins cannot use adminAction category. |
| Read | Admins or the original reporter only. |
| Update | Admins only (for setting outcome fields). |
| Delete | Never permitted. |
Firebase Storage
| Rule | Detail |
|---|---|
| File size | Maximum 5 MB per upload. |
| File type | image/* only — no other content types accepted. |
| Path separation | Public pending uploads go to media/public/pending/{userId}/. Approved media is moved to media/public/approved/{userId}/ by Cloud Functions. Private media stored in media/private/{userId}/. |
| Write restrictions | Users can only write to their own pending and private paths. The approved path is Cloud Function-only (no direct client writes). |
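The storage policy in the table can be sketched as a single predicate over the upload path and metadata. The size and content-type checks live in Storage security rules alongside the path check in the real system; this is an illustrative model, not the rules file itself:

```typescript
// Can this user write directly to this Storage path?
// Only the user's own pending and private folders are client-writable;
// media/public/approved/ is reserved for Cloud Functions.
function clientCanWrite(
  path: string,
  userId: string,
  sizeBytes: number,
  contentType: string,
): boolean {
  if (sizeBytes > 5 * 1024 * 1024) return false; // 5 MB cap
  if (!contentType.startsWith("image/")) return false; // image/* only
  return (
    path.startsWith(`media/public/pending/${userId}/`) ||
    path.startsWith(`media/private/${userId}/`)
  );
}
```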
8. Admin Audit Trail
Every admin action on media creates a permanent audit record:
- Document type: a `MediaReport` with `category = adminAction` (3).
- Document ID pattern: `{adminUserId}_{mediaId}_{timestamp}`, which allows multiple admin actions on the same media item to be recorded.
- Fields recorded: admin user ID, timestamp, outcome (re-approved / removed / graphic-flagged), and an optional admin note.
- Immutability: community users cannot create `adminAction` reports (enforced by security rules). Existing reports are never deleted.
When an admin resolves reported content, all pending community reports for that media item are batch-updated with the admin's outcome, user ID, and timestamp.
9. Report Resolution Flow
When an admin acts on reported content, the following occurs in a single batch operation:
- The media item's `ReviewStatus` and/or `Graphic` flag is updated.
- All pending community reports for that media are updated with:
  - `AdminOutcome` (re-approved / removed / graphic-flagged)
  - `AdminUserID` (which admin acted)
  - `AdminTimestamp` (when the action was taken)
- An audit trail entry (an `adminAction` report) is created.
- Local caches are invalidated so the feed reflects the decision immediately.
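The steps above can be sketched as one function that assembles the whole batch before anything is written. This is an in-memory model of the payload; in the real system these writes go through a single Firestore batched write, and the field names follow the conventions used in this document:

```typescript
type AdminOutcome = "re-approved" | "removed" | "graphic-flagged";

interface PendingReport {
  id: string; // {reporterUserId}_{mediaId}
  adminOutcome?: AdminOutcome;
  adminUserId?: string;
  adminTimestamp?: number;
}

interface ResolutionBatch {
  mediaUpdate: { reviewStatus?: number; graphic?: 0 | 1 };
  reportUpdates: PendingReport[];
  auditEntryId: string; // {adminUserId}_{mediaId}_{timestamp}
}

function resolveReports(
  mediaId: string,
  reports: PendingReport[],
  outcome: AdminOutcome,
  adminUserId: string,
  now: number,
): ResolutionBatch {
  // 1. Media update: status and/or graphic flag, depending on the outcome.
  const mediaUpdate =
    outcome === "removed" ? { reviewStatus: 4 } :
    outcome === "re-approved" ? { reviewStatus: 1 } :
    /* graphic-flagged */ { reviewStatus: 1, graphic: 1 as const };

  // 2. Stamp every pending report with the admin's decision.
  const reportUpdates = reports.map((r) => ({
    ...r,
    adminOutcome: outcome,
    adminUserId,
    adminTimestamp: now,
  }));

  // 3. Audit entry ID: the timestamp keeps repeated actions distinct.
  return { mediaUpdate, reportUpdates, auditEntryId: `${adminUserId}_${mediaId}_${now}` };
}
```

Assembling everything first means the batch either commits in full or not at all, so reports can never be stamped without the media decision landing too.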
10. Summary of Moderation Safeguards
| Safeguard | Implementation |
|---|---|
| Pre-publication review | All public media starts as pending; not visible until admin-approved |
| Community reporting | Three report categories; one report per user per item |
| Report escalation | Auto-escalation to re-review queue when threshold exceeded |
| Graphic content overlay | Blurred display with opt-in reveal for sensitive wildlife content |
| Server-side enforcement | Firestore security rules prevent self-approval and field tampering |
| Storage restrictions | 5 MB limit, image-only, path-based access control |
| Audit trail | Every admin action permanently recorded with user, timestamp, outcome |
| Soft-delete only | No client-side hard deletes; permanent removal is admin-only and terminal |
| Admin accountability | Admin actions attributed by user ID and timestamped |