Moderation Architecture
Cryptid implements a three-layer moderation system that enables spam prevention and community governance while preserving end-to-end encryption and user privacy. This allows different levels of control without compromising the protocol’s security guarantees.
Design Philosophy
Core Principle: Moderation happens at the layer with appropriate visibility and authority.
- Server admins moderate infrastructure (federation, rate limits) without accessing message content
- Group moderators moderate communities (member management, content removal) with visibility into their groups
- Individual users moderate their own experience (blocking, filtering) with complete personal control
This separation ensures that:
- End-to-end encryption is never compromised
- Social graphs remain hidden from servers
- Communities can self-govern
- Users maintain autonomy
Three Layers of Moderation
```
┌─────────────────────────────────────────────────────────────────┐
│ Layer 1: Server-Level (Infrastructure)                          │
│ Authority: Server administrators                                │
│ Scope: Server configuration and federation                      │
└─────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────┐
│ Layer 2: Group-Level (Community)                                │
│ Authority: Group admins/moderators                              │
│ Scope: Individual groups and their members                      │
└─────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────┐
│ Layer 3: Device-Level (Personal)                                │
│ Authority: Individual users                                     │
│ Scope: Personal experience and preferences                      │
└─────────────────────────────────────────────────────────────────┘
```
Layer 1: Server-Level Moderation
Responsibilities
Server administrators control:
- Federation policies: Which servers to federate with
- Rate limiting: Progressive trust and traffic shaping
- Registration requirements: Proof-of-work, invites, time delays, etc.
- Server resources: Storage limits, message retention duration
What Server Admins CAN Do:
1. Block a server from federation

```
POST /admin/v1/federation/block
{
  "server_domain": "spam-factory.com",
  "reason": "Excessive spam reports from users"
}
```
2. Verify a device for instant trust

```
POST /admin/v1/trust/verify
{
  "device_id": "aabbccddee...",
  "reason": "Known community member"
}
```
Note: Verification is by device_id (not address) because:
- A device can have multiple addresses
- Verification applies to all addresses for a device
- device_id is the permanent identity
3. View aggregate server metrics

```
GET /admin/v1/metrics
{
  "total_devices": 50,
  "messages_last_24h": 4500,
  "spam_reports_last_24h": 12,
  "federation_peers": 25
}
```
What Server Admins CANNOT Do:
- Read message content (always encrypted)
- Know who messages whom
- See group membership (managed client-side)
- Track social relationships (no contact lists)
- Ban users from specific groups (not in server scope)
Server-Level Tools
- Federation Blocklists: Servers control which other servers they federate with.
- Progressive Trust Rate Limiting: New devices start with 10 messages/hour and earn up to 300 messages/hour over 24 hours.
- Registration Anti-Spam: Servers can require one or more anti-spam measures:
  - Proof-of-Work (for capable devices)
  - Invite Code (for trusted onboarding)
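The progressive-trust ramp described above can be sketched as a simple schedule. The text fixes only the endpoints (10 msg/hour for new devices, 300 msg/hour after 24 hours); the linear curve and the function name `hourly_message_limit` below are illustrative assumptions, not the specified implementation.

```rust
/// Hypothetical progressive-trust schedule: a linear ramp from the
/// new-device limit (10 msg/hour) to the full limit (300 msg/hour)
/// over the first 24 hours. Only the endpoints come from the spec;
/// the shape of the curve in between is an assumption.
fn hourly_message_limit(device_age_hours: u64) -> u64 {
    const NEW_DEVICE_LIMIT: u64 = 10;
    const FULL_LIMIT: u64 = 300;
    const RAMP_HOURS: u64 = 24;

    let h = device_age_hours.min(RAMP_HOURS);
    NEW_DEVICE_LIMIT + (FULL_LIMIT - NEW_DEVICE_LIMIT) * h / RAMP_HOURS
}

fn main() {
    assert_eq!(hourly_message_limit(0), 10);    // brand-new device
    assert_eq!(hourly_message_limit(12), 155);  // halfway through the ramp
    assert_eq!(hourly_message_limit(24), 300);  // fully trusted
    assert_eq!(hourly_message_limit(100), 300); // capped after 24h
    println!("ok");
}
```

A server could equally use a step function or tie the ramp to behavior (no spam reports) rather than age alone.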
Layer 2: Group-Level Moderation
Responsibilities
Group moderators control:
- Member management: Add/remove members
- Content moderation: Delete inappropriate messages
- Role assignment: Promote/demote members
- Group policies: Set permission thresholds
Role-Based Permission System
Groups use Discord-style roles with individual permission flags.
Permission Flags
```rust
/// Individual permission bits (64-bit, allows 64 distinct permissions)
bitflags! {
    pub struct Permissions: u64 {
        // === Message Permissions ===
        const SEND_MESSAGES          = 1 << 0;
        const ADD_REACTIONS          = 1 << 1;
        const ATTACH_FILES           = 1 << 2;
        const DELETE_OTHERS_MESSAGES = 1 << 3;
        const PIN_MESSAGES           = 1 << 4;
        const MANAGE_MESSAGES        = 1 << 5;

        // === Member Management ===
        const INVITE_MEMBERS = 1 << 10;
        const REMOVE_MEMBERS = 1 << 11;
        const BAN_MEMBERS    = 1 << 12;
        const MANAGE_MEMBERS = 1 << 13;

        // === Role Management ===
        const MANAGE_ROLES = 1 << 20;
        const ASSIGN_ROLES = 1 << 21;

        // === Group Management ===
        const CHANGE_GROUP_NAME     = 1 << 30;
        const CHANGE_GROUP_ICON     = 1 << 31;
        const CHANGE_GROUP_SETTINGS = 1 << 32;
        const VIEW_AUDIT_LOG        = 1 << 33;
        const MANAGE_GROUP          = 1 << 34;

        // === Special Permissions ===
        const ADMINISTRATOR = 1 << 63; // Bypass all checks
    }
}
```
```rust
impl Permissions {
    /// Default permissions for new members
    pub fn default_member() -> Self {
        Self::SEND_MESSAGES | Self::ADD_REACTIONS
    }

    /// Trusted member preset
    pub fn trusted_member() -> Self {
        Self::default_member() | Self::ATTACH_FILES
    }

    /// Moderator preset
    pub fn moderator() -> Self {
        Self::trusted_member()
            | Self::DELETE_OTHERS_MESSAGES
            | Self::PIN_MESSAGES
            | Self::MANAGE_MESSAGES
            | Self::REMOVE_MEMBERS
            | Self::VIEW_AUDIT_LOG
    }

    /// Admin preset
    pub fn admin() -> Self {
        Self::moderator()
            | Self::BAN_MEMBERS
            | Self::MANAGE_MEMBERS
            | Self::MANAGE_ROLES
            | Self::ASSIGN_ROLES
            | Self::CHANGE_GROUP_NAME
            | Self::CHANGE_GROUP_ICON
            | Self::CHANGE_GROUP_SETTINGS
    }

    /// Founder (all permissions)
    pub fn founder() -> Self {
        Self::ADMINISTRATOR
    }
}
```
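To show how these permission bits compose without depending on the `bitflags` crate, here is a standalone sketch using raw `u64` masks. The bit positions follow the definitions above; the free functions are simplified stand-ins for the preset methods and the ADMINISTRATOR bypass.

```rust
// Standalone sketch of the permission-bit scheme with raw u64 masks
// (the spec's definitions use the `bitflags` crate; bit positions match).
const SEND_MESSAGES: u64 = 1 << 0;
const ADD_REACTIONS: u64 = 1 << 1;
const ATTACH_FILES: u64 = 1 << 2;
const DELETE_OTHERS_MESSAGES: u64 = 1 << 3;
const ADMINISTRATOR: u64 = 1 << 63;

fn default_member() -> u64 { SEND_MESSAGES | ADD_REACTIONS }
fn trusted_member() -> u64 { default_member() | ATTACH_FILES }

fn has_permission(perms: u64, wanted: u64) -> bool {
    // ADMINISTRATOR bypasses all checks; otherwise all wanted bits must be set
    (perms & ADMINISTRATOR) != 0 || (perms & wanted) == wanted
}

fn main() {
    assert!(has_permission(default_member(), SEND_MESSAGES));
    assert!(!has_permission(default_member(), ATTACH_FILES)); // not granted by default
    assert!(has_permission(trusted_member(), ATTACH_FILES));
    // The founder's ADMINISTRATOR bit grants everything:
    assert!(has_permission(ADMINISTRATOR, DELETE_OTHERS_MESSAGES));
    println!("ok");
}
```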
Role System
```rust
struct Role {
    role_id: [u8; 32],
    name: String,
    permissions: Permissions,

    // Display
    color: Option<Color>, // Role color in UI

    // Hierarchy
    position: u32, // Higher = more authority

    // System roles (can't be deleted)
    is_system: bool, // e.g., @everyone, Founder

    // Metadata
    created_at: u64,
    created_by: [u8; 32],
}

struct Color {
    r: u8,
    g: u8,
    b: u8,
}
```
Group Structure
```rust
struct GroupMetadata {
    group_id: [u8; 32],

    // Roles
    roles: HashMap<RoleId, Role>,
    default_role: RoleId, // @everyone role

    // Members (can have multiple roles)
    members: HashMap<DeviceId, GroupMember>,

    // Founder
    founder_device_id: [u8; 32],

    // Group settings
    moderation_policy: GroupModerationPolicy,
}

struct GroupMember {
    device_id: [u8; 32],

    // Multiple roles
    roles: Vec<RoleId>,

    // Computed permissions (union of all role permissions)
    effective_permissions: Permissions,

    // Metadata
    joined_at: u64,
    invited_by: Option<[u8; 32]>,
}
```
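The `effective_permissions` field above is the bitwise union of every role a member holds. A minimal, self-contained sketch of that computation, with raw `u64` masks standing in for the `Permissions` type and a shortened `RoleId`:

```rust
use std::collections::HashMap;

type RoleId = u8; // simplified stand-in for [u8; 32]

/// Effective permissions are the union of the permission masks of
/// every role the member holds (unknown role ids are skipped).
fn effective_permissions(role_perms: &HashMap<RoleId, u64>, member_roles: &[RoleId]) -> u64 {
    member_roles
        .iter()
        .filter_map(|r| role_perms.get(r))
        .fold(0, |acc, p| acc | p)
}

fn main() {
    let mut roles = HashMap::new();
    roles.insert(0, 0b0011); // @everyone: two low bits (illustrative)
    roles.insert(1, 0b1100); // Moderator: two higher bits (illustrative)

    // A member with both roles gets the union of both masks.
    assert_eq!(effective_permissions(&roles, &[0, 1]), 0b1111);
    assert_eq!(effective_permissions(&roles, &[0]), 0b0011);
    println!("ok");
}
```

In a real client this value would be recomputed whenever roles are assigned or a role's permissions change.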
Permission Checking
```rust
impl Group {
    fn has_permission(&self, device_id: &[u8; 32], perm: Permissions) -> bool {
        if let Some(member) = self.members.get(device_id) {
            member.effective_permissions.contains(perm)
        } else {
            false
        }
    }

    fn can_delete_others_messages(&self, device_id: &[u8; 32]) -> bool {
        self.has_permission(device_id, Permissions::DELETE_OTHERS_MESSAGES)
    }

    fn can_kick_members(&self, device_id: &[u8; 32]) -> bool {
        self.has_permission(device_id, Permissions::REMOVE_MEMBERS)
    }
}
```
Default Roles
When a group is created, these roles are automatically created:
```rust
impl Group {
    fn create_default_roles(founder_device_id: &[u8; 32]) -> HashMap<RoleId, Role> {
        let mut roles = HashMap::new();
        // (role_id, created_at, created_by fields omitted for brevity)

        // @everyone (default for all members)
        roles.insert(everyone_id, Role {
            name: "@everyone".to_string(),
            permissions: Permissions::default_member(),
            color: None,
            position: 0,
            is_system: true,
        });

        // Moderator
        roles.insert(mod_id, Role {
            name: "Moderator".to_string(),
            permissions: Permissions::moderator(),
            color: Some(Color { r: 52, g: 152, b: 219 }), // Blue
            position: 10,
            is_system: false,
        });

        // Admin
        roles.insert(admin_id, Role {
            name: "Admin".to_string(),
            permissions: Permissions::admin(),
            color: Some(Color { r: 231, g: 76, b: 60 }), // Red
            position: 20,
            is_system: false,
        });

        // Founder
        roles.insert(founder_role_id, Role {
            name: "Founder".to_string(),
            permissions: Permissions::founder(),
            color: Some(Color { r: 241, g: 196, b: 15 }), // Gold
            position: 100,
            is_system: true,
        });

        roles
    }
}
```
Group Moderation Policy
```rust
struct GroupModerationPolicy {
    // Group settings
    max_members: Option<u32>,
    require_approval_for_joins: bool,
    allow_member_invites: bool,

    // Deletion policies
    log_deletions: bool,
    show_deleted_by: bool,
    show_deleted_reason: bool,
    keep_tombstones: bool,
    tombstone_expiry: Option<Duration>,
}
```
- Group Moderation Actions: All group moderation happens via MLS SystemOperation messages.
- Content Moderation: Message deletion works through cooperative deletion: well-behaved clients agree to delete messages when requested.
- Moderation Bots: Automated moderation can be implemented as bots with elevated permissions.
Bot capabilities:
- Fast spam detection (~20-50ms response time)
- Pattern matching (regex blocklists)
- Behavioral analysis (message frequency, content similarity)
- Automatic deletion of detected spam
- Escalation to human moderators
Bot limitations:
- Bots see encrypted messages (must be in group)
- Small window where spam might be visible before deletion (~100ms)
- Modified clients can ignore bot deletions locally
- Bots cannot prevent modified clients from viewing spam
Recommended bot setup:
```rust
struct ModBot {
    device_id: [u8; 32],
    roles: Vec<RoleId>,       // Needs the moderator role assigned
    permissions: Permissions, // Should compute to Permissions::moderator()

    // Pre-compiled patterns for speed
    spam_patterns: Vec<Regex>,

    // Behavioral thresholds
    max_messages_per_minute: u32, // e.g. 10
    max_repeated_content: u32,    // e.g. 3

    // Response actions
    auto_delete: bool,    // Delete spam immediately
    auto_warn: bool,      // Warn user first
    auto_kick_after: u32, // Kick after N violations
}
```
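The behavioral thresholds above (message frequency, repeated content) can be sketched as a small per-sender tracker. This is an illustrative sketch, not the bot implementation: `SpamTracker` and its hard-coded limits (10 messages per sliding minute, 3 consecutive repeats) mirror the example values in the struct, and timestamps are plain seconds.

```rust
use std::collections::VecDeque;

/// Flags a sender who exceeds the frequency threshold or repeats the
/// same content too many times in a row. A real bot would keep one
/// tracker per device_id and also run the regex pattern checks.
struct SpamTracker {
    timestamps: VecDeque<u64>, // send times within the sliding window
    last_content: String,
    repeat_count: u32,
}

impl SpamTracker {
    fn new() -> Self {
        Self { timestamps: VecDeque::new(), last_content: String::new(), repeat_count: 0 }
    }

    fn is_spam(&mut self, now: u64, content: &str) -> bool {
        // Maintain a sliding one-minute window of send timestamps
        self.timestamps.push_back(now);
        while self.timestamps.front().map_or(false, |&t| now - t >= 60) {
            self.timestamps.pop_front();
        }
        // Count consecutive identical messages
        if content == self.last_content {
            self.repeat_count += 1;
        } else {
            self.last_content = content.to_string();
            self.repeat_count = 1;
        }
        self.timestamps.len() > 10 || self.repeat_count > 3
    }
}

fn main() {
    let mut t = SpamTracker::new();
    for i in 0..10 {
        assert!(!t.is_spam(i, &format!("msg {i}"))); // under both thresholds
    }
    assert!(t.is_spam(10, "msg 10")); // 11th message within one minute
    println!("ok");
}
```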
Message Deletion
Both users and moderators can delete messages:
User Self-Deletion
```rust
enum SystemOperation {
    // ...existing operations
    DeleteMessage {
        // Which message to delete
        message_id: [u8; 32],
        // device_id of deleter
        deleted_by: [u8; 32],
        // When deletion occurred
        timestamp: u64,
        // Optional reason
        reason: Option<String>,
    },
}
```
Users can delete their own messages:
```rust
impl CryptidClient {
    async fn delete_own_message(
        &mut self,
        group_id: &[u8; 32],
        message_id: &[u8; 32],
    ) -> Result<()> {
        // 1. Verify ownership
        let message = self.message_store.get(message_id)?;
        if message.sender_device_id != self.device.device_id {
            return Err(anyhow!("Can only delete your own messages"));
        }

        // 2. Create deletion operation
        let deletion = SystemOperation::DeleteMessage {
            message_id: *message_id,
            deleted_by: self.device.device_id,
            timestamp: current_unix_timestamp(),
            reason: Some(String::from("User requested deletion")),
        };

        // 3. Send to group (MLS encrypted)
        let encrypted = self.encrypt_system_operation(&deletion)?;
        self.send_to_group(group_id, encrypted).await?;

        // 4. Delete locally immediately
        self.message_store.delete_message(message_id)?;

        Ok(())
    }
}
```
UI Display:
```
[10:30] Alice: Hey everyone!
[10:31] Alice: [Message deleted by sender at 10:35]
[10:32] Bob: Thanks!
```
Moderator Deletion
Moderators can delete any message:
```rust
impl CryptidClient {
    async fn delete_message_as_moderator(
        &mut self,
        group_id: &[u8; 32],
        message_id: &[u8; 32],
        reason: String,
    ) -> Result<()> {
        let group = self.groups.get(group_id)?;

        // 1. Verify moderator permission
        if !group.has_permission(&self.device.device_id, Permissions::DELETE_OTHERS_MESSAGES) {
            return Err(anyhow!("Insufficient permissions to delete messages"));
        }

        // 2. Create deletion operation
        let deletion = SystemOperation::DeleteMessage {
            message_id: *message_id,
            deleted_by: self.device.device_id,
            timestamp: current_unix_timestamp(),
            reason: Some(reason.clone()),
        };

        // 3. Send to group (MLS encrypted)
        let encrypted = self.encrypt_system_operation(&deletion)?;
        self.send_to_group(group_id, encrypted).await?;

        // 4. Delete locally and log
        self.message_store.delete_message(message_id)?;
        self.log_moderation_action(ModAction::DeleteMessage {
            message_id: *message_id,
            moderator: self.device.device_id,
            reason,
        })?;

        Ok(())
    }
}
```
UI Display:
```
[10:30] Alice: Hey everyone!
[10:31] Bob: [Message deleted by moderator at 10:35]
        Deleted by @Charlie (moderator) at 10:35
        Reason: Spam
[10:32] Charlie: Cleaned up spam
```
Deletion Policies
```rust
struct GroupModerationPolicy {
    // ...existing fields from above

    // Deletion logging
    log_deletions: bool,       // Default: true
    show_deleted_by: bool,     // Show who deleted
    show_deleted_reason: bool, // Show reason

    // Tombstone behavior
    keep_tombstones: bool,              // Show "[Message deleted]"
    tombstone_expiry: Option<Duration>, // Remove tombstone after time
}
```
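A small sketch of how a client might apply the tombstone policy fields above. The function name is hypothetical, and `Duration` is simplified to whole seconds; only the policy semantics (keep flag, optional expiry) come from the struct.

```rust
/// Whether a "[Message deleted]" tombstone should still render, given
/// the group's policy. `keep` maps to keep_tombstones and `expiry_secs`
/// to tombstone_expiry (None = tombstone persists indefinitely).
fn show_tombstone(keep: bool, expiry_secs: Option<u64>, deleted_at: u64, now: u64) -> bool {
    if !keep {
        return false; // policy hides tombstones entirely
    }
    match expiry_secs {
        Some(e) => now.saturating_sub(deleted_at) < e, // still within the window
        None => true,
    }
}

fn main() {
    assert!(show_tombstone(true, None, 100, 10_000));       // no expiry: always shown
    assert!(show_tombstone(true, Some(3600), 100, 200));    // within the hour
    assert!(!show_tombstone(true, Some(3600), 100, 4_000)); // expired
    assert!(!show_tombstone(false, None, 100, 200));        // tombstones disabled
    println!("ok");
}
```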
Layer 3: Device-Level Moderation
Responsibilities
Individual users control their own experience:
- Personal blocklists: Block specific devices. Clients can keep a list of blocked groups and devices.
- Group filtering: Mute or leave groups.
- Auto-blocking: Rules-based blocking. Clients can automatically block devices based on rules like spam thresholds or invalid signatures.
- Shared blocklists: Subscribe to trusted curators. Users can subscribe to blocklists from trusted contacts.
Interaction Between Layers
Example: Spam Handling
Layer 1 (Server)
1. New device spammer@spam.com registers
2. Gets a 10 msg/hour rate limit (new-device limit)
3. Sends 10 spam messages in the first hour
4. Multiple users report spam
5. After x reports, the device’s rate limit is set to 0 (blocked)
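The report-threshold step above can be sketched as follows. The threshold “x” is unspecified in the text, so `REPORT_THRESHOLD` and the function name are purely illustrative.

```rust
/// Illustrative report-threshold rule: once a device accumulates
/// REPORT_THRESHOLD spam reports, its rate limit drops to 0 (blocked).
/// The actual threshold is a server policy decision, not fixed here.
const REPORT_THRESHOLD: u32 = 5;

fn rate_limit_after_reports(base_limit: u64, reports: u32) -> u64 {
    if reports >= REPORT_THRESHOLD { 0 } else { base_limit }
}

fn main() {
    assert_eq!(rate_limit_after_reports(10, 0), 10); // clean device keeps its limit
    assert_eq!(rate_limit_after_reports(10, 4), 10); // under the threshold
    assert_eq!(rate_limit_after_reports(10, 5), 0);  // blocked
    println!("ok");
}
```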
Layer 2 (Group)
1. Spammer joins a public group
2. Sends a spam message
3. Moderator (with DELETE_OTHERS_MESSAGES permission) deletes the message
4. After x deletions, the moderator removes the spammer from the group
Layer 3 (Personal)
1. User receives a spam message
2. User blocks the spammer’s device
3. User reports spam (contributes to server metrics)
4. Future messages from the spammer are auto-ignored
Example: Federation Block
Layer 1 (Server)
1. Server A’s admin sees spam from spam-factory.com
2. Admin blocks federation from spam-factory.com
3. All messages from that server are rejected
4. Users on Server A are protected
Layer 2 (Group)
Not affected, because groups don’t know about federation.
Layer 3 (Personal)
Not affected, because users don’t see rejected messages.
Example: Group Moderator Abuse
Layer 1 (Server)
Cannot intervene, because the server doesn’t know about groups.
Layer 2 (Group)
1. A moderator abuses power (deletes valid messages)
2. The group admin (with MANAGE_ROLES permission) removes the moderator role from the member
3. Or the group founder (with ADMINISTRATOR permission) removes the moderator from the group
Layer 3 (Personal)
1. User disagrees with the moderation
2. User leaves the group
Blocking Group Members: Special Considerations
Blocking in Cryptid has special constraints when the blocked device is a member of your groups, due to MLS group consensus requirements.
The Problem
MLS requires all group members to agree on group state (membership, keys, epochs). If you completely ignore another member’s operations, your group state becomes desynchronized and the group breaks.
Example of what breaks:
Let’s say Moderator Bob removes spammer Charlie from the group.
If Alice had completely blocked Bob, the SystemOperation message that removes Charlie won’t be delivered to Alice, which leads to:
- Alice still thinking that Charlie is in the group
- Alice trying to encrypt messages to Charlie
- Alice’s messages failing to decrypt for everyone else
- Group state breaking for Alice
The Solution
Cryptid implements two levels of blocking:
```rust
enum BlockLevel {
    /// Block messages but process system operations (group state changes)
    ContentOnly,

    /// Block EVERYTHING (forces you to leave all shared groups)
    Complete,
}

struct BlockEntry {
    device_id: DeviceId, // Block by device_id, not address
    block_level: BlockLevel,
    reason: String,
    blocked_at: u64,
}
```
Blocking by device_id is critical:
- Prevents bypass via address rotation
- Blocked user can’t evade by creating new delivery addresses
- Block persists across all addresses for a device
Address rotation does NOT bypass blocks. MLS reveals sender’s device_id in decrypted messages, enabling block enforcement regardless of which delivery address was used.
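A minimal sketch of why keying the blocklist by device_id defeats address rotation: the lookup never sees the delivery address, only the MLS-revealed sender. Type widths and method names here are simplified assumptions for illustration.

```rust
use std::collections::HashMap;

type DeviceId = [u8; 4]; // shortened stand-in for [u8; 32]

enum BlockLevel {
    ContentOnly,
    Complete,
}

/// Blocklist keyed by device_id: rotating delivery addresses does not
/// change the MLS-revealed sender device_id, so the block holds.
struct Blocklist {
    blocks: HashMap<DeviceId, BlockLevel>,
}

impl Blocklist {
    fn should_hide_content(&self, sender: &DeviceId) -> bool {
        self.blocks.contains_key(sender) // both levels hide regular messages
    }

    fn should_process_system_ops(&self, sender: &DeviceId) -> bool {
        // ContentOnly still processes group-state changes
        !matches!(self.blocks.get(sender), Some(&BlockLevel::Complete))
    }
}

fn main() {
    let mut blocks = HashMap::new();
    blocks.insert([1, 2, 3, 4], BlockLevel::ContentOnly);
    let list = Blocklist { blocks };

    let spammer: DeviceId = [1, 2, 3, 4];
    assert!(list.should_hide_content(&spammer));
    assert!(list.should_process_system_ops(&spammer)); // stays in sync with the group
    println!("ok");
}
```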
ContentOnly Block (Default for Group Members)
- Their regular messages are hidden from you
- System operations (removals, key rotations) are still processed
- You stay synchronized with the group
- You benefit from moderation actions
Use cases:
- Annoying group members
- Moderators/admins you disagree with but can tolerate
- Staying in a group while avoiding specific members
Complete Block (Requires Leaving Shared Groups)
- All messages ignored
- All operations ignored
- Cannot stay in groups with the blocked individual
Use cases:
- Severe harassment
- Complete avoidance is necessary
- The person is not in any of your groups
Special Case: Blocking Moderators / Admins
Moderators and Admins have elevated permissions in groups. You cannot completely block a moderator while staying in their group.
```rust
if self.is_moderator_in_shared_group(&device) && level == BlockLevel::Complete {
    return Err(anyhow!(
        "Cannot completely block a moderator. Use ContentOnly block or leave the group."
    ));
}

fn is_moderator_in_shared_group(&self, device_id: &DeviceId) -> bool {
    // Check all groups this client is in
    for group in &self.groups {
        // Check if device_id has any moderator-level permissions
        if let Some(member) = group.members.get(device_id) {
            // Consider someone a moderator if they can:
            // - Delete others' messages, OR
            // - Remove members, OR
            // - Manage roles
            let mod_perms = Permissions::DELETE_OTHERS_MESSAGES
                | Permissions::REMOVE_MEMBERS
                | Permissions::MANAGE_ROLES;

            if member.effective_permissions.intersects(mod_perms) {
                return true;
            }
        }
    }
    false
}
```
The rationale here is that moderators’ system operations (member removals, message deletions) are critical for group health. Ignoring them breaks the group for everyone.
Implementation Guidelines
Message Filtering
```rust
impl Client {
    async fn display_message(&self, msg: &IncomingMessage) -> Result<()> {
        // Decrypt the MLS message to get the sender's device_id
        let decrypted = self.decrypt_mls_message(&msg.mls_ciphertext)?;
        let sender_device_id = decrypted.sender_device_id; // From MLS, not from the address

        // Check the blocklist by device_id
        if let Some(block) = self.blocklist.blocks.get(&sender_device_id) {
            if block.block_level == BlockLevel::ContentOnly {
                match msg.message_type {
                    MessageType::Regular => {
                        // Don't display, but keep in local storage
                        self.store_hidden_message(msg).await;
                        return Ok(());
                    }
                    MessageType::SystemOperation(_) => {
                        // Process for group state, optionally hide details
                        self.process_system_operation_silently(msg).await;
                        self.ui.show_system_message(
                            "A moderation action occurred",
                            None, // Don't show the moderator's name
                        );
                        return Ok(());
                    }
                }
            } else {
                // Complete block: ignore fully
                return Ok(());
            }
        }

        // Normal display
        self.ui.show_message(msg);
        Ok(())
    }
}
```
Critical implementation detail:
Blocking must check the decrypted message’s sender device_id (from MLS), not the delivery address (from routing).
Unblocking
```rust
impl Client {
    async fn unblock_device(&mut self, device_id: &DeviceId) -> Result<()> {
        self.blocklist.blocks.remove(device_id);

        // Optionally: reveal previously hidden messages
        let hidden = self.get_hidden_messages_from(device_id).await?;
        self.ui.offer_to_show_hidden_messages(hidden.len());

        Ok(())
    }
}
```
Design Rationale
Technical Necessity
- MLS requires that all members agree on group state
- Ignoring member removals breaks key agreement
- Ignoring epoch changes breaks encryption
- Split-brain scenarios destroy groups
User Benefits
- Can hide annoying members’ messages
- Still benefit from moderation (spam removal, harassment handling)
- Clear choice: tolerate with hidden messages or leave the group entirely
- Personal control over experience
Privacy Preservation
- Blocking is completely client-side
- No server notification (the blocked person doesn’t know)
- No protocol messages sent (a purely local decision)
- Blocking is reversible
E2EE Moderation Limitations
End-to-end encryption provides maximum privacy but inherently limits server-side moderation capabilities. Here’s what’s actually possible:
What Works
Client-Side Pre-Filtering:
- Official clients can block messages before sending
- Pattern matching (regex blocklists)
- Link filtering, rate limiting
- Immediate user feedback (“Cannot send: blocked pattern”)
- Users with modified clients can bypass this filtering
Fast Bot Detection:
- Automated bots can detect spam (~50ms)
- Quick deletion before most users see it
- Pattern matching, behavioral analysis
- Small window where message might be visible (20-100ms)
Server-Side Enforcement:
- Rate limiting by device_id (cannot be bypassed)
- Membership checks (server validates)
- Message deletion from server storage (new joiners won’t see)
- Progressive trust (new devices limited to 10 msg/hour)
Social Accountability:
- Vouch systems (voucher penalized if invitee spams)
- Reputation scores (cross-group behavior tracking)
- Community governance (members vote/moderate)
- Visible moderation logs (transparency)
What Doesn’t Work
Client Verification:
- Cannot reliably detect modified clients
- Self-reported hashes are not trustworthy (clients can lie)
- No way to force client behavior (users control their software)
Perfect Spam Prevention:
- Cannot prevent modified clients from viewing spam
- Cannot prevent users of modified clients from screenshotting before deletion
- Cannot prevent sophisticated client mimicry
Forced Deletion:
- Cannot cryptographically prove deletion occurred
- Cannot force modified clients to delete messages
- Cannot remove from user backups
- Cannot unread messages that have already been read
Why These Limitations Exist
- E2EE Fundamental Property: Clients must be able to decrypt messages (otherwise E2EE doesn’t work). Once decrypted, clients control the data.
- Open Source Philosophy: Users can modify their clients. Verification is impossible without trusting the developers.
- Privacy First: The non-E2EE alternative would let servers see and filter everything, but it destroys privacy for all users.
Realistic Protection Strategy
- For most users (using spec-compliant, well-behaved clients):
  - Client-side pre-filtering blocks spam before sending
  - Fast bots catch remaining spam within ~100ms
  - Cooperative deletion removes it from group history
  - The experience is effectively spam-free
- For sophisticated attackers (modified clients):
  - Can see spam briefly before deletion
  - Cannot spam the broader group (fast bot detection)
  - Cannot evade blocks (based on device_id, not address)
  - Eventually removed from the group (behavioral detection)
- Result:
  - Spam protection works for the vast majority of users
  - Modified clients exist but have limited impact
All E2EE systems share these fundamental limitations; they come with the paradigm.