Moderation Architecture

Cryptid implements a three-layer moderation system that enables spam prevention and community governance while preserving end-to-end encryption and user privacy. This allows different levels of control without compromising the protocol’s security guarantees.

Core Principle: Moderation happens at the layer with appropriate visibility and authority.

  • Server admins moderate infrastructure (federation, rate limits) without accessing message content
  • Group moderators moderate communities (member management, content removal) with visibility into their groups
  • Individual users moderate their own experience (blocking, filtering) with complete personal control

This separation ensures that:

  • End-to-end encryption is never compromised
  • Social graphs remain hidden from servers
  • Communities can self-govern
  • Users maintain autonomy

┌─────────────────────────────────────────────────────────────────┐
│ Layer 1: Server-Level (Infrastructure)                          │
│ Authority: Server administrators                                │
│ Scope: Server configuration and federation                      │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Layer 2: Group-Level (Community)                                │
│ Authority: Group admins/moderators                              │
│ Scope: Individual groups and their members                      │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Layer 3: Device-Level (Personal)                                │
│ Authority: Individual users                                     │
│ Scope: Personal experience and preferences                      │
└─────────────────────────────────────────────────────────────────┘

Server administrators control:

  • Federation policies: Which servers to federate with

  • Rate limiting: Progressive trust and traffic shaping

  • Registration requirements: Proof-of-work, invites, time delays, etc.

  • Server resources: Storage limits, message retention duration

Example admin API calls:

POST /admin/v1/federation/block
{
  "server_domain": "spam-factory.com",
  "reason": "Excessive spam reports from users"
}

POST /admin/v1/trust/verify
{
  "device_id": "aabbccddee...",
  "reason": "Known community member"
}

Note: Verification is by device_id (not address) because:

  • Device can have multiple addresses
  • Verification applies to all addresses for a device
  • device_id is the permanent identity
GET /admin/v1/metrics
{
  "total_devices": 50,
  "messages_last_24h": 4500,
  "spam_reports_last_24h": 12,
  "federation_peers": 25
}
Server administrators cannot:

  • Read message content (always encrypted)

  • Know who messages whom

  • See group membership (managed client-side)

  • Track social relationships (no contact lists)

  • Ban users from specific groups (not in server scope)

Server-level anti-abuse mechanisms:

  1. Federation Blocklists: Servers control which other servers they federate with.

  2. Progressive Trust Rate Limiting: New devices start with 10 messages/hour and earn up to 300 messages/hour over 24 hours (see the sketch after this list).

  3. Registration Anti-Spam: Servers can require one or more anti-spam measures:

    • Proof-of-Work (for capable devices)

    • Invite Code (for trusted onboarding)
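
Returning to progressive trust (mechanism 2 above): a minimal sketch of how a server might compute the current limit, assuming a simple linear ramp from 10 to 300 messages/hour over the first 24 hours. The exact curve, and the shortcut for admin-verified devices, are illustrative assumptions, not protocol requirements.

const BASE_LIMIT: u64 = 10;  // messages/hour for brand-new devices
const MAX_LIMIT: u64 = 300;  // messages/hour after the full ramp
const RAMP_SECS: u64 = 24 * 60 * 60;

/// Hypothetical linear trust ramp; admin-verified devices skip it entirely.
fn rate_limit_for(device_age_secs: u64, admin_verified: bool) -> u64 {
    if admin_verified {
        // e.g., devices verified via POST /admin/v1/trust/verify
        return MAX_LIMIT;
    }
    let age = device_age_secs.min(RAMP_SECS);
    BASE_LIMIT + (MAX_LIMIT - BASE_LIMIT) * age / RAMP_SECS
}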

Group moderators control:

  • Member management: Add/remove members

  • Content moderation: Delete inappropriate messages

  • Role assignment: Promote/demote members

  • Group policies: Set permission thresholds

Groups use Discord-style roles with individual permission flags.

/// Individual permission bits (64-bit, allows 64 distinct permissions)
bitflags! {
    pub struct Permissions: u64 {
        // === Message Permissions ===
        const SEND_MESSAGES = 1 << 0;
        const ADD_REACTIONS = 1 << 1;
        const ATTACH_FILES = 1 << 2;
        const DELETE_OTHERS_MESSAGES = 1 << 3;
        const PIN_MESSAGES = 1 << 4;
        const MANAGE_MESSAGES = 1 << 5;

        // === Member Management ===
        const INVITE_MEMBERS = 1 << 10;
        const REMOVE_MEMBERS = 1 << 11;
        const BAN_MEMBERS = 1 << 12;
        const MANAGE_MEMBERS = 1 << 13;

        // === Role Management ===
        const MANAGE_ROLES = 1 << 20;
        const ASSIGN_ROLES = 1 << 21;

        // === Group Management ===
        const CHANGE_GROUP_NAME = 1 << 30;
        const CHANGE_GROUP_ICON = 1 << 31;
        const CHANGE_GROUP_SETTINGS = 1 << 32;
        const VIEW_AUDIT_LOG = 1 << 33;
        const MANAGE_GROUP = 1 << 34;

        // === Special Permissions ===
        const ADMINISTRATOR = 1 << 63; // Bypass all checks
    }
}
impl Permissions {
    /// Default permissions for new members
    pub fn default_member() -> Self {
        Self::SEND_MESSAGES
            | Self::ADD_REACTIONS
    }

    /// Trusted member preset
    pub fn trusted_member() -> Self {
        Self::default_member()
            | Self::ATTACH_FILES
    }

    /// Moderator preset
    pub fn moderator() -> Self {
        Self::trusted_member()
            | Self::DELETE_OTHERS_MESSAGES
            | Self::PIN_MESSAGES
            | Self::MANAGE_MESSAGES
            | Self::REMOVE_MEMBERS
            | Self::VIEW_AUDIT_LOG
    }

    /// Admin preset
    pub fn admin() -> Self {
        Self::moderator()
            | Self::BAN_MEMBERS
            | Self::MANAGE_MEMBERS
            | Self::MANAGE_ROLES
            | Self::ASSIGN_ROLES
            | Self::CHANGE_GROUP_NAME
            | Self::CHANGE_GROUP_ICON
            | Self::CHANGE_GROUP_SETTINGS
    }

    /// Founder (all permissions)
    pub fn founder() -> Self {
        Self::ADMINISTRATOR
    }
}
struct Role {
    role_id: [u8; 32],
    name: String,
    permissions: Permissions,

    // Display
    color: Option<Color>, // Role color in UI

    // Hierarchy
    position: u32, // Higher = more authority

    // System roles (can't be deleted)
    is_system: bool, // e.g., @everyone, Founder

    // Metadata
    created_at: u64,
    created_by: [u8; 32],
}

struct Color {
    r: u8,
    g: u8,
    b: u8,
}

struct GroupMetadata {
    group_id: [u8; 32],

    // Roles
    roles: HashMap<RoleId, Role>,
    default_role: RoleId, // @everyone role

    // Members (can have multiple roles)
    members: HashMap<DeviceId, GroupMember>,

    // Founder
    founder_device_id: [u8; 32],

    // Group settings
    moderation_policy: GroupModerationPolicy,
}

struct GroupMember {
    device_id: [u8; 32],

    // Multiple roles
    roles: Vec<RoleId>,

    // Computed permissions (union of all role permissions)
    effective_permissions: Permissions,

    // Metadata
    joined_at: u64,
    invited_by: Option<[u8; 32]>,
}
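
The effective_permissions field can be recomputed whenever a member's roles change. A sketch of that union, reusing the types above; starting from the default (@everyone) role is an assumption:

impl GroupMember {
    /// Recompute effective permissions as the union of all assigned roles,
    /// starting from the default (@everyone) role.
    fn recompute_permissions(
        &mut self,
        roles: &HashMap<RoleId, Role>,
        default_role: &RoleId,
    ) {
        let mut perms = roles
            .get(default_role)
            .map(|r| r.permissions)
            .unwrap_or(Permissions::empty());
        for role_id in &self.roles {
            if let Some(role) = roles.get(role_id) {
                perms |= role.permissions; // union, never subtraction
            }
        }
        self.effective_permissions = perms;
    }
}
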
impl Group {
    fn has_permission(&self, device_id: &[u8; 32], perm: Permissions) -> bool {
        if let Some(member) = self.members.get(device_id) {
            // ADMINISTRATOR bypasses all individual permission checks
            member.effective_permissions.contains(Permissions::ADMINISTRATOR)
                || member.effective_permissions.contains(perm)
        } else {
            false
        }
    }

    fn can_delete_others_messages(&self, device_id: &[u8; 32]) -> bool {
        self.has_permission(device_id, Permissions::DELETE_OTHERS_MESSAGES)
    }

    fn can_kick_members(&self, device_id: &[u8; 32]) -> bool {
        self.has_permission(device_id, Permissions::REMOVE_MEMBERS)
    }
}

When a group is created, these roles are automatically created:

impl Group {
    fn create_default_roles(founder_device_id: &[u8; 32]) -> HashMap<RoleId, Role> {
        let mut roles = HashMap::new();
        // Note: role ids (everyone_id, mod_id, ...) and the metadata fields
        // (role_id, created_at, created_by) are elided for brevity.

        // @everyone (default for all members)
        roles.insert(everyone_id, Role {
            name: "@everyone".to_string(),
            permissions: Permissions::default_member(),
            color: None,
            position: 0,
            is_system: true,
        });

        // Moderator
        roles.insert(mod_id, Role {
            name: "Moderator".to_string(),
            permissions: Permissions::moderator(),
            color: Some(Color { r: 52, g: 152, b: 219 }), // Blue
            position: 10,
            is_system: false,
        });

        // Admin
        roles.insert(admin_id, Role {
            name: "Admin".to_string(),
            permissions: Permissions::admin(),
            color: Some(Color { r: 231, g: 76, b: 60 }), // Red
            position: 20,
            is_system: false,
        });

        // Founder
        roles.insert(founder_role_id, Role {
            name: "Founder".to_string(),
            permissions: Permissions::founder(),
            color: Some(Color { r: 241, g: 196, b: 15 }), // Gold
            position: 100,
            is_system: true,
        });

        roles
    }
}
struct GroupModerationPolicy {
    // Group settings
    max_members: Option<u32>,
    require_approval_for_joins: bool,
    allow_member_invites: bool,

    // Deletion policies
    log_deletions: bool,
    show_deleted_by: bool,
    show_deleted_reason: bool,
    keep_tombstones: bool,
    tombstone_expiry: Option<Duration>,
}
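
Only log_deletions has a documented default (true, see below); the rest is left to the client. A sketch of plausible defaults, with every other value an illustrative assumption:

impl Default for GroupModerationPolicy {
    fn default() -> Self {
        Self {
            max_members: None,                 // unlimited
            require_approval_for_joins: false,
            allow_member_invites: true,
            log_deletions: true,               // the one documented default
            show_deleted_by: true,
            show_deleted_reason: true,
            keep_tombstones: true,
            tombstone_expiry: None,            // tombstones never expire
        }
    }
}
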
  • Group Moderation Actions: All group moderation happens via MLS SystemOperation messages.

  • Content Moderation: Message deletion works through cooperative deletion. Well-behaved clients agree to delete messages when requested.

  • Moderation Bots: Automated moderation can be implemented as bots granted elevated permissions (e.g., the Moderator role).

Bot capabilities:

  • Fast spam detection (~20-50ms response time)
  • Pattern matching (regex blocklists)
  • Behavioral analysis (message frequency, content similarity)
  • Automatic deletion of detected spam
  • Escalation to human moderators

Bot limitations:

  • Bots must be group members to read messages (content is E2EE, so there is no server-side scanning)
  • Small window where spam might be visible before deletion (~100ms)
  • Modified clients can ignore bot deletions locally
  • Bots cannot prevent modified clients from viewing spam

Recommended bot setup:

struct ModBot {
    device_id: [u8; 32],
    roles: Vec<RoleId>,       // Needs to be assigned the moderator role
    permissions: Permissions, // Should compute to Permissions::moderator()

    // Pre-compiled patterns for speed
    spam_patterns: Vec<Regex>,

    // Behavioral thresholds
    max_messages_per_minute: u32, // e.g., 10
    max_repeated_content: u32,    // e.g., 3

    // Response actions
    auto_delete: bool,    // Delete spam immediately
    auto_warn: bool,      // Warn user first
    auto_kick_after: u32, // Kick after N violations
}
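
A sketch of how such a bot might evaluate an already-decrypted message against its patterns and thresholds. The verdict enum and the recent-messages input are hypothetical glue, not part of the protocol:

enum BotVerdict {
    Allow,
    Delete { reason: String },
    Escalate, // hand off to a human moderator
}

impl ModBot {
    /// `recent_from_sender`: this sender's messages from the last minute.
    fn evaluate(&self, recent_from_sender: &[String], content: &str) -> BotVerdict {
        // 1. Pre-compiled regex blocklist (fast path)
        for pattern in &self.spam_patterns {
            if pattern.is_match(content) {
                return BotVerdict::Delete { reason: "Matched spam pattern".into() };
            }
        }

        // 2. Behavioral checks: flooding and repeated content
        if recent_from_sender.len() as u32 > self.max_messages_per_minute {
            return BotVerdict::Delete { reason: "Message flood".into() };
        }
        let repeats = recent_from_sender.iter().filter(|m| m.as_str() == content).count();
        if repeats as u32 >= self.max_repeated_content {
            return BotVerdict::Delete { reason: "Repeated content".into() };
        }

        // 3. Borderline cases go to human moderators
        if repeats > 1 {
            return BotVerdict::Escalate;
        }
        BotVerdict::Allow
    }
}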

Both users and moderators can delete messages:

enum SystemOperation {
    // ...existing operations
    DeleteMessage {
        // Which message to delete
        message_id: [u8; 32],
        // device_id of deleter
        deleted_by: [u8; 32],
        // When deletion occurred
        timestamp: u64,
        // Optional reason
        reason: Option<String>,
    },
}

Users can delete their own messages:

impl CryptidClient {
    async fn delete_own_message(
        &mut self,
        group_id: &[u8; 32],
        message_id: &[u8; 32],
    ) -> Result<()> {
        // 1. Verify ownership
        let message = self.message_store.get(message_id)?;
        if message.sender_device_id != self.device.device_id {
            return Err(anyhow!("Can only delete your own messages"));
        }

        // 2. Create deletion operation
        let deletion = SystemOperation::DeleteMessage {
            message_id: *message_id,
            deleted_by: self.device.device_id,
            timestamp: current_unix_timestamp(),
            reason: Some(String::from("User requested deletion")),
        };

        // 3. Send to group (MLS encrypted)
        let encrypted = self.encrypt_system_operation(&deletion)?;
        self.send_to_group(group_id, encrypted).await?;

        // 4. Delete locally immediately
        self.message_store.delete_message(message_id)?;

        Ok(())
    }
}

UI Display:

[10:30] Alice: Hey everyone!
[10:31] Alice: [Message deleted by sender at 10:35]
[10:32] Bob: Thanks!

Moderators can delete any message:

impl CryptidClient {
    async fn delete_message_as_moderator(
        &mut self,
        group_id: &[u8; 32],
        message_id: &[u8; 32],
        reason: String,
    ) -> Result<()> {
        let group = self.groups.get(group_id)?;

        // 1. Verify moderator permission
        if !group.has_permission(&self.device.device_id, Permissions::DELETE_OTHERS_MESSAGES) {
            return Err(anyhow!("Insufficient permissions to delete messages"));
        }

        // 2. Create deletion operation (carries the moderator's reason)
        let deletion = SystemOperation::DeleteMessage {
            message_id: *message_id,
            deleted_by: self.device.device_id,
            timestamp: current_unix_timestamp(),
            reason: Some(reason.clone()),
        };

        // 3. Send to group (MLS encrypted)
        let encrypted = self.encrypt_system_operation(&deletion)?;
        self.send_to_group(group_id, encrypted).await?;

        // 4. Delete locally and log
        self.message_store.delete_message(message_id)?;
        self.log_moderation_action(ModAction::DeleteMessage {
            message_id: *message_id,
            moderator: self.device.device_id,
            reason,
        })?;

        Ok(())
    }
}

UI Display:

[10:30] Alice: Hey everyone!
[10:31] Bob: [Message deleted by moderator at 10:35]
Deleted by @Charlie (moderator) at 10:35
Reason: Spam
[10:32] Charlie: Cleaned up spam
struct GroupModerationPolicy {
    // ...existing fields from above

    // Deletion logging
    log_deletions: bool,       // Default: true
    show_deleted_by: bool,     // Show who deleted
    show_deleted_reason: bool, // Show reason

    // Tombstone behavior
    keep_tombstones: bool,              // Show "[Message deleted]"
    tombstone_expiry: Option<Duration>, // Remove tombstone after time
}
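
A sketch of how a client could render a tombstone according to this policy. The DeletionRecord type is hypothetical local bookkeeping for a processed DeleteMessage operation:

/// Hypothetical local record of a processed DeleteMessage operation.
struct DeletionRecord {
    deleted_by_name: String, // resolved display name of the deleter
    reason: Option<String>,
}

fn render_tombstone(policy: &GroupModerationPolicy, del: &DeletionRecord) -> Option<String> {
    if !policy.keep_tombstones {
        return None; // the message simply disappears
    }
    let mut line = String::from("[Message deleted");
    if policy.show_deleted_by {
        line.push_str(&format!(" by {}", del.deleted_by_name));
    }
    line.push(']');
    if policy.show_deleted_reason {
        if let Some(reason) = &del.reason {
            line.push_str(&format!("\nReason: {}", reason));
        }
    }
    Some(line)
}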

Individual users control their own experience:

  • Personal blocklists: Block specific devices

    • Clients can keep a list of blocked groups and devices

  • Group filtering: Mute or leave groups

  • Auto-blocking: Rules-based blocking (see the sketch after this list)

    • Clients can automatically block devices based on rules like spam thresholds or invalid signatures

  • Shared blocklists: Subscribe to trusted curators

    • Users can subscribe to blocklists from trusted contacts
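
A sketch of the rules-based auto-blocking mentioned above; the rule set and thresholds are illustrative, not protocol-defined:

/// Illustrative client-side auto-block rules.
struct AutoBlockRules {
    max_spam_flags: u32,            // block after this many local spam flags
    block_invalid_signatures: bool, // block devices sending bad signatures
}

impl AutoBlockRules {
    fn should_block(&self, spam_flags: u32, saw_invalid_signature: bool) -> bool {
        spam_flags >= self.max_spam_flags
            || (self.block_invalid_signatures && saw_invalid_signature)
    }
}
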
Example: Spam Handling

Server level:

  1. New device spammer@spam.com registers

  2. Gets 10 msg/hour rate limit (new device limit)

  3. Sends 10 spam messages in the first hour

  4. Multiple users report spam

  5. After X reports, the device’s rate limit is set to 0 (blocked)

Group level:

  1. Spammer joins a public group

  2. Sends spam message

  3. Moderator (with DELETE_OTHERS_MESSAGES permission) deletes message

  4. After X deletions, moderator removes spammer from group

Device level:

  1. User receives spam message

  2. User blocks spammer device

  3. User reports spam (contributes to server metrics)

  4. Future messages from spammer are auto-ignored


Example: Federation Block

Server level:

  1. Server A admin sees spam from spam-factory.com

  2. Admin blocks federation from spam-factory.com

  3. All messages from that server will be rejected

  4. Users on Server A are protected

Group level: Not affected, because groups don’t know about federation

Device level: Not affected, because users never see the rejected messages


Example: Group Moderator Abuse

Server level: Cannot intervene, because the server doesn’t know about groups

Group level:

  1. Moderator abuses power (deletes valid messages)

  2. Group admin (with MANAGE_ROLES permission) removes the moderator role from the member

  3. Or the group founder (with ADMINISTRATOR permission) removes the moderator from the group

Device level:

  1. User disagrees with moderation

  2. User leaves the group

Blocking Group Members: Special Considerations

Blocking in Cryptid has special constraints when the blocked device is a member of your groups, due to MLS group consensus requirements.

MLS requires all group members to agree on group state (membership, keys, epochs). If you completely ignore another member’s operations, your group state becomes desynchronized and the group breaks.

Example of what breaks:

Let’s say Moderator Bob removes spammer Charlie from the group.

If Alice had completely blocked Bob, her client would ignore the SystemOperation message that removes Charlie, which leads to:

  • Alice still thinking that Charlie is in the group

  • Alice trying to encrypt messages to Charlie

  • Alice’s messages failing to decrypt for everyone else

  • Group state breaking for Alice

Cryptid implements two levels of blocking:

enum BlockLevel {
    /// Block messages but process system operations (group state changes)
    ContentOnly,
    /// Block EVERYTHING (forces you to leave all shared groups)
    Complete,
}

struct BlockEntry {
    device_id: DeviceId, // Block by device_id, not address
    block_level: BlockLevel,
    reason: String,
    blocked_at: u64,
}

Blocking by device_id is critical:

  • Prevents bypass via address rotation
  • Blocked user can’t evade by creating new delivery addresses
  • Block persists across all addresses for a device

Address rotation does NOT bypass blocks. MLS reveals sender’s device_id in decrypted messages, enabling block enforcement regardless of which delivery address was used.

ContentOnly Block (Default for Group Members)

With a ContentOnly block:

  • Their regular messages are hidden from you

  • System operations (removals, key rotations) are still processed

  • You stay synchronized with the group

  • You benefit from moderation actions

Use cases:

  • Annoying group members

  • Moderators/Admins you disagree with but can tolerate

  • Staying in a group while avoiding specific members

Complete Block (Requires Leaving Shared Groups)

With a Complete block:

  • All messages ignored

  • All operations ignored

  • Cannot stay in groups with the blocked individual

Use cases:

  • Severe harassment

  • Complete avoidance necessary

  • The person is not in any of your groups
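
A sketch of how a client could enforce the leave-shared-groups requirement when applying a Complete block. The shared_groups_with and leave_group helpers are assumptions; the moderator restriction described in the next section would also run here:

impl Client {
    async fn complete_block(&mut self, device_id: DeviceId, reason: String) -> Result<()> {
        // A Complete block cannot coexist with shared membership: ignoring the
        // member's MLS operations would desynchronize group state (see above).
        for group_id in self.shared_groups_with(&device_id) {
            self.leave_group(&group_id).await?;
        }
        self.blocklist.blocks.insert(device_id, BlockEntry {
            device_id,
            block_level: BlockLevel::Complete,
            reason,
            blocked_at: current_unix_timestamp(),
        });
        Ok(())
    }
}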

Special Case: Blocking Moderators / Admins

Moderators and Admins have elevated permissions in groups. You cannot completely block a moderator while staying in their group.

// Inside Client::block_device, before recording the block entry:
if self.is_moderator_in_shared_group(&device) && level == BlockLevel::Complete {
    return Err(anyhow!(
        "Cannot completely block a moderator. Use ContentOnly block or leave the group."
    ));
}

fn is_moderator_in_shared_group(&self, device_id: &DeviceId) -> bool {
    // Check all groups this client is in
    for group in &self.groups {
        // Check if device_id has any moderator-level permissions
        if let Some(member) = group.members.get(device_id) {
            // Consider someone a moderator if they can:
            // - Delete others' messages, OR
            // - Remove members, OR
            // - Manage roles
            let mod_perms = Permissions::DELETE_OTHERS_MESSAGES
                | Permissions::REMOVE_MEMBERS
                | Permissions::MANAGE_ROLES;
            if member.effective_permissions.intersects(mod_perms) {
                return true;
            }
        }
    }
    false
}

The rationale here is that moderators’ system operations (member removals, message deletions) are critical for group health. Ignoring them breaks the group for everyone.

impl Client {
    async fn display_message(&self, msg: &IncomingMessage) -> Result<()> {
        // Decrypt MLS message to get sender's device_id
        let decrypted = self.decrypt_mls_message(&msg.mls_ciphertext)?;
        let sender_device_id = decrypted.sender_device_id; // From MLS, not from the address

        // Check blocklist by device_id
        if let Some(block) = self.blocklist.blocks.get(&sender_device_id) {
            if block.block_level == BlockLevel::ContentOnly {
                match msg.message_type {
                    MessageType::Regular => {
                        // Don't display, but keep in local storage
                        self.store_hidden_message(msg).await;
                        return Ok(());
                    }
                    MessageType::SystemOperation(_) => {
                        // Process for group state, optionally hide details
                        self.process_system_operation_silently(msg).await;
                        self.ui.show_system_message(
                            "A moderation action occurred",
                            None, // Don't show moderator name
                        );
                        return Ok(());
                    }
                }
            } else {
                // Complete block: ignore fully
                return Ok(());
            }
        }

        // Normal display
        self.ui.show_message(msg);
        Ok(())
    }
}

Critical implementation detail:

Blocking must check the decrypted message’s sender device_id (from MLS), not the delivery address (from routing).

impl Client {
    async fn unblock_device(&mut self, device_id: &DeviceId) -> Result<()> {
        self.blocklist.blocks.remove(device_id);

        // Optionally: reveal previously hidden messages
        let hidden = self.get_hidden_messages_from(device_id).await?;
        self.ui.offer_to_show_hidden_messages(hidden.len());

        Ok(())
    }
}
Why complete blocking is constrained:

  • MLS requires that all members agree on group state

  • Ignoring member removals breaks key agreement

  • Ignoring epoch changes breaks encryption

  • Split-brain scenarios destroy groups

What ContentOnly blocking provides:

  • Can hide annoying members’ messages

  • Still benefit from moderation (spam removal, harassment handling)

  • Clear choice: tolerate with hidden messages OR leave group entirely

  • Personal control over experience

Privacy properties of blocking:

  • Blocking is completely client-side

  • No server notification (blocked person doesn’t know)

  • No protocol messages sent (purely logical decision)

  • Blocking is reversible

End-to-end encryption provides maximum privacy but inherently limits server-side moderation capabilities.

Here’s what’s actually possible:

Client-Side Pre-Filtering:

  • Official clients can block messages before sending
  • Pattern matching (regex blocklists)
  • Link filtering, rate limiting
  • Immediate user feedback (“Cannot send: blocked pattern”)
  • Users with modified clients can bypass
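
A sketch of such a pre-send filter; the outgoing_blocklist field is a hypothetical client-side list of compiled patterns:

impl CryptidClient {
    /// Runs before encryption and sending. Only spec-compliant clients
    /// enforce this; modified clients can simply skip it.
    fn pre_send_filter(&self, content: &str) -> Result<()> {
        for pattern in &self.outgoing_blocklist {
            if pattern.is_match(content) {
                return Err(anyhow!("Cannot send: blocked pattern"));
            }
        }
        Ok(())
    }
}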

Fast Bot Detection:

  • Automated bots can detect spam (~50ms)
  • Quick deletion before most users see it
  • Pattern matching, behavioral analysis
  • Small window where message might be visible (20-100ms)

Server-Side Enforcement:

  • Rate limiting by device_id (cannot be bypassed)
  • Membership checks (server validates)
  • Message deletion from server storage (new joiners won’t see)
  • Progressive trust (new devices limited to 10 msg/hour)

Social Accountability:

  • Vouch systems (voucher penalized if invitee spams; see the sketch after this list)
  • Reputation scores (cross-group behavior tracking)
  • Community governance (members vote/moderate)
  • Visible moderation logs (transparency)
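
A sketch of how a vouch system could tie an invitee's behavior back to the voucher. The record shape and penalty weights are assumptions:

/// Hypothetical vouch record: if the invitee spams, the voucher pays too.
struct VouchRecord {
    voucher: DeviceId,
    invitee: DeviceId,
    vouched_at: u64,
}

fn apply_spam_penalty(reputation: &mut HashMap<DeviceId, i32>, vouch: &VouchRecord) {
    *reputation.entry(vouch.invitee).or_insert(0) -= 10; // the spammer
    *reputation.entry(vouch.voucher).or_insert(0) -= 3;  // the voucher, less severely
}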

And what is not possible:

Client Verification:

  • Cannot reliably detect modified clients
  • Self-reported hashes are not trustworthy (clients can lie)
  • No way to force client behavior (users control their software)

Perfect Spam Prevention:

  • Cannot prevent modified clients from viewing spam
  • Cannot prevent users of modified clients from screenshotting before deletion
  • Cannot prevent sophisticated client mimicry

Forced Deletion:

  • Cannot cryptographically prove deletion occurred
  • Cannot force modified clients to delete messages
  • Cannot remove from user backups
  • Cannot unread messages that have already been read
These limitations exist for fundamental reasons:

  1. E2EE Fundamental Property: Clients must be able to decrypt messages (otherwise E2EE doesn’t work). Once decrypted, clients control the data.

  2. Open Source Philosophy: Users can modify their clients. Verification is impossible without trusting the developers.

  3. Privacy First: Alternative (non-E2EE) would let servers see/filter everything, but destroys privacy for all users.

  • For most users (using spec-compliant well-behaved clients):

    • Client-side pre-filtering blocks spam before sending
    • Fast bots catch remaining spam within ~100ms
    • Cooperative deletion removes from group history
    • Experience is effectively spam-free
  • For sophisticated attackers (modified clients):

    • Can see spam briefly before deletion
    • Cannot spam the broader group (fast bot detection)
    • Cannot evade blocks (based on device_id, not address)
    • Eventually removed from group (behavioral detection)
  • Result:

    • Spam protection works for vast majority of users
    • Modified clients exist but have limited impact

All E2EE systems share these fundamental limitations; they come with the paradigm.