Lioth Unified Whitepaper

Security and Anti-Fraud Mechanisms

The protocol is designed to produce high-quality human work through PHK finality under bounded-risk assumptions. Security combines tiered verification, validator review, integrity signals, operational access controls where present, and due-process enforcement. Different deployment phases can activate narrower or broader subsets of that design without changing the PHK trust boundary.

Threat Model

The protocol considers these threats:

  • automated or AI-generated submissions intended to earn rewards without genuine human work;
  • Sybil farming with many low-cost identities to increase throughput or bypass caps;
  • collusion between contributors and validators to approve low-quality or fraudulent work;
  • validator corruption or laziness, approving bad work to maximize rewards;
  • spam and throughput attacks designed to overload validation capacity;
  • dataset manipulation attempts such as bias injection, duplication, poisoning, or low-effort volume; and
  • requester abuse, including frivolous disputes or adversarial rubrics intended to deny payouts.

The protocol does not claim to prevent all attacks. The security objective is bounded risk:

  • keep SAR below tier targets when assistance is disallowed;
  • keep Audit Overturn Rate below tier targets as a proxy for verification reliability; and
  • make adversarial participation operationally weak and economically unattractive through caps, routing restrictions, audits, and due-process penalties where enabled.
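The bounded-risk objective above can be sketched as a simple comparison of measured rates against tier targets. The metric names, threshold values, and function signature below are illustrative assumptions, not protocol constants; SAR and Audit Overturn Rate are used as the section defines them, as proxies only:

```python
from dataclasses import dataclass

@dataclass
class TierTargets:
    """Hypothetical per-tier risk ceilings; real values come from campaign config."""
    max_sar: float            # SAR ceiling for the tier
    max_overturn_rate: float  # Audit Overturn Rate ceiling for the tier

def within_bounded_risk(flagged: int, overturned: int, audited: int,
                        submissions: int, targets: TierTargets) -> bool:
    """Return True when both proxy metrics sit at or below their tier targets."""
    if submissions == 0 or audited == 0:
        return True  # no measurable evidence of risk yet
    sar = flagged / submissions
    overturn_rate = overturned / audited
    return sar <= targets.max_sar and overturn_rate <= targets.max_overturn_rate
```

For example, 3 flagged submissions out of 200 and 1 overturned audit out of 50 would sit inside a 5% target on both metrics.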

Campaigns must declare their assistance policy and tier configuration so verification strength and risk posture are explicit.


Implementation note: Current live bootstrap/testnet security behavior is documented in App Status. The sections below define the broader security model and trust boundaries; not every module is active in every deployment phase.


Claim Policy

The protocol does not claim to prove "human-only" origin. It claims to commit rule configurations by hash, enforce verification workflows and due process, and publish bounded-risk metrics at campaign and tier level.

Security Primitives

Lioth relies on five security primitives:

  • finalized-outcome-based reputation and, where used, separate operational access controls;
  • throughput caps, settlement delays, and launch restrictions for new or weak identities;
  • multi-party verification with audits and challenge paths;
  • off-chain integrity signals anchored to finalized outcomes; and
  • due-process penalties where enabled after PHK finality.

Contributor-Level Defenses

Contributors are the main source of outputs, so the system prioritizes early detection and bounded impact:

  • Fresh-identity supervision: new identities operate under stricter constraints such as lower caps, lower routing priority, higher settlement delay, and narrower launch access.
  • Similarity and duplication defenses: near-duplicate work is treated as a primary signal of automation or farming.
  • Behavioral anomaly detection: timing, overlap, and workflow patterns can trigger increased scrutiny.
  • Economic deterrence after proof: where campaigns enable it, confirmed fraud can still lead to reputation loss, payout loss, access restrictions, and optional slashing.

Flagging alone does not finalize truth or apply confirmed-fraud penalties.
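Fresh-identity supervision can be pictured as limits that relax only as an identity accumulates finalized history. The thresholds and limit values here are hypothetical; real values are runtime policy:

```python
from dataclasses import dataclass

@dataclass
class IdentityLimits:
    daily_cap: int               # throughput cap on claimed tasks
    settlement_delay_hours: int  # delay before payouts settle
    launch_access: bool          # eligibility for launch-restricted campaigns

def limits_for(finalized_tasks: int) -> IdentityLimits:
    """Illustrative tightening for new or weak identities; numbers are assumptions."""
    if finalized_tasks < 10:    # fresh identity: strictest supervision
        return IdentityLimits(daily_cap=5, settlement_delay_hours=72, launch_access=False)
    if finalized_tasks < 100:   # identity still establishing history
        return IdentityLimits(daily_cap=25, settlement_delay_hours=24, launch_access=False)
    return IdentityLimits(daily_cap=100, settlement_delay_hours=4, launch_access=True)
```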

Cluster Confidence and Operational Restriction Levels

Implementations may use cluster-confidence scoring to summarize overlap posture across accounts. This can produce operational restriction levels that affect routing priority, audit pressure, validator availability, or whether an account is available for automatic routing. These levels are not themselves canonical proof of collusion or fraud. They are provisional integrity posture used to reduce risk while PHK, audits, and due process continue to govern truth.

This distinction is critical:

  • Provisional integrity posture: cluster confidence, anomaly flags, similarity signals, overlap metrics.
  • Confirmed violation: fraud/collusion/negligence established through PHK-linked audit/arbitration or other finalized enforcement path.

The former can justify operational restrictions. The latter is what can justify finalized penalties.
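The separation between provisional posture and confirmed violation can be sketched as a mapping from a cluster-confidence score to an operational restriction level that affects routing only. The level names and thresholds below are illustrative:

```python
from enum import Enum

class RestrictionLevel(Enum):
    NONE = 0        # normal routing
    WATCH = 1       # elevated audit pressure, lower routing priority
    RESTRICTED = 2  # removed from automatic routing

def restriction_for(cluster_confidence: float) -> RestrictionLevel:
    """Map a provisional overlap score in [0, 1] to an operational posture.
    This never finalizes a fraud verdict; PHK, audits, and due process govern truth."""
    if cluster_confidence >= 0.9:
        return RestrictionLevel.RESTRICTED
    if cluster_confidence >= 0.6:
        return RestrictionLevel.WATCH
    return RestrictionLevel.NONE
```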

Validator Integrity and Anti-Collusion

Validators are a critical trust bottleneck, so the protocol is designed to make validator power measurable and validator misbehavior punishable.

  • Validator review remains canonical: final outcomes are produced by validator decisions under PHK, not by protocol services or operator tooling.
  • Assignment integrity: validators are routed under the active assignment policy, with pairing limits, conflict controls, and integrity restrictions where applicable.
  • Multi-party review: each task is reviewed by the number of validators required by the relevant tier/runtime rules.
  • Audits and challenge paths: some tasks may be audited regardless of outcome, and additional audits may trigger on disagreement or risk posture.
  • Validator incentives: validators are rewarded for decisions that remain aligned with finalized outcomes, not for volume alone.

Bootstrap Validator Supply and Force Assignment

Bootstrap/testnet deployments may use audited validator approval or assignment overrides to keep validation capacity usable during cold start or incident response. These controls can affect who is available to review, including cases where automatic routing would otherwise block a validator, but they do not let operators insert approve/reject outcomes. The assigned validator still produces the PHK decision.
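The constraint that overrides affect availability but never outcomes can be pictured as an event record that names the actor, reason, and target while carrying no decision field at all. The field names are hypothetical:

```python
from dataclasses import dataclass, field
import time

@dataclass(frozen=True)
class ForceAssignmentEvent:
    """Auditable operational override: logs who routed whom and why.
    Deliberately has no outcome field; the assigned validator still
    produces the PHK decision."""
    actor: str         # operator who issued the override
    reason: str        # why automatic routing was bypassed
    task_id: str
    validator_id: str  # validator who will review the task
    timestamp: float = field(default_factory=time.time)
```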

What Can Be Independently Verified

The protocol can be audited for steering and collusion resistance by verifying:

  • that validator quorums and assignment events match the active routing policy and recorded workflow events;
  • that repeated pairing limits were enforced where applicable;
  • that any manual or force assignment was explicitly logged with actor, reason, and target;
  • that final outcomes match quorum thresholds and finality receipts; and
  • that operational safety actions, if any, did not rewrite canonical PHK truth.

This does not claim perfect censorship resistance of off-chain routing. It provides auditability and enforceable constraints under committed rules.
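One of the checks above, matching final outcomes against finality receipts, can be sketched as a hash comparison. The canonical-JSON receipt encoding assumed here is an illustration, not the protocol's actual receipt format:

```python
import hashlib
import json

def verify_finality_receipt(outcome: dict, receipt_hash: str) -> bool:
    """Recompute the hash of a recorded outcome and compare it to the receipt.
    Assumes (for illustration) receipts commit to canonical sorted JSON."""
    canonical = json.dumps(outcome, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest() == receipt_hash
```

An auditor who holds the receipt hash can detect any after-the-fact rewrite of the outcome record, since a changed decision no longer hashes to the receipt.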

Campaign-Level Controls Against Spam and Overload

Spam is handled primarily through throughput control and bounded access. Campaigns and runtime policy can enforce:

  • per-identity caps and concurrency limits;
  • operational access ceilings for task types or launch access;
  • queue throttling and adaptive routing;
  • higher audit pressure when abnormal patterns appear; and
  • safety-overlay holds where active incidents require operational containment.

These controls prevent validation bottlenecks and preserve usability.
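Per-identity caps and concurrency limits can be sketched as a gate consulted before a task claim is accepted. The limit values are campaign policy; everything here is an illustrative assumption:

```python
from collections import defaultdict

class ThroughputGate:
    """Illustrative per-identity daily cap and concurrency limit."""
    def __init__(self, daily_cap: int, max_concurrent: int):
        self.daily_cap = daily_cap
        self.max_concurrent = max_concurrent
        self.claimed_today: dict[str, int] = defaultdict(int)
        self.in_flight: dict[str, int] = defaultdict(int)

    def try_claim(self, identity: str) -> bool:
        """Accept a claim only while both the daily cap and concurrency limit hold."""
        if self.claimed_today[identity] >= self.daily_cap:
            return False
        if self.in_flight[identity] >= self.max_concurrent:
            return False
        self.claimed_today[identity] += 1
        self.in_flight[identity] += 1
        return True

    def release(self, identity: str) -> None:
        """Mark a claimed task as finished, freeing a concurrency slot."""
        self.in_flight[identity] = max(0, self.in_flight[identity] - 1)
```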

Dataset and Deliverable Integrity

Both protocol output types require integrity guarantees. Mechanisms include:

  • deduplication and novelty checks for dataset campaigns;
  • anomaly detection for label distributions, entropy, cohort drift, and suspicious validator agreement;
  • schema compliance and quality reports for dataset artifacts;
  • artifact hashing, manifest generation, and delivery receipt metadata for requester-facing delivery; and
  • provenance anchoring and licensing references without exposing content.

Anomalies do not automatically invalidate data. They trigger deeper audits, more conservative routing, or further review.
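A near-duplicate check for dataset campaigns can be sketched with a naive word-level Jaccard similarity. Real deployments would use stronger fingerprinting (e.g. MinHash or shingling); the threshold here is illustrative:

```python
def near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Flag two submissions as near-duplicates when their word sets overlap heavily.
    A deliberately simple stand-in for production-grade similarity detection."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a or not set_b:
        return False
    jaccard = len(set_a & set_b) / len(set_a | set_b)
    return jaccard >= threshold
```

Consistent with the section above, a positive result would raise audit pressure rather than invalidate the submission outright.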

Requester Abuse Resistance

Anti-fraud must apply to requesters as well. This includes:

  • bounded dispute windows;
  • requester dispute deposits where enabled;
  • arbitration quorums for subjective deliverables; and
  • settlement finality rules so payouts and reputation updates occur only after resolution.

Operational safety controls may also hold settlement or release paths during incidents without changing PHK truth.
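The settlement finality rule above can be sketched as a predicate that releases payouts only after the dispute window closes with no open disputes and no active safety hold. Parameter names and the time representation are illustrative:

```python
def can_settle(finalized_at: float, now: float, dispute_window_s: float,
               open_disputes: int, safety_hold: bool) -> bool:
    """Release settlement only after the bounded dispute window has elapsed,
    every dispute is resolved, and no safety-overlay hold is active."""
    window_closed = now - finalized_at >= dispute_window_s
    return window_closed and open_disputes == 0 and not safety_hold
```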