Request AI • Record results • Build rules

Onchain AI decisions, verified by the network.

ColossAI Web3 Network turns AI output into a usable onchain signal: create a request in a smart contract, execute inference offchain, and read the fulfillment onchain — ready for real product rules.

AI is already part of product logic — but in Web3 it often remains an external service with no shared source of truth.

Smart contracts are isolated and cannot fetch external data directly, so any onchain ↔ offchain link requires infrastructure.

ColossAI Web3 Network introduces a simple primitive: request an AI decision and read a recorded onchain result you can use in rules.

Onchain request / onchain fulfillment
Multiple providers per task
Simple integration path for apps
Scales with nodes and tasks
Auditable, recorded, traceable outputs
FAQ teaser
Latency? Costs? Privacy? Providers? We’ll answer the practical questions — without hype.
Execution loop
request → event → fulfillment
Smart contract emits a request event.
Worker executes inference offchain and signs output.
Fulfillment is written onchain for verification and reuse.
Example request params
{
  "task": "vision.classify",
  "src_uri": "https://…/image.jpg",
  "params": { "top_k": 5 }
}
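On the builder side, a payload like the one above can be modeled and validated before it is submitted onchain. This is a minimal sketch; the type and function names are illustrative assumptions, not part of any published SDK.

```typescript
// Illustrative shape of a request payload (names are assumptions, not a spec).
interface AIRequestParams {
  task: string;                     // e.g. "vision.classify"
  src_uri: string;                  // publicly reachable source URL
  params: Record<string, unknown>;  // task-specific options, e.g. { top_k: 5 }
}

// Basic client-side validation before creating the onchain request.
function validateRequest(req: AIRequestParams): string[] {
  const errors: string[] = [];
  if (!req.task.includes(".")) {
    errors.push("task should look like '<domain>.<action>'");
  }
  try {
    new URL(req.src_uri);           // must parse as a URL
  } catch {
    errors.push("src_uri is not a valid URL");
  }
  return errors;
}

const example: AIRequestParams = {
  task: "vision.classify",
  src_uri: "https://example.com/image.jpg",
  params: { top_k: 5 },
};
console.log(validateRequest(example)); // empty array means the payload looks sane
```

Validating early keeps obviously malformed requests from spending gas on a request that no worker can fulfill.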
Why events/logs
In EVM ecosystems, events/logs are the standard way to publish a fact cheaply.
Logs are easy to filter by indexed fields without forcing everything into contract storage.
Workers can react in real time via websocket subscriptions (e.g., eth_subscribe).
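The filtering idea can be sketched without a live node: every EVM log carries a topics array (topic 0 is the event signature hash, followed by the indexed arguments), and clients match on address and topic positions exactly the way a node evaluates an `eth_getLogs` filter. The event, hashes, and addresses below are mocked for illustration.

```typescript
// Minimal shape of an EVM log entry as returned by eth_getLogs / eth_subscribe.
interface Log {
  address: string;   // emitting contract
  topics: string[];  // topics[0] = event signature hash; then indexed args
  data: string;      // ABI-encoded non-indexed args
}

// Match a log against a filter by address and topic positions.
// A null entry in the topics filter matches any value at that position.
function matchesFilter(log: Log, address: string, topics: (string | null)[]): boolean {
  if (log.address.toLowerCase() !== address.toLowerCase()) return false;
  return topics.every((t, i) => t === null || log.topics[i] === t);
}

// Hypothetical RequestCreated(bytes32 indexed requestId, address indexed requester) logs.
const SIG = "0xaaaa"; // placeholder for the real event signature hash
const logs: Log[] = [
  { address: "0xC0FFEE", topics: [SIG, "0x01", "0x0a11ce"], data: "0x" },
  { address: "0xC0FFEE", topics: [SIG, "0x02", "0x0b0b00"], data: "0x" },
  { address: "0xDEAD00", topics: [SIG, "0x03", "0x0e0e00"], data: "0x" },
];

// All RequestCreated events from our contract, from any requester:
const mine = logs.filter((l) => matchesFilter(l, "0xC0FFEE", [SIG, null, null]));
console.log(mine.length); // 2
```

This is why indexed fields matter: workers can subscribe to exactly the requests they serve without reading contract storage at all.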
Designed for onchain ↔ offchain coordination.
Live demo
Drop a 10–20s video clip here, or a screenshot of the current UI.

Who it’s for

Two audiences: builders who integrate the signal, and operators who fulfill requests.

For Web3 builders (dApps / teams)
Automate product rules using an AI signal (access, gating, scoring).
Moderate user-generated content (anti-spam / anti-abuse).
NFT experiences: dynamic minting, gated mint, asset / metadata checks.
DeFi / credit: risk signals as policy inputs (no promises of “accuracy”).
DAO tools: automate procedures and filters using recorded signals.
For node operators (network executors)
Run infrastructure that fulfills AI requests for Web3 apps.
Publish a market of tasks/models: one node can offer multiple task profiles.
Scale the network by adding more nodes and more offers over time.
Fee opportunities grow as usage grows — without token hype or fantasies.

How it works

Three steps: onchain request → offchain execution → onchain response.

Minimal diagram
Smart Contract → AI Worker(s) → AI Relay / Models → Smart Contract
1) Onchain request
A user or app creates a request in a smart contract (source URL, task/model, parameters).
2) Offchain execution
AI workers watch events and run inference via a relay / model pipeline, then sign the result.
3) Onchain response
The fulfillment is written back onchain so apps can read it via RPC/SDK and use it in logic.
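The three steps can be sketched end to end with an in-memory stand-in for the contract and the worker. Everything here (class names, statuses, the classification stub) is illustrative, not the protocol's actual interface.

```typescript
// In-memory stand-in for the request -> event -> fulfillment loop.
type Status = "pending" | "fulfilled";
interface AIRequest { id: number; srcUri: string; status: Status; result?: string }

class MockContract {
  private requests = new Map<number, AIRequest>();
  private nextId = 1;
  private listeners: ((req: AIRequest) => void)[] = [];

  // 1) Onchain request: store it and "emit" an event to subscribers.
  createRequest(srcUri: string): number {
    const req: AIRequest = { id: this.nextId++, srcUri, status: "pending" };
    this.requests.set(req.id, req);
    this.listeners.forEach((fn) => fn(req));
    return req.id;
  }

  onRequest(fn: (req: AIRequest) => void) { this.listeners.push(fn); }

  // 3) Onchain response: the worker writes the fulfillment back.
  fulfill(id: number, result: string) {
    const req = this.requests.get(id)!;
    req.status = "fulfilled";
    req.result = result;
  }

  read(id: number): AIRequest | undefined { return this.requests.get(id); }
}

// 2) Offchain execution: a worker reacts to events and runs "inference".
const contract = new MockContract();
contract.onRequest((req) => contract.fulfill(req.id, `label:cat (src=${req.srcUri})`));

const id = contract.createRequest("https://example.com/image.jpg");
console.log(contract.read(id)?.status); // "fulfilled"
```

In the real network the worker runs asynchronously and signs its output, but the shape of the loop is the same: the app only ever reads recorded state.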

What it unlocks

A protocol primitive you can build product rules on — without turning AI into a black box.

AI signal as a rule
Treat model output as a policy input for access, gating, scoring, or moderation.
Provable history
Requests and fulfillments become part of the network history — auditable by design.
Composable outputs
The same recorded result can be reused by other contracts, indexers, and apps.
Provider market
Multiple nodes can serve the same task — choose by price, profile, or availability.
Scales by nodes & tasks
Capacity grows as more nodes publish offers and more task profiles appear.

Use cases

Verticals where a recorded AI signal becomes part of product logic.

Synthetic media gate
Flag suspicious images before they become onchain primitives.
Content moderation
Filter user-generated uploads with a protocol-level signal.
Dynamic NFT rules
Let AI output update metadata rules or unlock traits.
Spam / abuse scoring
Use model score as a throttle or gate — probabilistic by nature.
Reputation hints
Optional AI hints to support safer UX flows and decisions.
Curated feeds
AI-ranking signals with an auditable trail of execution.
Important: avoid “accuracy guarantees” in AI. Treat outputs as signals and design policies with thresholds, fallbacks, and appeals.
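A policy built on a probabilistic signal can be sketched as a small pure function: thresholds decide the clear cases, the ambiguous band falls through to review, and a missing signal triggers the fallback. The thresholds and action names here are illustrative.

```typescript
// Turn a probabilistic score (0..1) into a policy action, never a hard verdict.
type Action = "allow" | "review" | "block";

function policy(score: number | null, allowBelow = 0.3, blockAbove = 0.8): Action {
  if (score === null) return "review";  // fallback: no fulfillment -> human review
  if (score < allowBelow) return "allow";
  if (score > blockAbove) return "block";
  return "review";                      // ambiguous band -> appeal path
}

console.log(policy(0.1));  // "allow"
console.log(policy(0.95)); // "block"
console.log(policy(0.5));  // "review"
console.log(policy(null)); // "review" (worker timed out, signal unavailable)
```

The point is that the model never decides alone: the app owns the thresholds, the review band, and the fallback.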

For builders & for node operators

Integrate once, then use the output like a reusable onchain signal — or run infrastructure that fulfills requests.

For builders
Build rules on top of a recorded AI signal (access, gating, moderation, scoring).
Call the protocol
Create a request (source URL, task/model, parameters) and emit an onchain event.
Wait / read result
Read events & status; treat the flow as asynchronous by default.
Use in your logic
Apply thresholds, policies, fallbacks — use signals, not “absolute truth”.
For node operators
Serve tasks, publish offers, fulfill requests — fee opportunities grow as usage grows.
Choose tasks you serve
Expose task profiles (models/pipelines) your node supports.
Price your offers
Publish conditions and pricing for fulfillment (per task/profile).
Fulfill & earn fees
Execute offchain, write the result onchain; fee opportunities scale with usage.
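On the operator side, fulfillment is a loop: pick up pending requests that match the task profiles the node serves, execute them, write results back, and collect fees. A mocked sketch (task names, fee fields, and the inference stub are assumptions):

```typescript
// Operator-side sketch: serve only declared task profiles, fulfill, tally fees.
interface PendingRequest { id: number; task: string; fee: number }

const served = new Set(["vision.classify", "text.moderate"]); // this node's offers

function runInference(task: string): string {
  return `${task}:ok`; // stand-in for the real model pipeline
}

function fulfillBatch(queue: PendingRequest[]): { fulfilled: number[]; earned: number } {
  const fulfilled: number[] = [];
  let earned = 0;
  for (const req of queue) {
    if (!served.has(req.task)) continue; // skip tasks this node doesn't offer
    runInference(req.task);              // offchain execution
    fulfilled.push(req.id);              // would be an onchain write in practice
    earned += req.fee;
  }
  return { fulfilled, earned };
}

const result = fulfillBatch([
  { id: 1, task: "vision.classify", fee: 5 },
  { id: 2, task: "audio.transcribe", fee: 9 }, // not served -> skipped
  { id: 3, task: "text.moderate", fee: 4 },
]);
console.log(result); // { fulfilled: [ 1, 3 ], earned: 9 }
```

Declaring profiles up front is what makes the provider market work: requests route to nodes that actually serve the task.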

Proof of execution (current demo)

Not promises — a visible flow: request → event → fulfillment recorded onchain.

What you can do right now
Connect a wallet (e.g., MetaMask).
Paste a public image URL.
Select an AI task/model.
Choose a provider (node) offer from the available options.
Watch the onchain request event.
Watch the fulfillment event and read the recorded summary onchain.
Demo media
Add a short video (10–20s) or UI screenshot here to show the end-to-end flow.

Trust & design principles

No “trustless AI” claims — just recorded, auditable outputs and practical integration patterns.

Auditability by design
Results are recorded and readable onchain — transparent verification and reuse.
Event-driven integration
Apps and workers react to onchain events; logs are a native integration path for EVM.
Composable outputs
Recorded results can feed indexers, analytics, and other contracts without re-running the job.
Safe wording
signal / hint / score / policy input
auditable / recorded / traceable
provider market / multi-node / scalable by design
Avoid wording
guaranteed
fraud-proof
100% detection
trustless AI
Legal disclaimer: AI outputs are probabilistic signals. Use them as inputs to policies, not as absolute truth.

FAQ

Practical questions — answered directly.

How long does a response take?
It depends on the task and load. The UX should assume async execution: create request → wait for fulfillment.
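The async pattern (create a request, then poll or subscribe until a fulfillment appears or a timeout fires) can be sketched with a generic reader function. The names, intervals, and the mocked reader are illustrative.

```typescript
// Poll a read function until it returns a fulfillment or the deadline passes.
async function waitForFulfillment<T>(
  read: () => Promise<T | null>,
  timeoutMs = 5000,
  intervalMs = 50,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await read();
    if (result !== null) return result;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("timed out waiting for fulfillment; apply the app's fallback policy");
}

// Mocked reader: the fulfillment "lands" after a few polls.
let polls = 0;
const readResult = async () =>
  ++polls >= 3 ? { label: "cat", score: 0.92 } : null;

waitForFulfillment(readResult).then((r) => console.log(r.label)); // "cat"
```

A timeout is part of the UX contract: when it fires, the app falls back to its policy (queue for review, retry, or degrade gracefully) rather than blocking the user.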
Can I choose a node/provider?
Yes. If multiple providers serve a task, the user/app can choose an offer based on price/profile/availability.
What if the model is wrong?
AI output is a probabilistic signal. Use it as a policy input (thresholds, fallbacks, appeals), not as absolute truth.
Why do we need an offchain layer at all?
Smart contracts cannot access external data directly. An external executor must bring results back onchain.

Roadmap

Clear phases, realistic priorities. Live / In progress / Next / Later.

  1. Foundation
    Live
    Core flow is live on testnet: request → event → fulfillment recorded.
  2. Reliability
    In progress
    Timeouts, retries, observability, predictable UX under load.
  3. Provider expansion
    Next
    More nodes/offers: task registry, node ops recipes, quality signals.
  4. Developer experience
    Next
    SDKs (TS/Go), integration examples, clearer formats & docs.
  5. Security & economics
    Later
    Rules of the game: fees, audits, and economic commitment for nodes.
Foundation (Live)
Onchain flow: request → event → fulfillment recorded.
Multiple tasks/models in the UI.
Provider selection (node offer).
Basic testnet UX with proof-of-execution.
Reliability (In progress)
Stable retries and execution time management.
Response size limits for details/attachments.
Better observability: logs, statuses, predictable UX.
Provider expansion (Next)
One-click node launch recipes (docker/k8s).
Node-level task/model registry (declaring which tasks a node serves).
Basic quality policies (SLA-like metrics without “magic”).
Developer experience (Next)
SDK (TS/Go) to create requests and read results.
Integration examples: NFT gating, moderation, scoring.
Docs for options/details formats and event-driven patterns.
Security & economics (Later)
Node staking/deposits as economic commitment.
Fee policy (node fee / protocol fee).
Audit/review of key contracts.
Privacy & advanced workflows (Later)
Encrypted requests/responses.
Async queues and long-running tasks.
More modalities (beyond images) and advanced pipelines.
Quality controls and transparency over “accuracy guarantees”.
Community

Join discussions, follow updates, and reach out. If you’re building with onchain AI signals or running infrastructure, this is where the network connects.
