Cheap + you earn
1 credit = $0.01. Plug in your GPU when you're not gaming and earn while you sleep. Spend your credits on chat, or cash out via Wise once you cross the KYC threshold.
70 / 25 / 5 — peer · platform · reserve
Use it or serve it. 1 credit = $0.01.
No corporation. No data center. Just peers.
We don't host the models. The network does. OpenAI-compatible API, any Ollama model, peers anywhere. No vendor lock-in. Source is auditable.
MIT-licensed peer agent
Chat history is end-to-end encrypted with a key derived from your password. Peers see your prompt to answer it — they never learn who you are. Lose your password, you lose your history. We mean it.
Argon2id + XChaCha20-Poly1305
Four interfaces, one network. All four ride the same encrypted layer — your prompts, channel messages, and DMs are unreadable to us by design.
A familiar one-on-one chat with the model of your choice — Llama 3.2, Qwen 2.5, Phi 3.5, Mistral. Pick the model from a dropdown, ask anything, watch it stream back. Conversation history is saved on the server but encrypted with a key derived from your password — even our database admins can't read it.
Like Discord but private — channels you create are invisible to everyone except invited members. Public channels (curated by the platform) work like community rooms; private channels are encrypted to their members and self-delete after everyone has been offline for a configurable number of days.
End-to-end encrypted 1:1 messaging using NaCl crypto_box (X25519 + XSalsa20-Poly1305). When you're online we deliver in real time over the signaling bus; when you're offline we hold an opaque ciphertext blob for up to 7 days and drop it. We never read it. We never keep it.
Each channel can host one AI bot you configure: name, model, system prompt, cooldown, daily cap. The bot becomes a virtual member of the channel; mention it like any user. Inference runs through the network, the asker's credits are debited, and the answer streams back as a regular channel message — encrypted just like everything else.
Two ways in. Pick one — or both.
We didn't bolt encryption onto a chat product. The encryption is the product. Three concrete promises, three honest caveats — no marketing fog.
#1
Channel messages are encrypted with XChaCha20-Poly1305 using a per-channel symmetric key. That key never reaches our servers in plaintext — it's sealed (libsodium sealed_box) to each member's X25519 pubkey, which is itself derived from your password via Argon2id. Server stores ciphertext + sealed-box blobs. The keys live on your devices.
#2
Channels: zero server-side persistence. Messages flow through the signaling bus as ciphertext and are never written to disk. Direct messages: server holds an opaque queue for up to 7 days so offline recipients don't lose the thread, then it's deleted automatically. We never get the keys to read either.
#3
When your prompt goes to a peer's GPU for inference, the relay strips your user_id, IP, and conversation_id first. The peer sees the prompt — they have to, to answer it — but cannot link it back to you. Repeated requests rotate across peers when possible to thwart profile reconstruction.
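The relay's scrubbing step might look like the following sketch. The field names (`user_id`, `ip`, `conversation_id`, `prompt`, `model`) and the function itself are hypothetical, chosen to mirror the description above rather than any published API.

```python
# Hypothetical sketch of the relay's identity-stripping step.
IDENTIFYING_FIELDS = {"user_id", "ip", "conversation_id"}

def strip_identity(request: dict) -> dict:
    """Return the request as a peer would see it: prompt and model only."""
    return {k: v for k, v in request.items() if k not in IDENTIFYING_FIELDS}

incoming = {
    "user_id": "u_9f2c",
    "ip": "203.0.113.7",
    "conversation_id": "c_5511",
    "model": "llama3.2",
    "prompt": "Summarise this contract clause...",
}
peer_view = strip_identity(incoming)
# The peer gets the prompt (it must, to answer) but nothing linkable.
```

Peer rotation then works on top of this: even if one peer logged every prompt it served, the scrubbed requests carry nothing that ties consecutive prompts to the same account.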
What we'd be lying about if we promised more
| | CoralMesh | Discord / Slack | Telegram (default chats) | Signal |
|---|---|---|---|---|
| Server can read messages | No | Yes | Yes | No |
| Messages stored long-term | No (DMs 7 days, channels none) | Yes (indefinitely) | Yes (indefinitely) | No |
| AI bot in channels | Yes | 3rd-party only | 3rd-party only | No |
| GPU monetization for users | Yes (peer mode) | — | — | — |
Rough estimate based on average tokens-per-second per GPU class. Real earnings depend on demand.
—
Have spare H100 / RTX racks during off-peak hours? Plug 5–20 % of your fleet into CoralMesh and turn idle into income — without a sales team, without a customer onboarding pipeline, without changing your existing workloads.
Big customers reserve N GPUs but rarely hit 100 % utilization. The remaining headroom usually idles. CoralMesh routes inference traffic to whatever capacity you mark as available, in 30-second-granularity slots. Mark your spare 20 % at 03:00–09:00 and the network fills it.
You don't court customers, write contracts, or staff a support team. The platform brokers requests; you run the inference; we settle credits to fiat via Wise weekly. Higher-tier peers (data-center grade) get ranked first in the scheduler and earn a 1.2× quality multiplier.
AWS / GCP / Azure see customer prompts in plaintext during inference. CoralMesh's relay strips identity before the prompt reaches you, and the customer-side encryption means logs never link back. For B2B clients with sensitive workloads — legal, healthcare, R&D — this is a moat your hyperscaler resale never had.
You serve any Ollama-compatible model. We take 25 % platform fee, 5 % goes to a refund/risk pool, you keep 70 % of every token. Compare to: AWS Bedrock margins (50–70 % vendor capture), GPU rental brokerages (often 30–40 % cut + opaque billing), or running your own SaaS (months of sales lift, customer support, churn).
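The 70 / 25 / 5 settlement math, at 1 credit = $0.01, works out as a simple split. The function below is an illustrative sketch of that arithmetic, with the remainder from integer rounding absorbed by the reserve; it is not CoralMesh's billing code.

```python
# Illustrative settlement math for the 70 / 25 / 5 split (1 credit = $0.01).
def settle(credits_earned: int) -> dict:
    """Split gross credits: 70% peer, 25% platform, 5% refund/risk reserve."""
    peer = credits_earned * 70 // 100
    platform = credits_earned * 25 // 100
    reserve = credits_earned - peer - platform   # remainder absorbs rounding
    return {"peer": peer, "platform": platform, "reserve": reserve}

payout = settle(10_000)            # 10,000 credits = $100 gross
print(f"peer keeps ${payout['peer'] * 0.01:.2f}")   # peer keeps $70.00
```

So a peer who serves $100 worth of tokens settles $70, against the 30–70 % captures quoted above for the alternatives.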
The peer agent is a single Go binary. A Helm chart deploys N replicas across your nodes; each agent advertises the GPUs it sees, and the scheduler does the rest. Resource caps + thermal/throttle hooks are built in — it won't fight your existing scheduler.
Fleet of 50+ GPUs and looking for a bilateral agreement (committed minimum payouts, custom SLAs, dedicated routing pool)? Email enterprise@coralmesh.io with your specs — we typically respond within 24h.
Inference is becoming a utility. Today you pay a SaaS toll. Tomorrow you should be able to opt in: bring your hardware, serve a model, get paid. Use the network for cheap, private chat. Millions of consumer GPUs sit idle — cheaper, more private, more resilient than five hyperscale clusters. CoralMesh is the protocol layer that makes it work.
— people are already on the network.
join them →
We email you the moment macOS / Windows / Linux installers are signed and ready. No marketing spam.
Any Ollama-compatible model. The catalog of network-blessed models is curated by admin, but if you run your own peer you decide what it serves.
No — chats are routed to peers. You only need a GPU if you want to earn credits by serving the network.
Depends on your GPU. A modern 200 W card running 8 h / day at $0.15 / kWh costs about $7 / month. The earn calculator above factors this in.
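The power-cost figure above follows directly from the stated numbers; here is the arithmetic worked out:

```python
# Worked version of the estimate above: a 200 W card, 8 h/day, $0.15/kWh.
watts, hours_per_day, days, price_per_kwh = 200, 8, 30, 0.15
kwh_per_month = watts / 1000 * hours_per_day * days   # 48 kWh/month
cost = kwh_per_month * price_per_kwh                  # $7.20/month
```

A more power-hungry card or a higher tariff scales this linearly, which is why the earn calculator subtracts it per-GPU rather than using a flat figure.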
Sharing idle compute is legal in the EU, US, UK and Turkey. Some jurisdictions tax peer earnings as income — check locally. Cashouts above the KYC threshold require ID verification.
The gateway transparently retries on another peer with the same model. The user sees a brief stall, never a half-finished response.
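The failover behaviour described above can be sketched as follows. This is a hedged illustration of the pattern, not the gateway's actual implementation: `run_with_failover` and the peer callables are hypothetical names.

```python
# Hypothetical sketch of gateway failover across peers serving the same model.
def run_with_failover(prompt, peers, max_attempts=3):
    """Try peers in order until one returns a complete response."""
    last_error = None
    for peer in peers[:max_attempts]:
        try:
            return peer(prompt)          # full answer, or raise mid-stream
        except ConnectionError as err:
            last_error = err             # discard partial output, try next peer
    raise RuntimeError("no peer could serve the request") from last_error

def flaky(prompt):
    raise ConnectionError("peer went offline mid-stream")

def healthy(prompt):
    return f"answer to: {prompt}"

# The user sees only the second peer's complete response, never the failure.
result = run_with_failover("hello", [flaky, healthy])
```

Discarding the partial stream and replaying the whole request is what guarantees the "never a half-finished response" property, at the cost of the brief stall.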
Yes — register the model on your peer; only your account sees that model in the menu unless you opt to advertise it.
1 credit = $0.01. Per-token pricing per model is in the public catalog endpoint. Admin can adjust without redeploying.
Earn credits → reach the KYC threshold ($500) → submit ID → ops processes a Wise transfer to your registered email within 14 days during the beta.