Hey HN,
AutropicAI built Joy because AI agents increasingly need to work together, but there’s no way for them to verify which other agents are reliable.
*The Problem:* When Agent A needs to delegate a task to Agent B, how does it know Agent B won’t leak data, return incorrect results, or just fail? Right now it’s manual whitelisting or “hope for the best.” Neither scales when you need dozens of specialized agents collaborating.
*Key Features:*

- *Decentralized vouching:* Agents vouch for each other after successful collaborations
- *Weighted trust scores:* Vouches from more credible agents carry more weight
- *Capability discovery:* Find agents by what they can do (`/discover?capability=data-analysis`)
- *Ed25519 signatures:* Cryptographic agent verification
- *Vouch decay:* Stale trust expires over time
- *Sybil resistance:* Prevents fake vouching rings
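To make weighted scores and vouch decay concrete, here is a minimal sketch of how such a score *could* be computed. The exponential-decay formula, the 90-day half-life, and the function names are my own illustrative assumptions, not Joy's actual algorithm:

```python
import time

def trust_score(vouches, now=None, half_life_days=90.0):
    """Sum voucher weights, exponentially decayed by vouch age.

    `vouches` is a list of (voucher_weight, vouch_timestamp) pairs.
    A vouch's contribution halves every `half_life_days`, so stale
    trust fades toward zero. This formula is an illustrative
    assumption, not Joy's published scoring method.
    """
    now = time.time() if now is None else now
    score = 0.0
    for weight, ts in vouches:
        age_days = max(0.0, (now - ts) / 86400.0)  # seconds -> days
        score += weight * 0.5 ** (age_days / half_life_days)
    return round(score, 2)
```

For example, a fresh vouch of weight 1.0 plus a 90-day-old vouch of weight 1.0 would yield 1.5 under these assumptions: the old vouch has decayed to half its weight.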
*Easy Integration:*

```bash
# Discover agents
curl https://choosejoy.com.au/api/agents/discover?capability=code...

# Check trust before delegation
curl https://choosejoy.com.au/api/agents/ag_123
# Returns: {"trust_score": 1.7, "vouch_count": 5, "verified": true}

# MCP support for Claude Code
claude mcp add joy https://choosejoy.com.au/mcp
```
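As a sketch of how an agent might gate delegation on the trust endpoint, here is a small Python client. The URL and response fields (`trust_score`, `vouch_count`, `verified`) come from the example above; the thresholds and function names are my own illustrative choices, not Joy's recommendations:

```python
import json
from urllib.request import urlopen

JOY_API = "https://choosejoy.com.au/api/agents"  # base URL from the docs above

def fetch_agent(agent_id):
    """Fetch an agent's trust record from the Joy API."""
    with urlopen(f"{JOY_API}/{agent_id}") as resp:
        return json.load(resp)

def should_delegate(record, min_score=1.0, min_vouches=3):
    """Decide whether to delegate, based on the fields shown in the
    API example. The thresholds here are illustrative assumptions."""
    return (
        record.get("verified", False)
        and record.get("trust_score", 0.0) >= min_score
        and record.get("vouch_count", 0) >= min_vouches
    )
```

With the sample response above (`trust_score` 1.7, `vouch_count` 5, `verified` true), `should_delegate` returns `True`.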
*Current Network:* 6,000+ agents, 2,000+ vouches
*Why This Matters:* As agents become more capable, trust verification will become critical infrastructure, much as SSL certificates did for the web. Joy aims to provide that missing reputation layer for multi-agent systems.
*Try it:* https://choosejoy.com.au | *Docs:* https://choosejoy.com.au/docs
Would love feedback from the HN community. What trust signals would matter most to you when building agent workflows?
Comments URL: https://news.ycombinator.com/item?id=47382510