Every AI tool review tells you what a tool does. The Silk Index™ tells you what it costs your cognition — the invisible tax on your creative process, scored across five axes drawn from cognitive science.
The Silk Index™ is editorially assigned — like a Michelin inspector's notes, no algorithm required. Each axis maps to a distinct body of cognitive science research. A tool that scores 91 on Handoff but collapses to 38 on Output Drift is telling you something the feature sheet will never say.
**Handoff.** Measures the degree to which creative cognition can be offloaded to the tool versus how much scaffolding the user must construct to achieve useful output. High handoff means the tool reasons; low handoff means the tool executes only when led.
**Latency.** The elapsed time between initiating a session and receiving output that advances the work, calibrated against the quality threshold required for professional use. A tool that returns in 2 seconds with noise scores lower than one that requires 12 seconds and delivers signal.
**Retention.** Evaluates the tool's ability to maintain working context — project vocabulary, aesthetic register, prior decisions — without requiring the user to re-establish them each session. Tools with low retention force the creative to serve as the tool's own memory.
**Output Drift.** The inverse of a tool's resistance to homogenization under repeated use. High drift means continued engagement produces increasingly similar, degraded, or averaged output — the signature of a tool that is dangerous at volume. Low drift indicates creative resilience over extended sessions.
**Friction.** Quantifies the mental overhead of entering and exiting the tool within a live creative workflow. Every context switch imposes a re-entry penalty. Tools with high friction demand re-orientation; tools with low friction become transparent — they disappear into the work.
All twelve tools scored by The Spider's editorial team. Click any row to open the full Silk profile — radar chart, axis-by-axis breakdown, and the one-line verdict that does not appear anywhere else.
| # | Tool | Handoff | Latency | Retention | Drift | Friction | Silk Index™ |
|---|------|---------|---------|-----------|-------|----------|-------------|
The Silk Index™ is not a user survey. It is not a benchmark test. It is assigned by The Zürich Spider's editors, who use each scored tool in live creative production — the way a Michelin inspector orders the tasting menu, not the way a food blogger photographs it.
The five axes are drawn from peer-reviewed cognitive science research. They measure what the feature sheet conceals: the invisible cognitive tax that accumulates across a sustained creative workflow. A tool that costs you nothing on a thirty-minute demonstration may extract a significant toll at month three of daily use.
Scores are re-evaluated continuously as tools evolve; until a score is reaffirmed, it is marked provisional. The verdict — the one sentence beneath the score — is the editorial distillation: the thing we would say to a friend who trusts us.
The Spider Dispatch arrives when there is something worth saying — updated Silk Index™ scores, new tool entries, and the one or two verdicts that have shifted materially. Subscribers receive the full delta — what improved, what collapsed, and what the Spider team is watching.