LOGIC is the result of over a year of hard work from the @inference_net team. It is the most robust way to make sure your inference provider isn't doing anything sketchy. It works out-of-the-box with @vllm_project, @OpenRouterAI, and any provider that returns logprobs
Inference
Nov 8, 2025
Today, we release LOGIC: a novel method for verifying LLM inference in trustless environments.
- Detects model substitution, quantization, and decode-time attacks
- Works out of the box with @vllm_project, @sgl_project, @OpenRouterAI, and more (just needs logprobs)
- Robust across GPU types and hardware configurations
- Low computational overhead (~1% of total cost)
Blog: Code:
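The general idea behind logprob-based verification can be sketched roughly like this: compare the token logprobs a provider returns against logprobs from a trusted reference run of the same model, and flag large divergence. This is a minimal illustrative sketch only; the function name and threshold are assumptions, not LOGIC's actual algorithm, which handles many more nuances (hardware nondeterminism, sampling, etc.).

```python
# Hypothetical sketch (NOT LOGIC's actual algorithm): flag a provider
# whose returned per-token logprobs drift far from a trusted reference run.
def flag_suspicious(provider_logprobs, reference_logprobs, tol=0.5):
    """Return True if mean absolute logprob divergence exceeds tol (nats)."""
    diffs = [abs(p - r) for p, r in zip(provider_logprobs, reference_logprobs)]
    return sum(diffs) / len(diffs) > tol

# Matching logprobs pass; a substituted or heavily quantized model drifts.
print(flag_suspicious([-1.2, -0.8, -2.1], [-1.2, -0.8, -2.1]))  # False
print(flag_suspicious([-1.2, -0.8, -2.1], [-0.3, -2.5, -0.9]))  # True
```

A naive fixed threshold like this would misfire across GPU types; the announcement's claim of robustness across hardware configurations is precisely the hard part.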
@inference_net @vllm_project @OpenRouterAI This was an incredibly tricky problem to solve, with many nuances and edge cases. Shoutout to @AmarSVS, @francescodvirga and @bonham_sol for all their hard work on this