Verifiable AI inference and consensus mechanisms based on zero-knowledge proofs @wardenprotocol, @inference_labs, @miranetwork

Despite its high utility, AI inference has a structural limitation: its internal operations are difficult to verify. The black-box nature of undisclosed model weights and training data, the non-determinism introduced by probabilistic sampling and floating-point arithmetic, and the execution-integrity problem of not being able to confirm that the correct model was actually run all make it hard to trust AI outputs on their own. These characteristics have repeatedly been flagged as problems in areas with low error tolerance, such as finance, security, and automated decision-making.

One technical approach to this trust problem is verifiable AI inference built on zero-knowledge-proof-based machine learning, or zkML. zkML cryptographically proves that a computation was performed with the correct weights and rules without disclosing the model's internals, so users can judge the legitimacy of a result from the mathematical proof itself rather than from trust in the model provider.

In this structure, the Warden Protocol handles the execution layer, applying a statistical proof-of-execution method called SPEX to inference tasks performed by AI agents. Instead of re-executing the entire computation, SPEX summarizes the computational states generated during inference in a Bloom filter and checks execution consistency through random sampling (a toy sketch of this idea appears further below). The scheme needs only a solver-verifier pair and provides a high probabilistic level of trust at very low computational overhead compared with full re-execution. The Warden Protocol thus confirms that execution actually took place, at moderate cost and latency.

In the verification layer, Omron, operated by Inference Labs, plays the key role. Omron is a zkML-specialized infrastructure running as a subnet of the Bittensor network; it verifies with full zero-knowledge proofs that a model's inference was executed with the correct weights and in the correct order of operations. Omron compiles large models using the DSperse approach and improves processing speed through parallel proof generation. With this structure, hundreds of millions of zkML proofs have been generated and verified, and practical operating experience has accumulated for small models and mid-sized neural networks. High computational cost and memory requirements, however, still impose realistic constraints for very large models.

In the consensus layer, the Mira Network complements the trustworthiness of outputs through a multi-model consensus mechanism. Rather than simply adopting the output of a single model, Mira compares the results of several AI models with different architectures and training backgrounds. Outputs are decomposed into independently verifiable claim units, and the factuality of each claim is assessed by consensus among the models (a toy consensus round is also sketched below). The process combines proof-of-work elements, which demonstrate that inference was actually performed, with proof-of-stake elements, in which verifiers stake assets as collateral. Economic penalties are imposed when incorrect results are repeatedly approved or malicious behavior is confirmed, preserving the integrity of the consensus.
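To make the SPEX-style statistical execution proof more concrete, here is a minimal, self-contained Python sketch of the general idea: the solver commits every intermediate state of a deterministic computation into a Bloom filter, and the verifier re-executes only a random sample of steps and checks them against that commitment. This is an illustration, not Warden's actual protocol or API; the toy step function, filter size, and sample count are assumptions chosen for readability.

```python
# Illustrative sketch of a SPEX-style statistical execution check (not Warden's code).
# Solver: runs a deterministic step-by-step computation and commits every intermediate
#         state into a Bloom filter.
# Verifier: re-executes only a random sample of steps and checks that the recomputed
#           states appear in the commitment.

import hashlib
import random

class BloomFilter:
    def __init__(self, num_bits: int = 4096, num_hashes: int = 5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item: bytes):
        # Derive k bit positions from k salted SHA-256 hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def contains(self, item: bytes) -> bool:
        return all(self.bits[pos] == 1 for pos in self._positions(item))

def step(state: int, i: int) -> int:
    """Toy deterministic 'layer', standing in for one stage of model inference."""
    return (state * 31 + i) % 1_000_003

def solver_run(x0: int, num_steps: int):
    """Solver executes all steps and commits each intermediate state."""
    commitment = BloomFilter()
    states = [x0]
    for i in range(num_steps):
        states.append(step(states[-1], i))
        commitment.add(str(states[-1]).encode())
    return states, commitment

def verifier_check(claimed_states, commitment, num_samples: int = 16) -> bool:
    """Verifier re-executes a random sample of steps and checks the commitment."""
    for i in random.sample(range(len(claimed_states) - 1), num_samples):
        recomputed = step(claimed_states[i], i)              # re-run a single step
        if not commitment.contains(str(recomputed).encode()):
            return False                                     # inconsistent execution
    return True

states, commitment = solver_run(x0=7, num_steps=1000)
print("execution accepted:", verifier_check(states, commitment))
```

The point of the sketch is only that checking a small random sample against a compact commitment yields probabilistic confidence at a fraction of the cost of full re-execution; the real protocol differs in how states are committed and sampled.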
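The Mira-style consensus layer can likewise be illustrated with a toy round: an output is split into claim units, several independent models vote on each claim, and staked verifiers that repeatedly end up on the losing side of consensus are penalized. All names, thresholds, and the slashing rule below are illustrative assumptions, not Mira's actual parameters.

```python
# Toy sketch of a multi-model consensus round with staking penalties
# (illustrative only; not Mira's actual protocol or parameters).

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Verifier:
    name: str
    model: Callable[[str], bool]   # returns True if this model judges the claim factual
    stake: float = 100.0
    misses: int = 0                # times this verifier disagreed with consensus

def consensus_round(claims: List[str], verifiers: List[Verifier],
                    threshold: float = 2 / 3, penalty: float = 10.0) -> Dict[str, bool]:
    """Vote on each claim; slash verifiers that repeatedly land on the losing side."""
    results: Dict[str, bool] = {}
    for claim in claims:
        votes = {v.name: v.model(claim) for v in verifiers}
        accepted = sum(votes.values()) / len(verifiers) >= threshold
        results[claim] = accepted
        for v in verifiers:
            if votes[v.name] != accepted:
                v.misses += 1
                if v.misses >= 3:                  # repeated disagreement -> economic penalty
                    v.stake = max(0.0, v.stake - penalty)
    return results

# Usage: three stand-in "models" with different (deliberately crude) judgment rules.
claims = ["The Ethereum merge happened in 2022.", "The moon is made of cheese."]
verifiers = [
    Verifier("model_a", lambda c: "2022" in c),
    Verifier("model_b", lambda c: "cheese" not in c),
    Verifier("model_c", lambda c: True),           # an over-approving verifier
]
print(consensus_round(claims, verifiers))
print({v.name: v.stake for v in verifiers})
```

In this toy version a claim is accepted only when a supermajority of models agrees, mirroring the idea that factuality is established by consensus rather than by any single model, while the stake and penalty fields capture the proof-of-stake side of the design.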
These three layers are separate yet complementary, connecting into a single verifiable AI stack. The Warden Protocol provides fast, cost-effective statistical execution proofs at the execution stage; when a higher level of trust is required, full zkML-based verification is performed through Omron. The interpretation of results and the assessment of factuality are then reinforced by the Mira Network's multi-model consensus, so that the authenticity of execution, the trustworthiness of output, and the economic safety of consensus are each verified through a different mechanism. The design is realistic in that it applies different verification methods according to risk level and cost structure, unlike approaches that try to prove every AI inference in a single way: cryptographic proofs are applied to high-value inferences that can bear high costs, while statistical verification and consensus-based verification handle large-scale processing and scalability (a minimal routing sketch closes this post). Through this hierarchical combination, verifiable AI inference is establishing itself as a practical, operational technology system rather than a purely theoretical concept. $WARD $MIRA
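As a closing illustration of the risk-based routing idea, here is a small sketch of how a caller might choose between the three verification tiers. The thresholds, tier labels, and function name are hypothetical and are not parameters of any of the protocols discussed above.

```python
# Hedged sketch of routing an inference to a verification tier by value at risk and
# latency budget. Thresholds and labels are illustrative assumptions, not protocol values.

def choose_verification(value_at_risk_usd: float, latency_budget_s: float) -> str:
    if value_at_risk_usd >= 100_000 and latency_budget_s >= 60:
        return "full zkML proof (Omron-style)"             # strongest guarantee, highest cost
    if value_at_risk_usd >= 1_000:
        return "statistical execution proof (SPEX-style)"  # cheap, probabilistic guarantee
    return "multi-model consensus check (Mira-style)"      # throughput-oriented factuality check

for request in [(500_000, 120), (5_000, 2), (50, 1)]:
    print(request, "->", choose_verification(*request))
```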