Today I actually got some work done, but reading papers felt like a chore, so I asked GPT to find papers on < this topic~ >. If there aren't any, it should just say so clearly, right? Instead it keeps inventing titles that don't exist, haha. And they're exactly the vibe I'm after!! They sound plausible!! They really could exist!!

This is the frustrating part: maybe the AI is just mislinking, so I still have to search Google Scholar or Sci-Hub anyway, and if nothing turns up I have to check whether the references are real, look up each journal's impact factor (IF), and if my idea overlaps too much with existing work, figure out what distinct identity it could still have, or change my topic altogether ;;; Then when I finally do search? Every single one was a fabrication that doesn't exist!!

Moments like this are when I really feel the need for @Mira_Network's AI hallucination-reduction technology. Mira Network uses consensus-based multi-model verification to lower hallucination rates, and a blockchain-based verification layer to ensure reliability and transparency. In a similar direction, @AlloraNetwork is building a zkML-based decentralized machine intelligence network to rein in AI's 'false confidence'. Its evaluation structure is context-aware and based on prediction loss, so the models score one another and the network self-improves. Through a structure where AI verifies AI, both share the same goal of minimizing misinformation and fictional answers (hallucinations), and I believe this direction is how we solve the real problems we face with AI right now.

This post was written after doing research on the crypto research platform @Surf_Copilot. The video was produced with @Everlyn_ai. @auz2or @subijjjang @iinging747
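
P.S. For anyone wondering what "consensus-based multi-model verification" actually looks like, here's a minimal toy sketch in Python. Everything in it (the MODELS table, the quorum threshold, the normalization) is my own illustrative assumption, not Mira Network's actual protocol or API; the point is just that an answer is accepted only when independent models agree, so a lone plausible-sounding paper title gets flagged instead of trusted.

```python
import collections

# Hypothetical stand-in models; in reality each would call a different LLM API.
# All names here (MODELS, consensus_verify, the quorum value) are illustrative
# assumptions, not Mira Network's actual interface.
MODELS = {
    "model_a": lambda prompt: "No such paper exists.",
    "model_b": lambda prompt: "No such paper exists.",
    "model_c": lambda prompt: "'Deep Vibes in Scholarly Search' (2021)",  # the hallucinator
}

def normalize(answer: str) -> str:
    """Crude normalization so near-identical answers compare equal."""
    return answer.lower().strip().rstrip(".")

def consensus_verify(prompt: str, quorum: float = 0.66) -> dict:
    """Ask every model the same question; accept the majority answer only
    if a quorum of models independently agree, otherwise flag as unverified."""
    answers = [normalize(ask(prompt)) for ask in MODELS.values()]
    best, votes = collections.Counter(answers).most_common(1)[0]
    if votes / len(answers) >= quorum:
        return {"verified": True, "answer": best, "votes": votes}
    return {"verified": False, "answer": None, "votes": votes}

print(consensus_verify("List papers on < this topic~ >."))
# -> {'verified': True, 'answer': 'no such paper exists', 'votes': 2}
```

A real system would compare answers by semantic similarity rather than exact strings, but even this toy version would have caught my fake paper titles: one model's invention can't reach quorum on its own.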
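
And here's an equally hypothetical sketch of the prediction-loss idea behind Allora-style peer evaluation: weight each model's vote by how well its past answers scored, so chronically overconfident models gradually lose influence. The names and loss numbers below are made up for illustration, not taken from the Allora protocol.

```python
import math

# Hypothetical history of per-model prediction losses (lower is better).
loss_history = {
    "model_a": [0.10, 0.12, 0.09],
    "model_b": [0.11, 0.10, 0.13],
    "model_c": [0.90, 0.85, 0.95],  # the chronically hallucinating one
}

def weights_from_losses(history: dict) -> dict:
    """Softmax over negative mean loss: low-loss models get more voting power."""
    mean_loss = {m: sum(losses) / len(losses) for m, losses in history.items()}
    exps = {m: math.exp(-loss) for m, loss in mean_loss.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

print(weights_from_losses(loss_history))
# model_a and model_b each end up with roughly twice the voting weight of
# model_c, so its invented paper titles carry far less weight in a vote.
```

Plug these weights into the quorum vote from the first sketch and you get a weighted consensus, which is the "AI verifies AI" loop in miniature: models that keep getting things wrong slowly talk themselves out of the conversation.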