【Breaking News Summary】 The historic trial "Getty Images vs Stability AI" over AI image generation has finally reached a conclusion. How far does copyright reach into what an AI learns? Creators and technologists around the world have been watching closely. If you're short on time, just read the ⭐️ points.

⭐️ For the first time, a court examined whether AI training falls under copyright. In January 2023, the major photo agency Getty Images sued AI company Stability AI in the UK, alleging that its photos had been used for AI training without permission. The central question was whether AI training constitutes copying under copyright law, and the inner workings of an AI model were examined in court for the first time.

⭐️ On November 4, 2025, Stability AI won almost entirely. Justice Joanna Smith of the High Court in London ruled that Stable Diffusion's internal data (the model weights) does not store the copyrighted works: the weights are merely numerical values obtained through training and do not contain the works themselves. In other words, the court held that the AI model itself is not a "copy."

⭐️ The court also grasped what I have always called "unintended memory." This phenomenon, where an AI accidentally outputs images or features it saw during training, was discussed as a technical concern (a rough sketch of how it is typically probed appears at the end of this post). The ruling held that the model does not retain the data, while acknowledging that similar outputs can occur, and the court distinguished this mechanism from infringement. It feels like the conclusion I have been explaining all along.

⭐️ Trademarks were the one exception. Some images generated by Stable Diffusion were confirmed to contain Getty's watermark, which amounted to trademark infringement. Even though it was unintentional, the moment the watermark appeared it was not acceptable. However, Stability AI had already fixed the issue, and the damages were minimal.

⭐️ No injunction was issued, so the model remains available. Getty sought to halt distribution of the model, but the court rejected this: since no copyright infringement was found, there was no ground for stopping the release. Stable Diffusion will therefore continue to be distributed and used.

⭐️ A ruling that shows the balance between AI's freedom and responsibility. The trial formally recognized the principle that an AI model's internal data is not a copy of copyrighted works. On the other hand, if trademarks or existing works appear directly in the output, liability arises. In other words, building an AI model is itself legal, but caution is needed in how it is used. I believe this ruling offers a realistic answer that balances the development of AI with the rights of creators. Attention will likely turn next to similar lawsuits in North America. ...
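
For readers curious about the technical side of "unintended memory" (memorization), here is a minimal, hypothetical sketch. It is not taken from the ruling or from either party's methodology: it assumes you already have feature embeddings for one generated image and for the training images, and it simply flags training items whose embeddings are nearly identical to the generation's by cosine similarity. The function name flag_possible_memorization and the 0.95 threshold are illustrative assumptions, not an official or legal test.

```python
# Minimal sketch (illustrative only): probe "unintended memory" by checking
# whether a generated image's embedding is a near-duplicate of any training
# image's embedding. Random vectors stand in for real perceptual embeddings.

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def flag_possible_memorization(gen_features, train_features, threshold=0.95):
    """Return indices of training items whose features are nearly identical
    to the generated image's features (a heuristic, not a legal standard)."""
    return [
        i for i, t in enumerate(train_features)
        if cosine_similarity(gen_features, t) >= threshold
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in "embeddings" for 1000 training images.
    train = [rng.normal(size=512) for _ in range(1000)]
    # A generation that closely reproduces training item 42, plus tiny noise.
    gen = train[42] + rng.normal(scale=0.01, size=512)
    print(flag_possible_memorization(gen, train))  # -> [42]
```

In practice, researchers would use embeddings from a real image encoder rather than random vectors; the point of the sketch is only that a near-duplicate output can be detected even though the model weights themselves store no image files.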