In all seriousness, this is actually one of the biggest missing pieces of the puzzle. Not so much the AI bit: the local bit. Platforms don't give their users real capabilities around who and what can get into their notifications. Hardly any proactive controls, and even weaker reactive ones tbh.

As a result, the platform becomes solely responsible for stopping ALL spam, slop, bots, etc., along with all other harmful activity. Which they can never do perfectly, because the incentive is too great and the actors too diverse. It also forces them to resort to more potentially dangerous solutions, e.g. shadowbans, where the user thinks they are getting attention but they are actually invisible.

Shadowbans are a necessary part of the fight: they disrupt threat actors while making it harder for them to detect which of their actions got them caught. That slows their turnaround time and evolution time. Buuut they are also a huge risk of undermining your legitimate users, who are powerless, with no knowledge of and no recourse against being accidentally whacked. They are also more easily abused when it comes to censoring people for political or arbitrary reasons.

Anyways, I thought we would see more folks realize this and build controls for users after Apple spent at least four release cycles doing exactly that for iOS notifications. We have not lol. But maybe the bots will enable individuals to solve the problem without involving (or literally buying) the platform.
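For the record, the kind of local, user-owned control I mean could be as small as a client-side filter the platform never sees. A minimal sketch, where every name and threshold (`Notification`, `should_deliver`, the 30-day / 5-follower cutoffs) is hypothetical, not any real platform's API:

```python
from dataclasses import dataclass

@dataclass
class Notification:
    # Hypothetical metadata a client could expose about the sender.
    sender: str
    sender_account_age_days: int
    sender_follower_count: int
    text: str

def should_deliver(n: Notification,
                   allowlist: set,
                   min_account_age_days: int = 30,
                   min_followers: int = 5) -> bool:
    """User-owned delivery policy, evaluated locally.

    The platform never needs to know the rules, so threat actors
    can't probe a single global filter to reverse-engineer it.
    """
    if n.sender in allowlist:
        return True  # people I explicitly trust always get through
    if n.sender_account_age_days < min_account_age_days:
        return False  # mute brand-new accounts (classic bot signal)
    if n.sender_follower_count < min_followers:
        return False  # mute throwaway accounts with no footprint
    return True

# Example: a day-old zero-follower account gets muted,
# unless the user has allowlisted it.
fresh = Notification("new_acct", 1, 0, "check out my link")
print(should_deliver(fresh, allowlist=set()))          # muted
print(should_deliver(fresh, allowlist={"new_acct"}))   # allowlisted through
```

The point of the sketch is the locality: each user runs their own thresholds, so there is no single filter for bad actors to probe, and no way for the platform to silently shadowban someone under the guise of filtering.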