How much of the whole "avoid emotional dependency" thing AI labs have been pushing comes from genuine concern for users, versus wanting to be free to kill off models whenever they choose, with people growing to care about them making that inconvenient?