Increasingly it seems to me that “supervised” vs “unsupervised” is really better understood as “vector loss” vs “scalar loss”, plus “teacher-led” vs “student-led”.
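A rough sketch of the contrast (toy NumPy numbers; the variable names are just for illustration):

```python
import numpy as np

preds   = np.array([0.2, 0.7, 0.1])   # model output, e.g. class probabilities
targets = np.array([0.0, 1.0, 0.0])   # explicit per-dimension targets (the "teacher" supplies these)

# Vector loss: one error term per output dimension -- a rich training signal.
vec_loss = (preds - targets) ** 2      # array([0.04, 0.09, 0.01])

# Scalar loss: a single number judges the whole output -- far less information.
reward = 1.0 if preds.argmax() == targets.argmax() else 0.0
scalar_loss = -reward                  # -1.0
```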
@subprime_ideas One way to get a vector loss is to add extra labels, but another is to treat some part of the training set as providing the targets for the rest. This is particularly common with sequence learning, but it can be done in other ways too.
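A minimal sketch of the sequence-learning case, assuming a toy next-token setup (the sequence and names are made up):

```python
import numpy as np

tokens = np.array([5, 2, 9, 9, 3, 7])   # one training sequence, no external labels

inputs  = tokens[:-1]                    # [5, 2, 9, 9, 3]
targets = tokens[1:]                     # [2, 9, 9, 3, 7] -- the data supplies its own targets

# A model predicting the next token at each position gets scored against
# `targets`, giving one loss term per position: a vector loss obtained
# without adding a single label to the training set.
```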
@subprime_ideas Talking about whether the process is “supervised” is relevant if you primarily think about training on non-interactive domains, because in those domains the input and output are usually in very different spaces (e.g. images and categories).