I had an idea about continuous learning x interpretability over the weekend and would like feedback on it from serious ML people. Would anyone care to take a look at my 1-2 paragraph summary of the idea?