is talking to a model fine-tuned on your own thoughts (say, for investment judgement) a sign of madness? given that this kind of iteration can help models, maybe it helps humans too: "silicon ducking" instead of rubber ducking