Quite honestly, given that we have absolutely no clue about the mechanistic basis of conscious experience, it's not at all absurd to think that large language models have some chance of being conscious, albeit perhaps only instantaneously.