Fascinating paper from @ritualnet that addresses the data-privacy issue with third-party LLM inference by splitting the prompt among nodes and performing sharded LLM inference:
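The core idea of prompt sharding can be illustrated with a toy sketch (this is not the paper's actual protocol; the splitting scheme and node count here are illustrative assumptions): each node receives only a contiguous slice of the prompt, so no single node ever holds the full text.

```python
def shard_prompt(prompt: str, n_nodes: int) -> list[str]:
    """Split a prompt into contiguous shards, one per node.

    Toy illustration: real sharded-inference schemes operate on token
    IDs and attention state, not whitespace-separated words.
    """
    tokens = prompt.split()
    size = -(-len(tokens) // n_nodes)  # ceiling division
    return [" ".join(tokens[i:i + size]) for i in range(0, len(tokens), size)]

shards = shard_prompt("the quick brown fox jumps over the lazy dog", 3)
# Each "node" holds only its own shard; the full prompt is never
# reassembled in any single place.
assert len(shards) == 3
assert shards[0] == "the quick brown"
```

The privacy argument rests on each node seeing too little of the prompt to reconstruct its meaning, while the inference computation is coordinated across shards.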