OK, here is Round 2 of the Super Big Brained Optimizer Prompt. This post can fortunately be much shorter than the quoted post, because the entire workflow after the initial prompt is identical to Round 1, just replacing "1" with "2" in the filenames. Here is the prompt:

---

First, read ALL of the AGENTS.md file and the README.md file super carefully and understand ALL of both! Then use your code investigation agent mode to fully understand the code, the technical architecture, and the purpose of the project.

Then, once you've done an extremely thorough and meticulous job at all that and deeply understood the entire existing system (what it does, its purpose, how it is implemented, and how all the pieces connect with each other), I need you to hyper-intensively investigate, study, and ruminate on this question as it pertains to this project: are there any other gross inefficiencies in the core system? That is, places in the code base where:

1) changes would actually move the needle in terms of overall latency/responsiveness and throughput;

2) our changes would be provably isomorphic in terms of functionality, so that we would know for sure they wouldn't change the resulting outputs given the same inputs; and

3) you have a clear vision of an obviously better approach in terms of algorithms or data structures. For this, you can include in your contemplations lesser-known data structures and more esoteric/sophisticated/mathematical algorithms, as well as ways to recast the problem(s) so that another paradigm is exposed, such as the list shown below.

(Note: before proposing any optimization, establish baseline metrics (p50/p95/p99 latency, throughput, peak memory) and capture CPU/allocation/I/O profiles to identify actual hotspots.)

- convex optimization (reformulation yields global optimum guarantees)
- submodular optimization (greedy gives constant-factor approximation)
- semiring generalization (unifies shortest path, transitive closure, dataflow, parsing)
- matroid structure recognition (greedy is provably optimal)
- linear algebra over GF(2) (XOR systems, toggle problems, error correction)
- reduction to 2-SAT (configuration validity, implication graphs)
- reduction to min-cost max-flow (assignment, scheduling, resource allocation)
- bipartite matching recognition (Hungarian, Hopcroft-Karp)
- DP as shortest path in implicit DAG (enables priority-queue DP, Dijkstra-style optimization)
- convex hull trick / Li Chao trees (O(n²) DP → O(n log n))
- Knuth's optimization / divide-and-conquer DP
- Hirschberg's space reduction (when applicable beyond alignment)
- FFT/NTT for convolution (polynomial multiplication, sequence correlation)
- matrix exponentiation for linear recurrences
- Möbius transform / subset convolution
- persistent/immutable data structures (versioning, rollback, speculative execution)...
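To make the GF(2) item on the list concrete: many toggle/switch problems reduce to solving a linear system over GF(2), where addition is XOR, and bitmask Gaussian elimination solves it in roughly O(n·m/64) word operations. A minimal sketch, not from any particular project (the function name and encoding are my own):

```python
def solve_xor_system(rows, rhs, n):
    """Gaussian elimination over GF(2).

    rows[i] is an n-bit mask of coefficients (bit j set means variable j
    appears in equation i); rhs[i] is 0 or 1. Returns one solution as an
    n-bit mask (free variables set to 0), or None if inconsistent."""
    # Augment each row: coefficient bits shifted up, rhs stored in bit 0.
    aug = [(rows[i] << 1) | rhs[i] for i in range(len(rows))]
    pivots = []  # (column, row) pairs
    r = 0
    for col in range(n - 1, -1, -1):
        # Find a row at or below r with a 1 in this column.
        pivot = next((i for i in range(r, len(aug)) if (aug[i] >> (col + 1)) & 1), None)
        if pivot is None:
            continue  # no pivot: this variable is free
        aug[r], aug[pivot] = aug[pivot], aug[r]
        # XOR the pivot row into every other row that has this column set.
        for i in range(len(aug)):
            if i != r and (aug[i] >> (col + 1)) & 1:
                aug[i] ^= aug[r]
        pivots.append((col, r))
        r += 1
    # A leftover row of all-zero coefficients with rhs 1 means 0 = 1.
    if any(aug[i] == 1 for i in range(r, len(aug))):
        return None
    x = 0
    for col, row in pivots:
        if aug[row] & 1:  # with free variables at 0, pivot value equals rhs
            x |= 1 << col
    return x
```

For example, the system {x0 XOR x1 = 1, x1 = 1} is `solve_xor_system([0b11, 0b10], [1, 1], 2)`, which yields the mask `0b10` (x1 = 1, x0 = 0).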
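The "DP as shortest path in implicit DAG" item deserves a sketch too: once DP states are nodes and transitions are weighted edges, you can run Dijkstra lazily over states generated on the fly, never materializing the full table. This is an illustrative skeleton under my own naming, not any project's actual code:

```python
import heapq

def dijkstra_dp(start, target, neighbors):
    """Priority-queue DP: Dijkstra over an implicitly defined state graph.

    `neighbors(state)` lazily yields (next_state, edge_cost) pairs.
    Returns the minimum total cost from start to target, or None."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d  # first pop of target is optimal (non-negative costs)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in neighbors(u):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

# Toy usage: fewest (+1, *2) steps to turn 1 into 10, states capped at 20.
def steps(x):
    return [(y, 1) for y in (x + 1, x * 2) if y <= 20]
```

Here `dijkstra_dp(1, 10, steps)` returns 4 (1 → 2 → 4 → 5 → 10). The point of the recast is that the same skeleton handles non-uniform edge costs, where plain BFS-order DP would be wrong.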
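And one for matrix exponentiation over linear recurrences: any fixed-order linear recurrence can be advanced n steps in O(log n) matrix multiplications instead of O(n) iterations. A self-contained sketch using Fibonacci as the standard example (helper names are mine):

```python
def mat_mult(A, B, mod):
    """Multiply two square matrices modulo `mod`."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % mod
             for j in range(n)] for i in range(n)]

def mat_pow(M, e, mod):
    """Raise square matrix M to power e by binary exponentiation."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mult(R, M, mod)
        M = mat_mult(M, M, mod)
        e >>= 1
    return R

def fib(n, mod=10**9 + 7):
    """n-th Fibonacci number mod `mod`, via [[1,1],[1,0]]^n."""
    if n == 0:
        return 0
    return mat_pow([[1, 1], [1, 0]], n, mod)[0][1]
```

The same companion-matrix trick applies to any recurrence of the form a(n) = c1·a(n-1) + ... + ck·a(n-k), which is why it earns a spot on a list of "provably isomorphic" rewrites: the outputs are identical by construction.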