Neuromorphic photonic computing meets analog memory: eliminating the data conversion bottleneck
Most AI hardware runs on electronic chips where computations are carried out by moving electrons through transistors. But there is a growing alternative: performing neural network operations using light. In photonic processors, matrix multiplications are executed by encoding data as optical signals and passing them through arrays of tunable microring resonators, each acting as a synaptic weight. The physics of light enables massive parallelism and ultralow-latency propagation, promising speeds and energy efficiencies that electronics alone cannot match.
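The weight-bank picture above can be sketched in a few lines. This is an illustrative abstraction, not the paper's implementation: each weight is modeled as a microring transmission value with finite analog precision (the ~5.6-bit figure reported later), and the row summation a photodetector would perform is just a matrix-vector product.

```python
import numpy as np

# Illustrative sketch of a photonic weight bank: inputs are encoded as
# optical power levels, each weight W[i, j] as the transmission of a
# tunable microring, and photodetectors sum each row. The only physics
# kept here is the finite analog precision of the stored weights.

def photonic_mvm(W, x, bits=5.6):
    """Matrix-vector product with weights quantized to an assumed
    analog precision (~5.6 effective bits)."""
    levels = 2 ** bits
    Wq = np.round(W * levels) / levels  # finite modulator precision
    return Wq @ x

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 4))
x = rng.uniform(0, 1, size=4)
print(photonic_mvm(W, x))
```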
Yet there's a surprisingly mundane bottleneck: every synaptic weight needs a dedicated digital-to-analog converter (DAC) to continuously drive the modulator that encodes it. For a weight matrix of size n×n, that's n² DACs running nonstop—not to compute, but just to hold voltages in place. The energy cost of shuttling data between digital memory and analog compute risks negating the very advantages that make photonics attractive.
Sean Lam and coauthors demonstrate an elegant fix: place a tiny capacitor directly on each optical modulator. Once charged, it holds the weight without needing a DAC to stay active. DACs are then shared along columns and activated only when weights need updating—scaling as n instead of n². The concept, called dynamic electro-optic analog memory (DEOAM), is fabricated on a monolithic silicon photonic chip in a standard 90 nm foundry process, achieving over 26× power savings compared to conventional designs.
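The column-sharing scheme reduces to one line of arithmetic. The counts below are illustrative; the measured >26x power saving also reflects duty cycling, since the shared DACs are active only during refresh rather than continuously.

```python
# A conventional design holds n*n DACs on continuously; DEOAM needs
# only n DACs, one shared per column, woken up to rewrite weights.

def dacs_conventional(n: int) -> int:
    return n * n          # one dedicated, always-on DAC per weight

def dacs_deoam(n: int) -> int:
    return n              # one DAC shared along each column

for n in (8, 64, 512):
    print(n, dacs_conventional(n), dacs_deoam(n))
```

The hardware saving therefore grows linearly with matrix size: at n = 512 the DAC count drops from 262,144 to 512.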
The experimental numbers frame the tradeoffs clearly. Write time is ~40–50 ns, retention time ~0.83 ms, energy per write ~56 pJ, and effective precision around 5.6 bits. Retention depends on incident optical power—more light means more leakage—creating a direct tension between signal quality and memory lifetime.
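A back-of-envelope calculation with these measured figures shows why duty-cycled refresh is so cheap: each held weight draws only its write energy once per retention period, and the shared DAC is idle almost all of the time.

```python
# Back-of-envelope using the measured figures quoted above; the
# derived quantities are rough estimates, not numbers from the paper.

e_write = 56e-12      # J per write (measured)
retention = 0.83e-3   # s before a weight must be refreshed (measured)
write_time = 50e-9    # s per write (measured, ~40-50 ns)

p_refresh = e_write / retention   # average hold power per weight
duty = write_time / retention     # fraction of time the DAC is active

print(f"avg refresh power per weight: {p_refresh * 1e9:.1f} nW")
print(f"DAC duty cycle: {duty:.2e}")
```

That works out to tens of nanowatts per weight and a duty cycle on the order of 10^-5, versus a DAC that would otherwise drive the modulator 100% of the time.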
To understand what these specs mean for real workloads, the authors emulate a three-layer neural network on MNIST. A retention-to-network-latency ratio of just 100 suffices to keep inference accuracy above 90%. Networks trained with the leakage included become substantially more robust—a vivid example of hardware-aware training paying off.
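The role of the retention-to-latency ratio can be illustrated with a toy model (this is not the paper's MNIST experiment): apply a uniform decay exp(-t_net/t_ret) to the stored weights over one network latency and watch the output error grow as the ratio shrinks. Real leakage is per-device and non-uniform, so this is deliberately simplistic.

```python
import numpy as np

# Toy illustration: stored weights decay during one network pass.
# ratio = retention time / network latency; larger means weights are
# effectively frozen, smaller means they droop before the pass ends.

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16))
x = rng.normal(size=16)
y_ref = W @ x

for ratio in (1000, 100, 10, 1):
    decay = np.exp(-1.0 / ratio)   # decay over one network latency
    err = np.linalg.norm((W * decay) @ x - y_ref) / np.linalg.norm(y_ref)
    print(f"ratio={ratio:5d}  relative error={err:.3f}")
```

With uniform decay the relative error is exactly 1 - exp(-1/ratio): about 1% at a ratio of 100, consistent with the authors' finding that a ratio of ~100 is already enough to hold accuracy, especially once training accounts for the leakage.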
The same principle driving electronic in-memory computing—that moving data costs more energy than computing with it—applies here in a hybrid electro-optic domain. Most photonic processors today can only do inference with weights trained offline on GPUs. DEOAM opens a path toward on-chip, online training where the network adapts continuously to new data and hardware drift.
Paper: