ScatterAI
Issue #12 · March 26, 2026

DoRA's memory wall breaks at high rank: a systems fix, not a math fix

Research

03 [Multimodal] DoRA’s memory wall breaks at high rank: a systems fix, not a math fix

DoRA (Weight-Decomposed Low-Rank Adaptation) separates a weight matrix into magnitude and direction components, giving it an edge over standard LoRA (Low-Rank Adaptation) in fine-tuning quality. The catch: the row-wise norm DoRA requires forces every framework to materialize the full dense product BA, a matrix of shape [d_out × d_in]. At d_in = 8,192 and rank r = 384, that single norm computation consumes ~512 MB of transient working memory in bf16 per adapted module. Multiplied across hundreds of adapted modules, even with gradient checkpointing, the spike makes a single-GPU fine-tuning run infeasible before the actual training logic is ever reached.
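The article describes this naive path in prose only; a minimal PyTorch sketch of what frameworks currently do makes the spike concrete. Function names here are illustrative, and d_out = 32,768 is an assumption (the article gives d_in and r but not d_out; that value is what reproduces the ~512 MB figure, consistent with an MLP-style projection):

    import torch

    def dora_row_norms_naive(W, A, B, s):
        # W: [d_out, d_in] frozen base weight; B: [d_out, r]; A: [r, d_in]
        # The dense update B @ A is the transient that breaks the memory budget.
        delta = B @ A                         # [d_out, d_in], materialized in full
        return (W + s * delta).norm(dim=1)    # row-wise norms, [d_out]

    # Reproducing the ~512 MB bf16 figure (d_out assumed, not stated in the article):
    d_out, d_in, r = 32768, 8192, 384
    print(d_out * d_in * 2 / 2**20)           # 512.0 MiB for the B @ A intermediate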

The fix is algebraic rather than architectural. The squared row-wise norm of W + sBA decomposes into three terms: a base term ||W_i||², a cross term 2s W_i (BA)_iᵀ, and a Gram term s² ||(BA)_i||², each computable through O(d_out × r + r²) intermediates (WAᵀ and AAᵀ) rather than the O(d_out × d_in) dense product. The dense materialization disappears entirely. Fused Triton kernels (GPU kernels written in the Triton language, eliminating redundant memory reads and writes) collapse the four-kernel DoRA forward pass into a single operation, cutting memory movement and kernel-launch overhead at once. The two contributions are independent: the factored norm eliminates the memory spike; the fused kernels reduce latency. Together they make high-rank DoRA, the regime where its quality advantage over LoRA is most pronounced, viable on standard single-GPU setups.
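The paper's code is not reproduced here, but the identity is mechanical enough to sketch. In a PyTorch version under the shapes above, the largest intermediates are WAᵀ at [d_out, r] and the Gram matrix AAᵀ at [r, r]; the function name is illustrative:

    import torch

    def dora_row_norms_factored(W, A, B, s):
        # ||W_i + s(BA)_i||^2 = ||W_i||^2 + 2s (W A^T)_i . B_i + s^2 B_i (A A^T) B_i^T
        base = (W * W).sum(dim=1)                     # [d_out]
        P = W @ A.T                                   # [d_out, r], largest intermediate
        cross = 2.0 * s * (P * B).sum(dim=1)          # [d_out]
        G = A @ A.T                                   # [r, r] Gram matrix
        gram = (s * s) * ((B @ G) * B).sum(dim=1)     # [d_out]
        return (base + cross + gram).clamp_min(0).sqrt()

    # Check against the dense computation on toy shapes:
    W, A, B = torch.randn(64, 128), torch.randn(8, 128), torch.randn(64, 8)
    ref = (W + 0.5 * (B @ A)).norm(dim=1)
    assert torch.allclose(dora_row_norms_factored(W, A, B, 0.5), ref, atol=1e-4)

The fusion side is easy to picture once the matmuls (WAᵀ, AAᵀ, BG) are done: the remaining elementwise multiplies, row reductions, and the final add and sqrt, which would otherwise launch as separate kernels, collapse into one pass over the rows. The paper's actual kernels are not shown; a hypothetical minimal Triton sketch of that last step, assuming contiguous row-major inputs:

    import triton
    import triton.language as tl

    @triton.jit
    def fused_row_norm(P_ptr, B_ptr, BG_ptr, base_ptr, out_ptr, s,
                       R: tl.constexpr, BLOCK_R: tl.constexpr):
        # One program per row: fuse the cross term, Gram term, add, and sqrt.
        row = tl.program_id(0)
        offs = tl.arange(0, BLOCK_R)
        mask = offs < R
        p = tl.load(P_ptr + row * R + offs, mask=mask, other=0.0)    # (W A^T)_i
        b = tl.load(B_ptr + row * R + offs, mask=mask, other=0.0)    # B_i
        bg = tl.load(BG_ptr + row * R + offs, mask=mask, other=0.0)  # (B G)_i
        cross = 2.0 * s * tl.sum(p * b, axis=0)
        gram = s * s * tl.sum(bg * b, axis=0)
        base = tl.load(base_ptr + row)                               # ||W_i||^2
        tl.store(out_ptr + row, tl.sqrt(base + cross + gram))

Launched with one program per row, e.g. fused_row_norm[(d_out,)](P, B, B @ G, base, out, s, R=r, BLOCK_R=512), where BLOCK_R is the next power of two at or above r.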

The limitation is scope. This is a systems paper: it does not run downstream fine-tuning comparisons at high rank to show whether DoRA's quality advantage over LoRA holds as r scales toward 384, so that case still rests on prior DoRA results. The Triton kernels are also hardware-specific; teams not on NVIDIA hardware will need to port them or approximate the fusion. For practitioners already using DoRA at moderate ranks who have hit the memory wall when pushing rank higher, this is a direct unblock; it is not a reason to switch from LoRA unless DoRA's quality gains were already the draw.

Key takeaways:

- The factored norm replaces the O(d_out × d_in) dense BA intermediate with O(d_out × r + r²) terms, eliminating the per-module memory spike that made high-rank DoRA infeasible on a single GPU.
- Fused Triton kernels collapse the four-kernel DoRA forward pass into one, cutting latency; this is independent of the memory fix.
- Fine-tuning quality at high rank is not re-validated; the case for DoRA over LoRA still rests on prior results.
- The kernels are NVIDIA-specific, so teams on other hardware need a port or an approximation.

Source: Scaling DoRA: High-Rank Adaptation via Factored Norms and Fused Kernels
