News Score: Score the News, Sort the News, Rewrite the Headlines

FairyFuse: Multiplication-Free LLM Inference on CPUs via Fused Ternary Kernels

Abstract: Large language models are increasingly deployed on CPU-only platforms where memory bandwidth is the primary bottleneck for autoregressive generation. Weight quantization to four bits or below reduces memory pressure, yet existing systems still dequantize weights and perform floating-point multiplications, limiting the achievable gains. Ternary weights in {-1, 0, +1} provide a more efficient alternative, replacing multiplications with conditional additions, s...
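The core idea in the abstract — that ternary weights turn multiplications into conditional additions — can be sketched as follows. This is an illustrative example only, not the paper's fused kernels: the function name and data layout are hypothetical, and a real implementation would operate on packed ternary weights with SIMD instructions.

```python
def ternary_dot(weights, activations):
    """Multiplication-free dot product for ternary weights in {-1, 0, +1}.

    Instead of computing w * x for every pair, the weight only selects
    whether the activation is added, subtracted, or skipped.
    """
    acc = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            acc += x      # +1: add the activation
        elif w == -1:
            acc -= x      # -1: subtract the activation
        # 0: skip entirely; no multiply, no accumulation
    return acc

# Example: 1*2.0 + 0*5.0 + (-1)*3.0 + 1*4.0, computed without multiplies
print(ternary_dot([1, 0, -1, 1], [2.0, 5.0, 3.0, 4.0]))  # 3.0
```

Since each weight needs at most two bits of storage, this also directly addresses the memory-bandwidth bottleneck the abstract describes.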

Read more at arxiv.org
