Addition is All You Need for Energy-Efficient Language Models

Hongyin Luo & Wei Sun
BitEnergy AI, Inc.
Cambridge, MA 02142, USA
{hongyin,wei}@bitenergy.ai

Abstract

Large neural networks spend most computation on floating point tensor multiplications. In this work, we find that a floating point multiplier can be approximated by one integer adder with high precision. We propose the linear-complexity multiplication ($\mathcal{L}$-Mul) algorithm, which approximates floating point multiplication with integer addition operations. Compared to 8-bit floating point multiplication, the proposed method achieves higher precision while consuming significantly less bit-level computation. Since multiplying floating point numbers requires substantially more energy than integer addition, applying the $\mathcal{L}$-Mul operation in tensor processing hardware can potentially reduce the energy cost of element-wise floating point tensor multiplications by 95% and of dot products by 80%. We calculated the theoretical error expectation of $\mathcal{L}$-Mul, and evaluated the algorithm on a wide range of textual, visual, and symbolic tasks, including natural language understanding, structural reasoning, mathematics, and commonsense question answering. Our numerical analysis experiments agree with the theoretical error estimation, which indicates that $\mathcal{L}$-Mul with 4-bit mantissa achieves precision comparable to float8_e4m3 multiplication, and $\mathcal{L}$-Mul with 3-bit mantissa outperforms float8_e5m2. Evaluation results on popular benchmarks show that directly applying $\mathcal{L}$-Mul to the attention mechanism is almost lossless. We further show that replacing all floating point multiplications with 3-bit mantissa $\mathcal{L}$-Mul in a transformer model achieves precision equivalent to using float8_e4m3 as the accumulation precision in both fine-tuning and inference.

1 Introduction

Modern artificial intelligence (AI) systems are significant energy consumers. Because of the large-scale computation needed for neural network inference, AI applications based on such models consume a considerable amount of electricity. Reportedly, the average electricity consumption of the ChatGPT service in early 2023 was 564 MWh per day, equivalent to the total daily electricity usage of 18,000 families in the United States (https://www.eia.gov/tools/faqs/faq.php?id=97). It is estimated that Google's AI services could consume as much electricity as Ireland (29.3 TWh per year) in the worst-case scenario (de Vries, 2023).

Reducing the amount of computation needed by neural networks is the key to reducing both the energy consumption and the inference latency of large-scale AI models. Neural networks, especially large language models (LLMs) (Radford et al., 2019; Brown, 2020; Achiam et al., 2023; Touvron et al., 2023; Team et al., 2023), contain a large number of floating point parameters involved in element-wise and matrix multiplication computations. In transformer-based LLMs (Vaswani, 2017), the attention mechanism is a major bottleneck that limits computation efficiency. Given an input context of $N$ tokens, the complexity of the standard attention computation is $O(N^2)$, involving multiplications of high-dimensional tensors. Besides attention, there is also a large amount of element-wise multiplication and linear transformation computation. In this work, we propose the linear-complexity multiplication ($\mathcal{L}$-Mul) algorithm, which approximates floating point multiplication with integer addition operations. The algorithm can be integrated into existing models at various levels, such as replacing the multiplications in the attention mechanism or substituting all matrix and element-wise multiplications.

The proposed $\mathcal{L}$-Mul method leads to significantly reduced energy consumption for both model training and inference. In modern computing hardware, multiplication between floating point numbers consumes significantly more energy than addition (Horowitz, 2014). Specifically, multiplying two 32-bit floating point numbers (fp32) costs four times as much energy as adding two fp32 numbers, and 37 times as much as adding two 32-bit integers (int32). The rough energy costs of various operations are shown in Table 1. In PyTorch (Paszke et al., 2019), the default precision for accumulating tensor multiplication results is fp32. While I/O and control operations are not considered, approximating fp32 multiplications with int32 additions consumes only $1/37 \approx 2.7\%$ of the energy. When the accumulation precision is reduced to fp16, integer addition consumes approximately 4.7% of the energy required for floating-point multiplication.

Table 1: Rough energy cost of arithmetic operations (Horowitz, 2014).

| Operation      | Integer 8-bit | Integer 32-bit | Floating Point 16-bit | Floating Point 32-bit |
|----------------|---------------|----------------|-----------------------|-----------------------|
| Addition       | 0.03 pJ       | 0.1 pJ         | 0.4 pJ                | 0.9 pJ                |
| Multiplication | 0.2 pJ        | 3.1 pJ         | 1.1 pJ                | 3.7 pJ                |

We evaluate the numerical precision of the $\mathcal{L}$-Mul algorithm on transformer-based language models over a wide range of language and vision tasks. Experiments with full-precision model weights show that replacing standard multiplication operations with $\mathcal{L}$-Mul in the attention mechanism is almost lossless for transformer-based LLMs. On natural language reasoning tasks, the average performance loss of $\mathcal{L}$-Mul-based attention is 0.07% across commonsense, structured reasoning, and language understanding benchmarks. On vision tasks, $\mathcal{L}$-Mul-based attention gained a 0.12% accuracy improvement on visual question answering, object hallucination, and free-form visual instruction tasks. These results are obtained by directly adapting pretrained LLMs with the standard attention implementation to the new $\mathcal{L}$-Mul-based attention mechanism, without any additional training.

The error estimation and ablation study show that, under the training-free setting, $\mathcal{L}$-Mul with 4-bit mantissa can achieve precision comparable to multiplying float8_e4m3 numbers, and $\mathcal{L}$-Mul with 3-bit mantissa outperforms float8_e5m2 multiplication. We also show that fine-tuning can close the performance gap between $\mathcal{L}$-Mul and standard multiplication. Fine-tuning a model in which all multiplication operations in attention mechanisms, linear transformations, and element-wise products are replaced by 3-bit-mantissa $\mathcal{L}$-Mul yields performance comparable to fine-tuning a standard model with an accumulation precision of float8_e4m3.

In the expansive landscape of AI efficiency research, our approach centers on improving the efficiency of tensor arithmetic algorithms, a direction that is orthogonal yet complementary to prevailing efforts in I/O and control optimization (Jouppi et al., 2017; Choquette et al., 2021; Abts et al., 2022). (Due to the absence of a native implementation, GPUs cannot fully exploit the efficiency of the $\mathcal{L}$-Mul algorithm. We recommend training and hosting $\mathcal{L}$-Mul-based models on devices integrated with specialized architectural designs. Patent pending.) We believe that truly energy- and compute-efficient AI computation will emerge from a holistic integration of optimizations across I/O, control, and arithmetic operations.

2 Method

2.1 Background: Floating-point Numbers and Tensors

Most machine learning models, including neural networks, use floating point (FP) tensors to represent their inputs, outputs, and trainable parameters. Typical choices are 32-bit and 16-bit FP tensors (fp32 and fp16), defined by the IEEE 754 standard and shown in Figure 1.

[Figure 1: IEEE 754 floating point formats for fp32 and fp16.]

Multiplication operations are generally more complicated than additions, and FP operations are more costly than integer ones (Horowitz, 2014). Table 1 shows that multiplying two fp32 numbers consumes 37 times more energy than adding two 32-bit integers. While the complexity of integer addition is $O(n)$, where $n$ is the number of bits used to represent the number, FP multiplication requires an $O(e)$ exponent addition, an $O(m^2)$ mantissa multiplication, and rounding. Here $e$ and $m$ stand for the number of bits in the exponent and mantissa parts of the FP numbers.

Modern LLM training and inference involve a large number of FP calculations in tensor computation. Consider calculating the element-wise and dot products of two 2-D tensors:

\[ Y_1 = A \circ X, \quad Y_2 = A \cdot X^{T}; \qquad A, X \in \mathbb{R}^{(N,k)} \]

Calculating $Y_1$ involves $N \times k$ FP multiplications (Mul). If $A$ and $X$ are both fp32 tensors, $A \circ X$ consumes 37 times more energy than adding two int32 matrices of the same size. Similarly, calculating $Y_2$ involves $(N \times N \times k)$ FP Mul and roughly the same number of FP additions (Add). When $A$ and $X$ are fp32 tensors, each Mul-Add step for two numbers consumes $0.9 + 3.7 = 4.6$ pJ. If we replace the fp32 Mul with an int32 Add, the energy cost becomes $0.1 + 0.9 = 1.0$ pJ, only 21.7% of the original cost. Similarly, if inference is conducted in fp16, replacing fp16 Mul with int16 Add results in a $1 - (0.05 + 0.4)/(1.1 + 0.4) = 70\%$ energy saving.
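As a sanity check, this back-of-the-envelope accounting can be reproduced with a few lines of Python. This is a sketch of the cost model only, not a measurement: the per-operation energies come from Table 1, the 0.05 pJ int16 addition cost is the interpolated value used in the text, and the function names are ours.

```python
# Rough energy model for a dot-product-heavy workload, using the per-operation
# costs from Table 1 (Horowitz, 2014). All values are in picojoules (pJ).
ENERGY_PJ = {
    "int8_add": 0.03, "int16_add": 0.05, "int32_add": 0.1,   # int16 interpolated
    "fp16_add": 0.4, "fp32_add": 0.9,
    "int8_mul": 0.2, "int32_mul": 3.1, "fp16_mul": 1.1, "fp32_mul": 3.7,
}

def mul_add_energy(n_steps: int, mul_op: str, add_op: str) -> float:
    """Energy of n_steps multiply-accumulate steps, ignoring I/O and control."""
    return n_steps * (ENERGY_PJ[mul_op] + ENERGY_PJ[add_op])

N, k = 1024, 4096
steps = N * N * k                      # multiply-add steps in A . X^T with A, X in R^(N, k)

fp32 = mul_add_energy(steps, "fp32_mul", "fp32_add")
lmul = mul_add_energy(steps, "int32_add", "fp32_add")   # fp32 Mul replaced by an int32 Add
print(f"fp32 matmul : {fp32 / 1e9:.1f} mJ")
print(f"L-Mul style : {lmul / 1e9:.1f} mJ ({lmul / fp32:.1%} of fp32)")   # ~21.7%
```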

2.2 Linear-complexity Multiplication ($\mathcal{L}$-Mul)

We propose $\mathcal{L}$-Mul, an FP multiplication algorithm with $O(n)$ complexity, where $n$ is the bit size of its FP operands. Consider two FP numbers $x, y$ whose exponents and fractions are $x_e, y_e$ and $x_m, y_m$ respectively; the vanilla FP Mul result is

\[ \mathrm{Mul}(x,y) = (1 + x_m)\cdot 2^{x_e} \cdot (1 + y_m)\cdot 2^{y_e} = (1 + x_m + y_m + x_m \cdot y_m)\cdot 2^{x_e + y_e} \]

plus an XOR operation ($\oplus$) to decide the sign of the result. Assume $x_m$ and $y_m$ are mantissas of $m$ bits. The $O(m^2)$ mantissa multiplication is the complexity bottleneck of this calculation. We remove this operation and introduce a new multiplication algorithm that processes mantissas with a computational complexity of $O(m)$:

\[ \mathcal{L}\text{-Mul}(x,y) = (1 + x_m + y_m + 2^{-l(m)})\cdot 2^{x_e + y_e}, \qquad l(m) = \begin{cases} m & \text{if } m \leq 3, \\ 3 & \text{if } m = 4, \\ 4 & \text{if } m > 4. \end{cases} \tag{1} \]

The offset exponent $l(m)$ is defined according to the observation shown in Figure 3. In the following sections, we show that (1) the $\mathcal{L}$-Mul operation can be implemented with integer adders, and (2) the algorithm is more accurate and efficient than fp8 multiplication.
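To make Equation (1) concrete, here is a toy numeric check (our own illustration, not the bit-level hardware path described next) that multiplies $x = 1.5 \cdot 2^2$ and $y = 1.25 \cdot 2^1$ and shows how the $x_m \cdot y_m$ cross term is replaced by the constant $2^{-l(m)}$:

```python
def l_mul_value(xm: float, xe: int, ym: float, ye: int, m: int) -> float:
    """Numeric value of Equation (1) for operands (1 + xm) * 2**xe and (1 + ym) * 2**ye."""
    l = m if m <= 3 else (3 if m == 4 else 4)        # offset exponent l(m)
    return (1.0 + xm + ym + 2.0 ** -l) * 2.0 ** (xe + ye)

exact  = (1 + 0.5) * 2**2 * (1 + 0.25) * 2**1        # x * y = 6.0 * 2.5 = 15.0
approx = l_mul_value(0.5, 2, 0.25, 1, m=10)          # 10-bit mantissas -> l(m) = 4
print(exact, approx)   # 15.0 vs (1 + 0.5 + 0.25 + 0.0625) * 8 = 14.5
```

Here the cross term $x_m y_m = 0.125$ is approximated by $2^{-4} = 0.0625$, a relative error of about 3.3% for this particular pair; the analysis in Section 2.3.1 characterizes the expected error.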

The implementation of the algorithm is shown in Figure 2, where we also include the inline PTX assembly code used to simulate the process on Nvidia GPUs.

[Figure 2: Implementation of the $\mathcal{L}$-Mul algorithm, together with the inline PTX assembly used to simulate it on Nvidia GPUs.]

While Equation (1) contains four addition operations, the bit format of FP numbers lets us implement the $\mathcal{L}$-Mul algorithm with a single adder. Since the FP format handles $1 + x_m$ implicitly, we do not have to compute the value of $(1 + \dots)$. The integer addition also automatically propagates the mantissa carry to the exponent: if the mantissa sum is greater than 2, a carry is automatically added to the exponent. This differs from the rounding process in a traditional FP multiplier, where the fraction is explicitly renormalized to the form $1.x$ and the carry is added to the exponent as a separate step. As a result, the $\mathcal{L}$-Mul algorithm is more efficient than traditional FP multiplication because it skips both the mantissa multiplication and the rounding operations.

The construction of the $\mathcal{L}$-Mul result can be expressed by the following equation, where all bit-level calculations are performed as operations between unsigned integers.

\[ \begin{aligned} \mathcal{L}\text{-mul}(x,y)[0] &= x[0] \oplus y[0] \\ \mathcal{L}\text{-mul}(x,y)[1{:}] &= x[1{:}] + y[1{:}] - \text{offset} \end{aligned} \tag{2} \]
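Equation (2) can be emulated in software by reinterpreting the operand bits as unsigned integers. The sketch below is our NumPy approximation for normal (non-zero, finite) float16 operands; it is not the PTX kernel from Figure 2, and the underflow/overflow handling is deliberately crude.

```python
import numpy as np

def l_mul_fp16(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Emulate Equation (2) on float16 arrays with a single integer addition per element.

    Assumes normal (non-zero, finite) fp16 operands: e5m10 layout, exponent bias 15,
    and offset = bias * 2**10 - 2**(10 - l(10)) with l(10) = 4 from Equation (1).
    """
    xb = x.astype(np.float16).view(np.uint16).astype(np.int32)
    yb = y.astype(np.float16).view(np.uint16).astype(np.int32)

    sign = (xb ^ yb) & 0x8000                       # result sign: XOR of the sign bits
    offset = (15 << 10) - (1 << (10 - 4))           # exponent bias minus the 2**-l(m) term
    body = (xb & 0x7FFF) + (yb & 0x7FFF) - offset   # one integer add over exponent+mantissa
    body = np.clip(body, 0, 0x7FFF)                 # crude underflow/overflow handling

    return (sign | body).astype(np.uint16).view(np.float16)

a = np.array([1.5, -2.75, 0.1875], dtype=np.float16)
b = np.array([2.5,  0.375, 4.0  ], dtype=np.float16)
print(l_mul_fp16(a, b))   # ~[3.625, -0.969, 0.781]; exact products are [3.75, -1.031, 0.75]
```

Note that the mantissa carry is handled for free: when the two mantissa fields sum past the field width, the overflow lands in the exponent bits, exactly as described above.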

We further implement the attention mechanism with $\mathcal{L}$-Mul. In transformer models, the attention mechanism has a high computation cost because of its $O(|C|^2)$ complexity in the length of the input context $C$. We found that $\mathcal{L}$-Mul can replace the expensive tensor multiplications with minimal performance loss and no additional training. In this work we implement a more efficient attention mechanism as follows,

\[ \begin{aligned} K &= H \cdot W_k, \quad Q = H \cdot W_q, \quad V = H \cdot W_v \\ A &= \mathrm{softmax}\!\left[\frac{\mathcal{L}\text{-matmul}(Q, K^{T})}{\sqrt{d}}\right], \quad H' = \mathcal{L}\text{-matmul}(A, V) \end{aligned} \tag{3} \]

where $\mathcal{L}\text{-matmul}(Q, K^{T})$ stands for a matrix multiplication in which all regular FP multiplications are implemented with $\mathcal{L}$-Mul. In this way, all FP multiplications are replaced with integer additions, which consume significantly less computation resource.
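For readers who want to reproduce the training-free setting, here is a rough NumPy simulation of Equation (3). It only approximates the values of $\mathcal{L}$-Mul (via a frexp decomposition) rather than emulating integer adders, the broadcast-and-sum $\mathcal{L}$-matmul is memory-hungry and meant for illustration only, and all function names are ours.

```python
import numpy as np

def l_mul(x, y, l_m=4):
    """Value-level approximation of L-Mul (Equation 1) on float arrays.

    Operands are decomposed as (1 + m) * 2**e via frexp and the m_x * m_y cross term
    is replaced by 2**-l_m; this simulates values only, not the integer-adder hardware.
    """
    sign = np.sign(x) * np.sign(y)
    fx, ex = np.frexp(np.abs(x))                 # |x| = fx * 2**ex with fx in [0.5, 1)
    fy, ey = np.frexp(np.abs(y))
    mx, my = 2 * fx - 1, 2 * fy - 1              # mantissas of the (1 + m) * 2**(e - 1) form
    return sign * (1 + mx + my + 2.0 ** -l_m) * 2.0 ** (ex + ey - 2)

def l_matmul(A, B):
    """Matrix product where every scalar multiplication uses l_mul (illustration only)."""
    return l_mul(A[:, :, None], B[None, :, :]).sum(axis=1)

def l_mul_attention(H, Wq, Wk, Wv):
    """Attention of Equation (3): projections in full precision, attention matmuls in L-Mul."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = l_matmul(Q, K.T) / np.sqrt(Q.shape[-1])
    A = np.exp(scores - scores.max(-1, keepdims=True))
    A /= A.sum(-1, keepdims=True)                # softmax
    return l_matmul(A, V)

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(l_mul_attention(H, Wq, Wk, Wv).shape)      # (8, 16)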

2.3 Precision and Cost Analysis

In this section, we show that $\mathcal{L}$-Mul is more precise than fp8_e4m3 multiplication yet uses less computation resource than fp8_e5m2. To be concise, we do not consider the round-to-nearest-even mode in either the error analysis or the complexity estimation, for both Mul and $\mathcal{L}$-Mul.

2.3.1 Precision Estimation

The goal of the precision analysis is to determine how many mantissa bits a rounded FP multiplication needs to match the precision of the $\mathcal{L}$-Mul algorithm, e.g., fp8 with 2- or 3-bit mantissas (e5m2 or e4m3). Consider positive FP numbers $x = (1 + x_m)\cdot 2^{x_e}$ and $y = (1 + y_m)\cdot 2^{y_e}$; they can be written in the following form if we explicitly highlight the $k$ bits to be kept after rounding:

\[ x = (1 + x_k + x_r)\cdot 2^{x_e}, \qquad x' = (1 + x_k)\cdot 2^{x_e} \]
\[ y = (1 + y_k + y_r)\cdot 2^{y_e}, \qquad y' = (1 + y_k)\cdot 2^{y_e} \]

where $x_k, y_k$ are the first $k$ bits of $x_m, y_m$, and $x_r, y_r$ are the values of the remaining bits that are dropped by the $k$-bit rounding. $x', y'$ are the rounded values of $x, y$ obtained by keeping the first $k$ bits of the mantissa. Assume $x$ and $y$ have $m$-bit mantissas at full precision; for example, Float16 numbers have 10-bit mantissas and BFloat16 numbers have 7 bits. The error of $\mathrm{Mul}(x,y) = x \cdot y$ and its expectation are

\[ \begin{aligned} e_{mul}^{k} = \mathrm{Mul}(x,y) - \mathrm{Mul}(x',y') &= (x_k y_r + y_k x_r + x_r + y_r + x_r y_r)\cdot 2^{x_e + y_e} \\ E[e_{mul}^{k}] &= f_1(m,k)\cdot E[2^{x_e + y_e}] \end{aligned} \tag{4} \]

Compared with a $k$-bit-mantissa FP multiplication, the error of $k$-bit-mantissa $\mathcal{L}$-Mul is

\[ \begin{aligned} e_{lmul}^{k} &= e_{mul}^{k} + (x_k y_k - 2^{-l(k)})\cdot 2^{x_e + y_e} \\ E[e_{lmul}^{k}] &= E[e_{mul}^{k}] + E[x_k y_k - 2^{-l(k)}]\cdot E[2^{x_e + y_e}] \end{aligned} \tag{5} \]

With the equations above, we can compute the expectation of the precision gap between $k$-bit $\mathcal{L}$-Mul and FP multiplication:

\[ E[e_{lmul}^{k}] - E[e_{mul}^{k}] = f_2(k)\cdot E[2^{x_e + y_e}], \qquad E[e_{lmul}^{k}] = [f_1(m,k) + f_2(k)]\cdot E[2^{x_e + y_e}] \]

When $x_m, y_m$ are evenly distributed, we can calculate the following expectations,

\[ E[x_k] = \tfrac{1}{2}(1 - 2^{-k}), \qquad E[x_r] = \tfrac{1}{2}(2^{-k} - 2^{-m}) \]

By estimating $f_1(m,k)$ and $f_2(k)$ and further inferring $E[e_{lmul}^{k}]$ and $E[e_{mul}^{k}]$, we find that $\mathcal{L}$-Mul is more accurate than fp8_e5m2 multiplication when the operands are evenly distributed. However, the weight distribution is often biased in pretrained LLMs. Based on the combined weight distribution of five popular LLMs, we find that $\mathcal{L}$-Mul with 5-bit-mantissa operands can in practice achieve higher precision than fp8_e4m3. We support both claims with the estimated errors detailed in Appendix A.
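The uniform-operand claim can be spot-checked with a small Monte Carlo simulation. The sketch below fixes the exponents, draws uniform mantissas, and compares the mean absolute error of $k$-bit truncated multiplication against $k$-bit $\mathcal{L}$-Mul; it reports empirical absolute errors rather than the signed expectations $f_1$ and $f_2$ tabulated in Appendix A, so the numbers are indicative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
xm, ym = rng.uniform(0, 1, n), rng.uniform(0, 1, n)     # uniform mantissas; exponents fixed to 0

def trunc(m, k):
    """Keep the first k mantissa bits (truncation; round-to-nearest-even ignored, as in the text)."""
    return np.floor(m * 2**k) / 2**k

def l_of(k):
    """Offset exponent l(k) from Equation (1)."""
    return k if k <= 3 else (3 if k == 4 else 4)

exact = (1 + xm) * (1 + ym)                              # full-precision product of the fractions
for k in (2, 3, 4, 5):
    xk, yk = trunc(xm, k), trunc(ym, k)
    mul_k  = (1 + xk) * (1 + yk)                         # k-bit-mantissa FP multiplication
    lmul_k = 1 + xk + yk + 2.0 ** -l_of(k)               # k-bit-mantissa L-Mul, Equation (1)
    print(f"k={k}: |err| Mul={np.abs(exact - mul_k).mean():.4f}  "
          f"L-Mul={np.abs(exact - lmul_k).mean():.4f}")
```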

2.3.2 Gate Complexity Estimation

In this section, we make a rough estimate of the amount of gate-level computation needed by $\mathcal{L}$-Mul and fp8 multiplications. Multiplying two fpn_eimj numbers requires the following computation: sign prediction, exponent addition with offset, a $(j+1)$-bit mantissa multiplication, and exponent rounding. The mantissa multiplication includes $(j+1)^2$ AND operations, 3 half adders, and $2j - 2$ full adders. The exponent rounding needs $i$ half adders. In a regular circuit design, a full adder involves 2 AND, 2 XOR, and 1 OR gate, and each XOR consists of 4 NAND gates. As a result, a full adder amounts to 11 gate-level operations, while a half adder (no incoming carry) amounts to 5 gate-level operations (1 AND and 1 XOR).

In conclusion, the total amount of gate-level computation needed by fp16 and fp8 Mul can be estimated as

\[ N_{\text{fp16}}^{\times} \approx 584, \qquad N_{\text{fp8-e4m3}}^{\times} \approx 325, \qquad N_{\text{fp8-e5m2}}^{\times} \approx 296 \tag{6} \]

$\mathcal{L}$-Mul consumes 1 XOR gate for sign prediction, 1 half adder, and $k - 2$ full adders. The total gate count needed by 16-bit and 8-bit $\mathcal{L}$-Mul can be estimated as follows,

\[ N_{eimj}^{\mathcal{L}\text{-mul}} = N_{1}^{\oplus} + N_{int(i+j)}^{+} + N_{int8}^{+}, \qquad N_{\text{fp16}}^{\mathcal{L}\text{-mul}} \approx 256, \quad N_{\text{fp8}}^{\mathcal{L}\text{-mul}} \approx 157 \tag{7} \]

$\mathcal{L}$-Mul with fp8_e4m3 and fp8_e5m2 operands has similar complexity, since exponent offsets are typically handled by 8-bit unsigned integer adders. As estimated, fp16 $\mathcal{L}$-Mul requires fewer gates than fp8 multiplication, and fp8 $\mathcal{L}$-Mul is significantly more efficient.

To summarize the error and complexity analysis, $\mathcal{L}$-Mul is both more efficient and more accurate than fp8 multiplication.

3 Experiments

To validate the theoretical precision estimation and find out how $\mathcal{L}$-Mul-based LLMs perform on real tasks, we conducted experiments on various benchmarks with different transformer-based large language models. We evaluated the Llama-3.1-8B-Instruct (Dubey et al., 2024), Mistral-7B-Instruct-v0.3 (Jiang et al., 2023), Gemma2-2b-It (Team et al., 2024), and Llava-v1.5-7b (Liu et al., 2024) models, and found that the proposed method can replace different modules of the transformer layers under fine-tuning or training-free settings. In this section, we first introduce the benchmarks and tasks used for evaluation, then compare the numerical error of the $\mathcal{L}$-Mul algorithm against models with fp8 parameters. We also report the benchmarking results of LLMs under different precision settings.

3.1 Tasks

Massive Multitask Language Understanding (MMLU) (Hendrycks et al., 2020) contains 57 multi-choice natural language understanding tasks covering various high-school and college subjects. Given 5 few-shot examples, the LLMs under evaluation are required to find the most appropriate answer option for each question. The benchmark focuses on evaluating language understanding and knowledge of the given subjects.

BigBench-Hard (BBH) (Srivastava et al., 2023) contains a set of complex symbolic tasks that evaluate the structural and logical reasoning abilities of language models. In this work, we select a subset of 17 multi-choice tasks to evaluate the Llama and Mistral LLMs. We evaluate language models under the few-shot prompting setting for all BBH tasks.

Common Sense. We put together a set of 5 question answering tasks to evaluate the commonsense knowledge and reasoning ability of LLMs. The set of tasks includes ARC-Challenge (Clark et al., 2018), CSQA (Saha et al., 2018), OBQA (Mihaylov et al., 2018), PIQA (Bisk et al., 2020), and SIQA (Sap et al., 2019), covering different aspects of factual and social knowledge.

Visual Question Answering. We select a set of multi-choice, image-grounded question answering tasks to evaluate both the vision and language understanding abilities of visual language models. The tasks include VQAv2 (Goyal et al., 2017), VizWiz (Gurari et al., 2018), and TextVQA (Singh et al., 2019), containing both unanswerable and answerable questions with different types of answers.

Visual Instruction Following. We test the instruction following ability of the Llava-v1.5-7b model with the Llava-Bench task (Liu et al., 2024) by generating free-form responses given images and corresponding instructions. Following the official evaluation guide, we evaluate the instruction following quality with GPT-4o and compare the relative performance.

Object Hallucination. We explore whether conducting inference at lower precision affects the truthfulness of the Llava model using the POPE benchmark (Li et al., 2023), which prompts visual language models with a sequence of yes/no questions about positive and negative objects.

GSM8k (Cobbe et al., 2021) consists of 8.5k human-crafted grade school math problems, with a test split of 1,000 problems designed to evaluate the arithmetic capabilities of language models. We conduct experiments on GSM8k in two settings. In the training-free setting, we assess LLMs with few-shot, chain-of-thought prompting (Wei et al., 2022). Additionally, we fine-tune the Gemma2-2b-It model on the training split and evaluate its performance in a zero-shot setting.

3.2 Precision Analysis

Selection of $l(k)$. We first visualize, in Figure 3, the mean square errors obtained by different $l(k)$ selections with different models on the GSM8k dataset. In the plot, the $l(k)$ configurations that lead to a lower average error than float8_e4m3 multiplication during model inference are highlighted in red, and the $k, l(k)$ combinations leading to an error between e4m3 and e5m2 are underlined. In both models, $\mathcal{L}$-Mul with 3-bit mantissas is more accurate than fp8_e5m2, and $\mathcal{L}$-Mul with 4-bit mantissas achieves comparable or lower error than fp8_e4m3.

[Figure 3: Mean square error of $\mathcal{L}$-Mul under different $k$ and $l(k)$ choices on GSM8k, compared with fp8_e4m3 and fp8_e5m2 multiplication.]
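Figure 3 sweeps $l(k)$ using the tensors that actually flow through the models on GSM8k. A simplified version of that sweep, using synthetic uniform mantissas instead of model activations, looks as follows; because real weight and activation mantissas are not uniformly distributed, the offset that minimizes this synthetic error can differ from the $l(m)$ chosen in Equation (1).

```python
import numpy as np

rng = np.random.default_rng(0)
xm, ym = rng.uniform(0, 1, 100_000), rng.uniform(0, 1, 100_000)   # stand-in operand mantissas
exact = (1 + xm) * (1 + ym)

def lmul_mse(k, l):
    """MSE of k-bit-mantissa L-Mul with offset 2**-l against the exact product."""
    xk, yk = np.floor(xm * 2**k) / 2**k, np.floor(ym * 2**k) / 2**k
    return np.mean((exact - (1 + xk + yk + 2.0 ** -l)) ** 2)

for k in (2, 3, 4, 5):
    errs = {l: lmul_mse(k, l) for l in range(1, 7)}
    best = min(errs, key=errs.get)
    print(f"k={k}: best offset exponent l={best}", {l: round(e, 4) for l, e in errs.items()})
```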

Mantissa size. In Section 2.3.1, we argued that the error expectation of $\mathcal{L}$-Mul can be lower than that of fp8_e4m3 multiplication while using less computation than fp8_e5m2 multiplication. We hereby confirm the correctness of our theoretical precision estimates for the $\mathcal{L}$-Mul algorithm with an experimental analysis. The average errors of the Llama and Gemma models are illustrated in Figure 4.

[Figure 4: Average error of $\mathcal{L}$-Mul with different mantissa sizes for Llama and Gemma models, compared with fp8_e4m3 and fp8_e5m2 multiplication.]

The experiments demonstrate that, across various sizes of LLMs, the $\mathcal{L}$-Mul algorithm with 6-bit-mantissa FP operands attains close to the lowest average error, significantly outperforming both fp8 formats. Additionally, 3- and 4-bit-mantissa $\mathcal{L}$-Mul achieved accuracy on par with or exceeding that of fp8_e5m2 and fp8_e4m3 multiplication operations, respectively.

In the IEEE 754 format (with a 1-bit sign and a 5-bit exponent), using a 6-bit mantissa is equivalent to rounding fp16 numbers down to a 12-bit format (fp12). Applying the complexity estimation method outlined in Equation (7), we can compute the gate count for 12-bit $\mathcal{L}$-Mul operations as follows:

\[ N_{12}^{\mathcal{L}\text{-mul}} \approx 201 < N_{\text{fp8}}^{\times} \approx 300 \tag{8} \]

The experimental results further confirm that $\mathcal{L}$-Mul is more efficient and accurate than fp8 multiplication. Although we estimated gate counts as an indicator of computational complexity, the actual difference in energy cost is greater than the complexity gap suggests. Based on the energy consumption reported by Horowitz (2014), an fp8 multiplication consumes approximately 0.25 pJ to 0.4 pJ, while a 16-bit $\mathcal{L}$-Mul uses around 0.06 pJ.

3.3 Benchmarking

In this section, we demonstrate that $\mathcal{L}$-Mul can replace tensor multiplications in the attention mechanism without loss of performance, whereas using fp8 multiplications for the same purpose degrades inference accuracy. This indicates that we can achieve the same model inference performance while reducing the energy cost of attention computations by 80%. Additionally, we present the full-model fine-tuning performance on the GSM8k benchmark when all tensor multiplication operations are replaced with $\mathcal{L}$-Mul.

Textual tasks. Table 2 presents the evaluation results of the Llama and Mistral models on various natural language benchmarks, including MMLU, BBH, ARC-C, CSQA, PIQA, OBQA, and SIQA. In these experiments, the matrix multiplications in the attention layers, both before and after the softmax operation, were replaced with 8-bit tensor computations in different formats or with $\mathcal{L}$-matmul, following the implementation discussed in Equation (3).

Table 2: Accuracy (%) of Mistral and Llama models on natural language benchmarks with different attention precisions.

| Precision | BBH | MMLU | ARC-C | CSQA | OBQA | PIQA | SIQA | Avg. |
|---|---|---|---|---|---|---|---|---|
| Mistral-7B-Instruct-v0.3 | | | | | | | | |
| BFloat16 | 55.85 | 62.20 | 75.94 | 71.42 | 76.20 | 80.74 | 44.83 | 69.83 |
| Float8_e4m3 | 55.16 | 62.18 | 75.39 | 71.25 | 76.00 | 80.47 | 44.63 | 69.55 |
| Float8_e5m2 | 53.20 | 61.75 | 74.91 | 71.25 | 74.40 | 79.76 | 44.52 | 68.97 |
| $\mathcal{L}$-Mul | 55.87 | 62.19 | 76.11 | 71.09 | 76.60 | 80.52 | 45.34 | 69.93 |
| Llama-3.1-8B-Instruct | | | | | | | | |
| BFloat16 | 70.79 | 68.86 | 82.51 | 74.53 | 84.20 | 84.00 | 45.96 | 74.24 |
| Float8_e4m3 | 69.91 | 68.16 | 81.66 | 74.28 | 82.20 | 83.51 | 45.34 | 73.40 |
| Float8_e5m2 | 62.94 | 66.61 | 80.12 | 73.30 | 79.40 | 81.07 | 45.39 | 71.86 |
| $\mathcal{L}$-Mul | 70.78 | 68.54 | 82.17 | 74.28 | 84.20 | 83.30 | 46.06 | 74.00 |

The results indicate that $\mathcal{L}$-Mul not only requires significantly fewer computational resources but also delivers higher precision than float8_e4m3 tensors in 12 out of 14 experiments with the Mistral and Llama models. This leads to a minimal performance gap compared to bf16 inference: on average, across the two models, the performance difference between bf16 and $\mathcal{L}$-Mul is just 0.07%. These findings suggest that matrix multiplication operations in the attention mechanism can be seamlessly replaced with the $\mathcal{L}$-Mul algorithm without any loss of accuracy or the need for additional training.

GSM8k. We evaluated the performance of three language models (Mistral-7B-Instruct-v0.3, Llama-3.1-8B-Instruct, and Gemma2-2b-It) on the GSM8k dataset using few-shot prompting and $\mathcal{L}$-Mul-based attention. The models were tested under different numerical precision formats: bf16, fp8_e4m3, fp8_e5m2, and the $\mathcal{L}$-Mul method. The results are summarized in Table 3.

Notably, the $\mathcal{L}$-Mul-based attention mechanism slightly improved the average performance compared to the bf16 baseline. Mistral-7B-Instruct-v0.3 and Gemma2-2b-It both exhibited improved accuracies with $\mathcal{L}$-Mul, achieving 52.92% and 47.01% respectively. Llama-3.1-8B-Instruct's accuracy with $\mathcal{L}$-Mul was slightly lower than its bf16 performance but still higher than with fp8_e4m3 or fp8_e5m2. On the contrary, rounding the tensors in the attention computation to fp8_e5m2 leads to a significant performance drop, even though it is computationally more expensive than $\mathcal{L}$-Mul.

Table 3: GSM8k accuracy (%) with few-shot prompting under different attention precisions.

| Model | BFloat16 | Float8_e4m3 | Float8_e5m2 | $\mathcal{L}$-Mul |
|---|---|---|---|---|
| Mistral-7B-Instruct-v0.3 | 52.54 | 52.39 | 50.19 | 52.92 |
| Llama-3.1-8B-Instruct | 76.12 | 75.44 | 71.80 | 75.63 |
| Gemma2-2b-It | 45.87 | 45.94 | 44.43 | 47.01 |
| Average | 58.17 | 57.92 | 55.47 | 58.52 |

Vision-language tasks. The performance of the Llava-v1.5-7b model on VQA, object hallucination, and instruction following tasks is shown in Table 4. As in the language-task experiments, the attention computation is conducted with different precisions/methods while the other linear transformation layers are unchanged. Except for TextVQA, where the accuracy gap is 0.5%, the performance of $\mathcal{L}$-Mul and BFloat16 attention is comparable. The VQA tasks are evaluated with the official evaluation scripts, and the Llava-Bench results are scored by GPT-4o.

Table 4: Llava-v1.5-7b performance with BFloat16 and $\mathcal{L}$-Mul attention on POPE, Llava-Bench, TextVQA, VQAv2, and VizWiz.

| Task | POPE | | | | Llava-Bench | | | | TextVQA |
|---|---|---|---|---|---|---|---|---|---|
| Split | rand. | adv. | pop. | all | comp. | conv. | detail. | all | all |
| BFloat16 | 86.20 | 83.17 | 85.13 | 84.83 | 66.80 | 57.60 | 41.40 | 57.50 | 57.90 |
| $\mathcal{L}$-Mul | 86.57 | 83.19 | 85.34 | 85.03 | 64.90 | 58.70 | 43.30 | 57.50 | 57.41 |

| Task | VQAv2 | | | | VizWiz | | | | |
|---|---|---|---|---|---|---|---|---|---|
| Split | yes/no | num. | other | all | yes/no | num. | unans. | other | all |
| BFloat16 | 91.88 | 59.04 | 70.56 | 78.03 | 77.19 | 45.24 | 71.75 | 38.19 | 49.31 |
| $\mathcal{L}$-Mul | 91.78 | 58.93 | 70.73 | 78.06 | 78.54 | 50.48 | 73.78 | 38.41 | 50.16 |

$\mathcal{L}$-Mul with fewer bits. In this section, we explore how the precision of $\mathcal{L}$-Mul-based attention influences overall model performance, using the MMLU benchmark with the Mistral and Llama models. We implement the attention mechanism with $\mathcal{L}$-Mul and keep only the first $k$ mantissa bits of the operand tensors. The results of $\mathcal{L}$-Mul attention at different precisions are listed in Table 6. As expected, using $\mathcal{L}$-Mul with a 4-bit mantissa achieves performance comparable to or slightly better than that of bf16 and fp8_e4m3. Performance then drops in proportion to the estimated error depicted in Figure 4. When $k=3$, both models significantly outperform their fp8_e5m2 counterparts, with the Llama model's performance remaining close to that of fp8_e4m3. When $k=2$, the Llama model's performance is comparable to that of fp8_e5m2 rounding. This suggests that, with the Llama model, we can apply $\mathcal{L}$-Mul directly to fp8 models without compromising performance.

Table 6: MMLU accuracy (%) of $\mathcal{L}$-Mul attention with $k$-bit-mantissa operands, compared with fp8 rounding.

| Model | e4m3 | e5m2 | k=4 | k=3 | k=2 |
|---|---|---|---|---|---|
| Mistral | 62.18 | 61.75 | 62.16 | 62.06 | 61.08 |
| Llama | 68.16 | 66.61 | 68.43 | 68.12 | 66.67 |

GSM8k zero-shot accuracy (%) after full-model fine-tuning with 8-bit accumulation:

| 8-bit Acc. | e4m3 | e5m2 | $\mathcal{L}$-Mul |
|---|---|---|---|
| GSM8k | 36.09 | 7.96 | 37.91 |
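Keeping only the first $k$ mantissa bits of the operands, as in Table 6, can be emulated by masking the stored bits. Below is a minimal sketch for bfloat16-like operands, emulated here as float32 tensors with the low 16 bits cleared; the function name is ours.

```python
import numpy as np

def keep_k_mantissa_bits(x: np.ndarray, k: int) -> np.ndarray:
    """Keep only the first k mantissa bits of bfloat16-like operands.

    Illustrative only: bf16 is emulated as float32 with the low 16 bits cleared,
    then all but the top k of the remaining 7 mantissa bits are zeroed.
    """
    bits = x.astype(np.float32).view(np.uint32)
    drop = 16 + (7 - k)                                  # low bits to clear
    mask = np.uint32(0xFFFFFFFF ^ ((1 << drop) - 1))
    return (bits & mask).view(np.float32)

x = np.array([3.14159, -0.2718], dtype=np.float32)
print(keep_k_mantissa_bits(x, 3))                        # [ 3.  , -0.25]: operands seen by 3-bit L-Mul
```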

Full-model fine-tuning. To further explore the impact of the $\mathcal{L}$-Mul algorithm, we go beyond the attention layers and replace all multiplication operations in the Gemma2-2b-It model, including matrix multiplications in linear transformations, element-wise multiplications, and those within attention layers, with fp8_e4m3 $\mathcal{L}$-Mul. We then fine-tune the updated model on the GSM8k training set and evaluate both the fine-tuned fp8 and $\mathcal{L}$-Mul models in a zero-shot setting on the GSM8k test set. Note that the $\mathcal{L}$-Mul operations in this experiment take operands with 3-bit mantissas ($k=3$) and the accumulation precision is fp8_e4m3, to explore an extremely efficient setting.
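On GPUs without a native $\mathcal{L}$-Mul unit, one way to emulate such a fine-tuning setup is to approximate the forward matmul numerically while keeping standard-matmul gradients in the backward pass. The sketch below is our assumption about such a harness, not the kernel used in the paper: it applies the Equation (1) value approximation with $l = 3$, uses a straight-through backward pass of our choosing, and does not model fp8 accumulation or operand truncation.

```python
import torch

def l_mul_elem(x, y, l_m=3):
    """Value-level L-Mul (Equation 1) on tensors; zeros map to zero via the sign factor."""
    s = torch.sign(x) * torch.sign(y)
    mx, ex = torch.frexp(x.abs())            # |x| = mx * 2**ex with mx in [0.5, 1)
    my, ey = torch.frexp(y.abs())
    val = (2 * mx - 1) + (2 * my - 1) + 1 + 2.0 ** -l_m
    return s * torch.ldexp(val, ex + ey - 2)

class LMulMatmul(torch.autograd.Function):
    """Approximate matmul in the forward pass, exact-matmul gradients in the backward pass."""

    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        # broadcast element-wise L-Mul products, then accumulate over the shared dimension
        return l_mul_elem(a.unsqueeze(-1), b.unsqueeze(-3)).sum(-2)

    @staticmethod
    def backward(ctx, grad_out):
        a, b = ctx.saved_tensors
        return grad_out @ b.transpose(-1, -2), a.transpose(-1, -2) @ grad_out

a = torch.randn(4, 8, requires_grad=True)
b = torch.randn(8, 3)
out = LMulMatmul.apply(a, b)
out.sum().backward()                         # gradients flow through the exact-matmul path
print(out.shape, a.grad.shape)               # torch.Size([4, 3]) torch.Size([4, 8])
```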

The experimental results demonstrate that a fine-tuned fp8_e4m3 $\mathcal{L}$-Mul model achieves performance comparable to a standard fine-tuned fp8_e4m3 model under fp8 accumulation precision. This suggests that $\mathcal{L}$-Mul can improve training efficiency without compromising the fine-tuned model's performance. Moreover, it reveals the potential of training $\mathcal{L}$-Mul-native LLMs for accurate and energy-efficient model hosting.

4 Related Work

Reducing the computation needed by neural networks while maintaining performance is an important problem that has motivated multiple research directions. Typical methods include neural network pruning, quantization, and improved tensor I/O implementations.

Pruning. Neural network pruning improves inference efficiency by reducing the number of connections among layers (Han et al., 2015a; b; Wang et al., 2020). Pruning methods usually involve training: after important weights are identified, the network is re-trained to further update the selected weights for specific tasks. Different from model pruning, the method we propose is designed for general tasks and requires no task-specific re-training.

Optimizing tensor I/O. On regular GPUs, moving tensors between GPU SRAM and high-bandwidth memory (HBM) is the main bottleneck for time and energy consumption. Reducing the I/O operations in transformer models and making the best use of the HBM can significantly improve the efficiency of AI training and inference (Dao et al., 2022; Dao; Kwon et al., 2023). Our method, which focuses on optimizing arithmetic operations, is orthogonal to this direction.

Rounding and quantization. Standard neural network weights are stored as 32-bit or 16-bit FP tensors, but the full-sized weights take a considerable amount of GPU memory. To improve storage efficiency, both weight storage and computation can be conducted at lower precision, for example using 16-bit, 8-bit, or 4-bit FP and integer formats (fp16, bf16 (Kalamkar et al., 2019), fp8_e4m3, fp8_e5m2 (Micikevicius et al., 2023), int8 (Dettmers et al., 2022), fp4, and int4 (Dettmers et al., 2024)) to represent model weights. Inference with lower-bit parameters usually hurts computation accuracy and degrades the performance of pretrained models, and integer-based quantization methods spend significant time handling outlier weights. Compared to quantization methods, our method requires less computation yet achieves higher accuracy.

5 Future Work

To unlock the full potential of the proposed method, we will implement the $\mathcal{L}$-Mul and $\mathcal{L}$-matmul kernel algorithms at the hardware level and develop programming APIs for high-level model design. Furthermore, we will train textual, symbolic, and multi-modal generative AI models optimized for deployment on $\mathcal{L}$-Mul-native hardware. This will deliver high-speed and energy-efficient AI hosting solutions, reducing the energy cost of data centers, robotics, and a wide spectrum of edge-computing devices.

6 Conclusion

In this paper, we introduced $\mathcal{L}$-Mul, an efficient algorithm that approximates floating-point multiplication with integer addition. We first demonstrated that the algorithm has linear complexity in the bit size of its floating-point operands. We then showed that the expected accuracy of $\mathcal{L}$-Mul surpasses that of fp8 multiplication while requiring significantly less computation. To assess the practical impact of $\mathcal{L}$-Mul, we evaluated it on natural language, vision, and mathematics benchmarks using popular language models. Our experiments indicate that $\mathcal{L}$-Mul outperforms 8-bit transformers at lower computational cost and achieves lossless performance when applied to computation-intensive attention layers without additional training. Based on this evidence, we argue that tensor multiplications in language models can be effectively implemented with $\mathcal{L}$-Mul to preserve performance while enabling energy-efficient model deployment.

References

  • Abts et al. (2022) Dennis Abts, Garrin Kimmell, Andrew Ling, John Kim, Matt Boyd, Andrew Bitar, Sahil Parmar, Ibrahim Ahmed, Roberto DiCecco, David Han, et al. A software-defined tensor streaming multiprocessor for large-scale machine learning. In Proceedings of the 49th Annual International Symposium on Computer Architecture, pp. 567-580, 2022.
  • Achiam et al. (2023) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
  • Bisk et al. (2020) Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 7432-7439, 2020.
  • Brown (2020) Tom B. Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
  • Choquette et al. (2021) Jack Choquette, Wishwesh Gandhi, Olivier Giroux, Nick Stam, and Ronny Krashinsky. Nvidia A100 tensor core GPU: Performance and innovation. IEEE Micro, 41(2):29-35, 2021.
  • Clark et al. (2018) Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
  • Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
  • Dao. Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations.
  • Dao et al. (2022) Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. Advances in Neural Information Processing Systems, 35:16344-16359, 2022.
  • de Vries (2023) Alex de Vries. The growing energy footprint of artificial intelligence. Joule, 7(10):2191-2194, 2023.
  • Dettmers et al. (2022) Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. GPT3.int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35:30318-30332, 2022.
  • Dettmers et al. (2024) Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems, 36, 2024.
  • Dubey et al. (2024) Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
  • Goyal et al. (2017) Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6904-6913, 2017.
  • Gurari et al. (2018) Danna Gurari, Qing Li, Abigale J. Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P. Bigham. VizWiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3608-3617, 2018.
  • Han et al. (2015a) Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
  • Han et al. (2015b) Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in Neural Information Processing Systems, 28, 2015b.
  • Hendrycks et al. (2020) Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2020.
  • Horowitz (2014) Mark Horowitz. 1.1 Computing's energy problem (and what we can do about it). In 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pp. 10-14. IEEE, 2014.
  • Jiang et al. (2023) Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
  • Jouppi et al. (2017) Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, pp. 1-12, 2017.
  • Kalamkar et al. (2019) Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, et al. A study of bfloat16 for deep learning training. arXiv preprint arXiv:1905.12322, 2019.
  • Kwon et al. (2023) Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pp. 611-626, 2023.
  • Li et al. (2023) Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 292-305, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.20. URL https://aclanthology.org/2023.emnlp-main.20.
  • Liu et al. (2024) Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26296-26306, 2024.
  • Micikevicius et al. (2023) Paulius Micikevicius, Stuart Oberman, Pradeep Dubey, Marius Cornea, Andres Rodriguez, Ian Bratt, Richard Grisenthwaite, Norm Jouppi, Chiachen Chou, Amber Huffman, et al. OCP 8-bit floating point specification (OFP8). Open Compute Project, 2023.
  • Mihaylov et al. (2018) Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, 2018.
  • Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
  • Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
  • Saha et al. (2018) Amrita Saha, Vardaan Pahuja, Mitesh Khapra, Karthik Sankaranarayanan, and Sarath Chandar. Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
  • Sap et al. (2019) Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4463-4473, 2019.
  • Singh et al. (2019) Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards VQA models that can read. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8317-8326, 2019.
  • Srivastava et al. (2023) Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023.
  • Team et al. (2023) Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
  • Team et al. (2024) Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.
  • Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  • Vaswani (2017) Ashish Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017.
  • Wang et al. (2020) Ziheng Wang, Jeremy Wohlwend, and Tao Lei. Structured pruning of large language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6151-6162, 2020.
  • Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.

Appendix A Error Estimation

We calculate the error expectations for different $(n, k)$ combinations in Table 7. The values are calculated with the actual parameters of the Mistral, Llama, and Gemma models. For the even distribution, we use the expectations introduced in Section 2.3.1. For the real distribution, we estimate the average value of possible operands using the parameters of five popular pretrained LLMs.

Table 7: Estimated error expectations for different numbers of kept mantissa bits $k$ (with $n=7$).

| K values | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Even distribution: $\mathrm{abs}[f_1(n{=}7,k)]$ | 0.68 | 0.35 | 0.17 | 0.081 | 0.035 | 0.012 |
| Even distribution: $\mathrm{abs}[f_1(n{=}7,k)+f_2(k)]$ | 0.68 | 0.43 | 0.30 | 0.24 | 0.20 | 0.19 |
| Real distribution: $\mathrm{abs}[f_1(n{=}7,k)]$ | 0.61 | 0.33 | 0.16 | 0.077 | 0.033 | 0.011 |
| Real distribution: $\mathrm{abs}[f_1(n{=}7,k)+f_2(k)]$ | 0.16 | 0.18 | 0.18 | 0.12 | 0.15 | 0.14 |

We find that when the operands are distributed evenly, $\mathcal{L}$-Mul is more accurate than float8_e5m2 multiplication. With real models, however, $\mathcal{L}$-Mul can achieve higher precision than float8_e4m3 calculation.
