# Golang floating point precision float32 vs float64

I wrote a program to demonstrate floating point error in Go:

``````go
package main

import "fmt"

func main() {
	a := float64(0.2)
	a += 0.1
	a -= 0.3
	var i int
	for i = 0; a < 1.0; i++ {
		a += a
	}
	fmt.Printf("After %d iterations, a = %e\n", i, a)
}
``````

It prints:

``````
After 54 iterations, a = 1.000000e+00
``````

This matches the behaviour of the same program written in C (using the `double` type).

However, if `float32` is used instead, the Go program gets stuck in an infinite loop! If you modify the C program to use a `float` instead of a `double`, it prints:

``````
After 27 iterations, a = 1.600000e+00
``````

Why doesn’t the Go program have the same output as the C program when using `float32`?

I agree with ANisus: Go is doing the right thing. Concerning C, though, I’m not convinced by his guess.

The C standard does not dictate it, but most libc implementations will convert a decimal representation to the nearest float (at least to comply with IEEE 754-2008 or ISO 10967), so I don’t think this is the most probable explanation.
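You can check this in Go directly: with each decimal constant correctly rounded to the nearest `float32`, the rounding errors of `0.2 + 0.1` and `0.3` happen to cancel exactly, which is why the `float32` version of the question’s loop never terminates. A minimal sketch:

``````go
package main

import "fmt"

func main() {
	// Each decimal constant is rounded to the nearest float32.
	a := float32(0.2)
	a += 0.1 // the untyped constant 0.1 becomes a float32 here
	a -= 0.3
	// The rounding errors cancel exactly: a is exactly zero,
	// so a += a can never make it reach 1.0 -- an infinite loop.
	fmt.Println(a == 0)
}
``````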

There are several reasons why the C program’s behaviour might differ. In particular, some intermediate computations might be performed with excess precision (`double` or `long double`).

The most probable cause I can think of is that you wrote `0.1` instead of `0.1f` in C.
In that case, the initialization is performed with excess precision:
you sum `float a + double 0.1`, so the `float` is converted to `double`, the sum is computed as a `double`, and the result is converted back to `float`.

If I emulate these operations (each intermediate sum computed in `float64`, then rounded back to `float32`):

``````go
float32(float64(float32(float64(float32(0.2)) + 0.1)) - 0.3)
``````

then I find something near `1.1920929e-8f`.

After 27 iterations of `a += a`, this doubling reaches `1.6f`.
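Running that emulation in Go reproduces the C `float` result (a sketch; the nested conversions mirror the excess-precision rounding described above):

``````go
package main

import "fmt"

func main() {
	// Emulate C with a float variable but double constants: each
	// intermediate result is computed in double precision, then
	// rounded back to float32.
	a := float32(float64(float32(float64(float32(0.2))+0.1)) - 0.3)
	fmt.Printf("residual a = %e\n", a) // near 1.1920929e-8

	var i int
	for i = 0; a < 1.0; i++ {
		a += a // exact doubling: only the exponent changes
	}
	fmt.Printf("After %d iterations, a = %e\n", i, a)
}
``````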

Source: Stack Overflow