Good question, though I don't feel comfortable giving an absolute answer. It's one of those "it depends" things, as with most very technical matters.
But as a rule of thumb, when working with double-precision floating-point numbers, most computations are only trustworthy to a relative precision of about 10^-13 or so. Among other reasons, this is because, while a double can store numbers much more precisely than that (machine epsilon is about 2.2 * 10^-16), precision is lost when, for example, you subtract two nearly equal numbers (so-called catastrophic cancellation). It's hard to avoid these kinds of operations in most algorithms, of course.
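To see cancellation in isolation, here is a minimal sketch (my own toy example, not from the session below): computing 1 - cos(x) for tiny x directly throws away every significant digit, while the algebraically equivalent form 2*sin(x/2)^2 keeps them:

```julia
x = 1e-8

naive  = 1 - cos(x)        # cos(1e-8) rounds to exactly 1.0, so this is 0.0
stable = 2 * sin(x / 2)^2  # algebraically identical, but no cancellation

# naive is exactly 0.0, while stable is about 5.0e-17, the correct value
```

Both formulas are mathematically the same function; only the order of operations differs, and that alone decides whether the answer survives.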
Here is a sample calculation I just did in the Julia REPL to illustrate this:
Start by generating some random numbers:
julia> r1 = rand()
0.034238108102722764
julia> r2 = rand()
0.7802465824048777
julia> r3 = rand()
0.09507249159735309
Now do some arithmetic and then undo it:
julia> x = (r1/r2)*r3-100
-99.9958281108584
julia> y = (100+x)/r3*r2 # y should equal r1 if using exact arithmetic
0.03423810810274365
julia> r1
0.034238108102722764 # you can see the last few digits differ from y
julia> y-r1
2.0886070650760757e-14