
Why Is the Precision Lost Sometimes When a Large Integer Is Written into OpenTSDB?

Assume that the following data is written into OpenTSDB.

Table 1 Sample data

Metrics  Timestamp    Tag    Value
Money    1483200000   Card1  9223372036854775709
Money    1483200001   Card1  9223372036854775709
Money    1483200002   Card1  922337203685477.12
Money    1483200003   Card1  9223372036854775700

The query result is as follows:

{"1483200000":9223372036854775709,"1483200001":9.223372036854776E18,"1483200002":9.223372036854771E14,"1483200003":9223372036854775700}

The value at timestamp 1483200001 is returned as 9.223372036854776E18, so precision has been lost compared with the original value 9223372036854775709.

The reason is as follows: because the next data point in the series ("1483200002":9.223372036854771E14) is a floating-point number, OpenTSDB converts the current data point to a floating-point number when returning the result.

However, the integer 9223372036854775709 can only be approximated as 9.223372036854776E18 when expressed as a double-precision (double) floating-point number. This is because a double is stored in memory as a binary form of scientific notation (IEEE 754), with three fields:

1 bit (sign) | 11 bits (exponent) | 52 bits (mantissa)

The precision is determined by the mantissa. With the implicit leading bit, a double can represent every integer exactly only up to 2^53 = 9007199254740992, a 16-digit number, so a double offers at most about 16 significant decimal digits. The 19-digit integer 9223372036854775709 therefore cannot be represented exactly.
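The effect above can be reproduced directly. The following is a minimal Java sketch (Java being OpenTSDB's implementation language); the explicit cast stands in for the integer-to-double conversion OpenTSDB performs internally, and is not OpenTSDB code itself:

```java
public class DoublePrecisionDemo {
    public static void main(String[] args) {
        long original = 9223372036854775709L;  // 19 decimal digits
        double asDouble = (double) original;   // widening, as when a series is returned as floats

        System.out.println(original);          // prints 9223372036854775709
        System.out.println(asDouble);          // prints 9.223372036854776E18 -- precision lost

        // The nearest representable double is 2^63, larger than the original value,
        // so converting back cannot recover it (the cast saturates at Long.MAX_VALUE):
        System.out.println((long) asDouble);   // prints 9223372036854775807
    }
}
```

Running this shows exactly the value seen in the query result: the double closest to 9223372036854775709 is 2^63, printed as 9.223372036854776E18.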

To avoid this problem, you are advised not to mix integer and floating-point values under the same metric.
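For monetary data like the sample above, one way to follow this advice is to scale values to the smallest currency unit before writing, so that every data point for the metric stays an integer. A minimal sketch, assuming values arrive as decimal strings with at most two fractional digits (the scaling step is an illustration, not part of OpenTSDB):

```java
import java.math.BigDecimal;

public class AvoidMixedTypes {
    public static void main(String[] args) {
        // The floating-point value from the sample data
        String raw = "922337203685477.12";

        // Shift the decimal point right by two places to store cents as a long;
        // longValueExact() throws if any fractional part would be discarded.
        long cents = new BigDecimal(raw).movePointRight(2).longValueExact();

        System.out.println(cents); // prints 92233720368547712
    }
}
```

With all writes expressed in cents, the metric contains only integers, so OpenTSDB never needs to widen the series to doubles on read.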