Impact of too many decimal places on query performance
Posted: Sun Feb 20, 2011 8:57 pm
Hi,
Does anyone know whether having too many decimal places will reduce the performance of calculations in TM1?
I have two measures in a cube that go through the same rules and aggregations, and they are populated at exactly the same coordinates, so they hold exactly the same number of records in the cube. However, one of them is generated by dividing the other in a TI process, so most of its records carry a long string of decimals.
I have observed that queries against the TI-calculated measure return much more slowly than those against the other. My hypothesis is that the extra decimals cause that measure to take longer to calculate, which in turn reduces query performance. I know I can round the numbers in the TI process after the division (something like the sketch below), but before doing that I wanted to get comments from the experts.
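For reference, this is roughly the kind of rounding I have in mind on the Data tab of the TI process. The variable and object names here (vNumerator, vDenominator, vProduct, vMonth, 'SalesCube', 'Ratio') are just placeholders for illustration, not my real model:

    # Data tab - round the division result before writing it back
    # (all names below are placeholders, not my actual cube/variables)
    IF ( vDenominator <> 0 );
        nRatio = ROUNDP ( vNumerator / vDenominator, 4 );   # keep 4 decimal places
        CellPutN ( nRatio, 'SalesCube', vProduct, vMonth, 'Ratio' );
    ENDIF;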
Any comments will be appreciated.
Regards,