Exponential Problem with Natural Consolidation
-
- Posts: 41
- Joined: Thu Jul 07, 2011 7:50 am
- OLAP Product: TM1
- Version: 9.5.2
- Excel Version: Excel 2010
Exponential Problem with Natural Consolidation
What's wrong with TM1 displaying values like (0,000000000000)?
I have a natural consolidation [CD Amt], calculated using [Std Amt] - [Act Amt].
Calculating it manually, I get a value of 0.
But the natural consolidation gives a (0,0000000000) value, as shown in the attachment.
I have checked that there are no rounding issues and the values do indeed add up to 0.
May I know what else could be the problem here?
Server OS: Windows Server 2008 R2 (64-bit)
TM1: v9.5.2 FP1
MS Office: Microsoft Excel 2003, 2010
-
- Posts: 33
- Joined: Tue Jan 29, 2013 2:52 pm
- OLAP Product: TM1
- Version: 9.0 SP3 9.5.1 10.1.1
- Excel Version: excel 2010 2007 2003
Re: Exponential Problem with Natural Consolidation
Whilst I agree with ioscat that zero is not always zero, you should be able to control this effect in the formatting properties for the element in question.
- ioscat
- Regular Participant
- Posts: 209
- Joined: Tue Jul 10, 2012 8:26 am
- OLAP Product: Contributor
- Version: 9.5.2 10.1.1 10.2
- Excel Version: 07+10+13
- Contact:
Re: Exponential Problem with Natural Consolidation
And you should be very careful in using these non-zero values.
-
- MVP
- Posts: 600
- Joined: Wed Aug 17, 2011 1:19 pm
- OLAP Product: TM1
- Version: 9.5.2 10.1 10.2
- Excel Version: 2003 2007
- Location: York, UK
Re: Exponential Problem with Natural Consolidation
For those that don't have access to the IBM APAR system, the link ioscat includes has this to say:
Calculations may result in a very small number, but not zero. If this value is used as the denominator with the 'safe divide' operator "\", the division will be done resulting in a very large number, instead of treating the very small calculated number as zero.
An example of this is when you calculate the reciprocal of one minus ten tenths:
Code: Select all
1 \ ( 1 - ( 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 ) )
which gives 9007199254740900 and not 0.
Like ioscat said, you need to be careful.
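(As a note on where that huge number comes from, assuming the usual IEEE-754 doubles: the sum of ten 0.1s lands on the double immediately below 1.0, so the bracket evaluates to exactly 2^-53; its reciprocal is 2^53 = 9007199254740992, which then gets displayed to fewer significant digits as the 9007199254740900 shown.)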
-
- MVP
- Posts: 733
- Joined: Wed May 14, 2008 11:06 pm
Re: Exponential Problem with Natural Consolidation
Duncan P wrote: An example of this is when you calculate the reciprocal of one minus ten tenths:
Code: Select all
1 \ ( 1 - ( 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 ) )
which gives 9007199254740900 and not 0.
Duncan, that's interesting - if you substitute these rules then you get the correct answer of 0 per usage of 'safe divide':
Code: Select all
1 \ ( 1 - ( 0.5 + 0.5 ) );
Code: Select all
1 \ ( 1 - ( 0.25 + 0.25 + 0.25 + 0.25 ) );
Code: Select all
1 \ ( 1 - ( 0.2 + 0.2 + 0.2 + 0.2 + 0.2 ) );
Code: Select all
1 \ ( 1 - ( 0.125 + 0.125 + 0.125 + 0.125 + 0.125 + 0.125 + 0.125 + 0.125 ) );
Code: Select all
1 \ ( 1 - ( 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05 + 0.05) );
This strikes me as a bug, wouldn't you agree? Plenty of customers that I've worked with think this problem should go away.
Robin Mackenzie
-
- MVP
- Posts: 352
- Joined: Wed May 14, 2008 1:37 pm
- OLAP Product: TM1
- Version: 2.5 to PA
- Excel Version: Lots
- Location: Sydney
- Contact:
Re: Exponential Problem with Natural Consolidation
I would go for it being a limitation of IEEE-754 and the conversion of decimal to binary rather than a bug.
Do the same calculations (switching the \ to /) in Excel and you'll get the same results.
Now try doing the same with 64ths, i.e. a number that can be represented in a precise manner in both decimal and binary (and is short enough to fit into IEEE-754):
Code: Select all
1 \ ( 1 - ( 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 + 0.015625 ) )
In Excel you get a #Div/0! error and you get a full fat zero in TM1.
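(It works because 0.015625 is exactly 2^-6, so every partial sum k/64 is exactly representable in binary: the sixty-four terms add to precisely 1, the bracket is a true zero, and safe divide returns 0 where Excel's ordinary divide blows up.)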
Andy Key
-
- MVP
- Posts: 733
- Joined: Wed May 14, 2008 11:06 pm
Re: Exponential Problem with Natural Consolidation
Andy Key wrote: I would go for it being a limitation of IEEE-754 and the conversion of decimal to binary rather than a bug.
Do the same calculations (switching the \ to /) in Excel and you'll get the same results.
Seems to work OK on the calculator on the Windows desktop.
Seriously, I've seen TM1 give some flaky rounding errors on standard consolidations. When the balance sheet stops balancing, it becomes a problem for people for whom an explanation involving IEEE-754 just doesn't cut it. On that point, I'm familiar with some of the issues in binary floating-point arithmetic, but are we saying that there's been no solution, or advance, on a problem that's inherent to a standard dating back to, what, 1985?
Robin Mackenzie
-
- MVP
- Posts: 600
- Joined: Wed Aug 17, 2011 1:19 pm
- OLAP Product: TM1
- Version: 9.5.2 10.1 10.2
- Excel Version: 2003 2007
- Location: York, UK
Re: Exponential Problem with Natural Consolidation
The problem is that the only way to get the speed is to use a representation on which the standard arithmetic operations can be done in hardware. IEEE 754-2008 defines a number of decimal floating-point representations, but they are only supported on a small range of hardware (e.g. IBM POWER6 and z9) and not on the standard Intel stuff. Hence we are stuck with the problems of rounding the recurring binary representation of 1/5.
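To make the recurring-representation point concrete, a small sketch (these are the underlying binary expansions, not TM1 output):
Code: Select all
0.5  (decimal) = 0.1                (binary, exact)
0.25 (decimal) = 0.01               (binary, exact)
0.2  (decimal) = 0.0011001100110... (binary, recurring)
0.1  (decimal) = 0.0001100110011... (binary, recurring)
A 64-bit double has to round those recurring expansions off at 53 significant bits, and that rounding is the tiny residue that safe divide then happily divides by.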
-
- MVP
- Posts: 3701
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Exponential Problem with Natural Consolidation
head_is_spinning 

-
- MVP
- Posts: 733
- Joined: Wed May 14, 2008 11:06 pm
Re: Exponential Problem with Natural Consolidation
Hey Duncan, thanks for the response - always welcome and appreciated. Just to play devil's advocate for a moment - wouldn't it be an interesting avenue for TM1 to explore offering the same 'Precision as displayed' option that is referred to in this Microsoft support note? That's kind of what the OP was referring to in the third post of the thread, and it's not entirely worlds apart from the existing SpreadingPrecision parameter available today in tm1s.cfg.
Robin Mackenzie
-
- MVP
- Posts: 733
- Joined: Wed May 14, 2008 11:06 pm
Re: Exponential Problem with Natural Consolidation
lotsaram wrote: head_is_spinning
She'll be right.

Robin Mackenzie
-
- Posts: 41
- Joined: Thu Jul 07, 2011 7:50 am
- OLAP Product: TM1
- Version: 9.5.2
- Excel Version: Excel 2010
Re: Exponential Problem with Natural Consolidation
So in summary, it is not possible to get a zero unless you use the "safe divide" method in the rule itself?
Server OS: Windows Server 2008 R2 (64-bit)
TM1: v9.5.2 FP1
MS Office: Microsoft Excel 2003, 2010
- ioscat
- Regular Participant
- Posts: 209
- Joined: Tue Jul 10, 2012 8:26 am
- OLAP Product: Contributor
- Version: 9.5.2 10.1.1 10.2
- Excel Version: 07+10+13
- Contact:
Re: Exponential Problem with Natural Consolidation
Possibly yes and possibly no
Depends on
-
- Posts: 41
- Joined: Thu Jul 07, 2011 7:50 am
- OLAP Product: TM1
- Version: 9.5.2
- Excel Version: Excel 2010
Re: Exponential Problem with Natural Consolidation
ioscat wrote: Possibly yes and possibly no
Depends on
Depends on what? It really is annoying to get something like this with only 2 decimal places involved: look at all the (0.00)...
Server OS: Windows Server 2008 R2 (64-bit)
TM1: v9.5.2 FP1
MS Office: Microsoft Excel 2003, 2010
-
- MVP
- Posts: 733
- Joined: Wed May 14, 2008 11:06 pm
Re: Exponential Problem with Natural Consolidation
conray wrote: So in summary, it is not possible to get a zero unless you use the "safe divide" method in the rule itself?
No, the safe divide is no guarantee of getting absolute zero. The summary of the problem is that the human numbering system (decimal) and the computer numbering system (binary) don't always play nicely together. Sometimes there are extremely small rounding errors, meaning that the zero value is actually an extremely small number. This leads to your zero suppression problem. Maybe what we need is some sort of config parameter like:
Code: Select all
ResolveToNullAtNthDecimalPlace=2
Robin Mackenzie
Re: Exponential Problem with Natural Consolidation
IBM states that this would be fixed in TM1 10.2, but it is not entirely clear whether that means 10.2.0 itself or a version above 10.2.0.
Create a formula as follows:
['a'] = N: ['b'];
where values for 'a' are entered directly and are afterwards consolidated as normal.
Then put 'a' and 'b' into a consolidation 'c',
where 'a' has the weight 1 and
'b' has the weight -1.
Then very often the result is c <> 0 (e.g. 0,000000000000019387), at least in 10.2.0 and with our model.
This causes zero suppression problems and calculation problems (especially when using multiplication or backslash division), and overfeeding.
This does not happen (in our model) if you create 'b' as a consolidation of 'a' with the weight 1.
- paulsimon
- MVP
- Posts: 808
- Joined: Sat Sep 03, 2011 11:10 pm
- OLAP Product: TM1
- Version: PA 2.0.5
- Excel Version: 2016
- Contact:
Re: Exponential Problem with Natural Consolidation
Hi
There is a TM1 rule function ROUNDP.
Won't that do what you need?
If all else fails, use something like IF( ABS( [cell ref] ) < 0.0001, 0, [cell ref] )
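For illustration, a minimal sketch of both suggestions as rules, assuming the element names from the original post (the area references are hypothetical, and the two statements are alternatives rather than meant to coexist):
Code: Select all
# Alternative 1: round the ruled difference to the 2 decimal places displayed
['CD Amt'] = N: ROUNDP( ['Std Amt'] - ['Act Amt'], 2 );
# Alternative 2: snap anything within a small tolerance of zero to a true zero
['CD Amt'] = N: IF( ABS( ['Std Amt'] - ['Act Amt'] ) < 0.0001, 0, ['Std Amt'] - ['Act Amt'] );
Bear in mind that the OP's [CD Amt] is currently a natural consolidation, so either variant means moving the calculation into a rule.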
Regards
Paul Simon
-
- Posts: 25
- Joined: Wed Nov 12, 2014 2:27 pm
- OLAP Product: TM1 + BI
- Version: 10.2 10.2.2 + 10.2.1 10.2.2
- Excel Version: 2010 32 bit
Re: Exponential Problem with Natural Consolidation
When using a binary representation for numbers, very slight variations are to be expected, because we use the decimal system and the binary system cannot represent all decimal numbers. The reverse is also true: the decimal system cannot represent some numbers from other bases either. For instance, 1/3 is 0.3333... (never ending) in the decimal system, and when you multiply that 0.3333...33 by 3 you cannot get exactly 1. Long story short, as others explained, it is down to the binary number system.
For financial applications a decimal representation is recommended, because our monetary system is decimal and we don't have amounts like 1/3 USD that decimal cannot represent. However, computing with decimal numbers takes more time and memory, because the hardware itself works in binary. This is not just the case for TM1 but for any other software. In a programming language, if you compare the performance of a piece of code using the double type (which stores numbers in binary) against the decimal type (which stores them in decimal), you will see that the double version is faster. However, with the double type you get precision issues like the ones you are seeing now in TM1. Also note that this precision issue is not just about 0, but about any number derived from calculations on non-integer values.
Therefore, what TM1 does is sacrifice exact decimal (base 10) precision for faster performance, and it gives you the chance to use a rounding function when you require the numbers to be exactly zero.
And the reason why 0.5, 0.25 etc. can be represented in binary while 0.1, 0.2 cannot is down to the exponents of 2: 2^-1 is 1/2 (0.5), 2^-2 is 1/4 (0.25), and so on.
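To extend that last point with a worked example (plain arithmetic rather than TM1 syntax): a decimal fraction has an exact binary representation only if it is a finite sum of powers of 2:
Code: Select all
0.625 = 0.5 + 0.125 = 2^-1 + 2^-3        ->  0.101 in binary (exact)
0.1   = 2^-4 + 2^-5 + 2^-8 + 2^-9 + ...  ->  0.0001100110011... (recurs forever)
Since 0.1 never terminates, the stored double is only the nearest 53-bit approximation, which is where the original (0,000000000000) values come from.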