Hello,
I am curious to know if you are still using the ConsolidateChildren option for your currency translations? What are your thoughts regarding model performance using feeders vs consolidations? Would you recommend using this method in a cube which contains numerous static versions (they do not require rules or feeders)? My concern is that the system will be continuously calculating fx translations (consolidated values for each currency & ConsolidateChildren) on cells which would otherwise be simple, static, N level copies.
Also, does the ConsolidateChildren ignore Skipcheck? If so, would the system be forced to evaluate every intersection/N level cell to perform the fx translation?
Memory for Feeders in Large Sparse Cubes
Re: Memory for Feeders in Large Sparse Cubes
This is a pretty old thread you have reawakened.
hari_cundi wrote: does the ConsolidateChildren ignore Skipcheck? If so, would the system be forced to evaluate every intersection/N level cell to perform the fx translation?

I'll start with this one. Basically "Yes!" ConsolidateChildren does ignore SkipCheck and will stop off at every intersection along the path within the dimension(s) it has been told to consolidate. That is why it can cause very slow performance and why it should only be used in circumstances where it is appropriate.
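To make that concrete, here is a minimal sketch of the kind of rule being discussed, using hypothetical dimension and element names ('Currency', 'Total Currency'): ConsolidateChildren rebuilds the consolidated cell by summing its immediate children, and it visits each child intersection whether or not it is fed.

```
SKIPCHECK;

# Hypothetical names: force the 'Total Currency' consolidation to be built by
# summing its immediate children, even though SKIPCHECK is declared.
# Every child intersection is visited, fed or not.
['Total Currency'] = C: ConsolidateChildren('Currency');

FEEDERS;
```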
hari_cundi wrote: I am curious to know if you are still using the ConsolidateChildren option for your currency translations? What are your thoughts regarding model performance using feeders vs consolidations? Would you recommend using this method in a cube which contains numerous static versions (they do not require rules or feeders)? My concern is that the system will be continuously calculating fx translations (consolidated values for each currency & ConsolidateChildren) on cells which would otherwise be simple, static, N level copies.

This is a number of questions, and it's complicated. I am and remain a big fan and advocate of using consolidations in place of feeders, and of using C level rules where appropriate, typically for lookup-type calculations and for ratios that do not themselves need to be consolidated. For calculations that need to be consolidated, normal N level rules with feeders are better. I am not a fan of using ConsolidateChildren except in exceptional circumstances (small dimensions, small dense cubes, no other calculation possibility).

Where there is a single or limited number of input currencies and a large number of potential reporting currencies that may or may not need to be queried and reported on, using C rules for FX conversion can be very effective AS LONG AS the consolidation process is simple, e.g. ConsolidateChildren sums only different rate/month combinations up to year totals. If there is an impact on other dimensions, or a financial consolidation process with eliminations and CMA, then I wouldn't use it. If the data is static, that is, without planning input driven by users, then I think the best solution is to pre-calculate the reporting currencies with TI during the data load, as this will always give the best performance for users.
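As a rough illustration of that split between C level rules and N level rules with feeders (none of these measure names come from the thread; they are purely hypothetical), a ratio can live at C level with no feeder, while a calculation that has to roll up is written at N level and fed:

```
SKIPCHECK;

# C level ratio (hypothetical measures): recalculated at every consolidation
# rather than summed, so it needs no feeder.
['Margin %'] = C: ['Gross Margin'] \ ['Revenue'] * 100;

# N level calculation that has to roll up normally, so it is fed.
['Gross Margin'] = N: ['Revenue'] - ['Cost of Sales'];

FEEDERS;
['Revenue'] => ['Gross Margin'];
```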
Re: Memory for Feeders in Large Sparse Cubes
Hi
A key thing to realise about TM1 is that it only calculates a result for a rule when that result is actually requested. This is the opposite of feeders, which are created at cube load time and whenever data changes from zero to non-zero. Therefore, even in a cube that converted to hundreds of different currencies, performance was still good with the feed-by-consolidation approach, i.e. all reporting currencies are parents of the transaction currency.
In practice only a few of the hundreds of currencies were ever used on a frequent basis.
However, this was in a largely reporting only cube with few other calculations.
I would not try this approach in a planning cube with a lot of calculations.
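A minimal sketch of that feed-by-consolidation idea, with hypothetical cube, dimension and element names: each reporting currency (here 'Report EUR') is assumed to be a C level parent of the N level 'Transaction' element in the Currency dimension, so the conversion rule fires at C level and needs no feeder. 'FX Rate' is an assumed lookup cube.

```
SKIPCHECK;

# 'Report EUR' is assumed to be a consolidated parent of the N level
# 'Transaction' element in the Currency dimension, so this C level rule
# replaces the native rollup and needs no feeder. 'FX Rate' is an assumed
# lookup cube dimensioned by Period, Currency and FX Measure.
['Report EUR'] = C: ['Transaction'] * DB('FX Rate', !Period, 'EUR', 'Rate');

FEEDERS;
```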
In practice I would try an approach whereby there are only three reporting currencies:
Transaction
Local
Group
Transaction currency is the input.
Local Currency is determined by the Entity and is required to meet the need for statutory reporting of the Entity in local currency.
Group currency is the currency for the overall group of companies. Potentially there may be a few more, e.g. GBP, USD, EUR, etc., for comparison with other companies' results.
If you limit the range of reporting currencies then the N level calc with feeder approach should be feasible. It won't make sense to consolidate a Local Currency result across Entities but that is a training issue.
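For what it's worth, here is a hedged sketch of that limited three-currency approach, again with made-up names: 'Local Currency' is assumed to be an attribute on the Entity dimension holding each entity's statutory currency, 'FX Rate' is an assumed rate cube with rates stored per period and target currency relative to the transaction currency, and EUR stands in as the group currency.

```
SKIPCHECK;

# Hypothetical names throughout. Local and Group are N level elements in the
# reporting currency dimension, calculated from Transaction and fed from it.
['Local'] = N: ['Transaction']
    * DB('FX Rate', !Period, ATTRS('Entity', !Entity, 'Local Currency'), 'Rate');

# Assuming EUR is the group reporting currency.
['Group'] = N: ['Transaction'] * DB('FX Rate', !Period, 'EUR', 'Rate');

FEEDERS;
['Transaction'] => ['Local'], ['Group'];
```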
Regards
Paul Simon
Re: Memory for Feeders in Large Sparse Cubes
I agree with the previous posters.
In addition:
paulsimon wrote: If you limit the range of reporting currencies then the N level calc with feeder approach should be feasible. It won't make sense to consolidate a Local Currency result across Entities but that is a training issue.

Just a simple rule at C level for the consolidated companies, setting the elements to 0 (the usual [Total Company] = C: 0 type of rule, or something somewhat more involved), is often needed for currency conversions.
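A tiny sketch of that kind of suppression rule, with hypothetical element names; it stops the native rollup from summing Local Currency figures across entities:

```
# Hypothetical names: blank the Local Currency result at the all-entity total,
# since summing different local currencies across entities is meaningless.
['Total Company', 'Local'] = C: 0;
```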
Best regards,
Wim Gielis
Re: Memory for Feeders in Large Sparse Cubes
About this time last year I came across some pretty bizarre characteristics of ConsolidateChildren which will take a while to explain but which can make a huge difference to performance.
Firstly, ConsolidateChildren works by going over the direct children in the chosen dimensions, querying their values (whether they are fed or not) and returning the sum of the results.
In normal circumstances (a rule defined at that level that uses other consolidated values at the same level) this means it will end up running at least one stargate consolidation for each child cell. If there are quite a few direct children, that means many new temporary stargates. SKIPCHECK is observed in the generation of those stargates; the performance hit comes from them being generated at all for the unused intersections.
However, when the engine reads a cell it first checks in two places to see whether it already has it. The first of these is the calculation cache, but the second is the active stargates. If the dimensions over which you are doing the ConsolidateChildren are all on the rows and columns of the view that you are reading, then the stargate constructed as the first part of reading that view contains all the cells that the ConsolidateChildren is querying. This does not even depend on the level of detail you are showing on those dimensions: the dimensions on the rows and columns of the underlying stargate are expanded down to leaf level, so you can put just the total on rows or columns and still get the effect. As it is reading the values from a prepared stargate rather than calculating each one by creating another stargate, the performance can be much better. In my particular case I was running ConsolidateChildren over two currency dimensions in a cube with more than 20 other dimensions and I got a hundred-fold speed-up.
Obviously your mileage will vary but if you have to use ConsolidateChildren (and I couldn't get away from it) and can use a prepared view with the dimensions in the right places it is well worth a try.
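For context, the kind of rule described above might look like the sketch below (all dimension and element names are hypothetical). The point is that if 'Input Currency' and 'Reporting Currency' sit on the rows and columns of the view being read, the child cells this rule queries are already in that view's stargate and are not recalculated one by one.

```
# Hypothetical sketch: ConsolidateChildren over two currency dimensions in a
# large cube. Every immediate child combination is queried when the
# consolidated cell is requested.
['Total Input Currency', 'Total Reporting Currency'] = C:
    ConsolidateChildren('Input Currency', 'Reporting Currency');
```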