- Steve Vincent
- Site Admin
- Posts: 1054
- Joined: Mon May 12, 2008 8:33 am
- OLAP Product: TM1
- Version: 10.2.2 FP1
- Excel Version: 2010
- Location: UK

TI: Cube View Cached or Re-evaluated?

I'm in the process of testing this, but it would be interesting to see what others think too.

If a TI uses a cube view as its data source, does it re-evaluate that view for each line of metadata/data it processes?

I am looking to improve a section of code, but my current idea relies on the theory that the view is not cached. I want to use some MDX in a subset which will be affected by the TI as it processes the metadata code (the TI updates an attribute, and the MDX filters out elements with that attribute). If the view is re-evaluated, the MDX should shrink the view significantly as the process runs, greatly reducing the amount of data it needs to look at. If the view is cached, then changing the attribute would have no effect, and my process will take just as long as it does now.

I'm trying to use data from a fairly large cube (13 dimensions, several of which contain thousands of elements), and using top-level consolidations is taking a long time to process. My idea is to use this MDX theory to "skip" values once it finds them, marking them as such to avoid repeating the task. The current code takes 20 minutes to run on one server, but I may need to repeat this on a new server where it could take much, much longer. I'm looking to improve what I have now to avoid that potential issue.
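For illustration, the kind of subset MDX I have in mind would be something along these lines (the 'Processed' attribute and its 'Y' flag are just placeholder names):

Code: Select all
{ FILTER(
  {TM1SUBSETALL( [Dim 1] )},
  [Dim 1].[Processed] <> "Y"
) }

The hope being that, as the TI flags each element as processed, every re-evaluation of the subset would drop those elements from the view.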
If this were a dictatorship, it would be a heck of a lot easier, just so long as I'm the dictator.
Production: Planning Analytics 64 bit 2.0.5, Windows 2016 Server. Excel 2016, IE11 for t'internet
-
- Community Contributor
- Posts: 103
- Joined: Mon Sep 05, 2011 11:04 pm
- OLAP Product: TM1
- Version: 10.2
- Excel Version: 2010
Re: TI: Cube View Cached or Re-evaluated?
Dynamic subsets are re-evaluated any time a data change is made anywhere in the system, so I would think that, in theory, what you're proposing would work.
However, the time it would take to re-evaluate the MDX on every iteration of the TI process would be a considerable performance hit. It's normally hundreds of times faster to use a datasource with no dynamic subsets, rebuilding static subsets in the prologue, than to use MDX subsets in the datasource, because of the constant invalidation of the MDX cache; but obviously that won't help in this case.
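As a rough sketch of that prologue pattern (the cube, view, dimension, subset and attribute names here are all just placeholders):

Code: Select all
# Rebuild a static subset in the prologue, keeping only elements
# that still need processing, then point the source view at it
If( SubsetExists( 'Dim 1', 'zSourceViewSubset' ) = 1 );
  SubsetDeleteAllElements( 'Dim 1', 'zSourceViewSubset' );
Else;
  SubsetCreate( 'Dim 1', 'zSourceViewSubset' );
EndIf;
i = 1;
While( i <= DIMSIZ( 'Dim 1' ) );
  sEl = DIMNM( 'Dim 1', i );
  # Skip elements already flagged as processed
  If( ATTRS( 'Dim 1', sEl, 'Processed' ) @<> 'Y' );
    SubsetElementInsert( 'Dim 1', 'zSourceViewSubset', sEl, 1 );
  EndIf;
  i = i + 1;
End;
ViewSubsetAssign( 'MyCube', 'zSourceView', 'Dim 1', 'zSourceViewSubset' );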
I recently had to manipulate dimensions of close to 3 million elements for manageable display in Active forms. MDX was far too slow to work, and it was actually faster to iterate through the whole dimension in TI and build static subsets for display.
I would guess, therefore, that your best bet would be to start off with a view consisting of static subsets, built in the prologue tab with While loops. Then you could keep a row count in your TI and, every 1,000 or so rows, re-run the routine from the prologue to cut down your datasource based on the new data added.
If this doesn't work within the data tab, then consider breaking the TI into a parent process and a sub-process, with the parent rebuilding the datasource for the sub-process in a while loop (say, 10 times) and executing the sub-process with the revised datasource on each iteration. I think this would be much, much faster than anything with MDX, and it would definitely work if the 'on the fly' re-evaluation of the datasource doesn't behave as expected.
-
- Community Contributor
- Posts: 103
- Joined: Mon Sep 05, 2011 11:04 pm
- OLAP Product: TM1
- Version: 10.2
- Excel Version: 2010
Re: TI: Cube View Cached or Re-evaluated?
So, thinking about it some more, I would have a parent process with just some code in the prologue:

Code: Select all
# Parent process prologue: run the sub-process in batches,
# letting it rebuild its own datasource on each run
i = 1;
While( i <= 10 );
  ExecuteProcess( 'SubProcess' );
  i = i + 1;
End;

And then in the subprocess:

Code: Select all
#### Prologue ####
# Dim 1: rebuild the static subset behind the source view
If( SubsetExists( 'Dim 1', 'zSourceViewSubset' ) = 1 );
  SubsetDeleteAllElements( 'Dim 1', 'zSourceViewSubset' );
Else;
  SubsetCreate( 'Dim 1', 'zSourceViewSubset' );
EndIf;
i = 1;
While( i <= DIMSIZ( 'Dim 1' ) );
  ## Logic to decide which elements to keep goes here
  SubsetElementInsert( 'Dim 1', 'zSourceViewSubset', DIMNM( 'Dim 1', i ), 1 );
  i = i + 1;
End;
Count = 0;

#### Data Tab ####
# Stop after 10,000 rows so the parent loop can rebuild the datasource
If( Count >= 10000 );
  ProcessQuit;
EndIf;
## Main TI logic here ##
Count = Count + 1;

The only thing you would then have to do would be to have the correct number of iterations in the master process while loop, which you could do by having some sort of reconciliation figure and then breaking out of the loop when you had finished the process.

I hope that makes sense...
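If the 'break out' part is unclear: TI has no break statement, but you can test the return value of ExecuteProcess in the parent loop and force the loop to end once the sub-process finishes without hitting its row limit. A sketch, again assuming the sub-process is called 'SubProcess':

Code: Select all
i = 1;
bDone = 0;
While( i <= 10 & bDone = 0 );
  nRet = ExecuteProcess( 'SubProcess' );
  # ProcessQuit in the sub-process means it hit its row limit and
  # there may be more data; any other exit code means it ran out
  # of rows, so we can stop looping
  If( nRet <> ProcessExitByQuit() );
    bDone = 1;
  EndIf;
  i = i + 1;
End;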