We've been working on improving the overall cube performance for one of our models, and I was wondering if I could get some thoughts and expert advice on what we have done and/or are looking at, to make sure we're approaching this in the right manner. I still have the feeling I'm not fully grasping how the different config parameters impact performance, or how to read and interpret the results logged in the various performance monitor cubes.
To set the scene: we're running TM1 v10.2 on a virtual Windows 2008 server with 4 cores and 32 GB of RAM (I think; I'm not entirely sure about the specs). We've built a number of inter-linked cubes with a fairly typical story behind them, I think: a couple of cubes load detailed actuals on units and overall costs, which are pushed through to some intermediate calculation cubes (one per main cost category), where we also calculate budgeted and forecasted values based on a) budgeted and forecasted volumes and b) input parameters and so on stored in separate, relatively smaller cubes.
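To give an idea of how the cubes hang together, the rules on the intermediate calculation cubes look roughly like this (cube, dimension and element names are simplified here for illustration, they are not the actual ones):

```
# Rule on an intermediate calculation cube -- names are illustrative only
SKIPCHECK;

# Actuals are pulled in from the detail cubes
['Actual'] = N: DB('FOF.CC.Actuals', !CostCentre, !Account, !Month, 'Amount');

# Budget/forecast values are derived from volumes and input parameters
['Budget'] = N: DB('FOF.CC.Volumes', !CostCentre, !Month, 'Budget Volume')
             * DB('FOF.CC.Rates', !Account, 'Rate');

FEEDERS;
# The corresponding feeders sit in the rule files of the source cubes
```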
I'm sure we can still improve the overall logic, and especially some specific feeders, to reduce memory consumption, but my question here relates to improving the end-user experience when using cube views and slicing results to Excel. So what we have done so far has been primarily focused on the time it takes to calculate certain views and return the results to the user.
Attached you can find a recent extract of the StatsForServer and StatsByCube cubes to give you an idea of the overall memory consumption. Also attached is the current version of the tm1.cfg file, in which you will see we have added a number of parameters:
- UseStargateForRules=T
- ForceReevaluationOfFeedersForFedCellsOnDataChange=T
- MaximumViewSize=3000
- ViewConsolidationOptimization=T
- ViewConsolidationOptimizationMethod=TREE
- LockPagesInMemory=T
- AllRuleCalcStargateOptimization=T
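For easy reference, this is how that block sits in the tm1.cfg file (the other, standard entries are omitted here; the full file is in the attachment):

```
[TM1S]
# ... standard server settings omitted ...
UseStargateForRules=T
ForceReevaluationOfFeedersForFedCellsOnDataChange=T
MaximumViewSize=3000
ViewConsolidationOptimization=T
ViewConsolidationOptimizationMethod=TREE
LockPagesInMemory=T
AllRuleCalcStargateOptimization=T
```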
OK, so with this in mind, here are a bunch of questions:
- Looking at the StatsByCube and StatsForServer (XLS file): what is the reason for the delta in "Memory Used" - "Memory In Garbage" - Sum(Total Memory Used by Cube)? I would expect this to be zero, but there is a small difference which I currently cannot explain.
- Again looking at the StatsByCube and StatsForServer (XLS file): with the performance monitor running for a couple of weeks now, I find it odd that there is only one cube with memory allocated to views. Especially since I have been creating and testing views on the main cube (FOF.CC.Planning) that clearly take more than two seconds to load, I would expect a much higher figure for memory used (closer to 10 MB), and also a higher number of stored views. Is there anything that could occur that would erase all of the views held in memory?
- The two largest cubes in our model are shown at the top. But looking at the memory used for input data (last column), you see big differences, mainly because the number of populated numeric cells is very different. Which is obviously odd, because the FOF.CC.Planning cube has a lot more than 117 cells: the product of the element counts across all its dimensions comes to 29.7 billion cells. All of these are linked to underlying cubes through DB rules (so there is no direct user input in this cube).
- I've created two big views on the FOF.CC.Planning cube and saved them as private views. As they initially take some time to load, I would expect TM1 to store these in memory after the first time, so that when I request the exact same view a second time it would pop up instantly. However, this is not the case: I don't notice any difference in calculation or loading time between the first and a second request of those views. Also, by doing this just now, the memory allocated to views in the StatsByCube was reset to zero (not only for the FOF.CC.Planning cube, but for all of them).