calculated VERSUS stored & frozen historic data

TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

calculated VERSUS stored & frozen historic data

Post by TM1nowhere »

Hi all,
I'm new to TM1 and to this forum, so please bear with me... I have a question about calculation and about retrieving data from historic periods & scenarios...

I used to work with Essbase, where I could easily retrieve (via ad-hoc queries or the HsGetValue function) data at pretty much every level of a dimension's hierarchy without much waiting for calculations to finish.
When loading new data into, let's say, a BUDGET scenario, I could run a calc script for just that scenario and, say, 2014 - the current planning year.
Historic periods such as ACTUAL, BUDGET or FORECAST for 2013 were frozen, and users could easily extract this data and analyze it...

I'm now at a company that uses TM1. From what I understand, all data is stored only at the leaf level, and higher-level data points are calculated in-memory (RAM) and discarded when RAM is full or when a new data load happens - data loads happen anywhere from once a day to once an hour...

Now here's the shocker: refreshing a report/dashboard in Excel that pulls roughly 1,000 data points (using the DBRW function, which I suppose is TM1's equivalent to Essbase's HsGetValue function) takes about 10 min. This is the case even with data coming from prior-year months or this year's BUDGET scenario, which are/should be "locked". Apparently this is because all data at non-leaf-level members is calculated on the fly...

Does this setup make sense, or is there a way to pre-calculate and freeze historic data to prevent TM1 from constantly calculating data and then discarding it again? It doesn't sound very logical to me...

Thx in advance.
a-sceptical-BI-tool-user

PS: btw, once the data for one Org is calculated and in TM1's cache, it takes about 2 seconds to "re-retrieve" it... until the cache gets dumped... then it has to be re-calc'ed... brrrrr....
Last edited by TM1nowhere on Fri Mar 28, 2014 11:29 pm, edited 2 times in total.
declanr
MVP
Posts: 1817
Joined: Mon Dec 05, 2011 11:51 am
OLAP Product: Cognos TM1
Version: PA2.0 and most of the old ones
Excel Version: All of em
Location: Manchester, United Kingdom
Contact:

Re: calculation or frozen historic data

Post by declanr »

Look at the ViewConstruct() TI function to create and store Stargate views ready for retrieval. Also look at the VMM and VMT settings so as to minimise how often your Stargates will be dumped.
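
As a rough illustration of both suggestions (the cube name, view name and threshold values below are all invented, and from memory VMM is in KB and VMT in seconds), a TI snippet along these lines could sit at the end of a load process:

    # Pre-build and cache a Stargate view so the first user after the load doesn't pay the calculation cost.
    # The named cube view must already exist as a saved view.
    ViewConstruct('Sales', 'Monthly Summary');
    # Raise the per-cube caching thresholds via the }CubeProperties control cube (values are held as strings).
    CellPutS('200000', '}CubeProperties', 'Sales', 'VMM');
    CellPutS('1', '}CubeProperties', 'Sales', 'VMT');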

However, when data changes etc. the cache will be invalidated anyway, and that's when you will get the long waits you are experiencing. I would say that 10 minutes for 1,000 cells is extreme... chances are there are some rules and feeders in the cubes which are far from optimal.

If you are talking about historic data (stuff that should never change again) and there are rules against it, the first step should be to get rid of the rules on that data and replace them with static numbers - a simple TI to output the rule-based data to a SQL db, Excel file or temporary 2D TM1 cube, followed by STETing the rules for that particular set of intersections and then loading the data back in, will do the trick.
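
For what the "STETed" intersections might look like afterwards, here is a made-up rule fragment (measure and year names invented) - note the specific statements for the frozen years have to sit above the general calculation:

    # Frozen years: leave the static numbers that were loaded back in.
    ['Gross Margin', '2012'] = N: STET;
    ['Gross Margin', '2013'] = N: STET;
    # Open years: keep the heavy-lifting calculation.
    ['Gross Margin'] = N: ['Revenue'] - ['Cost of Sales'];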
Declan Rodger
tomok
MVP
Posts: 2832
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA
Contact:

Re: calculation or frozen historic data

Post by tomok »

There are sooooo many variables that go into the equation of determining how long it takes to retrieve data from a TM1 cube, but 10 mins to calculate 1,000 data points sounds like extremely poor performance in almost any scenario. The first place to start is the Excel workbook in which you have the DBRW formulas. Look at the first argument of the formula and trace it to the cell it points to. Does that cell have a =VIEW formula in it? If not, this could be most, if not all, of your problem. If it does, then the problem is likely to do with poor model design or inadequate hardware resources. Troubleshooting this really is an art, and it could be any one of a number of things, or a combination of them.
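
To illustrate that check (the layout, server, cube and element names here are purely made up), a data cell and the cube-reference cell its first argument points at might look like this in a healthy slice:

    C10:  =DBRW($B$1, $B$2, $B$3, $B$4, $A10, C$9)
    B1:   =VIEW("myserver:P&L", $B$2, $B$3, $B$4, "!", "!")

If B1 just holds the text "myserver:P&L" rather than a VIEW formula, the DBRWs still work, but you lose the slice-level view optimisation described above.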
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/
TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

Re: calculated or stored & frozen historic data

Post by TM1nowhere »

Thx for the quick replies...
Are you confirming, then, that extracting 1,000 data points from TM1 via DBRW should take maybe a few seconds instead of >10 minutes...?
My fin. systems / BI folks are telling me that all is perfectly set up and that there is nothing that can be done...
With the Essbase experience I have, I'm a bit skeptical when I hear this... and wonder if this isn't simply a poor setup issue...
Thx & best!
Alan Kirk
Site Admin
Posts: 6610
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia
Contact:

Re: calculated or stored & frozen historic data

Post by Alan Kirk »

TM1nowhere wrote:Thx for the quick replies...
Are you confirming, then, that extracting 1,000 data points from TM1 via DBRW should take maybe a few seconds instead of >10 minutes...?
My fin. systems / BI folks are telling me that all is perfectly set up and that there is nothing that can be done...
With all due respect to your fin. system / BI folks, do they have even the first frickin' idea of what they are talking about?

Extracting a thousand or so values shouldn't even take a few seconds; if the calculations have been cached it'll be so fast that there's no point in measuring it.

If your fin. system / BI folks have never used TM1 before, and I suspect that they haven't, they're hardly in a position to be telling you that it's perfectly set up because the facts suggest otherwise. You'd be far better off getting in an experienced TM1 consultant, even if it's for only a day, who can look over your system and provide some guidance on what's wrong.
TM1nowhere wrote:with the Essbase experience i have, i'm a bit skeptical when i hear this... and wonder if this isn't simply a poor setup issue...
Tomok is dead right that there are an impossible (in the context of a forum posting) number of variables to consider. Here are a few:
(a) For the historical data, are you currently using rules to calculate it? If you are, you might want to consider whether it would be more efficient (and it would be) to snapshot the data and load it as static values via TI.
(b) One thing that you haven't covered is where this report/dashboard is being generated. Is it an Excel sheet? (And if so, do you have calculation set to manual to avoid a potentially ridiculous number of iterations?) Is it a TM1 Web sheet? Is it some kind of third party front end? I'd certainly be looking at whether there's a delay between the server and the front end. One way of doing this is to take a look at TM1 Top on the server and see whether the server is actually calculating for the whole time.

One thing that you said in the original post was also bothersome:
higher-level data points are calculated in-memory (RAM) and discarded when RAM is full
The RAM should NEVER be full. TM1 is an in-memory calculation engine and you need to throw enough RAM at it to play with. If you don't, it will go to virtual memory and that will slow the crud out of your server. If that's your problem, you need a bigger boat. (Server, anyway.)
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

Re: calculated VERSUS stored & frozen historic data

Post by TM1nowhere »

Hi Alan,
Thx so much for your answer and input - very helpful!
To address some of your questions and give you a real look at what I was talking about, I made a screen capture of a sample Excel report - it's in Excel where I intend to pull data and then design reports...
It's a 5 min video where you can see how Excel gets data from TM1 very quickly - if the data is cached... then, when I change the scenario in one column and hit the refresh button, it freezes...
Each column in which I change the scenario has 25 DBRW functions, and I change the scenario in three of these columns one after the other...
In the last scenario change you should be able to see Excel's calculation progress in the status bar at the bottom right...

here the link: http://tinyurl.com/o95onbs

A general question: are reports like this commonly used with TM1 or somewhat atypical?
With Essbase I never had any problem with similar, sometimes much larger reports using HsGetValue and HsDescription...

And LASTLY: is it common that Cognos TM1 users extract data via queries, load this data into Excel and then build reports from that data?
Which often results in large data files that (benefit:) can be used offline but (negative:) lose the Single-Source-Of-Truth principle/benefit...

Thx again for your input and best regards!
jim wood
Site Admin
Posts: 3953
Joined: Wed May 14, 2008 1:51 pm
OLAP Product: TM1
Version: PA 2.0.7
Excel Version: Office 365
Location: 37 East 18th Street New York
Contact:

Re: calculated VERSUS stored & frozen historic data

Post by jim wood »

TM1nowhere wrote: A general question: are reports like this commonly used with TM1 or somewhat atypical?
With Essbase I never had any problem with similar, sometimes much larger reports using HsGetValue and HsDescription...
Excel historically was the front end for TM1. Most reports, if not all, were built in Excel, and to this day Excel is often the reporting weapon of choice. You do, however, seem to have missed the guys' other points regarding the actual data rolled into your forecasts. If the values are rule-based rather than TI-based then that will make things slower, especially if the rules/feeders are not well written. To give you an idea, having the values done via rules is a bit like ASO dynamic aggregation; doing it via TI is like running a calc script in BSO and storing the results.

I agree that it sounds like you could do with somebody coming in to assess your system and give you pointers. Also, have you considered completing some TM1 developer training?
TM1nowhere wrote: And LASTLY: is it common that Cognos TM1 users extract data via queries, load this data into Excel and then build reports from that data?
Which often results in large data files that (benefit:) can be used offline but (negative:) lose the Single-Source-Of-Truth principle/benefit...
You do see that a lot when the users don't really know what they are doing and need training on how to extract data out of TM1 and how to use it within Excel. Really they should build as close to the final report as they can get in the Cube Viewer, then create a slice (or Active Form) in Excel and format/amend as needed.

While, as Alan said, TM1 is more than capable of getting 1,000 data cells out quickly, you're kind of missing out on key benefits that the system can bring you.
Struggling through the quagmire of life to reach the other side of who knows where.
Shop at Amazon
Jimbo PC Builds on YouTube
OS: Mac OS 11 PA Version: 2.0.7
TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

Re: calculated VERSUS stored & frozen historic data

Post by TM1nowhere »

Hi Jim,
I'm not a TM1 admin... in an old role I wrote the specs for an Essbase planning application and had external implementers do the heavy lifting... I just reviewed, added some KPIs and made some small rule changes in Essbase's admin console...
Now, in a new role, I'm working for the first time with TM1 as an end user - and am somewhat surprised by what I find...
Hence I'm just starting to get a grip and trying to understand what is "normal" and what is not...
What of the "abnormal" might be due to system limitations or to design choices...
And which of the design choices are poor choices, or maybe driven by the BI environment...
Example: if daily actuals are loaded into TM1... or if dimension hierarchy changes happen every month... or if overhead allocations change twice a year... how does that affect values done via rules or via TI...?
And "with all due respect", I don't want to ask my fin. system / BI folks if they have a "frickin' idea of what they are talking about" / what they are doing... ;)
Mostly b/c I don't have a freakin' idea what I'm talking about/dealing with... but with Alan's and your comments I might be able to start asking some questions that might lead the BI folks to think things over...
Hope this makes sense... Cheers...
TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

Re: calculated VERSUS stored & frozen historic data

Post by TM1nowhere »

:idea: One thing that my BI folks told me: IBM is working on "Indexed Caching" for the next version of TM1... supposedly that will resolve some of the issues/waiting I'm seeing... have you / has anyone heard of this?
Last edited by TM1nowhere on Thu Apr 03, 2014 10:19 pm, edited 1 time in total.
Duncan P
MVP
Posts: 600
Joined: Wed Aug 17, 2011 1:19 pm
OLAP Product: TM1
Version: 9.5.2 10.1 10.2
Excel Version: 2003 2007
Location: York, UK

Re: calculated VERSUS stored & frozen historic data

Post by Duncan P »

All caches that TM1 uses are already indexed anyway so I am curious as to what they are referring to. If there is any more information that you can share that is not under a non-disclosure agreement it would be interesting to read it.
David Usherwood
Site Admin
Posts: 1457
Joined: Wed May 28, 2008 9:09 am

Re: calculated VERSUS stored & frozen historic data

Post by David Usherwood »

I don't think your BI guys are right. I have heard there are longer term ideas to manage more closely which parts of the cache are invalidated when changes are made.
You should really get an experienced TM1 consultant in for a brief healthcheck - there might be something quite simple to address.
TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

Re: calculated VERSUS stored & frozen historic data

Post by TM1nowhere »

Unfortunately that's all I have on the cache... I'll try to poke around and find out more, and will share if there's any meat...
Reading again through Jim's post explaining what TM1's "ASO" and "BSO" are, can I, in layman's terms, put this in a nutshell like this:
ASO: aggregation rules aggregate leaf-level data on the fly... which is slow but dynamic...
BSO: TI can load and bring back into TM1 "pre-calculated" non-leaf-level (historical/frozen) data... this data is somehow stored outside of TM1 but, through the TI load, is available much faster than the dynamically calculated data...
Does that make sense or did I get this upside-down / backwards / inside-out - aka completely wrong...? :shock:
thanks anyhow!
Alan Kirk
Site Admin
Posts: 6610
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia
Contact:

Re: calculated VERSUS stored & frozen historic data

Post by Alan Kirk »

TM1nowhere wrote:Unfortunately that's all I have on the cache... I'll try to poke around and find out more, and will share if there's any meat...
Reading again through Jim's post explaining what TM1's "ASO" and "BSO" are, can I, in layman's terms, put this in a nutshell like this:
ASO: aggregation rules aggregate leaf-level data on the fly... which is slow but dynamic...
BSO: TI can load and bring back into TM1 "pre-calculated" non-leaf-level (historical/frozen) data... this data is somehow stored outside of TM1 but, through the TI load, is available much faster than the dynamically calculated data...
Does that make sense or did I get this upside-down / backwards / inside-out - aka completely wrong...? :shock:
thanks anyhow!
Kinda wrong, especially with the "stored out of TM1" bit.

Think of it this way. Suppose that you have data for the years 2010 to 2014.

And suppose that you have a lot of heavy-lifting calculations that work out the reported values, based on the raw data that you have stored for those years. (Whether those calculations are optimally designed is another thing. Based on what you've posted I definitely don't recommend relying on the advice that your "BI guys" have given you on that point since I get the impression that their knowledge of TM1 is only slightly greater than that of the [X] key on my keyboard. As others have repeated, you need to get a TM1 specialist in to look at your setup. But let's leave that aside for the moment.)

Here's the thing... unless your accounting systems are really rubbery, that base data is unlikely to change now for 2010, 2011, 2012, maybe 2013.

So what's the point of doing all of those heavy lifting calculations every time the data is looked at (even if it only needs to be done the first time you do it after a server restart) if they'll always and forever return the same numbers anyway?

There are several ways of handling this, but I'll give you one.

You export the data for the older years into a text file. Exclude zeroes. Exclude consolidations. Do NOT exclude rule-calculated values.

Then you change those heavy-lifting rules so that they only apply to the years that can change. This means that the formerly calculated values will be zero, and more importantly you'll be able to upload to them.

Then, you reload the values that you exported into your text file back into the cube.

You end up with the same numbers but instead of TM1 needing to calculate them every time they're looked at, they're static values. The only thing that TM1 needs to calculate is the values for the current year(s) and any consolidations. The data is in TM1, but it's there in the form of hard coded numbers instead of being calculated.
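
To see the shape of it (the file path, cube, dimension and variable names below are all invented), the Data tabs of the two TI processes might boil down to a line each - the export process reads a view of the cube with zeroes and consolidations skipped but rule-calculated values included, and the reload process reads that file back in once the rules have been restricted to the open years:

    # Export process, Data tab:
    ASCIIOutput('D:\TM1\exports\history_2010_2013.cma', vScenario, vYear, vCostCentre, vAccount, vMonth, NumberToString(vValue));

    # Reload process (data source = the file above), Data tab:
    CellPutN(StringToNumber(vValue), 'P&L', vScenario, vYear, vCostCentre, vAccount, vMonth);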
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

Re: calculated VERSUS stored & frozen historic data

Post by TM1nowhere »

Hi... and a BIG thx! Even the layman understood... ;)
One concern though... how do you deal with hierarchy changes in, for instance, the cost centre dimension?
If it's policy to restate historical data to align with the new/current org structure, wouldn't that mean historic data would have to be re-calc'ed and re-loaded every time the dimension changes?
I'm wondering if this would be a very time-consuming process if we're talking about a quite sizable TM1 instance...
Again, THANKS a 1,000,000, or let's make it 1,000,000,000...!
Cheers
Steve Rowe
Site Admin
Posts: 2424
Joined: Wed May 14, 2008 4:25 pm
OLAP Product: TM1
Version: TM1 v6,v7,v8,v9,v10,v11+PAW
Excel Version: Nearly all of them

Re: calculated VERSUS stored & frozen historic data

Post by Steve Rowe »

You shouldn't need to worry about consolidations recalcing for static periods as these are many times faster than ruled values.

Of course if you don't want history to change when your company changes its org structure then you have a whole different set of problems....

Cheers
Technical Director
www.infocat.co.uk
Alan Kirk
Site Admin
Posts: 6610
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia
Contact:

Re: calculated VERSUS stored & frozen historic data

Post by Alan Kirk »

What Steve said, assuming that your rule values are not dependent on your org structure at any given time. Generally speaking they shouldn't be; the calculation of org structure values should generally be the province of your consolidation hierarchies, with rules just handling the generic value calculations that the consolidations then aggregate.

If, on the other hand, your rules are dependent on your org structure... then again it comes back to hiring someone with genuine TM1 skills to come in and see whether they can be implemented in a better way. (Which they almost certainly can from a performance view if nothing else.)
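
As a tiny made-up illustration of that split (cube and account names invented): the rule derives the value at leaf ("N:") level only, and the cost centre consolidation hierarchy does the org roll-up, so a reorganisation never has to touch the rule:

    SKIPCHECK;
    # Derive Margin at leaf level; the CostCentre hierarchy aggregates it up whatever org tree is current.
    ['Margin'] = N: ['Revenue'] - ['Cost of Sales'];
    FEEDERS;
    ['Revenue'] => ['Margin'];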
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
overflow
Posts: 22
Joined: Wed Jan 21, 2009 1:24 am
OLAP Product: TM1
Version: 10.2 9.4 10.1
Excel Version: 2013 2010
Location: Melbourne Australia

Re: calculated VERSUS stored & frozen historic data

Post by overflow »

Interesting set of comments!

Just on one point -
One concern though... how do you deal with hierarchy changes in, for instance, the cost centre dimension?
If it's policy to restate historical data to align with the new/current org structure, wouldn't that mean historic data would have to be re-calc'ed and re-loaded every time the dimension changes?
This comes back to the fundamental points previously made about static storage - if a value is held statically it can be consolidated using appropriate alternative hierarchies in the cost centre dimension, showing it under both the historical and the current structures. I am assuming this does not also reassign the data to a new data point. This would involve some thought about how the cost centre dimension is updated - deletion of obsolete members will have the effect of wiping data. Better to have the current structure not include those obsolete members, but have them included in an alternate hierarchy.
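
A hypothetical TI fragment for that approach (dimension, consolidation and element names invented) - the closed cost centre stays in the dimension, but only under an "old structure" roll-up:

    # Create the alternate roll-up if it isn't there yet, then hang the obsolete cost centre under it.
    If (DimIx('CostCentre', 'Total Org - 2012 Structure') = 0);
       DimensionElementInsert('CostCentre', '', 'Total Org - 2012 Structure', 'C');
    EndIf;
    DimensionElementComponentAdd('CostCentre', 'Total Org - 2012 Structure', 'CC1234', 1);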

Hope that helps!
Overflow.
TM1nowhere
Posts: 9
Joined: Fri Mar 28, 2014 9:32 pm
OLAP Product: TM1, Jedox/Palo, Essbase
Version: 10.1
Excel Version: 2010

Re: calculated VERSUS stored & frozen historic data

Post by TM1nowhere »

Hi all,
It's been a while, but this calc'ed vs stored/frozen data topic hasn't stopped bugging me - mainly b/c I want to create reports using the DBRW() function.
In the meantime I've learned, though, why the reports I've created took sooooo long to refresh/recalculate.
From my understanding, the Excel TM1 add-in isn't smart enough (as smart as Hyperion's SmartView ;) ) to understand/guess that reports usually contain multiple data points. So when recalculating, it goes one DBRW() function at a time: it queries TM1, TM1 calculates this data point, returns the value to Excel, and it goes on to the next data point.

However, there's a solution to this, and declanr and tomok tried to point to it - novice/amateur that I am, I didn't get it (and it's also not a very intuitive solution).

To speed up the data pull, my Cognos folks showed me how to use the VIEW() and SUBNM() functions.
In a nutshell, and from my understanding, using these functions I can query various data points at once and, when a report isn't using all dimensions, it can tell TM1 that, thereby reducing the aggregation.

Attached is a small example report - does this make sense? Hopefully it helps someone...
Attachments
solution.xlsx
DBRW() in conjunction with VIEW() and SUBNM() functions to reduce TM1 recalculation time
(12.5 KiB) Downloaded 233 times
Last edited by TM1nowhere on Thu Jul 10, 2014 3:33 am, edited 1 time in total.
Alan Kirk
Site Admin
Posts: 6610
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia
Contact:

Re: calculated VERSUS stored & frozen historic data

Post by Alan Kirk »

TM1nowhere wrote:Hi all,
It's been a while, but this calc'ed vs stored/frozen data topic hasn't stopped bugging me - mainly b/c I want to create reports using the DBRW() function.
In the meantime I've learned, though, why the reports I've created took sooooo long to refresh/recalculate.
From my understanding, the Excel TM1 add-in isn't smart enough (as smart as Hyperion's SmartView ;) )
I have a theory that any piece of software that has the word "Smart" at the front of it is guaranteed to have been written by someone who is as dumb as a housebrick. I can't recall seeing any software named Smart-whatever that wasn't a stupidly designed pain in the butt.
TM1nowhere wrote:to understand/guess that reports usually contain multiple data points. So when recalculating, it goes one DBRW() function at a time: it queries TM1, TM1 calculates this data point, returns the value to Excel, and it goes on to the next data point.
Wow. That is so wrong I barely know where to begin without pointing you to the manuals.

DBR formulas will calculate one at a time. There are reasons why you will sometimes need this, typically if you are using the returned value of one cell as an argument to another function. But for the most part, you don't create reports with DBRs.

DBRWs process the values in batches, whether there is 1 data point or 1,000.
TM1nowhere wrote:However, there's a solution to this, and declanr and tomok tried to point to it - novice/amateur that I am, I didn't get it (and it's also not a very intuitive solution).

To speed up the data pull, my Cognos folks showed me how to use the VIEW() and SUBNM() functions.
In a nutshell, and from my understanding, using these functions I can query various data points at once and, when a report isn't using all dimensions,
No. Just... no. Every DBRW formula always, without fail, uses all dimensions. Again, this is covered in the manuals. The difference is that a View function will create a (for want of a better term) mini-cube (technically known as a Stargate View) which precalculates the values for the title elements in your report's view. The title elements are specified by adding their cell references or names to the View() function. The ! operators tell the view function to look in the rows and columns to find the elements for those dimensions. But ALL dimensions are used.
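
To put that description into a concrete (and entirely made-up - server, cube, dimension and element names are all invented) slice layout for a five-dimension cube:

    B2:   =SUBNM("myserver:Scenario", "All", "Budget")
    B3:   =SUBNM("myserver:Year", "All", "2014")
    B4:   =SUBNM("myserver:CostCentre", "All", "Total Org")
    B1:   =VIEW("myserver:P&L", $B$2, $B$3, $B$4, "!", "!")
    C10:  =DBRW($B$1, $B$2, $B$3, $B$4, $A10, C$9)

The three SUBNM cells pin the title dimensions, the two "!" arguments tell the VIEW function that the remaining two dimensions (say Account and Month) come from the rows and columns, and the DBRW still supplies one element for every one of the five dimensions.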

Repeat after me, "Values in a TM1 cube are stored at the intersection of ONE (and ONLY one) element from EACH dimension in the cube, without exception". To retrieve any single data point, regardless of whether your report is returning one or a thousand such points, you specify one element from every dimension in every DBR or DBRW formula.

There are no exceptions.

View functions can, if properly used, generally result in pretty impressive performance gains. On rare occasions they may degrade performance, but generally they improve it. SubNm functions will not make a blind bit of difference; they're only there to allow you to change your selection of title elements by double-clicking on a cell.
TM1nowhere wrote: it can tell TM1 that, thereby reducing the aggregation.
Err... no. If you select a consolidation as one of your elements in a DBR or DBRW formula then TM1 will, at some point, need to calculate that value unless it has already been cached. As with any other software application, free lunches are notable only by their absence.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
stephen waters
MVP
Posts: 324
Joined: Mon Jun 30, 2008 12:59 pm
OLAP Product: TM1
Version: 10_2_2
Excel Version: Excel 2010

Re: calculated VERSUS stored & frozen historic data

Post by stephen waters »

TM1nowhere wrote:Hi all, In the meantime I've learned, though, why the reports I've created took sooooo long to refresh/recalculate.
I haven't read back over the whole of the thread but, looking at the youtube video, did anyone point out that you have a 16-dimensional cube? Looking at the dimensions, I think it is going to be very sparse (meaning you need to get your feeders right) and unwieldy for end users.

Normally in TM1 systems you would set up different cubes to reflect the dimensionality of different functional areas of the system (e.g. sales and direct costs by product and/or customer, staff costs by employee, P&L by account and cost centre) and pull them together at the appropriate level using rules (or maybe TI). Putting everything in one massive cube is the sort of approach we get from IT techies or people from a Cognos BI background who don't understand TM1 or the users' needs. It really suggests to me you have a poor design.

Some other points (may already have been mentioned?):
- You have 18(?) tabs. If you recalc the full workbook you are calcing all of these as well.
- Users downloading large amounts of data to manipulate in Excel is normally a sign of (i) a poorly designed/performing system and/or (ii) poor user training. In your case I suggest option (i) is the primary cause.
- My understanding is the TM1 dev team are putting a lot of effort into enabling more granular caching in TM1, but it will not be ready for a while.
- Also, if you are on 10.2.x, are you using MTQ? (See the config sketch below.)
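
(For reference, MTQ - multi-threaded query - is just a tm1s.cfg entry. The fragment below is illustrative only: a positive number is roughly "how many cores a single query may use", and it needs 10.2.x plus a server restart to take effect.)

    [TM1S]
    MTQ=4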