Dimension size, TM1 caching questions

Laki
Posts: 4
Joined: Tue Dec 04, 2012 5:17 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2010

Dimension size, TM1 caching questions

Post by Laki »

We have a cube with 8 dimensions, and one of the dimensions (GL codes) has around 99,000 elements. We have Cognos BI reports on top of this cube.
The reports at the GL code detail level are taking a long time to run. Is it good design to have a 99,000-element dimension in a cube?
Also, the runtime of the reports is never constant for the same report parameters. Is this because of TM1 caching? Can we control TM1 caching by configuring any server parameters?

Thanks,
Laki.
jim wood
Site Admin
Posts: 3958
Joined: Wed May 14, 2008 1:51 pm
OLAP Product: TM1
Version: PA 2.0.7
Excel Version: Office 365
Location: 37 East 18th Street New York

Re: Dimension size, TM1 caching questions

Post by jim wood »

As far as I remember, Cognos BI tries to load the metadata first, so with a large dimension like this in the cube BI may never perform very well. You may want to consider creating a summary cube for reporting in BI, and keeping the detailed cube in TM1 for further analysis.


Jim.
Struggling through the quagmire of life to reach the other side of who knows where.
OS: Mac OS 11 PA Version: 2.0.7
rmackenzie
MVP
Posts: 733
Joined: Wed May 14, 2008 11:06 pm

Re: Dimension size, TM1 caching questions

Post by rmackenzie »

Laki wrote: We have a cube with 8 dimensions, and one of the dimensions (GL codes) has around 99,000 elements. We have Cognos BI reports on top of this cube.
It sounds like you have an opportunity to look for different segments within the GL codes and split the single dimension out into two or more. E.g. if your GL code is 84001234577, then you might have 8400 as the business unit, 12345 as the account, and 77 as the legal entity.
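To make that concrete, here is a minimal MDX sketch of the difference. The cube, dimension, and measure names ([General Ledger], [Business Unit], [Account], [Legal Entity], [GL Measures]) are hypothetical, not taken from your model:

    -- one concatenated GL code member addresses the cell
    SELECT {[GL Measures].[Amount]} ON COLUMNS,
           {[GL Code].[84001234577]} ON ROWS
    FROM [General Ledger]

    -- the same intersection addressed through three segment dimensions
    SELECT {[GL Measures].[Amount]} ON COLUMNS,
           {([Business Unit].[8400], [Account].[12345], [Legal Entity].[77])} ON ROWS
    FROM [General Ledger]

With the segments broken out, a report can slice on any single segment (e.g. everything for business unit 8400) without walking a 99,000-element flat list.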
Laki wrote: The reports at the GL code detail level are taking a long time to run. Is it good design to have a 99,000-element dimension in a cube?
Sometimes it is good design and sometimes it is bad! It could be bad if you need to put in different dimensions for the segments, like I just suggested. But perhaps you've already done this and now you just have a large-ish dimension. Some people have much larger dimensions than this (in the millions), e.g. telco subscribers. Do the people accessing the reports through BI need all that granularity, or can you do more summarised reporting, as Jim suggests?
Laki wrote: Also, the runtime of the reports is never constant for the same report parameters. Is this because of TM1 caching? Can we control TM1 caching by configuring any server parameters?
Until you're sure whether the performance hit is being taken on the TM1 server, within Cognos BI, or in the interface between the two, I'd hang back from making changes to server configuration. Yes, there are a number of things you can alter, but it might not be the right thing to do until you've worked out the nature of your problem.
Robin Mackenzie
Laki
Posts: 4
Joined: Tue Dec 04, 2012 5:17 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2010

Re: Dimension size, TM1 caching questions

Post by Laki »

Thanks Jim and rmackenzie for your responses.

rmackenzie,
You are correct. We have the individual segments and the GL code as separate dimensions, since we could not generate the GL code dynamically for the reports in Cognos BI. Also, the cube itself is performing really well. The problem is with how the Cognos BI report generates the MDX for the TM1 server. When I run TM1 Top we see the report trying to generate a dynamic expression (ViewCreateByExpression) and then trying to retrieve the data (OLEDBROWSETGETDATA), and these take most of the time, around 3,000 seconds, to generate the report output. Which report expression is generating these dynamic subsets is what we do not know. So we are trying different ways to fix this issue, and just wanted to confirm whether this is a good design in the first place.

BTW, all we are trying to do is generate a nicely formatted income statement and balance sheet at summary, detail, and GL detail levels.
rmackenzie
MVP
Posts: 733
Joined: Wed May 14, 2008 11:06 pm

Re: Dimension size, TM1 caching questions

Post by rmackenzie »

Laki wrote: The problem is with how the Cognos BI report generates the MDX for the TM1 server. When I run TM1 Top we see the report trying to generate a dynamic expression (ViewCreateByExpression) and then trying to retrieve the data (OLEDBROWSETGETDATA), and these take most of the time, around 3,000 seconds, to generate the report output.
My understanding is that ViewCreateByExpression does not leverage TM1 caching in the same way that, e.g., VIEW and TM1RPTVIEW formulas do; I'd be interested to hear more about it from someone who has the detail. My experience with the ODBO interface is that it does not handle sparsity very well. If your cube has both the GL code dimension and the segment dimensions, then you have introduced a lot of sparsity, because for a particular intersection defined by N segment dimension elements there will be only one corresponding GL code element (if I've got the right picture). Also, the last time I looked at the MDX that Report Studio was generating, it seemed extremely verbose, and some of the dynamic expressions weren't quite what I would have written if it were an MDX query in a VBA Excel report (see the sketch below). Can you tweak the Report Studio/BI generated code, or is it still read-only?
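For contrast, this is roughly the shape of hand-written MDX I have in mind; the cube and element names are hypothetical, and TM1SUBSETALL / TM1FILTERBYLEVEL are TM1's own MDX extensions for "all elements of a dimension" and "keep only the given level":

    SELECT NON EMPTY {[GL Measures].[Amount]} ON COLUMNS,
           NON EMPTY {TM1FILTERBYLEVEL({TM1SUBSETALL([GL Code])}, 0)} ON ROWS
    FROM [General Ledger]
    WHERE ([Period].[Dec-2012], [Version].[Actual])

The NON EMPTY keywords ask the provider to drop empty rows and columns before returning them, which is the main defence against the sparsity that the ODBO interface handles so poorly.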
Laki wrote: So we are trying different ways to fix this issue, and just wanted to confirm whether this is a good design in the first place. BTW, all we are trying to do is generate a nicely formatted income statement and balance sheet at summary, detail, and GL detail levels.
Of the three requirements (summary, detail, GL detail) I take it that the GL code dimension is only supporting the third one. Perhaps you need to extract that dimension from your cube and support the first two requirements through Cognos BI, then look for a different option (perhaps a different cube) to support the third.
Robin Mackenzie
Laki
Posts: 4
Joined: Tue Dec 04, 2012 5:17 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2010

Re: Dimension size, TM1 caching questions

Post by Laki »

To your point on tweaking the BI report: we are working with IBM support to help us with the queries, because they are extremely verbose and we are not sure which data item in the report query is creating the dynamic expressions on the TM1 server.
As you mentioned, it looks like the issue is with the sparsity and the ODBO interface. We have implemented all the server configuration settings on the BI server to deal with null values/sparsity.
It looks like the IBM Cognos BI and TM1 integration has a long way to go. It is not robust enough to support detailed reporting yet.
David Usherwood
Site Admin
Posts: 1458
Joined: Wed May 28, 2008 9:09 am

Re: Dimension size, TM1 caching questions

Post by David Usherwood »

I agree. In my view the two product management and engineering teams are not talking to each other enough, and the result, as regards both performance and functionality, is well below expectations.
PlanningDev
Community Contributor
Posts: 349
Joined: Tue Aug 17, 2010 6:31 am
OLAP Product: Planning Analytics
Version: 2.0.5
Excel Version: 2016

Re: Dimension size, TM1 caching questions

Post by PlanningDev »

I can give you the tips I have found through numerous tickets logged with IBM for BI performance with TM1 data sources:

1. If you don't use the DQM option when publishing your package, don't expect good performance with cross joins of dimensions and null suppression.
2. Implement the various changes in the qfs_config.xml file on your BI dispatchers.
3. On your query in the BI report, set suppression to nulls.
4. On your crosstab or list, set suppression for rows and/or columns (step 3 is different from step 4).

Outside of this, you will need to look into filter expressions to reduce the members returned to BI. Basically, your goal is to limit what the BI dispatcher has to do. This also applies to some calculations.

On a side note, if you are trying to run a dashboard through BI, you will end up with a separate query for each chart. This may also be a performance hit.
Laki
Posts: 4
Joined: Tue Dec 04, 2012 5:17 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2010

Re: Dimension size, TM1 caching questions

Post by Laki »

Thanks for the inputs.

1. Our BI package is a DQM package.
2. Can you give me more details on the qfs settings? Most of the settings seem to be set by default for DQM; the cross join threshold is the only setting that we changed in DQM mode.
3. Our query has null suppression turned on.
4. Our crosstab has zero suppression turned on for rows and columns.

Do you have any specific scenarios or examples of how to test filters? Are there any server configuration settings for calculations?
We do not have dashboards yet, but that is good to know for when we start building dashboards in the future.

Thanks,
Laki.
PlanningDev
Community Contributor
Posts: 349
Joined: Tue Aug 17, 2010 6:31 am
OLAP Product: Planning Analytics
Version: 2.0.5
Excel Version: 2016

Re: Dimension size, TM1 caching questions

Post by PlanningDev »

Sounds like you have most of the settings already configured correctly. The trick is to do what you can to reduce the data coming back from a cross join to the BI dispatcher.

Sometimes using the dimensional function FILTER will help to reduce the records being processed; a sketch is below.
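As a sketch (again with hypothetical cube and measure names), a level-0 subset of GL codes restricted to those with a non-zero amount would look like this; the inner tuple tells TM1 which cube cell to test for each element:

    {FILTER(
        {TM1FILTERBYLEVEL({TM1SUBSETALL([GL Code])}, 0)},
        [General Ledger].([GL Measures].[Amount]) <> 0
    )}

Passing an expression like this, rather than the whole dimension, means the BI dispatcher only ever sees members that actually carry data.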

Here is the link to a PMR that came out of an issue we logged.

http://www-01.ibm.com/support/docview.w ... P7J&mync=R