Data Source cube view optimization in TI

sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Data Source cube view optimization in TI

Post by sreesuku »

Hi team,

We have a calculation cube in our model which has 16 dimensions and is heavily rule-driven.
Data is transferred from this calculation cube to reporting cubes whenever a user performs an update from the front-end web interface. A TI process does this transfer: it creates a view on the calculation cube as its data source in the prolog, and the data tab does a direct CellPutN into the target reporting cube.

As the source cube is heavily calculation-driven, it takes a long time to create this source view in the TI. What are the recommended ways to improve the TI execution time, other than checking the feeders?

NB: In the source cube we have a dimension called 'View Type' which has a consolidation 'Calendar' with Period 1-24 as its children.
In our source cube view we assign this consolidation as part of the view, as shown below (a simplified sketch of the TI prolog/data tab pattern follows the list):
Calendar
Period 1
Period 2
Period 4
....
....
Period 24
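
For illustration, the prolog/data tab pattern is roughly as below. The cube, view and subset names are simplified placeholders (not our real ones), and the extract settings shown are just the typical choices for a leaf-level transfer:

# Prolog: build the source view on the calculation cube
sCube = 'Calc Cube';
sView = 'zTransfer';
IF(ViewExists(sCube, sView) = 1);
  ViewDestroy(sCube, sView);
ENDIF;
ViewCreate(sCube, sView);
IF(SubsetExists('View Type', sView) = 1);
  SubsetDestroy('View Type', sView);
ENDIF;
SubsetCreate('View Type', sView);
# the 'Calendar' consolidation plus its Period 1-24 children
SubsetElementInsert('View Type', sView, 'Calendar', 1);
i = 1;
WHILE(i <= 24);
  SubsetElementInsert('View Type', sView, 'Period ' | NumberToString(i), i + 1);
  i = i + 1;
END;
ViewSubsetAssign(sCube, sView, 'View Type', sView);
# skip zero cells, skip consolidated values, keep rule-calculated leaf values
ViewExtractSkipZeroesSet(sCube, sView, 1);
ViewExtractSkipCalcsSet(sCube, sView, 1);
ViewExtractSkipRuleValuesSet(sCube, sView, 0);
DataSourceType = 'VIEW';
DatasourceNameForServer = sCube;
DatasourceCubeview = sView;

# Data tab: one variable per dimension of the source cube (16 in our case),
# then a direct put into the reporting cube, e.g.
# CellPutN(vValue, 'Reporting Cube', vDim1, vDim2, ..., vDim16);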
lotsaram
MVP
Posts: 3663
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: Data Source cube view optimization in TI

Post by lotsaram »

Well, first you can make sure that your instance is using MTQ, e.g. MTQ = -1 or MTQ = ALL to use all cores, MTQ = -2 to use all cores except one, or MTQ = 16 to use just 16 cores (assuming the server has more). In all my experience, even on big servers and big data models, there has never been a performance gain in using more than 20 cores. Yes, there can be a near-linear performance gain up to about 10 cores, but then it tapers, and once you get to about 20 cores the overhead from managing and merging the worker threads seems to outweigh any benefit.
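
For example, in tm1s.cfg (illustrative values, adjust to your core count):

MTQ=ALL
# or, for specific thread counts:
# MTQ=-2   all cores except one
# MTQ=16   cap at 16 worker threads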

Then, for your specific case, make sure MTQQuery = T and use ViewConstruct in the prolog of the TI. This will pre-calculate the view in MTQ mode. It may take a while depending on the view size, but you should then see much greater throughput on the data tab, as you will effectively be processing (pre-calculated) data rather than evaluating calculations in line on the data tab. The overall end-to-end time for the process should be less.
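
A minimal sketch of that prolog, assuming MTQQuery=T is already set in tm1s.cfg (cube and view names are placeholders):

# Prolog: pre-calculate and cache the source view using the MTQ worker threads,
# then use the same view as the data source
ViewConstruct('Calc Cube', 'zTransfer');
DataSourceType = 'VIEW';
DatasourceNameForServer = 'Calc Cube';
DatasourceCubeview = 'zTransfer';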

And of course you could also partition the view into smaller slices and use RunProcess to do the work in parallel (assuming availability of cores and ParallelInteraction = T).
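
For example, something along these lines in a wrapper process (process and parameter names are hypothetical):

# Prolog of a wrapper process: launch one child process per Period slice;
# each child builds and transfers its own narrowed view on a separate thread
p = 1;
WHILE(p <= 24);
  RunProcess('Transfer Reporting Slice', 'pPeriod', 'Period ' | NumberToString(p));
  p = p + 1;
END;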
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
ascheevel
Community Contributor
Posts: 288
Joined: Fri Feb 15, 2013 5:49 pm
OLAP Product: TM1
Version: PA 2.0.9.1
Excel Version: 365
Location: Minneapolis, USA

Re: Data Source cube view optimization in TI

Post by ascheevel »

lotsaram wrote: Wed Jan 04, 2023 3:34 pm ...use ViewConstruct on the prolog of the TI.
I believe ViewConstruct will be limited by the MaximumViewSize config parameter. If the value isn't big enough for the view you're trying to construct, the TI will abort. It's a dynamic parameter, so it can be adjusted without restarting the service.
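
For example, in tm1s.cfg (the value is in MB; the number here is purely illustrative):

MaximumViewSize=2000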
gtonkin
MVP
Posts: 1204
Joined: Thu May 06, 2010 3:03 pm
OLAP Product: TM1
Version: Latest and greatest
Excel Version: Office 365 64-bit
Location: JHB, South Africa

Re: Data Source cube view optimization in TI

Post by gtonkin »

I have found that using TI to copy data intra-cube or inter-cube with CellPutN/CellIncrementN can be very slow in certain cases.
In one case it was taking almost 4 hours to create snapshots across a few cubes, with Sales being particularly slow due to granularity and rules.

Changing the process to export to file and then import brought the entire process down to under 30 minutes. It may be worthwhile testing this in your environment. It would still be useful to check everything Lotsa and Andy mention, and possibly look at VMM/VMT too. Every bit helps...
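
A rough sketch of the two-step approach (file, cube and dimension names are placeholders, simplified to two dimensions for illustration):

# Process 1 (data source = view on the calculation cube), data tab: export to a flat file
AsciiOutput('calc_export.csv', vProduct, vPeriod, NumberToString(vValue));

# Process 2 (data source = CHARACTERDELIMITED, pointing at calc_export.csv), data tab:
CellPutN(StringToNumber(vValue), 'Reporting Cube', vProduct, vPeriod);

# Process 2 epilog: remove the file once it has been loaded
AsciiDelete('calc_export.csv');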
burnstripe
Regular Participant
Posts: 198
Joined: Wed May 06, 2020 2:58 pm
OLAP Product: Planning Analytics
Version: 2.0.9
Excel Version: 2016

Re: Data Source cube view optimization in TI

Post by burnstripe »

Are you sure it's just the view calculation that's slow? The other possibility is that the data input into the target reporting cube is slow, or a combination of the two.

To isolate it, you could comment out the contents of the data/metadata tabs and put a dummy statement such as x = 1; in the data tab so it isn't skipped.
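
That is, temporarily make the data tab do nothing while still executing for every record, e.g.:

# Data tab: all CellPutN lines commented out for the timing test;
# the dummy assignment keeps the data tab from being skipped
x = 1;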

If it's still slow you know it's the calculation; if not, it's the data input, or quite likely a combination of the two.

For the calculation you will be looking at updating the rule logic or reordering the cube so calculations perform quicker, or at spreading the calculation query over multiple threads and running in parallel so that some of the calculations are performed simultaneously.

If it's the update (the put), then cube locking or feeder re-evaluation could be causing the slow performance. In reference to gtonkin's answer, I have experienced similar behaviour, but it normally tends to be because the following parameter is set:

ForceReevaluationOfFeedersForFedCellsOnDataChange=T

If the target cube has feeders and this setting is T, performance will be slow, as feeders are re-evaluated at each update. Setting it to F can yield a huge performance benefit and may be worth exploring.
sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Re: Data Source cube view optimization in TI

Post by sreesuku »

lotsaram wrote: Wed Jan 04, 2023 3:34 pm Well, first you can make sure that your instance is using MTQ, e.g. MTQ = -1 or MTQ = ALL to use all cores, MTQ = -2 to use all cores except one, or MTQ = 16 to use just 16 cores (assuming the server has more). In all my experience, even on big servers and big data models, there has never been a performance gain in using more than 20 cores. Yes, there can be a near-linear performance gain up to about 10 cores, but then it tapers, and once you get to about 20 cores the overhead from managing and merging the worker threads seems to outweigh any benefit.

Then, for your specific case, make sure MTQQuery = T and use ViewConstruct in the prolog of the TI. This will pre-calculate the view in MTQ mode. It may take a while depending on the view size, but you should then see much greater throughput on the data tab, as you will effectively be processing (pre-calculated) data rather than evaluating calculations in line on the data tab. The overall end-to-end time for the process should be less.

And of course you could also partition the view into smaller slices and use RunProcess to do the work in parallel (assuming availability of cores and ParallelInteraction = T).
Hi,

Thanks a lot for your inputs. Our data flow is such that users update their data on an hourly/daily basis from the front end. Basically they update some IDs, which has to re-trigger all the calculations, as the updated numbers can change ROI, Revenue etc., which are rule-driven. So every time there is an update, this source view needs to be recreated. In such cases, would ViewConstruct help, given that we cannot take the input from a pre-calculated view?

Note: We are already using RunProcess and it is running in parallel on 4 threads.
sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Re: Data Source cube view optimization in TI

Post by sreesuku »

gtonkin wrote: Wed Jan 04, 2023 5:50 pm I have found that using TI to copy data intra-cube or inter-cube with CellPutN/CellIncrementN can be very slow in certain cases.
In one case it was taking almost 4 hours to create snapshots across a few cubes, with Sales being particularly slow due to granularity and rules.

Changing the process to export to file and then import brought the entire process down to under 30 minutes. It may be worthwhile testing this in your environment. It would still be useful to check everything Lotsa and Andy mention, and possibly look at VMM/VMT too. Every bit helps...
Hi,
Thanks a lot for your inputs.
Would this export and import really work? I have seen that exporting to an ASCII file takes more time compared to a direct update into cubes, as it needs to access some shared drive path to create/update the file, right? As per our data flow, this source view needs to be created every time a user updates some data, and this may result in multiple export files on the server for different users, right? (Though we can delete them after execution.)
sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Re: Data Source cube view optimization in TI

Post by sreesuku »

burnstripe wrote: Wed Jan 04, 2023 10:34 pm Are you sure it's just the view calculation that's slow? The other possibility is that the data input into the target reporting cube is slow, or a combination of the two.

To isolate it, you could comment out the contents of the data/metadata tabs and put a dummy statement such as x = 1; in the data tab so it isn't skipped.

If it's still slow you know it's the calculation; if not, it's the data input, or quite likely a combination of the two.

For the calculation you will be looking at updating the rule logic or reordering the cube so calculations perform quicker, or at spreading the calculation query over multiple threads and running in parallel so that some of the calculations are performed simultaneously.

If it's the update (the put), then cube locking or feeder re-evaluation could be causing the slow performance. In reference to gtonkin's answer, I have experienced similar behaviour, but it normally tends to be because the following parameter is set:

ForceReevaluationOfFeedersForFedCellsOnDataChange=T

If the target cube has feeders and this setting is T, performance will be slow, as feeders are re-evaluated at each update. Setting it to F can yield a huge performance benefit and may be worth exploring.
Thanks a lot for your inputs.
We have tested the process; it's the prolog section which creates the view that takes almost 80% of the total execution time. The data tab CellPutN timings are much smaller.
Note: There are no feeders/rules in the target cube; it is a direct data copy.
sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Re: Data Source cube view optimization in TI

Post by sreesuku »

I wanted to ask a general question.
Does dimension re-ordering in the cube help in any way with faster cube data retrieval in TI? Or does it just reduce the memory footprint?

lotsaram, ascheevel, gtonkin, burnstripe?
sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Re: Data Source cube view optimization in TI

Post by sreesuku »

ascheevel wrote: Wed Jan 04, 2023 4:04 pm
lotsaram wrote: Wed Jan 04, 2023 3:34 pm ...use ViewConstruct on the prolog of the TI.
I believe ViewConstruct will be limited by the MaximumViewSize config parameter. If the value isn't big enough for the view you're trying to construct, the TI will abort. It's a dynamic parameter, so it can be adjusted without restarting the service.
Thanks a lot for your inputs
MarenC
Regular Participant
Posts: 357
Joined: Sat Jun 08, 2019 9:55 am
OLAP Product: Planning Analytics
Version: Planning Analytics 2.0
Excel Version: Excel 2016

Re: Data Source cube view optimization in TI

Post by MarenC »

Hi,

I think others have covered everything; I would just have liked a bit more information.

I presume there is a ViewZeroOut of the target cube in the prolog also? Have you tested how much of the 80% this takes up?

If a user updates a single piece of data and then runs the reporting cube update, does it narrow down the source view based on what the user has updated, or does it do a whole cube-to-cube transfer?
Could you narrow the view down further? In other words, how is the source view built (MDX?) and what elements are included in the subsets?

Does the reporting cube always have to be 'in time', or would updating the reporting cube overnight be out of the question?

Maren
gtonkin
MVP
Posts: 1204
Joined: Thu May 06, 2010 3:03 pm
OLAP Product: TM1
Version: Latest and greatest
Excel Version: Office 365 64-bit
Location: JHB, South Africa

Re: Data Source cube view optimization in TI

Post by gtonkin »

sreesuku wrote: Thu Jan 05, 2023 10:40 am ...
Hi,
Thanks a lot for your inputs.
Would this export and import really work? I have seen that exporting to an ASCII file takes more time compared to a direct update into cubes, as it needs to access some shared drive path to create/update the file, right? As per our data flow, this source view needs to be created every time a user updates some data, and this may result in multiple export files on the server for different users, right? (Though we can delete them after execution.)
Per my post, it worked in my case: 4 hours down to 30 minutes. You would need to try it to see if it works for you.

The file can be created on the TM1 server; there is no need for a shared folder or network share. After import, you can delete it via AsciiDelete.
In terms of when to run and how frequently, business rules and requirements would dictate. You are doing this already and would need to decide on the source view, timings etc. Typically a snapshot is made at a point in time when a milestone has been reached, not immediately after every data update.
sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Re: Data Source cube view optimization in TI

Post by sreesuku »

gtonkin wrote: Thu Jan 05, 2023 11:08 am
sreesuku wrote: Thu Jan 05, 2023 10:40 am ...
Hi,
Thanks a lot for your inputs.
Would this export and import really work? I have seen that exporting to an ASCII file takes more time compared to a direct update into cubes, as it needs to access some shared drive path to create/update the file, right? As per our data flow, this source view needs to be created every time a user updates some data, and this may result in multiple export files on the server for different users, right? (Though we can delete them after execution.)
Per my post, it worked in my case: 4 hours down to 30 minutes. You would need to try it to see if it works for you.

The file can be created on the TM1 server; there is no need for a shared folder or network share. After import, you can delete it via AsciiDelete.
In terms of when to run and how frequently, business rules and requirements would dictate. You are doing this already and would need to decide on the source view, timings etc. Typically a snapshot is made at a point in time when a milestone has been reached, not immediately after every data update.

Thanks for the inputs.
I will try this method. However, in our case the reporting cube updates need to be real-time: whenever a user updates some data, the corresponding calculations need to be re-triggered and the latest data should be available in the reporting cubes. So a point-in-time snapshot won't help much.

I wanted to ask a general question.
Does dimension re-ordering in the cube help in any way with faster cube data retrieval in TI? Or does it just reduce the memory footprint?
Wim Gielis
MVP
Posts: 3126
Joined: Mon Dec 29, 2008 6:26 pm
OLAP Product: TM1, Jedox
Version: PAL 2.0.9.18
Excel Version: Microsoft 365
Location: Brussels, Belgium

Re: Data Source cube view optimization in TI

Post by Wim Gielis »

sreesuku wrote: Thu Jan 05, 2023 10:45 am I wanted to ask a general question.
Does dimension re-ordering in the cube help in any way with faster cube data retrieval in TI? Or does it just reduce the memory footprint?

lotsaram, ascheevel, gtonkin, burnstripe?
Yes. Did you see OptimusPy?
This tool tries to find the optimal balance between the two factors by reordering dimensions and testing the impact on both.

PS: Please do not address members in person. Thanks.
Best regards,

Wim Gielis

IBM Champion 2024
Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 121 TM1 articles and a lot of custom code
Newest blog article: Deleting elements quickly
sreesuku
Posts: 47
Joined: Wed Sep 14, 2011 7:45 am
OLAP Product: Cognos TM1
Version: PAW
Excel Version: 2013

Re: Data Source cube view optimization in TI

Post by sreesuku »

Wim Gielis wrote: Thu Jan 05, 2023 11:32 am
sreesuku wrote: Thu Jan 05, 2023 10:45 am I wanted to ask a general question.
Does dimension re-ordering in the cube help in any way with faster cube data retrieval in TI? Or does it just reduce the memory footprint?

lotsaram, ascheevel, gtonkin, burnstripe?
Yes. Did you see OptimusPy?
This tool tries to find the optimal balance between the two factors by reordering dimensions and testing the impact on both.

PS: Please do not address members in person. Thanks.
Thanks a lot for the suggestion. I will try this out.

PS: Please do not address members in person -- my apologies, I will make a note of this.
burnstripe
Regular Participant
Posts: 198
Joined: Wed May 06, 2020 2:58 pm
OLAP Product: Planning Analytics
Version: 2.0.9
Excel Version: 2016

Re: Data Source cube view optimization in TI

Post by burnstripe »

The OptimusPy results would definitely be interesting; reordering could have a big impact on performance, just watch out for the memory consumption. If you don't have much spare memory capacity it may struggle.

It's also worth looking at the rules in the source cube: could they be optimised? There are a few tips in this link:

https://code.cubewise.com/blog/7-tips-t ... tics-rules

And perhaps look for alternative methods for these types of scenario:
- Lookups to previous, future months
- Lookups to consolidations containing 1000s of members for apportionment
- String/Date functions