Very slow DBRW performance when a cube is used as data source

Posts: 1
Joined: Sat Jul 06, 2024 5:36 am
OLAP Product: IBM Planning Analytics
Version: 2.0
Excel Version: Excel 365

Very slow DBRW performance when a cube is used as data source

Post by akarimov »

Hello everyone!

I'm not a TM1 professional, but I have some experience across several areas of IT. I searched this forum for tips on the issue we are facing but wasn't successful, so I decided to post here in the hope of getting suggestions from the pros.

We are moving from our on-premises IBM TM1 9.5.2 to the on-cloud IBM P&A 2.0, and we have hit an issue with DBRW formulas that retrieve values from a cube with 16 dimensions. The original Excel template has a data worksheet where formulas are copied from the top row and pasted down a column (e.g. A2:A40000), refreshed, then copied and pasted back over the same range as values. There are about 150 columns and 35-40 thousand rows in total. A simple VBA macro drives the whole thing: once one column is done, it moves on to the next, and so on.

Our current TM1 9.5.2 with TM1 Perspectives takes about 10 minutes to run this process; with IBM P&A and PAx it takes almost forever. Applying the formula to a single column (around 40K rows) takes about 30-40 seconds to refresh. If we let the entire process run, then after 10-15 minutes (when the problem formula kicks in) it becomes really slow, and Excel starts consuming up to 5 GB of memory and 25-40% CPU. We tried with about 2,000 rows of data, and it took about 40 minutes to finish. Using DBR instead, and tweaking Excel's automatic calculation setting, did not help.

Can you please share your knowledge or advise us where to look?
Thank you!
Regular Participant
Posts: 203
Joined: Wed May 06, 2020 2:58 pm
OLAP Product: Planning Analytics
Version: 2.0.9
Excel Version: 2016

Re: Very slow DBRW performance when a cube is used as data source

Post by burnstripe »

I believe there are two factors causing this:
1) The cloud environment will have higher ping (latency), resulting in slower response times.
2) PAx will be recalculating after every change to the worksheet.

Some options to address this are:
1) Change the existing method:
A) Consider outputting the content with a TurboIntegrator process using AsciiOutput (recommended for truly large outputs).
B) Turn off automatic recalculation in the VBA while building the DBRW statements, and only refresh the content once the whole workbook is compiled. Depending on size you may hit the WorkbookMaxCellCount parameter, which could be altered, or you could break the work into batches so that only x DBRWs are calculating at once, hardcoding the results before doing the next batch.
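The batch idea in B) boils down to simple range arithmetic, whatever language ends up driving it. A minimal sketch of the batch-boundary logic in Python (the row counts, batch size, and starting row are purely illustrative):

```python
def batch_ranges(total_rows: int, batch_size: int, first_row: int = 2):
    """Yield (start_row, end_row) worksheet-row pairs covering the data area."""
    start = first_row
    last = first_row + total_rows - 1
    while start <= last:
        end = min(start + batch_size - 1, last)
        yield start, end
        start = end + 1

# 40,000 data rows starting at worksheet row 2, refreshed 5,000 rows at a time:
for start, end in batch_ranges(40_000, 5_000):
    # fill DBRW formulas in rows start..end, refresh, then paste as values
    pass
```

The same loop structure translates directly into the VBA macro: each iteration fills one batch with DBRW formulas, forces a single recalculation, then hardcodes the results before moving on.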

2) There could be other factors at play given the version difference and possibly different server/model configuration, but I'd say the higher ping (latency) is likely your main issue.

The higher ping/latency is simply the result of the server being on cloud rather than on premise, and different approaches will be required to reduce the number of packets sent and received. PAx is optimised for cloud environments, but the method you're using forces Excel to send loads of individual packets rather than one big one.
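To see why the number of round trips dominates, a rough model is total time ≈ number of requests × latency, ignoring server and transfer time. A back-of-envelope sketch (the 30 ms latency figure is an assumption for illustration, not a measurement):

```python
def estimated_seconds(cells: int, cells_per_request: int, latency_s: float) -> float:
    """Rough elapsed time when `cells` values are fetched `cells_per_request` at a time."""
    requests = -(-cells // cells_per_request)  # ceiling division
    return requests * latency_s

# 40,000 cells at an assumed 30 ms round trip:
print(estimated_seconds(40_000, 1, 0.030))       # one request per cell -> ~1200 s
print(estimated_seconds(40_000, 40_000, 0.030))  # one bulk request -> ~0.03 s
```

With per-cell requests the latency alone costs twenty minutes per column; batched into one request, latency becomes negligible, which is why the same workbook can behave so differently on premise and on cloud.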
Steve Rowe
Site Admin
Posts: 2431
Joined: Wed May 14, 2008 4:25 pm
OLAP Product: TM1
Version: TM1 v6,v7,v8,v9,v10,v11+PAW
Excel Version: Nearly all of them

Re: Very slow DBRW performance when a cube is used as data source

Post by Steve Rowe »

You are also moving from the Perspectives client to the PAfE client as part of this transition (as well as moving from v9 to v11, so many years of updates and changes).

PAfE generally does not deal well with "mega-slices" compared with Perspectives, but we don't see a significant (multiple-orders-of-magnitude) difference between on-premise PAfE/PAW and PAoC PAfE/PAW.

Just to support Burnstripe on this: you are unlikely to get a 5-million-cell DBRW sheet working at a good speed, but the performance difference is due to PAfE rather than latency (IMO).

We've been through a few iterations of this issue with customers transitioning to PAfE.

Option 1: Move to Quick Reports if you absolutely have to do this in Excel. If your data is not suitable for a single symmetrical slice, use multiple Quick Reports and then Excel formulas to refactor the data into your mega-slice. Certainly don't continue to spend time trying to get a VBA-driven approach to work.

Option 2: Use a TI process (PA's ETL tool) to reproduce the export in a text file.

PAfE also performs tests against named ranges on every calculation pass (why, I don't know...). If your workbook has a lot of named ranges (thousands), this also impacts performance. Watch out for hidden named ranges too; I've come across workbooks with many thousands of them. Cleaning this up will help.
Technical Director
Posts: 43
Joined: Tue Dec 13, 2022 8:54 pm
OLAP Product: TM1 / PA / PAx / PAW
Version: TM1 11
Excel Version: 365

Re: Very slow DBRW performance when a cube is used as data source

Post by WilliamSmith »

Sharing in case this is an option for you. We now use a TM1 library to query data into SQL Server, then use Power Query to load the data into Excel / Power BI. It is very performant. If you are an MS shop, the C# library is linked below; if you're a Python dev, tm1py should work in the same way. We like SQL Server because it doesn't require any 3rd-party ODBC drivers or configuration to work with Excel.
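For illustration, a minimal tm1py sketch along these lines. The host, port, credentials, cube, and dimension names below are placeholders, and `crossjoin_mdx` is a hypothetical helper written for this example, not part of tm1py:

```python
def crossjoin_mdx(cube: str, row_sets: list[str], column_set: str) -> str:
    """Build a simple MDX SELECT from pre-built member sets (pure helper)."""
    rows = row_sets[0]
    for s in row_sets[1:]:
        rows = f"CROSSJOIN({rows}, {s})"
    return f"SELECT {rows} ON ROWS, {column_set} ON COLUMNS FROM [{cube}]"

def export_cube_to_csv(mdx: str, csv_path: str) -> None:
    """Pull an MDX result into a pandas DataFrame via tm1py and save it."""
    from TM1py.Services import TM1Service  # pip install tm1py
    with TM1Service(address="your-pa-host", port=12354, user="admin",
                    password="secret", ssl=True) as tm1:
        df = tm1.cubes.cells.execute_mdx_dataframe(mdx)
        df.to_csv(csv_path, index=False)

mdx = crossjoin_mdx(
    "Sales",
    ["{[Product].Members}", "{[Region].Members}"],
    "{[Measure].Members}",
)
# export_cube_to_csv(mdx, "sales_export.csv")  # needs a live TM1 server
```

The key point is the same as with SQL Server: one bulk query over the REST API replaces tens of thousands of individual DBRW round trips, and the result lands as a flat table you can load anywhere.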
Posts: 1216
Joined: Thu May 06, 2010 3:03 pm
OLAP Product: TM1
Version: Latest and greatest
Excel Version: Office 365 64-bit
Location: JHB, South Africa

Re: Very slow DBRW performance when a cube is used as data source

Post by gtonkin »

WilliamSmith wrote: Fri Jul 12, 2024 7:33 pm Sharing in case this is an option for you…
Thanks for sharing!