TI Performance awful reading and writing within same cube

PavoGa
MVP
Posts: 622
Joined: Thu Apr 18, 2013 6:59 pm
OLAP Product: TM1
Version: 10.2.2 FP7, PA2.0.9.1
Excel Version: 2013 PAW
Location: Charleston, Tennessee

TI Performance awful reading and writing within same cube

Post by PavoGa »

We are seeing a massive performance degradation in a process that reads and writes within the same cube. The reading is not the problem; we have isolated the slowdown to the CellIncrement statement. Run time jumps from 1-3 seconds for just reading the datasource to over ten minutes when the CellIncrement is active.

Here is the basic setup:

We have an allocation requirement that waterfalls allocation amounts from one pool to another. So Pool 10 may allocate amounts to Pools 20, 23, 45, etc., and Pool 15 may have amounts allocated to Pools 23, 45 and 52. Because allocations from different source pools may target the same pool and account, we use a CELLINCREMENT to write to the target cell.

We have a matrix cube that contains all the source pool/account and target pool/account combinations, flagged for each valid intersection: the intersection of a source that should allocate to a target is flagged with the value 1.
  • The matrix cube is the datasource.
  • All datasource subsets are static.
  • CellGetN is used to read the source pool/account cell.
  • CellIncrement is used to write to the target pool/account.
  • Because the matrix cube does not contain a time dimension, the time dimension is looped through in the data section of the TI.
We are reading approximately 12K cells from the datasource and writing about 24K cells (not all month/pool/account combinations contain values).
We have two lines in the DATA tab that perform the CellIncrement. In a number of tests with various sections of code skipped or commented out, performance was fine until the first CellIncrement line was uncommented; run time then jumped to over ten minutes. All test results were repeatable.

I tried substituting a CELLGETN/CELLPUTN pair to see if there was some problem with CellIncrement itself, but performance was the same: over ten minutes.
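For context, the data-tab logic follows this general pattern (a sketch only; the cube, dimension and variable names here are made up, not our actual ones):

Code: Select all

# vSrcPool, vSrcAcct, vTgtPool, vTgtAcct come from the matrix-cube datasource
nMonth = 1;
WHILE(nMonth <= 12);
  sMonth = DIMNM('Month', nMonth);
  # read the (rule-calculated) source cell
  nValue = CellGetN('AllocCalc', vSrcPool, vSrcAcct, sMonth);
  IF(nValue <> 0);
    # increment rather than put, since several sources can hit the same target
    CellIncrementN(nValue, 'AllocCalc', vTgtPool, vTgtAcct, sMonth);
  ENDIF;
  nMonth = nMonth + 1;
END;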

Could it be that the act of writing to the cube is affecting the reading operation? The source cells are rule-calculated, and I have not tested recalculating the allocation in the TI.
Ty
Cleveland, TN
tomok
MVP
Posts: 2836
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA
Contact:

Re: TI Performance awful reading and writing within same cube

Post by tomok »

PavoGa wrote:Could it be that the act of writing to the cube is affecting the reading operation?
Absolutely. You really should not read from and write to the same cube inside the same TI process when you have large batches of records (more than a few thousand or so), because it will slow down significantly. I know improvements have been made in the TM1 engine with Parallel Interaction to help with this, but I never go down this route unless forced to. It's really simple to split the work into two processes: one that creates a flat file extract and a second that loads the flat file. You will see huge performance gains.
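A minimal sketch of that two-process split, assuming a simple pool/account/month layout (file, cube and variable names are illustrative only):

Code: Select all

# Process 1, data tab: export the allocation instead of writing it back
ASCIIOutput('alloc_extract.csv', vTgtPool, vTgtAcct, sMonth, NumberToString(nValue));

# Process 2 uses alloc_extract.csv as its datasource; its data tab is just:
CellIncrementN(nValue, 'AllocCalc', vTgtPool, vTgtAcct, sMonth);

Because the second process only starts once the first has finished, all the reads happen against an untouched cube, and the caches are invalidated only once the writes begin.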

Given that you are trying to do an allocation process, I don't know how well this approach will work for you; I just wanted to confirm that reading and writing to the same cube in the same process can be really slow. I did a multi-pass allocation model several years ago (multi-pass meaning allocation pass 1 may send numbers to a unit that then allocates itself out in pass 2, which may go to units that allocate out in pass 3, etc.). I tried to do it in a TI and it was stupid slow, so I did it with rules and then just used a TI to move the final allocations to the reporting cube. I had a "Pass" dimension to hold the different levels of allocation. Something like this:

Code: Select all

N  Raw Data
N  Allocation Pass 1
C  Total after Pass 1
N  Allocation Pass 2
C  Total after Pass 2
.......
N  Allocation Pass x
C  Total after Pass x
The Pass 1 rule used data from Raw Data, the Pass 2 rule used data from Total after Pass 1, etc. I had 15 levels of allocation with 10,000 cost centers and 1,500 accounts, and the performance wasn't too bad. It had the added benefit of traceability. The key here is that you don't use the allocation cube for reporting, just for calculating the allocations and exporting them into the final reporting cube.
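In rules terms, that cascade would look something like this (cube and dimension names are hypothetical, the driver lookup is simplified to a single DB call, and only the outbound side is shown):

Code: Select all

# each pass allocates out of the running total from the previous pass
['Allocation Pass 1'] = N: -['Raw Data'] * DB('AllocDriver', !CostCenter, !Account);
['Allocation Pass 2'] = N: -['Total after Pass 1'] * DB('AllocDriver', !CostCenter, !Account);
# 'Total after Pass 1' etc. are plain consolidations (Raw Data + Allocation Pass 1),
# so the C-level elements need no rules, only feeders for the N-level passes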
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/
jim wood
Site Admin
Posts: 3958
Joined: Wed May 14, 2008 1:51 pm
OLAP Product: TM1
Version: PA 2.0.7
Excel Version: Office 365
Location: 37 East 18th Street New York
Contact:

Re: TI Performance awful reading and writing within same cube

Post by jim wood »

It is funny how TI seems to rule the roost these days. When it comes to allocations, my first stop is always rules.
Struggling through the quagmire of life to reach the other side of who knows where.
Shop at Amazon
Jimbo PC Builds on YouTube
OS: Mac OS 11 PA Version: 2.0.7
PavoGa
MVP
Posts: 622
Joined: Thu Apr 18, 2013 6:59 pm
OLAP Product: TM1
Version: 10.2.2 FP7, PA2.0.9.1
Excel Version: 2013 PAW
Location: Charleston, Tennessee

Re: TI Performance awful reading and writing within same cube

Post by PavoGa »

Thank you, tomok. In light of the performance issues, I was already starting to think rules might be the way to go if the matrix cube can be leveraged properly, and I think that may be possible without redoing the entire calculation cube. Our allocation process is similar to what you are describing, with about 23 passes, and we use a reporting cube for the final output.

I may try writing out to a text file and reading it back in just to see the performance difference. You and I have spoken about that before, and I guess, for whatever reason, I never really had a problem with it, but that means a jack load of squat. I'll run it that way before looking at rules and post the results.

Thanks again and Merry Christmas.
Ty
Cleveland, TN
David Usherwood
Site Admin
Posts: 1458
Joined: Wed May 28, 2008 9:09 am

Re: TI Performance awful reading and writing within same cube

Post by David Usherwood »

Since (pending the arrival of smart caching) writing to a cube destroys the calculation and view caches, performance won't be good.
In the rules vs. TI debate, rules lead on traceability; TI is a black box for users. Best to use the rules engine (TM1's crown jewel) for calculations, then freeze the result to a separate reporting cube.
mattgoff
MVP
Posts: 516
Joined: Fri May 16, 2008 1:37 pm
OLAP Product: TM1
Version: 10.2.2.6
Excel Version: O365
Location: Florida, USA

Re: TI Performance awful reading and writing within same cube

Post by mattgoff »

jim wood wrote:It is funny how TI seems to rule the roost these days. When it comes to allocations my first stop is always rules.
Really? My first thought when I'm planning a rule is "do I really need to have this in real time?" and, if not, I move it to TI instead. For us, allocations are very much a non-realtime exercise, so ours are done via TI.

Also, using rules doesn't allow you to have allocators (outbound) sending to other allocators. Using TI, I can do two rounds of allocations, which push ~99.5% of cost to non-cost centers, and then a true-up to push out the remainder. Technically this could be done with rules, but it would be a lot harder.
Please read and follow the Request for Assistance Guidelines. It helps us answer your question and saves everyone a lot of time.
Wim Gielis
MVP
Posts: 3230
Joined: Mon Dec 29, 2008 6:26 pm
OLAP Product: TM1, Jedox
Version: PAL 2.1.5
Excel Version: Microsoft 365
Location: Brussels, Belgium
Contact:

Re: TI Performance awful reading and writing within same cube

Post by Wim Gielis »

mattgoff wrote:
jim wood wrote:It is funny how TI seems to rule the roost these days. When it comes to allocations my first stop is always rules.
Really? My first thought when I'm planning a rule is "do I really need to have this in real time?" and, if not, I move it to TI instead. For us, allocations are very much a non-realtime exercise, so ours are done via TI.

Also, using rules doesn't allow you to have allocators (outbound) sending to other allocators. Using TI, I can do two rounds of allocations, which push ~99.5% of cost to non-cost centers, and then a true-up to push out the remainder. Technically this could be done with rules, but it would be a lot harder.
I am with mattgoff on this one. Most of the time it's linked with feeders having to overfeed a lot of cells.
For me as well: only small, rather simple allocations, where feeding is not a problem, would I do with rules. Otherwise, TI.
Best regards,

Wim Gielis

IBM Champion 2024-2025
Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 121 TM1 articles and a lot of custom code
Newest blog article: Deleting elements quickly
Moh
Posts: 43
Joined: Fri Aug 01, 2014 5:17 pm
OLAP Product: Cognos
Version: 10.1.1
Excel Version: 2010

Re: TI Performance awful reading and writing within same cube

Post by Moh »

tomok, could you kindly explain in detail, with a small example, how these passes would be made?
jim wood
Site Admin
Posts: 3958
Joined: Wed May 14, 2008 1:51 pm
OLAP Product: TM1
Version: PA 2.0.7
Excel Version: Office 365
Location: 37 East 18th Street New York
Contact:

Re: TI Performance awful reading and writing within same cube

Post by jim wood »

mattgoff wrote:
jim wood wrote:It is funny how TI seems to rule the roost these days. When it comes to allocations my first stop is always rules.
Really? My first thought when I'm planning a rule is "do I really need to have this in real time?" and, if not, I move it to TI instead. For us, allocations are very much a non-realtime exercise, so ours are done via TI.

Also, using rules doesn't allow you to have allocators (outbound) sending to other allocators. Using TI, I can do two rounds of allocations, which push ~99.5% of cost to non-cost centers, and then a true-up to push out the remainder. Technically this could be done with rules, but it would be a lot harder.
It's just that I'm a dinosaur, an aging dinosaur, so I got into the habit of using rules first; when I started, TI wasn't an option. Don't get me wrong, I do use TI an awful lot, but I see so many people trying to do allocations with a TI and a button in an Excel file when a rule would have been easier. A lot of the time it's because TI is easier to pick up for those not familiar with the technology, as it's pretty standard coding, whereas rules can take a while.
Struggling through the quagmire of life to reach the other side of who knows where.
Shop at Amazon
Jimbo PC Builds on YouTube
OS: Mac OS 11 PA Version: 2.0.7
PavoGa
MVP
Posts: 622
Joined: Thu Apr 18, 2013 6:59 pm
OLAP Product: TM1
Version: 10.2.2 FP7, PA2.0.9.1
Excel Version: 2013 PAW
Location: Charleston, Tennessee

Re: TI Performance awful reading and writing within same cube

Post by PavoGa »

I had promised I would post some results of testing on the wayward process we were running. To recap, we are running pool allocations which waterfall from one pool to another and then another. The process reads a matrix (mapping) cube that contains the source pool, source account, target pool and target account, to build a view of which cells to read and which cells to write to in a calc cube. It processes each pool in order (pool 100, then 110, then 115, etc.) via a loop through the pool dimension, calling a sub-process to do the actual CELLGET and CELLINCREMENT in its data tab.

Processing time had climbed from under ten minutes to over 30. This seemed to have resulted from some changes made to a source cube used to load actuals; since those changes were deemed necessary, what I was getting away with in the original methodology had to be reviewed. Per tomok's suggestion and several others', I changed the waterfall control process to call one process to build the extract view and write a text file. That sub-process then calls another process in its epilog to read the same text file and write to the calc cube.
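In outline, the reworked chain looks like this (the process and parameter names below are made up for illustration):

Code: Select all

# waterfall control process: loop through the pools in allocation order
nPool = 1;
WHILE(nPool <= DIMSIZ('Pool'));
  sPool = DIMNM('Pool', nPool);
  # Alloc.Export builds the extract view and ASCIIOutputs it to a text file;
  # in its epilog it calls Alloc.Load, which reads that file back in and
  # does the CellIncrementN into the calc cube
  ExecuteProcess('Alloc.Export', 'pPool', sPool);
  nPool = nPool + 1;
END;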

Our results: processing time has dropped to around one minute, 10-15 seconds.

Thanks to everyone for their comments and suggestions. I hope this provides tangible results that someone else can use.
Ty
Cleveland, TN