Data Extract - TI v Cube Browser

Steve Vincent
Site Admin
Posts: 1054
Joined: Mon May 12, 2008 8:33 am
OLAP Product: TM1
Version: 10.2.2 FP1
Excel Version: 2010
Location: UK

Data Extract - TI v Cube Browser

Post by Steve Vincent »

I've tried doing this a couple of ways now and, to be honest, it's got me stumped. I'm under no illusion that the data I am pulling is substantial, but what I cannot understand is why a TI process to extract the data takes 25+ minutes, yet a cube view in the same layout takes 2-3 minutes at most.

The cube has 12 dims; the maximum number of data points is 3 * 10 * 604 * 4 * 1276 * 99 * 556 * 11 * 4 * 130 * 8 * 25616 = 5,967,276,689,533,973,299,200 :o Thankfully it is a very sparse cube, so we are nowhere near that much data, but it does illustrate the monster I am working with.

I need to extract data in a tabular format for one of our users, with 3 of those dims down the left side and years spanning 2008 to 2023 (consolidations of months) across the top.
[Attachment: tab.gif (example output)]
The issue is that the TI I have written to extract this takes too much RAM off the server. It's a squeak from 3GB by the time it finishes, so any user grabbing data that has yet to be cached is likely to kill it. Of the 3 dims I am listing down the left, one has 3 elements, another about 50 and the last circa 22,000. The final output is only about 7,500 rows of data, but it's the sheer volume of data it needs to look at (all the other dims are at top-level consolidations) that is killing it.

My TI is based on a cube view. It selects only the elements it requires, with the exception of the year, where I have to pick the top level. The TI then has to cycle through each of the years I need for each data point so that I can generate the tabular export:

Code:


#adds header record to output file
IF ( n = 1 );
    AsciiOutput ( FileOut , 'Status' , 'Contract' , 'Organisation' , 'Resource' ,
        '2008' , '2009' , '2010' , '2011' , '2012' , '2013' , '2014' , '2015' ,
        '2016' , '2017' , '2018' , '2019' , '2020' , '2021' , '2022' , '2023' );
EndIF;

n = n + 1;

#find value for each year
m2008 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2008' , Version , measure );
m2009 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2009' , Version , measure );
m2010 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2010' , Version , measure );
m2011 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2011' , Version , measure );
m2012 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2012' , Version , measure );
m2013 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2013' , Version , measure );
m2014 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2014' , Version , measure );
m2015 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2015' , Version , measure );
m2016 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2016' , Version , measure );
m2017 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2017' , Version , measure );
m2018 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2018' , Version , measure );
m2019 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2019' , Version , measure );
m2020 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2020' , Version , measure );
m2021 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2021' , Version , measure );
m2022 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2022' , Version , measure );
m2023 = CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom , Site , res_source , status , risk , '2023' , Version , measure );

#output to file
AsciiOutput ( FileOut , status , wbs , ppd , rbs ,
    NumberToString ( m2008 ) , NumberToString ( m2009 ) , NumberToString ( m2010 ) , NumberToString ( m2011 ) ,
    NumberToString ( m2012 ) , NumberToString ( m2013 ) , NumberToString ( m2014 ) , NumberToString ( m2015 ) ,
    NumberToString ( m2016 ) , NumberToString ( m2017 ) , NumberToString ( m2018 ) , NumberToString ( m2019 ) ,
    NumberToString ( m2020 ) , NumberToString ( m2021 ) , NumberToString ( m2022 ) , NumberToString ( m2023 ) );
This takes about 25 or so minutes, but I can get the exact same view from the cube viewer and exported to Excel in less than 5. I'm sure I must be doing something wrong in the export, where it's repeating itself for no reason, but I cannot see where, especially as a similar extract from a smaller cube takes no time at all to run. Skip Consolidations in the view is OFF (and has to be, to grab what I need) but Skip Zeroes is ON. It's almost as if it's not taking notice of that flag, but I can't think of any way to prove it. Has anyone got any ideas? Having to run this the number of times I have done so far is starting to peeve the users off...
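
For what it's worth, the 16 hard-coded CellGetN lines could be collapsed into a loop. Something like this (an untested sketch; it assumes DatasourceASCIIQuoteCharacter = ''; has been set in the Prolog, so the commas built into the row string act as field delimiters in the file):

Code:

# untested sketch - build the 16 year values in a loop instead
row = '';
y = 2008;
WHILE ( y <= 2023 );
    # element name for this year, e.g. '2008'
    sYear = TRIM ( STR ( y , 4 , 0 ) );
    val = NumberToString ( CellGetN ( 'Demand' , ppd , wbs , obs , rbs , nom ,
        Site , res_source , status , risk , sYear , Version , measure ) );
    IF ( row @= '' );
        row = val;
    ELSE;
        row = row | ',' | val;
    ENDIF;
    y = y + 1;
END;
AsciiOutput ( FileOut , status , wbs , ppd , rbs , row );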

TIA
If this were a dictatorship, it would be a heck of a lot easier, just so long as I'm the dictator.
Production: Planning Analytics 64 bit 2.0.5, Windows 2016 Server. Excel 2016, IE11 for t'internet
Malcolm MacDonnell
Posts: 26
Joined: Thu May 29, 2008 2:58 am

Re: Data Extract - TI v Cube Browser

Post by Malcolm MacDonnell »

Steve Vincent wrote:
Skip Consolidations in the view is OFF (and has to be, to grab what I need) but Skip Zeroes is ON. It's almost as if it's not taking notice of that flag, but I can't think of any way to prove it.
Presuming your datasource has the variable Value for the rolled-up year, this would prove it one way or the other:

Code:

IF ( Value = 0 );
    AsciiOutput ( FileName , 'ZERO NOT Suppressed' );
ENDIF;

Also, is it possible that there is a lot of historical data (or rule-derived future data) included in the consolidated year but not in the 2008-2023 range you are looking for, so the view you are loading in TI is actually much larger than the view you are looking at in Cube Viewer?
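
If you want to measure that, a rough sketch: run a second copy of the view with Skip Consolidations ON so each record is a leaf month. Here "Month" stands for whatever your year/month variable is called, and I'm assuming the element names start with the four-digit year; nOutside is initialised to 0 in the Prolog.

Code:

# Data tab - tally records that sit outside the years being extracted
yr = StringToNumber ( SUBST ( Month , 1 , 4 ) );
IF ( yr < 2008 % yr > 2023 );
    nOutside = nOutside + 1;
ENDIF;

# Epilog
AsciiOutput ( FileName , 'Records outside 2008-2023: ' | NumberToString ( nOutside ) );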

Mal
paulsimon
MVP
Posts: 808
Joined: Sat Sep 03, 2011 11:10 pm
OLAP Product: TM1
Version: PA 2.0.5
Excel Version: 2016
Contact:

Re: Data Extract - TI v Cube Browser

Post by paulsimon »

Steve

We had a similar problem with a view going into TI where Skip Consolidations was off. The problem was that TM1 was calculating all possible intermediate consolidations. I would try tuning your view by ensuring that it only selects what you need.
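
Something along these lines in the Prolog (just a sketch - the dimension and element names here are made up, so substitute your own):

Code:

cube = 'Demand';
view = 'zTI Extract';
IF ( ViewExists ( cube , view ) = 1 );
    ViewDestroy ( cube , view );
ENDIF;
ViewCreate ( cube , view );

# one tight subset per dimension; shown for one dim only
dim = 'Status';
sub = 'zTI Extract';
IF ( SubsetExists ( dim , sub ) = 1 );
    SubsetDestroy ( dim , sub );
ENDIF;
SubsetCreate ( dim , sub );
SubsetElementInsert ( dim , sub , 'Firm' , 1 );
ViewSubsetAssign ( cube , view , dim , sub );

# suppress zeroes; consolidations still allowed for the year totals
ViewExtractSkipZeroesSet ( cube , view , 1 );
ViewExtractSkipCalcsSet ( cube , view , 0 );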

If that fails, then export with Skip Consolidations on, write to a cube that is at the required level using an additive load, and export from there.
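
That is, the first process reads leaf cells only and its Data tab accumulates them into a small staging cube. A sketch - the staging cube name 'Demand Extract' and the assumption that month names begin with the year are mine:

Code:

# accumulate each leaf month into its year bucket in the staging cube
sYear = SUBST ( Month , 1 , 4 );
CellIncrementN ( Value , 'Demand Extract' , status , wbs , ppd , rbs , sYear );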

Under the hood, I believe that a display view is different to an export view. That may be why you are getting the difference.

Regards

Paul Simon
Steve Vincent
Site Admin
Posts: 1054
Joined: Mon May 12, 2008 8:33 am
OLAP Product: TM1
Version: 10.2.2 FP1
Excel Version: 2010
Location: UK

Re: Data Extract - TI v Cube Browser

Post by Steve Vincent »

All the zeroes are being suppressed; the Value = 0 statement returned no output and took only a few secs to run.

The view is as tight as I can possibly get it. The year consolidation does cover 2007 to 2049, but there is next to no data in 2007 or in the years after 2023, so its effect should be minimal. It feels like your idea that the TI picks up far more than a cube view does is right. I'm not sure it's worth building another cube just to deal with this export, but I do understand the idea - many thanks for the suggestion.
If this were a dictatorship, it would be a heck of a lot easier, just so long as I'm the dictator.
Production: Planning Analytics 64 bit 2.0.5, Windows 2016 Server. Excel 2016, IE11 for t'internet