Copy version performance

vladino
Posts: 110
Joined: Sat Nov 06, 2010 10:10 am
OLAP Product: Cognos TM1
Version: 10.2.2
Excel Version: Excel 2013

Copy version performance

Post by vladino »

Hi guys,
I have created a process for copying a version. The process takes all cubes that contain the version dimension, exports their data to a file and then imports the file into another version. The script leverages the Bedrock library (export source version - clear target version - import into target version).

I noticed that the first few cubes are processed quite fast, but then the process slows down and performance becomes terrible. In the Architect "process progress" window I can see that it takes almost a minute to process one row from the file.

As the "master" process calls the Bedrock processes, we cannot use bulk load mode. Logging is not disabled, but I don't think that's the reason for the slow performance. Maybe unloading the cube after the import is done would help, but I'm not sure.

Is there anything else I can do to make it run faster?

Really appreciate your help!

BR
Vladino
tomok
MVP
Posts: 2831
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA
Contact:

Re: Copy version performance

Post by tomok »

I don't use Bedrock but are you sure the views you are creating for the data source are zero-suppressed? Also, you really do need to turn off logging. Logging the transactions for a version copy is silly. Where would you ever need it? If the process goes wrong you just re-run it instead of trying to recover from a log.
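
For reference, transaction logging can be switched off per cube from TI. A minimal sketch, assuming sCubeName holds the cube being copied:

```
# Prolog: switch off transaction logging for this cube before the copy
CubeSetLogChanges ( sCubeName, 0 );

# ... run the version copy ...

# Epilog: switch logging back on afterwards
CubeSetLogChanges ( sCubeName, 1 );
```

Remember to save data (or accept the risk of losing unlogged changes on a crash) while logging is off.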
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/
vladino
Posts: 110
Joined: Sat Nov 06, 2010 10:10 am
OLAP Product: Cognos TM1
Version: 10.2.2
Excel Version: Excel 2013

Re: Copy version performance

Post by vladino »

tomok wrote:I don't use Bedrock but are you sure the views you are creating for the data source are zero-suppressed? Also, you really do need to turn off logging. Logging the transactions for a version copy is silly. Where would you ever need it? If the process goes wrong you just re-run it instead of trying to recover from a log.
Hi,
yes, I'm sure the views are zero-suppressed. I have also disabled logging, but it didn't help.

I'm thinking about unloading the cube from memory before the import. What do you think? It might help...
Edward Stuart
Community Contributor
Posts: 247
Joined: Tue Nov 01, 2011 10:31 am
OLAP Product: TM1
Version: All
Excel Version: All
Location: Manchester
Contact:

Re: Copy version performance

Post by Edward Stuart »

You'll get a far better response if you post the code

It is worth checking the View Manipulation TurboIntegrator Functions in the Reference guide specifically:

ViewExtractSkipZeroesSet
ViewExtractSkipRuleValuesSet
ViewExtractSkipConsolidatedStringsSet
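
A minimal sketch of how those extract settings are typically applied (sCube and sView are assumed to name an existing cube and view):

```
# Configure the view before using it as a TI data source:
# skip zeroes, rule-calculated values and consolidated string cells
ViewExtractSkipZeroesSet ( sCube, sView, 1 );
ViewExtractSkipRuleValuesSet ( sCube, sView, 1 );
ViewExtractSkipConsolidatedStringsSet ( sCube, sView, 1 );
```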

CubeUnload will not help you in this scenario, but it does raise the question: are you copying ruled values into a static version, or are you copying a lot of string data? Are you archiving or setting a new baseline?
tomok
MVP
Posts: 2831
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA
Contact:

Re: Copy version performance

Post by tomok »

It sounds like you are probably exporting a bunch of data points that are calculated via rules instead of being raw data. If that is the case then TM1 must calculate those data points, and this uses up cache. It's likely that by exporting the cube you have exceeded the allowed memory cache for the cube. You probably need to tweak the VMM setting for the cube(s); search the forum for postings on VMM for help.
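
For what it's worth, the VMM (Stargate cache) setting lives in the }CubeProperties control cube and can be set from TI. A sketch, assuming sCubeName holds the cube; the 100000 figure is purely illustrative and should be sized to your own data:

```
# Raise the Stargate view cache for one cube via }CubeProperties.
# The VMM value is stored as a string and is expressed in KB.
CellPutS ( '100000', '}CubeProperties', sCubeName, 'VMM' );
```

The change takes effect for views calculated after the property is set.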
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/
vladino
Posts: 110
Joined: Sat Nov 06, 2010 10:10 am
OLAP Product: Cognos TM1
Version: 10.2.2
Excel Version: Excel 2013

Re: Copy version performance

Post by vladino »

Edward Stuart wrote:You'll get a far better response if you post the code

Code: Select all

# Loop over every cube on the server via the }Cubes control dimension
vCubeList = '}Cubes';
vNumCubes = DIMSIZ(vCubeList);
vVersionDim = 'Version';

i = 0;

While ( i < vNumCubes );
  i = i + 1;
  j = 0;
  x = 0;

  sCubeName = DIMNM(vCubeList, i);  

  # Skip control cubes, then scan the cube's dimensions for the version dimension
  If ( SUBST(sCubeName, 1, 1) @<> '}' );
    sDim = TABDIM(sCubeName, j + 1);
    
    While ( sDim @<> '' & x = 0 );
      j = j + 1;
    
      If ( sDim @= vVersionDim );
        x = 1;
      EndIf;
      
      sDim = TABDIM(sCubeName, j + 1);
    End;
  
    # Cube contains the version dimension: create a filtered view, export it,
    # clear the target version, then import the file
    If ( x = 1 );
         ExecuteProcess('Bedrock.Cube.View.Create',
          'pCube',sCubeName,
          'pView',TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy',
          'pFilter',vVersionDim | ': ' | pVersion,
          'pSuppressZero',1,
          'pSuppressConsol',1,
          'pSuppressRules',1,
          'pDimensionDelim','&',
          'pElementStartDelim',':',
          'pElementDelim','+',
          'pDebug',0
          );
         
        ExecuteProcess('Bedrock.Cube.Data.ViewExportToFile',
          'pCube',sCubeName,
          'pExportPath','_copy',
          'pExportFile',TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy.txt',
          'pView',TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy',
          'pSkipRuleValues',1,
          'pSkipCalcValues',1,
          'pSkipNullValues',1,
          'pTitleRecord',0,
          'pDebug',0
          );

        If ( FileExists('_copy\' | TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy.txt') = 1 );
          TextOutput(TODAY(1) | '_' | 'copy_log.txt', 'File "' | TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy.txt" FOUND - start copying.' );

          ExecuteProcess('Bedrock.Cube.Data.Clear',
            'pCube',sCubeName,
            'pView','',
            'pFilter',vVersionDim | ': ' | pTargetVersion,
            'pDimensionDelim','&',
            'pElementStartDelim',':',
            'pElementDelim','+',
            'pDestroyTempObj',1,
            'pDebug',0
            );

          ExecuteProcess('Bedrock.Cube.Data.ImportFromFile',
            'pSourceDir','_copy',
            'pSourceFile',TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy.txt',
            'pCube',sCubeName,
            'pDimension',vVersionDim,
            'pSourceElement',pVersion,
            'pTargetElement',pTargetVersion,
            'pTitleRows',0,
            'pDelimiter',',',
            'pQuote', '"',
            'pCumulate',0,
            'pDebug',0
            );

          ExecuteProcess('Bedrock.Server.SaveDataAll',
            'pDebug',0
            );

          ExecuteProcess('Bedrock.Cube.ViewAndSubsets.Delete',
            'pCube',sCubeName,
            'pView',TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy',
            'pSubset',TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy',
            'pMode',1,
            'pDebug',0
            );
        Else;
          TextOutput(TODAY(1) | '_' | 'copy_log.txt', 'File "' | TODAY(1) | '_' | sCubeName | '_' | pVersion | '_copy.txt" NOT FOUND.' );
        EndIf;
    EndIf;
  EndIf;
End;
vladino
Posts: 110
Joined: Sat Nov 06, 2010 10:10 am
OLAP Product: Cognos TM1
Version: 10.2.2
Excel Version: Excel 2013

Re: Copy version performance

Post by vladino »

tomok wrote:It sounds like you are probably exporting a bunch of data points that are calculated via rules, instead of being raw data. If this is the case then TM1 must calculate those data points and this takes up cache. It's likely that by exporting the cube you have exceeded the allowed memory cache for the cube. You probably need to tweak the VMM setting for the cube(s). Search for postings on VMM for help.
I'm preparing a new baseline, i.e. copying the budget version into the forecast. I'm copying only manual inputs, not calculated values or consolidations.

Thanks for the hint, I'll take a look at VMM.
Edward Stuart
Community Contributor
Posts: 247
Joined: Tue Nov 01, 2011 10:31 am
OLAP Product: TM1
Version: All
Excel Version: All
Location: Manchester
Contact:

Re: Copy version performance

Post by Edward Stuart »

What's in the message log and what happens when you run it for a single cube?
lotsaram
MVP
Posts: 3651
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: Copy version performance

Post by lotsaram »

Looking through your code here are a few pointers and a few questions.

- why are you using Bedrock.Cube.View.Create followed by Bedrock.Cube.Data.ViewExportToFile instead of Bedrock.Cube.Data.Export which would do the same work in one process?
- how large are some of the data extracts?
- is the instance running on a virtual server?
- what does the server memory usage do during the process?

You are already skipping consolidations and rules in the views. Leaf data processing should always be fast so if things slow down exponentially as the process progresses it makes me suspect that memory could be the issue.

The reason for the last 3 questions is that my concern is you might be running into memory issues during the export. With an x64 operating system the TM1 instance won't just crash when physical memory isn't available but it will become extremely slow as the paging file starts being accessed. The reason why I suspect this could be an issue is that you are running the entire export in serial encapsulated within one nested while loop TI process. When chaining TI processes together with ExecuteProcess TM1 will treat all changes as a single "transaction" with a single commit.

If you shelled out each export / clear target / import group to tm1runti then this would use much less memory as you would have individual commits not just one. Depending on the number of CPUs you have available you might also be able to decrease the end to end time by running each cube in parallel (and if any cube extracts were quite big you could further split out and parallelize these along some other dimension.)
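
A sketch of the shell-out idea from inside the master loop (server name, credentials and the child process name are placeholders, not part of the original code):

```
# Build a tm1runti command line so each cube's copy runs as its own
# session with its own commit, instead of one giant transaction
sCmd = 'tm1runti -server MyServer -user admin -pwd secret '
  | '-process Copy.Single.Cube '
  | 'pCube=' | sCubeName
  | ' pVersion=' | pVersion
  | ' pTargetVersion=' | pTargetVersion;

# Second argument 0 = do not wait for completion, so cubes run in parallel
ExecuteCommand ( sCmd, 0 );
```

With the wait flag set to 0 you would also need some mechanism (e.g. a semaphore cube or flag files) to know when all the children have finished.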
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
vladino
Posts: 110
Joined: Sat Nov 06, 2010 10:10 am
OLAP Product: Cognos TM1
Version: 10.2.2
Excel Version: Excel 2013

Re: Copy version performance

Post by vladino »

Edward Stuart wrote:What's in the message log and what happens when you run it for a single cube?
It depends on which cube it's running for... Some cubes are processed really quickly, but some take hours to complete.

Some cubes generate a "stack overflow" error in the log and it states "process terminated". But I'm not sure whether that means the import was terminated or not...
vladino
Posts: 110
Joined: Sat Nov 06, 2010 10:10 am
OLAP Product: Cognos TM1
Version: 10.2.2
Excel Version: Excel 2013

Re: Copy version performance

Post by vladino »

lotsaram wrote:- why are you using Bedrock.Cube.View.Create followed by Bedrock.Cube.Data.ViewExportToFile instead of Bedrock.Cube.Data.Export which would do the same work in one process?
The reason is quite simple - Bedrock.Cube.Data.Export does not allow you to skip zero values.
lotsaram wrote: - how large are some of the data extracts?
Some are in the tens of MBs, some are only a few KBs.
lotsaram wrote: - is the instance running on a virtual server?
It's the IBM TM1 cloud
lotsaram wrote: - what does the server memory usage do during the process?
As it is IBM cloud I don't actually know this.
lotsaram wrote: You are already skipping consolidations and rules in the views. Leaf data processing should always be fast so if things slow down exponentially as the process progresses it makes me suspect that memory could be the issue.
I agree - it seems this might be connected to a memory issue. But the question is how to solve it...
Edward Stuart
Community Contributor
Posts: 247
Joined: Tue Nov 01, 2011 10:31 am
OLAP Product: TM1
Version: All
Excel Version: All
Location: Manchester
Contact:

Re: Copy version performance

Post by Edward Stuart »

Stack Overflow IBM Support Notes:

http://www-01.ibm.com/support/docview.w ... wg21457317

http://www-01.ibm.com/support/docview.w ... wg21962608

http://www-01.ibm.com/support/docview.w ... wg21457414

Not to mention many posts within the TM1forum.

I'd isolate one cube which produces an error and work from there to identify the issue, and paste any further error logs into your requests.
whitej_d
Community Contributor
Posts: 103
Joined: Mon Sep 05, 2011 11:04 pm
OLAP Product: TM1
Version: 10.2
Excel Version: 2010

Re: Copy version performance

Post by whitej_d »

Based on your description of the problem, it's likely to be either rules taking too long to calculate, or feeders taking too long to process on the write side.

You can easily tell which it is by finding out from the log file if it is the export sub process or the import sub process which is taking the time. If the export is always fast but the import is slow, then you can ignore the rules and focus on the feeders of the target version. If the export is slow but the import is fast then it is rules and you must be including some calculated values in your export.

If you are indeed just copying manual inputs, then it's almost certainly a feeder issue - feeders are created when a cell goes from zero to non-zero, so you would probably find that the first time you run the data copy it is very slow, but on a second run it is very fast. Based on the error in your log file, I would guess that you have some rolling time-based feeders somewhere which are triggered by some of the data points being imported (cell feeds next month, next month feeds the month after, etc.). Feeders also suffer from stack overflows once a single chain of feeders has too many steps. Rolling feeders like this also tend to be extremely slow during data copies, but can perform OK if the feeders are processed after the data is loaded.

If this is actually your problem, then I would suggest doing one of two things:

1. Introduce some logic to disable the feeders during the copy, then re-enable them afterwards and process the feeders (not recommended, as this will get messy and hard to maintain/manage).
2. Look at optimising the feeders - it's almost always significantly faster to feed an 'All Months' C-level element than to roll a feeder through each month one by one, although depending on the design of your time dimension you may need to add some new consolidations to limit the feeder as much as possible.
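
As an illustration of option 2, a rules-file sketch (dimension and element names are hypothetical):

```
FEEDERS;

# Slow: a rolling chain, each month feeding the next - long feeder
# chains like this can be very slow and can overflow the stack
# ['Jan'] => ['Feb'];
# ['Feb'] => ['Mar'];

# Faster: feed the 'All Months' consolidation once, so a single input
# cell feeds every month in one step with no chain to walk
['Input'] => ['All Months'];
```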
lotsaram
MVP
Posts: 3651
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: Copy version performance

Post by lotsaram »

vladino wrote:Some cubes generate "stack overflow" error in the log and it states "process terminated". But I'm not sure if it means that the import has been terminated or not...
I wish this had been posted before my answer. I think whitej_d is on the money and your slow performance is due to feeders (and, if they're generating stack overflows, probably some bad ones).
vladino wrote:The reason is quite simple - Bedrock.Cube.Data.Export does not allow you to skip zero values
I don't think so. Have you actually looked at the code which converts the filter expression to a view? Null values are skipped because, to TM1, zeros are not data. Since zero values don't exist, such a parameter would be redundant: if you are exporting data then by definition you are skipping zeros.
vladino wrote:
lotsaram wrote: - what does the server memory usage do during the process?
As it is IBM cloud I don't actually know this.
Yes, not having access to the tools you need to manage your application is a problem with the cloud; hopefully this will change.

It might be too late for you in this year's planning cycle, and you haven't given any real detail on the structure of your version dimension, but by the sounds of it the need to initialize a new version by copying in all input data from the old version indicates a model with multiple calculated version elements, where creating a new one means modifying rules and feeders as all versions are calculated. Correct?

Generally a much more efficient and easier-to-manage design is to have only one (or a very limited and known-in-advance set of) "live" version which is referenced in all rules and feeders, and several (or an unlimited number of) "frozen", "timestamped" or "snapshotted" versions which have no rules whatsoever and are created by copying the "live" version (including rule values) at a point in time. With this type of design there is no need to edit rules and feeders as new versions are created, and the feeder problem of mass data arriving for the first time in a version doesn't exist. Of course, if snapshots are taken frequently then data volume will grow, but this is generally a much more manageable problem.
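
To make that concrete, a rules-file sketch of the design (cube, dimension and element names are hypothetical):

```
SKIPCHECK;

# Rules reference only the single 'Live' version; the frozen snapshot
# versions hold plain values copied at a point in time, so they need
# no rules or feeders at all
['Live', 'Revenue'] = N: ['Live', 'Price'] * ['Live', 'Units'];

FEEDERS;

['Live', 'Units'] => ['Live', 'Revenue'];
```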
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
jcr55
Posts: 54
Joined: Tue May 08, 2012 3:58 pm
OLAP Product: TM1
Version: 9.5.2 FP2
Excel Version: Excel 2007

Re: Copy version performance

Post by jcr55 »

"Generally a much more efficient and easier to manage design is to only have one (or a very limited and known in advance) "live" version which is referenced in all rules and feeders. And several (or an unlimited number of) "frozen" or "timestamped" or "snapshotted" versions which have no rules whatsoever and are created by copying the "live" version (including rule values) at a point in time. With this type of design there is no need to edit rules & feeders as new versions are created and the feeder problem of mass data arriving for the 1st time in a version doesn't exist. Of course if snapshots are taken frequently then data volume will grow, but this is generally a much more manageable problem."

Yes - what lotsaram said.

After a few annual budget cycles we learned the lesson, and now take static (no rule-calculated values) snapshots of each budget version. Rules and feeders are only applied to the one live version. This also helps reduce the time for the server to initialize after a restart.
PeteB
Posts: 23
Joined: Sun Sep 23, 2012 2:30 am
OLAP Product: Planning Analytics Local, PAW
Version: PAL 2.0.6, PAW 2.0.45
Excel Version: 2010

Re: Copy version performance

Post by PeteB »

Hi Everyone contributing to this thread,

https://www.ibm.com/developerworks/rfe/ ... _ID=116669 - TM1 Data Snapshot functionality - Copy rule to values in a TM1 cube

I am looking for your vote on an RFE for IBM to provide a function to copy rule-based cells to values.

I am told by IBM this should be a trivial piece of work: delivering a copy function from a slice of ruled cells to value cells, with performance significantly faster when TM1 internalises the copy rather than the current approach of exporting the rule values and importing them into the snapshot version.

We have built our copy using Bedrock and Hustle code, which has significantly improved the performance of the copy. However, we need to maintain the code, and it is not as fast as it would be if it were internalised by TM1. Please vote and please comment.