TI Extract to Relational Table - Poor performance

Post Reply
ashwinchitkara
Posts: 21
Joined: Mon Aug 09, 2010 7:52 pm
OLAP Product: Cognos TM1
Version: 9.5
Excel Version: 2003

TI Extract to Relational Table - Poor performance

Post by ashwinchitkara »

Hello All,

I am doing a data extraction from TM1 to a relational table using the TI Advanced tabs ('Prolog', 'Data' and 'Epilog') with the ODBCOpen, ODBCOutput and ODBCClose functions. The performance is really slow: less than a million records an hour.
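For anyone following along, the pattern described above looks roughly like this in TI (DSN, table and variable names are illustrative, not from my actual process):

```
# Prolog: open the ODBC connection (DSN and credentials are placeholders)
ODBCOpen('MyDSN', 'user', 'password');

# Data tab: one INSERT statement issued per cell read from the view
sSQL = 'INSERT INTO FACT_TABLE (DIM1, DIM2, VAL) VALUES ('''
     | vDim1 | ''',''' | vDim2 | ''',' | NumberToString(nValue) | ')';
ODBCOutput('MyDSN', sSQL);

# Epilog: close the connection
ODBCClose('MyDSN');
```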

Is this normal? I suspect it is why, for large data extracts, the recommended option is the other one: extracting to a flat file using ASCIIOutput in the TI Advanced Data tab.

Could someone please give me some tips to improve the extraction performance? I would prefer the direct extract to a relational table over extracting the data to a flat file and then loading the file into the table.

Thanks & Regards,

Ashwin
David Usherwood
Site Admin
Posts: 1458
Joined: Wed May 28, 2008 9:09 am

Re: TI Extract to Relational Table - Poor performance

Post by David Usherwood »

Your TI is issuing an INSERT command to your database for each value read. There are a number of areas to investigate:
a. Write out the data to a flat file - make sure it is the _same view_ you are using for the SQL insert (comment out the ODBCOutput call and replace it with AsciiOutput).
b. Do you have more data in the view than you need? E.g. are you using zero suppression and (especially) C-level suppression?
c. How long did the flat file take to run?
d. Is your destination table heavily indexed? I've had major speedups by deleting the indexes before the load and recreating them afterwards.
e. And finally (rather more creatively), why are you moving the data to relational at all? I've known (for instance) Cognos people to require this because they didn't know that C8/C10 can talk directly to TM1. TM1 should be faster than any relational engine for reporting.
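Point (a) is a one-line swap in the Data tab, something like this (file path and variable names are just examples):

```
# Data tab: comment out the ODBC insert and write to file instead
# ODBCOutput('MyDSN', sSQL);
AsciiOutput('extract.csv', vDim1, vDim2, NumberToString(nValue));
```

Timing both versions against the same view tells you how much of the elapsed time is the ODBC insert itself.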
ashwinchitkara
Posts: 21
Joined: Mon Aug 09, 2010 7:52 pm
OLAP Product: Cognos TM1
Version: 9.5
Excel Version: 2003

Re: TI Extract to Relational Table - Poor performance

Post by ashwinchitkara »

Hello David,

Thanks for your response. Following are the responses to your questions:
a. I have already extracted data to a flat file using Asciioutput.
b. I have only used level-0 dimension elements, avoiding the need for C-level suppression. Also, the flat file output does not show any zeros, confirming that zero suppression is working on the view.
c. The flat file completes within 35 minutes for the entire extraction of 27 million records.
d. The destination table does not have any indexing, as it is used for staging purposes. The data from this table will be moved to another table for queries.
e. TM1 data needs to be extracted because the data mart consists of data from other sources, and the TM1 data needs to be reported alongside that other data.

The difference in time between loading directly into the relational table and extracting to a flat file is huge. It does not make sense.

Thanks & Regards,

Ashwin
lotsaram
MVP
Posts: 3706
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: TI Extract to Relational Table - Poor performance

Post by lotsaram »

ashwinchitkara wrote:The difference in the time taken in loading directly into relational table vs extracting to flat file is huge. It does not make sense.
While a huge difference might not make sense, a difference certainly does. If you are writing out to a (local) flat file then your only limiting factor is disk I/O. When doing a table insert there are multiple variables: network latency, bandwidth, ODBC performance, ... and that's before you get to looking at whether the RDBMS has been optimised for the insert.

Have you tried exporting to file and then importing from the flat file? If this is much quicker, then the issue is the speed of the ODBC driver or how you have structured the insert; if not, then it is an issue with the performance of the database in receiving inserts generally.

Have you really exhausted options looking at optimizing the RDBMS? Maybe the table has no indexing but what about memory allocation, paging, linkages to other queries, ....

Break the problem down.
Kyro
Community Contributor
Posts: 126
Joined: Tue Nov 03, 2009 7:46 pm
OLAP Product: MODLR - The CPM Cloud
Version: Always the latest.
Excel Version: 365
Location: Sydney, Australia
Contact:

Re: TI Extract to Relational Table - Poor performance

Post by Kyro »

Considering your case, I would expect the bottleneck to be your network, so I would recommend batching your INSERT statements.

Change from:

Code:

INSERT INTO tableName (column1,column2,column3) VALUES (1,2,3);
To:

Code:

INSERT INTO tableName (column1,column2,column3) VALUES (1,2,3),(4,5,6),(7,8,9),(10,11,12);
You can do this with some variable concatenation in TI; it minimizes network usage by making the packets less frequent but larger.
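A sketch of that "variable play" in TI, assuming a batch size of 500 and illustrative DSN/table/variable names (note that TI string variables have a length limit, so keep the batch size modest):

```
# Prolog: initialise the accumulator
sBatch = '';
nRows = 0;

# Data tab: accumulate value tuples, flush every 500 rows
sRow = '(''' | vDim1 | ''',''' | vDim2 | ''',' | NumberToString(nValue) | ')';
IF(sBatch @= '');
  sBatch = sRow;
ELSE;
  sBatch = sBatch | ',' | sRow;
ENDIF;
nRows = nRows + 1;
IF(nRows >= 500);
  ODBCOutput('MyDSN', 'INSERT INTO tableName (column1,column2,column3) VALUES ' | sBatch);
  sBatch = '';
  nRows = 0;
ENDIF;

# Epilog: flush the final partial batch before ODBCClose
IF(nRows > 0);
  ODBCOutput('MyDSN', 'INSERT INTO tableName (column1,column2,column3) VALUES ' | sBatch);
ENDIF;
```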
ashwinchitkara
Posts: 21
Joined: Mon Aug 09, 2010 7:52 pm
OLAP Product: Cognos TM1
Version: 9.5
Excel Version: 2003

Re: TI Extract to Relational Table - Poor performance

Post by ashwinchitkara »

Thank you for your responses. I have enough testing scenarios to work on. I am going to test on another server to see whether it's related to a memory or I/O issue on the server where the table is located.

Kyro, I like your recommendation and will test it once I have the DBA create another test table for me.

Thank you all again for your quick responses.

=D

Ashwin
David Usherwood
Site Admin
Posts: 1458
Joined: Wed May 28, 2008 9:09 am

Re: TI Extract to Relational Table - Poor performance

Post by David Usherwood »

You should include in your testing the option of outputting to a flat file and bulk loading it in. The DBMS vendors include this facility for a reason.
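One way to keep that entirely inside the TI process is to trigger the bulk load from the Epilog after AsciiOutput has finished. The statement below is SQL Server's BULK INSERT syntax as an example (Oracle would use SQL*Loader, etc.); the path and table name are illustrative, and the file must be visible to the database server, not just to the TM1 server:

```
# Epilog: ask the database to bulk-load the file AsciiOutput just wrote
ODBCOutput('MyDSN',
  'BULK INSERT FACT_TABLE FROM ''\\tm1server\share\extract.csv'' ' |
  'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')');
```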
tomok
MVP
Posts: 2836
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA
Contact:

Re: TI Extract to Relational Table - Poor performance

Post by tomok »

I can tell you from experience that you're never, ever going to come close to the speed of a flat-file dump and bulk load into SQL if you choose to INSERT row by row. There's too much overhead in each INSERT: the SQL engine has to check your INSERT statement for syntax, compile it, and then execute it, AND it has to do this for every single value exported out of the cube. Removing the indexes from the SQL table will help, and having the SQL database on the same network segment will help, but in the end the dump and bulk load will always be faster. The larger the dump, the greater the overall time difference. As a rule, I will only load straight into SQL from TM1 when the number of records is never going to exceed around 100K or so.
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/
Post Reply