
What would be the most efficient way to feed Oracle with TM1 data?

Posted: Tue May 23, 2017 7:33 am
by BobMilli
Hi,
We've got a bunch of TM1 databases and for some of them, whether it's TM1 calculated data or user data entry, we need to push TM1 data to Oracle.
What would be the best way to do so?
Usually, we use the ODBC stuff in the following manner (see the sketch after this list):
  • open the ODBC connection in the Prolog
  • on the Data tab, generate INSERT SQL commands and run them through an ODBCOutput TI command
  • still on the Data tab, run a SQL COMMIT command for every 1000 records, otherwise the Oracle tablespaces grow too much
  • run a COMMIT and an ODBCClose in the Epilog
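
Simplified, and with placeholder names (the DSN 'OracleDSN', the credentials, the table TM1_EXPORT and the data source variables vDim1, vDim2 and vValue are all stand-ins for our real ones), the process looks like this:

    # Prolog — open the connection and reset the record counter
    ODBCOpen('OracleDSN', 'tm1user', 'password');
    nRecords = 0;

    # Data tab — one INSERT per cell, COMMIT every 1000 records
    vSQL = 'INSERT INTO TM1_EXPORT (DIM1, DIM2, VALUE) VALUES ('''
         | vDim1 | ''', ''' | vDim2 | ''', ' | NumberToString(vValue) | ')';
    ODBCOutput('OracleDSN', vSQL);
    nRecords = nRecords + 1;
    IF(MOD(nRecords, 1000) = 0);
      ODBCOutput('OracleDSN', 'COMMIT');
    ENDIF;

    # Epilog — final COMMIT, then close the connection
    ODBCOutput('OracleDSN', 'COMMIT');
    ODBCClose('OracleDSN');
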
The only issue I have with this is the extreme slowness of the process.
Yesterday we ran a test: the process took 7 minutes to export the data to a text file and more than 6 hours to do the same into an Oracle table.

For sure we'll need to change our method.
I've got two solutions in mind at this time: feed the exported text file to an external ETL tool, which would load Oracle more efficiently, or generate a SQL*Plus script that could be fired from the Epilog.

Has anybody on the forum faced the same kind of need? What would be the most efficient way?
Thanks for the help.
Regards,
Bob

Re: What would be the most efficient way to feed Oracle with TM1 data?

Posted: Tue May 23, 2017 7:51 am
by Wim Gielis
Hello Bob,

What I did in the past is output to a text file and then, in the Epilog, run a Bulk Insert statement. This was with SQL Server; I'm not sure what the equivalent would be in Oracle. The time spent is only a fraction of the method of individual Insert statements (or variants thereof that send more than one record at a time).
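
For SQL Server, the Epilog could look something like this (just a sketch: the DSN 'SQLServerDSN', the credentials, the table and the file path are placeholders, and the file itself would be written with AsciiOutput on the Data tab):

    # Epilog — one set-based load instead of thousands of single INSERTs
    ODBCOpen('SQLServerDSN', 'user', 'password');
    ODBCOutput('SQLServerDSN',
      'BULK INSERT TM1_EXPORT FROM ''D:\Exports\tm1_export.csv'' '
      | 'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')');
    ODBCClose('SQLServerDSN');
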

Hope this helps,

Wim

Re: What would be the most efficient way to feed Oracle with TM1 data?

Posted: Tue May 23, 2017 10:53 am
by lotsaram
Depending on the length of your SQL statement for the ODBC insert, you can probably achieve somewhere from a 10x to 100x throughput improvement by not only running the SQL COMMIT every 1000 records but also batching the ODBCOutput so that you don't execute it for each TM1 cell that is processed. The concept is simple: TM1 TI variables (and hopefully the Oracle ODBC driver) can handle very long strings, at least 64000 Unicode characters. For every record of the data source, check the length of the SQL statement; when you get to 50K characters or thereabouts, do the ODBCOutput, otherwise just keep concatenating the INSERT INTO, UPDATE or whatever your SQL statement happens to be. This makes the ODBC channel much less "chatty" and can easily give a one or two order of magnitude speed improvement. Note that with this method there will always be a left-over uncommitted string, which needs to be handled with an ODBCOutput on the Epilog as well as the Data tab. A sketch of the batching logic follows below.
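
A minimal sketch of the batching (vBatch must be initialised to '' in the Prolog; the DSN, table and variable names are placeholders, and I'm wrapping the batch in a PL/SQL BEGIN ... END block on the assumption that the Oracle ODBC driver won't accept several bare statements in one call):

    # Data tab — accumulate single INSERTs, flush near the 50K character mark
    vStmt = 'INSERT INTO TM1_EXPORT (DIM1, DIM2, VALUE) VALUES ('''
          | vDim1 | ''', ''' | vDim2 | ''', ' | NumberToString(vValue) | ');';
    IF(LONG(vBatch) + LONG(vStmt) > 50000);
      ODBCOutput('OracleDSN', 'BEGIN ' | vBatch | ' END;');
      ODBCOutput('OracleDSN', 'COMMIT');
      vBatch = '';
    ENDIF;
    vBatch = vBatch | vStmt;

    # Epilog — flush the left-over, uncommitted batch
    IF(LONG(vBatch) > 0);
      ODBCOutput('OracleDSN', 'BEGIN ' | vBatch | ' END;');
    ENDIF;
    ODBCOutput('OracleDSN', 'COMMIT');
    ODBCClose('OracleDSN');
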

For a really big insert, the best approach is going to be, as already suggested, to export a flat file and then run a stored procedure to grab the file and perform a bulk load. But in many cases "batching" the ODBCOutput statements could be good enough, or as good or better.
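
If you go the stored procedure route, the Epilog only needs to fire it once the flat file is written (LOAD_TM1_EXPORT is a hypothetical procedure that would pick up the file via SQL*Loader or an external table; the DSN and credentials are placeholders too):

    # Epilog — hand the heavy lifting to the database
    ODBCOpen('OracleDSN', 'tm1user', 'password');
    ODBCOutput('OracleDSN', 'BEGIN LOAD_TM1_EXPORT; END;');
    ODBCClose('OracleDSN');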