Hi,
I know this sounds quite strange, but this is my only way to load into the SQL database.
I have a text file as a source of data and would like to load this data via a TI process into a SQL database. I have set up a normal TI process, selecting text as the source, and then tried to build an ODBC output statement. I am not loading the data into a cube first; I am trying to go directly from the text file, via the TI process, to the ODBC output.
Somehow it does not work as I would like.... I have a process which takes data from a cube, and there the ODBC output works fine.
Does anyone know if this will work at all?
Thanks for your help....Manfred
Using TI to transfer txt file to a SQL database
-
- Site Admin
- Posts: 1458
- Joined: Wed May 28, 2008 9:09 am
Re: Using TI to transfer txt file to a SQL database
Shouldn't be a problem.
Write the generated SQL commands out to a text file and check they are valid - this is normally what breaks.
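As a rough sketch of that check (the file name here is illustrative, and it assumes you build the statement into a string variable such as vSQL before sending it):

Code:

# Data tab: dump each generated statement to a debug file,
# then send it via ODBC
ASCIIOutput('sql_debug.txt', vSQL);
ODBCOutput(vODBCDSN, vSQL);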
-
- Posts: 15
- Joined: Sun Mar 03, 2013 10:18 am
- OLAP Product: TM1
- Version: 10.2
- Excel Version: 2010
Re: Using TI to transfer txt file to a SQL database
I have done this and the generated SQL statement looks like this:
INSERT INTO GP_Customer.dbo.[Billing_Report] (Unit,CustomerID,CustomerName,Corporate,Dept,Account,Product,Year,Period,Invoice,Lohnart,Descr,Qty,UnitOfMeasure,UnitPrice,Percent,Amount,UnitPriceInclFactor,Employee,EmployeeName,AssignmentNumber,JobOrder,MpPositionTitle,TimesheetNumber,PORef,CRF-CustomerCostCentre,AcctgDate,InvoiceLine,TNStartDate,TNEndDate,JournalID,JournalDate,LineNr,VATBasis,Schedule,Tarifkennzeichen,EGG) Values (DE001,0000100016,Christ - Elektronik GmbH,0000100016,0325,400100,040,2015,11,DE01141601,6500,Normalstunden,12,2000,HRS,27,7000,0,337,94,27,70,80155755,Ludwig, Frank,6700432037,0000087871,Elektriker/in,587809,,,15.10.2015,1,01.10.2015,04.10.2015,BI01028857,15.10.2015,134,337,940,3,02)
My problem is that I need the values in ' ', like ......Values ('DE001','00000100016',......
Any idea?
Thanks, Manfred
-
- MVP
- Posts: 2836
- Joined: Tue Feb 16, 2010 2:39 pm
- OLAP Product: TM1, Palo
- Version: Beginning of time thru 10.2
- Excel Version: 2003-2007-2010-2013
- Location: Atlanta, GA
Re: Using TI to transfer txt file to a SQL database
manne wrote: Any idea ?
Yes. Modify your code to put the apostrophe in. It's simple: just insert CHAR(39) before and after each field.
-
- Posts: 15
- Joined: Sun Mar 03, 2013 10:18 am
- OLAP Product: TM1
- Version: 10.2
- Excel Version: 2010
Re: Using TI to transfer txt file to a SQL database
Do you mean like this?
cxSQLfeldlist=sDim1|','|sDim2|';
vSQL= 'INSERT INTO GP_Customer.dbo.[Billing_Report] ( | cxSQLfeldlist | ') Values ( CHAR(39) | vUnit | CHAR(39) | CHAR(39) | vCustomer_id | CHAR(39) | )';
- qml
- MVP
- Posts: 1097
- Joined: Mon Feb 01, 2010 1:01 pm
- OLAP Product: TM1 / Planning Analytics
- Version: 2.0.9 and all previous
- Excel Version: 2007 - 2016
- Location: London, UK, Europe
Re: Using TI to transfer txt file to a SQL database
No man, more like this:
Surely, you could have figured this out for yourself with some trial and error instead of relying on others to write your exact code for you?
Code:
vSQL = 'INSERT INTO GP_Customer.dbo.[Billing_Report] ( ' |
cxSQLfeldlist |
' ) VALUES ( ' |
CHAR(39) |
vUnit |
CHAR(39) |
', ' |
CHAR(39) |
vCustomer_id |
CHAR(39) |
' );' ;
Kamil Arendt
- paulsimon
- MVP
- Posts: 808
- Joined: Sat Sep 03, 2011 11:10 pm
- OLAP Product: TM1
- Version: PA 2.0.5
- Excel Version: 2016
Re: Using TI to transfer txt file to a SQL database
Hi
Although you can issue individual SQL statements, this can get quite slow for large numbers of records. I have seen big increases in speed by using the appropriate bulk insert statement for the database you are using.
Even if you are reading from a cube, it can be faster to write to a file and then use the Bulk Insert statement to load this file into the database.
From what I can gather, each ODBC statement is a single transaction, so if you execute this on the Data tab for each record from the source view there is a lot more overhead on the database.
The Bulk Insert statement is naturally faster for big loads than the single Insert statement. You need to look at the manual for the Bulk Insert statement on your particular database. Some settings may turn off Triggers and Index updates, etc.
Below is an example for SQL Server:
EPILOG
vSQL = 'BULK INSERT ' | vTable | ' FROM ' |
vSQ | vFilePath | vSQ |
' WITH ( FIELDTERMINATOR = ' | vSQ | vDelimiter | vSQ |
', ROWTERMINATOR = ' | vSQ | '\n' | vSQ |
', LASTROW=' | NumberToString(vRowCount) | ')';
ODBCOutput(vODBCDSN, vSQL);
Where:
vSQ is the single quote character, CHAR(39).
vFilePath is the full path to the file that you are loading, which must be a path accessible to the SQL database if this is on a different box.
vDelimiter may be a comma or vertical bar, etc.
vRowCount is a count of the number of records in the file, which can help the efficiency of the bulk loader; the LASTROW parameter is not strictly necessary.
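With illustrative values plugged in (the path, delimiter and row count below are made up for the example), the assembled statement would look something like:

Code:

BULK INSERT GP_Customer.dbo.[Billing_Report] FROM 'C:\Temp\billing.txt' WITH ( FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', LASTROW=8225)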
If reading from a view and writing out to a text file with ASCIIOutput, then setting
DatasourceASCIIQuoteCharacter = vQuoteChar;
DatasourceASCIIDelimiter = vDelimiter;
also seems to affect the output. vDelimiter can be a vertical bar or comma.
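A hedged sketch of that combination (the field variables and values are illustrative; an empty quote character is one option here, since plain BULK INSERT, without CSV format options, treats quote marks as ordinary data):

Code:

# Prolog: these settings also govern how ASCIIOutput writes
DatasourceASCIIQuoteCharacter = '';
DatasourceASCIIDelimiter = '|';

# Data tab: write one delimited line per record to the staging file
ASCIIOutput(vFilePath, vUnit, vCustomer_id);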
Before the bulk insert you might need to issue a TRUNCATE TABLE or something similar to clear the target table.
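For instance (reusing the vTable and vODBCDSN names from above):

Code:

# Prolog: clear the target table before the bulk load
ODBCOutput(vODBCDSN, 'TRUNCATE TABLE ' | vTable | ';');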
From what I have seen, for SQL Server the BULK INSERT statement is still faster than SSIS if all you want to do is get data into a table.
Regards
Paul Simon
-
- MVP
- Posts: 3240
- Joined: Mon Dec 29, 2008 6:26 pm
- OLAP Product: TM1, Jedox
- Version: PAL 2.1.5
- Excel Version: Microsoft 365
- Location: Brussels, Belgium
Re: Using TI to transfer txt file to a SQL database
Bulk insert is indeed much faster.
On my laptop I have a text file (8 columns: some numbers, some text, one date column; 8,225 rows).
The data source of the process is that text file.
If I do an INSERT INTO statement into SQL Server, line by line, it takes 11.5 seconds.
If I write to a text file and afterwards do a BULK INSERT statement into SQL Server, it takes only 1.01 seconds.
Best regards,
Wim Gielis
IBM Champion 2024-2025
Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 121 TM1 articles and a lot of custom code
Newest blog article: Deleting elements quickly
-
- MVP
- Posts: 3704
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Using TI to transfer txt file to a SQL database
Another intermediate solution is to concatenate the INSERT INTO statements together, batching either by a counter of the number of data source records or by an upper limit on the string length of the insert statement. Rather than one insert per source record, you can then bundle potentially several hundred together, which significantly reduces the chatter through the ODBC driver and can speed up the process by at least an order of magnitude, depending on the number of records and the records per batch. The only slightly tricky thing is to remember to include a final insert on the Epilog, before the ODBC close, to account for an incomplete last batch, so that sending the last few records doesn't get skipped.
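A minimal sketch of that batching pattern (the batch size, table and column names are illustrative, and it assumes the target database accepts a multi-row VALUES list, as SQL Server 2008 and later do):

Code:

# Prolog: initialise the batch
vBatch = '';
vCount = 0;
vBatchSize = 500;

# Data tab: append one row per source record, flush every vBatchSize rows
vRow = '(' | CHAR(39) | vUnit | CHAR(39) | ',' | CHAR(39) | vCustomer_id | CHAR(39) | ')';
IF(vCount > 0);
  vBatch = vBatch | ',';
ENDIF;
vBatch = vBatch | vRow;
vCount = vCount + 1;
IF(vCount >= vBatchSize);
  ODBCOutput(vODBCDSN, 'INSERT INTO GP_Customer.dbo.[Billing_Report] (Unit,CustomerID) VALUES ' | vBatch | ';');
  vBatch = '';
  vCount = 0;
ENDIF;

# Epilog: flush the final, possibly incomplete batch before ODBCClose
IF(vCount > 0);
  ODBCOutput(vODBCDSN, 'INSERT INTO GP_Customer.dbo.[Billing_Report] (Unit,CustomerID) VALUES ' | vBatch | ';');
ENDIF;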
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.