
Using Ti to transfer txt file to a SQL database

Posted: Fri Nov 27, 2015 3:51 pm
by manne
Hi,

I know this sounds quite strange, but this is my only way to load data into the SQL database.

I have a text file as a source of data and would like to load this data via a TI process into a SQL database. I have set up a normal TI process selecting the text file as the source and then tried to build an ODBC output statement. I am not loading the data into a cube first; I am trying to go directly from the text file to the ODBC output within the TI process.

Somehow it does not work as I would like. I have another process that takes data from a cube, and there the ODBC output works fine.

Does anyone know if this will work at all?

Thanks for your help....Manfred

Re: Using Ti to transfer txt file to a SQL database

Posted: Fri Nov 27, 2015 4:01 pm
by David Usherwood
Shouldn't be a problem.
Write the generated SQL commands out to a text file and check they are valid - this is normally what breaks.
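For example, a minimal sketch on the Data tab (assuming vSQL already holds the generated statement; the file path and DSN are placeholders):

# Sketch only - vSQL holds the generated INSERT statement built elsewhere.
DatasourceASCIIQuoteCharacter = '';   # stop ASCIIOutput wrapping the line in quotes
ASCIIOutput('C:\Temp\ti_sql_debug.txt', vSQL);

# Once the statement looks right, send it via the ODBC connection instead:
# ODBCOutput(vDSN, vSQL);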

Re: Using Ti to transfer txt file to a SQL database

Posted: Fri Nov 27, 2015 4:11 pm
by manne
I have done this and the generated SQL statement looks like this:

INSERT INTO GP_Customer.dbo.[Billing_Report] (Unit,CustomerID,CustomerName,Corporate,Dept,Account,Product,Year,Period,Invoice,Lohnart,Descr,Qty,UnitOfMeasure,UnitPrice,Percent,Amount,UnitPriceInclFactor,Employee,EmployeeName,AssignmentNumber,JobOrder,MpPositionTitle,TimesheetNumber,PORef,CRF-CustomerCostCentre,AcctgDate,InvoiceLine,TNStartDate,TNEndDate,JournalID,JournalDate,LineNr,VATBasis,Schedule,Tarifkennzeichen,EGG) Values (DE001,0000100016,Christ - Elektronik GmbH,0000100016,0325,400100,040,2015,11,DE01141601,6500,Normalstunden,12,2000,HRS,27,7000,0,337,94,27,70,80155755,Ludwig, Frank,6700432037,0000087871,Elektriker/in,587809,,,15.10.2015,1,01.10.2015,04.10.2015,BI01028857,15.10.2015,134,337,940,3,02)

My problem is that I need the values wrapped in single quotes, like ......Values ('DE001','00000100016',......

Any idea ?

Thanks, Manfred

Re: Using Ti to transfer txt file to a SQL database

Posted: Fri Nov 27, 2015 4:40 pm
by tomok
manne wrote: Any idea?
Yes. Modify your code to put the apostrophes in. It's simple: just insert CHAR(39) before and after each field.

Re: Using Ti to transfer txt file to a SQL database

Posted: Fri Nov 27, 2015 4:56 pm
by manne
Do you mean like this?

cxSQLfeldlist=sDim1|','|sDim2|';

vSQL= 'INSERT INTO GP_Customer.dbo.[Billing_Report] ( | cxSQLfeldlist | ') Values ( CHAR(39) | vUnit | CHAR(39) | CHAR(39) | vCustomer_id | CHAR(39) | )';

Re: Using Ti to transfer txt file to a SQL database

Posted: Fri Nov 27, 2015 5:10 pm
by qml
No man, more like this:

Code:

vSQL = 'INSERT INTO GP_Customer.dbo.[Billing_Report] ( ' |
    cxSQLfeldlist |
    ' ) VALUES ( ' |
    CHAR(39) |
    vUnit |
    CHAR(39) |
    ', ' |
    CHAR(39) |
    vCustomer_id |
    CHAR(39) |
    ' );' ;
Surely, you could have figured this out for yourself with some trial and error instead of relying on others to write your exact code for you?

Re: Using Ti to transfer txt file to a SQL database

Posted: Fri Nov 27, 2015 11:35 pm
by paulsimon
Hi

Although you can issue individual SQL statements, this can get quite slow for large numbers of records. I have seen big increases in speed by using the appropriate bulk insert statement for the database you are using.

Even if you are reading from a cube, it can be faster to write to a file and then use the Bulk Insert statement to load this file into the database.

From what I can gather, each ODBCOutput statement is a single transaction, so if you execute one on the Data tab for every record from the source view, there is a lot more overhead on the database.

The Bulk Insert statement is naturally faster for big loads than the single Insert statement. You need to look at the manual for the Bulk Insert statement on your particular database. Some settings may turn off Triggers and Index updates, etc.

Below is an example for SQL Server

EPILOG

vSQL = 'BULK INSERT ' | vTable | ' FROM ' |
vSQ | vFilePath | vSQ |
' WITH ( FIELDTERMINATOR = ' | vSQ | vDelimiter | vSQ |
', ROWTERMINATOR = ' | vSQ | '\n' | vSQ |
', LASTROW=' | NumberToString(vRowCount) | ')';

ODBCOutput(vODBCDSN, vSQL);

Where:
vSQ is the single quote character, Char(39).
vFilePath is the full path to the file you are loading, which must be accessible to the SQL database if that is on a different box.
vDelimiter may be a comma, vertical bar, etc.
vRowCount is a count of the number of records in the file, which can help the efficiency of the bulk loader, although the LASTROW parameter is not strictly necessary.

If you are reading from a view and writing out to a text file with ASCIIOutput, then setting
DatasourceASCIIQuoteCharacter = vQuoteChar;
DatasourceASCIIDelimiter = vDelimiter;
also seems to affect the output. vDelimiter can be a vertical bar or comma.
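For instance, a rough sketch of writing the staging file on the Data tab (the field variables and the file path are made up for illustration):

# Prolog
vQuoteChar = '';
vDelimiter = '|';
vFilePath  = 'D:\Staging\billing.txt';

# Data tab - the two Datasource settings also control how ASCIIOutput
# quotes and delimits the line it writes.
DatasourceASCIIQuoteCharacter = vQuoteChar;
DatasourceASCIIDelimiter = vDelimiter;
ASCIIOutput(vFilePath, vUnit, vCustomerID, NumberToString(nAmount));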

Before this, you might need to issue a TRUNCATE TABLE or something similar to clear the table before the bulk insert.
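For example (reusing the vODBCDSN and vTable variables from above; just a sketch):

# Prolog - clear the target table before the bulk load
ODBCOutput(vODBCDSN, 'TRUNCATE TABLE ' | vTable | ';');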

From what I have seen, for SQL Server the Bulk Insert statement is still faster than SSIS if all you want to do is get data into a table.

Regards

Paul Simon

Re: Using Ti to transfer txt file to a SQL database

Posted: Mon Nov 30, 2015 6:40 am
by Wim Gielis
Bulk insert is indeed much faster.
On my laptop I have a text file with 8 columns (some numbers, some text, 1 date column) and 8,225 rows.
The data source of a process is that text file.

If I do an INSERT INTO statement into SQL Server, line by line, it takes 11.5 seconds.
If I write to a text file and afterwards do a Bulk Insert statement into SQL Server, it only takes 1.01 seconds.

Re: Using Ti to transfer txt file to a SQL database

Posted: Mon Nov 30, 2015 7:14 pm
by lotsaram
Another intermediate solution is to concatenate the INSERT INTO statements together, batching either by a counter of the number of data source records or by an upper limit on the string length of the insert statement. Rather than one insert per source record, you can then bundle potentially several hundred together, which significantly reduces the chatter via the ODBC driver and can speed the process up by at least an order of magnitude or more, depending on the number of records and the records per batch. The only slightly tricky thing is remembering to include a final insert on the Epilog, before the ODBC close, to account for an incomplete batch, so the sending of the last few records doesn't get skipped.
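A rough sketch of that batching idea (table, column and variable names here are purely illustrative, and SQL Server's multi-row VALUES syntax is assumed):

# Prolog
vBatchSize = 500;
nCounter   = 0;
vBatchSQL  = '';

# Data tab - build one VALUES row per source record and flush every vBatchSize rows
vRow = '(' | CHAR(39) | vUnit | CHAR(39) | ',' | CHAR(39) | vCustomerID | CHAR(39) | ')';
IF(vBatchSQL @= '');
  vBatchSQL = 'INSERT INTO MyTable (Unit, CustomerID) VALUES ' | vRow;
ELSE;
  vBatchSQL = vBatchSQL | ',' | vRow;
ENDIF;
nCounter = nCounter + 1;
IF(nCounter >= vBatchSize);
  ODBCOutput(vODBCDSN, vBatchSQL | ';');
  vBatchSQL = '';
  nCounter  = 0;
ENDIF;

# Epilog - flush the last, possibly incomplete batch before the ODBCClose
IF(vBatchSQL @<> '');
  ODBCOutput(vODBCDSN, vBatchSQL | ';');
ENDIF;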