
Not all rows from DB Table are loading into TM1 Cube

Posted: Fri Jun 22, 2012 3:08 am
by mrnara
I have about 50K rows in Oracle, but only about 30K rows are loading into my TM1 Cube.
I'm loading the rows via Turbo Integrator using "select * from tableName;"
21 columns in the DB Table maps to the Cube's 10 Elements, 10 Consolidations (and 1 Data value).

I've Enabled Cube Logging, but I don't see anything in the logs that would indicate a bad data row getting rejected.
Is there any detailed custom logging I can do/enable to see what TM1 is doing with each row as it gets it via ODBC?

Standard disclaimer: I'm very new to TM1.

Re: Not all rows from DB Table are loading into TM1 Cube

Posted: Fri Jun 22, 2012 3:17 am
by winsonlee
I always use AsciiOutput for debugging; it helps to find out what passes through the Data tab.

Have you tried looking in the log folder for a file with a name like TM1ProcessError_20120622014455_processname.log?
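For reference, here is a minimal sketch of the AsciiOutput debugging approach on the Data tab of a TI process. The file path and variable names (vDim1, vDim2, vValue) are hypothetical stand-ins for this process's own variables:

```
# Data tab -- write every incoming source row to a text file
# so you can see exactly what reaches the Data tab via ODBC.
AsciiOutput('C:\Temp\ti_debug.txt', vDim1, vDim2, NumberToString(vValue));
```

Comparing the line count of the output file with the source row count quickly shows whether rows are being dropped before the Data tab or collapsed inside the cube.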

Re: Not all rows from DB Table are loading into TM1 Cube

Posted: Fri Jun 22, 2012 5:34 am
by Michel Zijlema
mrnara wrote:I have about 50K rows in Oracle, but only about 30K rows are loading into my TM1 Cube. [...]
Could it be that you're using the option 'Store Value' instead of 'Accumulate Values' on the Maps/Cube tab in the TI wizard? The distinction is relevant when multiple rows from your source end up in the same cell in the cube: with 'Store Value', every new row loading into the same cell overwrites the previously loaded value, whereas with 'Accumulate Values', every new row loading into the same cell is added to the previously loaded value.

Michel
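Michel's point can be illustrated outside TM1 with a small sketch (Python here, purely to show the two behaviours; the tuple keys stand in for cube cell intersections, and the data is invented):

```python
# Three source rows land on only two distinct cell intersections.
rows = [(("Jan", "Sales"), 100.0),
        (("Jan", "Sales"), 50.0),
        (("Feb", "Sales"), 70.0)]

store = {}       # 'Store Value': each row overwrites the cell
accumulate = {}  # 'Accumulate Values': each row adds to the cell
for cell, value in rows:
    store[cell] = value
    accumulate[cell] = accumulate.get(cell, 0.0) + value

print(store)       # {('Jan', 'Sales'): 50.0, ('Feb', 'Sales'): 70.0}
print(accumulate)  # {('Jan', 'Sales'): 150.0, ('Feb', 'Sales'): 70.0}
```

Under 'Store Value' the two "Jan/Sales" rows collapse to one cell holding only the last value, which is exactly the row-count shrinkage described in the original question.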

Re: Not all rows from DB Table are loading into TM1 Cube

Posted: Sat Jun 23, 2012 1:47 am
by mrnara
Michel Zijlema wrote:Could it be that you're using the option 'Store Value' instead of 'Accumulate Values' on the Maps/Cube tab in the TI wizard? [...]
That was it!
Thanks!

From an SQL perspective,

Code:

select count(*) from table1;
returns 50K rows

I map each column of table1 to an element variable, so not selecting "Accumulate Values" in TI effectively turned the load into

Code:

select count(distinct(col1 || col2 || col3 || ... )) from table1;
which returns 30K rows (which is what I was seeing in TM1).
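The same collapse can be sketched in Python with toy data; the principle is that under 'Store Value', the number of distinct element combinations, not the number of source rows, determines how many cells get populated:

```python
# Five source rows, but only three distinct element combinations.
rows = [("A", "X", 1.0), ("A", "X", 2.0), ("B", "X", 3.0),
        ("B", "Y", 4.0), ("B", "Y", 5.0)]

total_rows = len(rows)                               # like count(*)
distinct_cells = len({(c1, c2) for c1, c2, _ in rows})  # like count(distinct ...)

print(total_rows, distinct_cells)  # 5 3
```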

I guess I gotta be careful not to run the process again if I don't delete the Cube first, or the counts will double.

Re: Not all rows from DB Table are loading into TM1 Cube

Posted: Sat Jun 23, 2012 11:54 am
by lotsaram
mrnara wrote:I guess I gotta be careful not to run the process again if I don't delete the Cube first, or the counts will double.
Well, no. Similar to what Wim advised you in another thread, when a load is properly managed with TI script there is no need to destroy and recreate cube objects. In fact, in the vast majority of models that would be a pretty bad idea. Rather, what you should be doing is clearing the appropriate portion of the cube before reloading from the source. This is important when just storing values at the unique cell intersections (in case of data realignment in the source between loads), and absolutely critical when accumulating values from multiple source rows into single cube intersections; otherwise, as you say, values will double, then triple, and so on.
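As a concrete sketch of "clear the appropriate portion before reloading", the Prolog tab of the load process could zero out just the slice being reloaded. The cube, view, dimension, subset, and element names below are all hypothetical:

```
# Prolog tab -- clear only the slice about to be reloaded.
cCube = 'SalesCube';
cView = 'zClearLoad';
cDim  = 'Month';
cSub  = 'zClearLoad';

# Rebuild a temporary subset holding just the month being loaded.
IF(SubsetExists(cDim, cSub) = 1);
  SubsetDestroy(cDim, cSub);
ENDIF;
SubsetCreate(cDim, cSub);
SubsetElementInsert(cDim, cSub, 'Jan', 1);

# Rebuild a temporary view restricted to that subset and zero it out.
IF(ViewExists(cCube, cView) = 1);
  ViewDestroy(cCube, cView);
ENDIF;
ViewCreate(cCube, cView);
ViewSubsetAssign(cCube, cView, cDim, cSub);
ViewZeroOut(cCube, cView);
```

With this in place the process can be rerun safely, since the target slice is emptied before the source rows are accumulated back in.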

Where TM1 may differ from other tools you are used to is that, for this kind of more advanced scripting, the developer must actually write the script by hand. There is no drag-and-drop GUI as such (or even at all, unless you count Cubeware Importer). The TI "mapping wizard" can achieve only the most basic data loads, with hardcoded references for the source and the area to clear, and no capacity for any worthwhile data transformation or error handling. The script automatically generated from Performance Modeler links is a lot better, to the credit of the development team, but for anything approaching true ETL (which the tool is capable of), hand-written script is a MUST.