Is there a way to increase the number of rows read in TI?
Posted: Wed May 05, 2010 6:36 pm
by MarioRubbo
Hello everyone:
I noticed that my TI process appears to be reading 400 rows at a time from the data source. Is this correct? Is there a way to increase the number of rows in each batch, hopefully to speed up the build time?
Re: Is there a way to increase the number of rows read in TI
Posted: Wed May 05, 2010 8:01 pm
by Wim Gielis
Hello MarioRubbo
Here are 2 important factors that determine the speed of the code execution:
- what is the data source and how do you connect to it?
- what happens in the process, foremost in the Advanced > Data & Metadata tabs?
Re: Is there a way to increase the number of rows read in TI
Posted: Wed May 05, 2010 8:04 pm
by Wim Gielis
OK, from your other topic it seems you have a SQL data source, so that answers Q1 above.

Re: Is there a way to increase the number of rows read in TI
Posted: Thu May 06, 2010 10:57 am
by lotsaram
MarioRubbo wrote:I noticed that my TI process appears to be reading 400 rows at a time from the data source. Is this correct? Is there a way to increase the number of rows in each batch, hopefully to speed up the build time?
This sounds quite bizarre. If this is really happening then I can only imagine that this would be due to some limitation in your ODBC source as TM1 comfortably processes loads via ODBC of millions of records at a typical rate of many thousands of records per second.
As Wim has suggested it would be helpful if you could post
- the source database and version
- the ODBC driver and version
- your SQL query
Re: Is there a way to increase the number of rows read in TI
Posted: Thu May 06, 2010 2:10 pm
by Steve Rowe
If the counter is only ticking over at 400 at a time then this could be down to one or all of the following areas.
1. Complexity of the SQL query / performance of the SQL DB.
2. Network comms between the SQL and TM1 DB.
3. Work done inside the TI process; to get down to a tick rate of 400 you would need loops, or perhaps to be referencing the end of a complex ruled calculation. (Don't forget the result of this calculation would probably be discarded every time you write to the cube as well.)
4. Work done once the value is written to the cube, i.e. feeders firing.
5. Seriously underpowered hardware; I'd say this was unlikely as you would notice the effect in other places.
If you post some of the information asked for we might be able to narrow this down. You should be able to eliminate some possibilities yourself by testing in your environment.
1. How fast does the query run if you execute it directly against the SQL DB? Try to run the query as "close" to the box as possible.
2. Run the query using another SQL client near the TM1 box.
3. Write a TI that has the same data source but does no work other than a single line in the Data tab, say a=1;. Then you can see how fast the query is executed when TM1 is doing the minimum work. (You need the single line of code to force the data source to be read.)
4. Comment bits of the problem TI out to see if there are areas that cause a problem.
5. Remove the rules and feeders from the target cube and see if there is an impact.
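As an illustration of point 3 above, the throwaway test process can be as minimal as this. It's only a sketch: the timing lines assume the NOW and AsciiOutput functions are available in your TM1 version, and the output file name is a placeholder.

```
# Prolog tab: note the start time (NOW returns a serial date in days).
nStart = NOW;

# Data tab: the single assignment forces TM1 to read every
# record from the data source without doing any real work.
a = 1;

# Epilog tab: write the elapsed time, in seconds, to a text file.
AsciiOutput('timing.txt', NumberToString((NOW - nStart) * 86400));
```

Comparing that elapsed time against the full process run tells you how much of the total is the query/fetch versus your own TI code.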
Hope this gives you an idea of what to look for. I'd agree with lotsa though; instinct says that this is a problem before the data gets to TM1.
Cheers,
Re: Is there a way to increase the number of rows read in TI
Posted: Thu May 06, 2010 4:29 pm
by TM1Dunk
Mario
If you are loading data which is used to feed large volumes of cells in your cubes, you may notice that upload speeds are severely reduced, as TM1 evaluates feeders for each cell updated by your TI process (in v9.5 at least).
For example, you might load a single record for "today's USD:EUR rate" which is fed into 1,000,000 cells.
To test this out, temporarily disable (comment out) all feeders in the cube your data is loaded into and run the TI process again. If load times improve, you may consider using some script to delete the rules on this cube during an upload and then recreate the rules from a text file on completion using:
RuleLoadFromFile(Cube, TextFile);
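One way to script that, assuming the cube's real rules have been saved out to fullrules.rux and an empty rule file norules.rux exists in the data directory (cube and file names here are placeholders, not part of any standard setup):

```
# Prolog tab: swap in an empty rule file so no rules apply
# and no feeders fire while the data loads.
RuleLoadFromFile('MyCube', 'norules.rux');

# Epilog tab: restore the full rules (and feeders) after the load.
RuleLoadFromFile('MyCube', 'fullrules.rux');
```

Just make sure the restore in the Epilog always runs, otherwise the cube is left without its rules.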
TM1Dunk
Re: Is there a way to increase the number of rows read in TI
Posted: Fri May 07, 2010 3:11 pm
by MarioRubbo
Thanks to everyone for your input. The primary thing I've learned is that 400 rows does seem like a small number.
Environment specs as follows:
- TM1 9.5 on my local desktop (struggling to get the AIX server version to work with ODBC)
- Local desktop is anemic (1gb of RAM, Pentium 4 CPU)
- Backend is Sybase IQ on VM with 12 CPU and 8GB of RAM. Connecting through ODBC.
- No rules on cube
- I am pivoting the records coming in so the TI code is a little complicated. (if severity=1 then severity1_amount =amt, etc.)
- The SQL is a simple select * from table
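For what it's worth, the pivot in my Data tab looks roughly like this (a simplified illustration; the cube, dimension, and variable names here are stand-ins, not the real ones):

```
# Data tab: pivot the severity code into separate measures.
IF(nSeverity = 1);
    CellPutN(nAmt, 'MyCube', sAccount, sMonth, 'Severity1 Amount');
ELSEIF(nSeverity = 2);
    CellPutN(nAmt, 'MyCube', sAccount, sMonth, 'Severity2 Amount');
ENDIF;
```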
From reading everyone's replies, I'm going to guess that my local machine's hardware configuration is the limiting factor, and I won't panic until I move this cube build process to the TM1 server itself. My question was based on the knowledge that in Cognos PowerPlay there is an INI setting that can be manipulated to change the number of rows read from a data source at a time. I wasn't aware of any similar setting in TM1, and I'm guessing that there isn't one. Hopefully, when the TM1 server is able to connect to Sybase, I will see a significant improvement in read performance.
Thanks again.