RunTI Issue

mce
Community Contributor
Posts: 352
Joined: Tue Jul 20, 2010 5:01 pm
OLAP Product: Cognos TM1
Version: Planning Analytics Local 2.0.x
Excel Version: 2013 2016
Location: Istanbul, Turkey

RunTI Issue

Post by mce »

Hello,

We trigger the same TI process many times (say 250 times) at the same time, with different parameters, using RunTI.
RunTI is configured to authenticate via CAM authentication in mode 5.

The problem is that we observe some of the TI processes are never triggered via RunTI, even though the command line was executed for them.
What could be the reason? Have you observed this behavior before?

We are trying to load data in parallel threads, with each thread loading a different GL account into the cube. We observe that some of the accounts are never loaded because RunTI never triggered the process for them, even though RunTI was called for those processes.
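
For context, each parallel thread ends up calling something like this via ExecuteCommand (server, process and parameter names here are simplified, and the CAM credentials are omitted):

ExecuteCommand( 'tm1runti.exe -server ourserver -adminhost ouradminhost -process load.gl.account pAccount=100100', 0 );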

Thanks in advance for the responses.

Regards,
paulsimon
MVP
Posts: 808
Joined: Sat Sep 03, 2011 11:10 pm
OLAP Product: TM1
Version: PA 2.0.5
Excel Version: 2016

Re: RunTI Issue

Post by paulsimon »

Hi

I would check the TM1Server log to see whether the processes are running but failing or whether they are never being run at all.

I suggest that you make the process write a "last completed" time to a cube so that you can tell whether it was run or not. Perhaps have the process that triggers TM1RunTI put a start time into the same cube; then you can easily check for cases where the last completed time is earlier than the last started time and so identify failures.
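
A rough sketch, with a made-up control cube and element names:

# Epilog of each load process
CellPutS( TIMST( NOW, '\Y-\m-\d \h:\i:\s' ), 'Process Control', pAccount, 'Last Completed' );

# and in the process that triggers the TM1RunTI
CellPutS( TIMST( NOW, '\Y-\m-\d \h:\i:\s' ), 'Process Control', pAccount, 'Last Started' );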

I would also look at how you are running the processes. Sending off all 250 at once may be too much. I suggest that you engineer some sort of delay into the triggering of the TM1RunTIs, for example a WHILE loop that stores the time the last process was run and only triggers the next one in the list once, say, 10 seconds have elapsed.
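
A crude sketch of a delay between each ExecuteCommand call (nDelaySeconds being whatever gap you settle on):

# busy-wait for roughly nDelaySeconds before triggering the next TM1RunTI
nStart = NOW;
nElapsed = 0;
WHILE( nElapsed < nDelaySeconds );
  nElapsed = ( NOW - nStart ) * 86400;
END;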

If the processes are connecting to a SQL data source, then consider also whether the SQL side can handle 250 logins all at once.

Consider also whether splitting your loads 250 ways is actually going to get the data loaded any faster. I doubt that your server has 250 processors, so probably not, particularly as each load has the overhead of logging in to both TM1 and possibly a SQL data source, plus other overheads.

Consider also whether the amount of data for each account is roughly equal.

If you have n processors then you might get a similar benefit by loading in n equal sized batches of accounts.

I can't remember the details, but the latest versions do actually offer some automatic parallelism of loads, so look into that.

Regards

Paul Simon
mce
Community Contributor
Posts: 352
Joined: Tue Jul 20, 2010 5:01 pm
OLAP Product: Cognos TM1
Version: Planning Analytics Local 2.0.x
Excel Version: 2013 2016
Location: Istanbul, Turkey

Re: RunTI Issue

Post by mce »

paulsimon wrote: Fri May 31, 2019 8:43 pm I would check the TM1Server log to see whether the processes are running but failing or whether they are never being run at all.
We already checked, and the processes were never triggered via RunTI.
paulsimon wrote: Fri May 31, 2019 8:43 pm I suggest that you make the process output a last time completed to a cube so that you can tell whether they were run or not. Perhaps make the process that triggers the TM1RunTI put a start time into the cube, then you can easily check for cases where the last time completed is before the last time started so identify failures.
That is all in place. We confirmed that RunTI did not trigger the process, probably because it could not log in to TM1 via CAM.

I guess the bottleneck is in CAM authentication: the authentication mechanism cannot handle so many requests arriving at the same time and fails to authenticate some of them.

When we add a 1-second delay between executions, all threads run OK, but that adds roughly 4 minutes to the whole update process.

Using the RunProcess command instead of RunTI might well solve the issue, but it is not supported in version 2.0.5, which is what this client runs.
paulsimon wrote: Fri May 31, 2019 8:43 pm Consider also whether splitting your loads 250 ways is actually going to get the data loaded any faster. I doubt that your server has 250 processors, so probably not, particularly as each load has the overhead of logging in to both TM1 and possibly a SQL data source, plus other overheads.
We have about 2 million records per account in the DB. This mechanism works well on both the DB side and the TM1 side, as we have many processors in each. The whole execution finishes in about 3-4 minutes if we ignore the 1-second wait between executions, which adds roughly 4 minutes to completion.
David Usherwood
Site Admin
Posts: 1453
Joined: Wed May 28, 2008 9:09 am

Re: RunTI Issue

Post by David Usherwood »

Please re-read and digest Paul's suggestion that dividing the loads by 250 GL accounts is not the best use of resources. If you rethink your 'granularity' and divide the loads to match the (TM1) cores you have, you should get better performance, and the extra one-second delay to manage login contention will hardly matter.
mce
Community Contributor
Posts: 352
Joined: Tue Jul 20, 2010 5:01 pm
OLAP Product: Cognos TM1
Version: Planning Analytics Local 2.0.x
Excel Version: 2013 2016
Location: Istanbul, Turkey

Re: RunTI Issue

Post by mce »

David Usherwood wrote: Sat Jun 01, 2019 10:24 am Please re-read and digest Paul's suggestion that dividing the loads by 250 GL accounts is not the best use of resources. If you rethink your 'granularity' and divide the loads to match the (TM1) cores you have you should get better performance and the extra second delay to manage login contention will hardly matter.
I understand his suggestion and appreciate it.
Yes, not all GL accounts have a lot of data; some certainly have less. But there is no practical way to group them into batches so that every group carries an equivalent load. If I load each GL account in a separate batch, the whole process takes as long as the batch with the largest record count, and grouping them would not make it any shorter than that anyway. Among the 250 accounts, about 40 have large and roughly equal record counts, the rest are small, and I have about 40 cores on the TM1 server.

I have another dimension with millions of elements that I could use to group the records uniformly into 40 batches, but to do that I have to use the MOD function in SQL. In that case, however, the source system DB cannot return the results any faster because of the MOD function.
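
For illustration, the datasource query for each batch would end up as something like this (table and column names are made up):

# Prolog of the load process; pBatch is a numeric parameter from 0 to 39
DatasourceQuery = 'SELECT gl_account, period, amount FROM gl_fact WHERE MOD(txn_id, 40) = ' | NumberToString( pBatch );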

Given that the RunProcess function did not exist until last month, the TM1 server unfortunately has to authenticate each parallel batch separately in order to parallelize a data write operation in TM1, which causes a lot of inefficiency and other problems.
macsir
MVP
Posts: 782
Joined: Wed May 30, 2012 6:50 am
OLAP Product: TM1
Version: PAL 2.0.9
Excel Version: Office 365

Re: RunTI Issue

Post by macsir »

How about triggering your TI via the "tm1.ExecuteWithReturn" function in the REST API?
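
Roughly like this (treat it as a sketch; the process name and parameter are made up):

POST https://<server>:<httpport>/api/v1/Processes('load.gl.account')/tm1.ExecuteWithReturn
{
  "Parameters": [ { "Name": "pAccount", "Value": "100100" } ]
}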
In TM1,the answer is always yes though sometimes with a but....
http://tm1sir.blogspot.com.au/
MariusWirtz
Posts: 29
Joined: Sat Apr 08, 2017 8:40 pm
OLAP Product: TM1
Version: 10.2.2.6
Excel Version: 2016

Re: RunTI Issue

Post by MariusWirtz »

Hi mce,

I wrote a tool for exactly the use case that you are describing: RushTI

It allows you to execute a list of processes with a fixed number of threads at the same time, using only one session. That means it authenticates only once at the start and reuses the session.

You can download the latest release here:
https://github.com/cubewise-code/rushti ... /tag/1.0.0

You can find more documentation on RushTI here:
https://code.cubewise.com/rushti
and here:
https://code.cubewise.com/tm1py-help-co ... connection

It's free and open source. If you are missing a feature, feel free to open an issue or a pull request on GitHub.
If you prefer not to install Python on the server, you can download RushTI as an .exe file.
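
A tasks file and call look roughly like this (see the documentation above for the exact syntax; instance, process and parameter names are just examples):

instance="tm1srv01" process="load.gl.account" pAccount=100100
instance="tm1srv01" process="load.gl.account" pAccount=100200

python rushti.py tasks.txt 40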

Cheers,

Marius
tomok
MVP
Posts: 2831
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA

Re: RunTI Issue

Post by tomok »

Is anyone using this in the IBM Cloud? Care to share any experiences with getting it to work?
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/
lotsaram
MVP
Posts: 3651
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: RunTI Issue

Post by lotsaram »

Hi mce,

Although RushTI may help compared with tm1runti by eliminating the authentication overhead for each separate process execution, I think you also really need to consider the design of the data splitting: both the number of distinct slices the load is split into and the dimension used to segment it. In particular:
  • Slices should be of roughly the same order of magnitude in size, but not of equal size, because the commit phase is always single-threaded and serialized. There is no sense in having all the loads run in parallel, finish at the same time, and then form a commit queue.
  • The partitioning dimension for the slices should be as close to the cube's 1st index order as possible, to minimize the size of each commit. There is no sense in essentially saving the whole cube for each commit, which is what I strongly suspect you are doing by segmenting on the Account dimension.
To understand the role of dimension ordering in parallel loading I suggest you read this article.
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
mce
Community Contributor
Posts: 352
Joined: Tue Jul 20, 2010 5:01 pm
OLAP Product: Cognos TM1
Version: Planning Analytics Local 2.0.x
Excel Version: 2013 2016
Location: Istanbul, Turkey

Re: RunTI Issue

Post by mce »

Thanks for the valuable comments.

I found that there is an APAR for this issue:
PH11163 MULTIPLE TM1RUNTI PROCESSES FAIL TO EXECUTE VIA EXECUTECOMMAND WHEN THE NUMBER IS VERY HIGH

I understand that this APAR is still open and is to be fixed in a future release.
mce
Community Contributor
Posts: 352
Joined: Tue Jul 20, 2010 5:01 pm
OLAP Product: Cognos TM1
Version: Planning Analytics Local 2.0.x
Excel Version: 2013 2016
Location: Istanbul, Turkey

Re: RunTI Issue

Post by mce »

lotsaram wrote: Tue Jun 04, 2019 7:59 am
  • Slices should be of roughly same order of magnitude in size but not of equal size. This is because the commit phase is always single threaded and serialized. There's no sense having all the loads complete in parallel and all finish at the same time and then form a commit queue.
Hi Lotsaram, when loading data into a cube in multiple parallel threads, I always use BatchUpdateStart; and BatchUpdateFinishWait(0); to avoid locking and to speed up the load. In this case, all commits wait until every parallel thread finishes processing and reaches BatchUpdateFinishWait(0); then they all commit in sequence. Hence even threads that finish earlier wait for the others before they can start committing.
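
In other words, each of the parallel load processes looks roughly like this (only the relevant lines shown):

# Prolog of each parallel load process
BatchUpdateStart;

# Data tab does the CellPutN writes for that account's records

# Epilog
BatchUpdateFinishWait(0);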

I understand that your recommendation is different and does not involve using BatchUpdateStart; and BatchUpdateFinishWait(0). How would you compare using BatchUpdateStart/BatchUpdateFinishWait(0) versus not using them in parallel threads? When they are not used, doesn't that cause locking on objects and make the threads wait for each other as they update different slices of the same cube?

I would like to hear about different experiences with loading in parallel threads.

Regards,
lotsaram
MVP
Posts: 3651
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: RunTI Issue

Post by lotsaram »

mce wrote: Sat Jun 15, 2019 8:09 am When loading data to a cube in multiple parallel threads, I always use BatchUpdateStart; and BatchUpdateFinishWait(0); to avoid locking and to speed up the load process. In this case, all commits waits until all parallel threads finish processing and reaches to BatchUpdateFinishWait(0);. Then all of them commits in sequence. Hence even if some of them finish earlier, they wait for others to finish before it can start committing.
Who told you to do this? Because you shouldn't.
The BatchUpdate functions are an ancient relic from before parallel interaction. There is no reason to use them anymore. There is also the nasty side effect that, when BatchUpdate is switched on, the process making the changes cannot query the changed data pre-commit.

If you force a commit queue then there is not much point in loading in parallel, as the aggregate commit time can eat up, or in many cases surpass, the time saved from load parallelization. To get the best efficiency, the sizes of the slices being loaded should be staggered so that each job commits while the others are still processing; as the last slice completes and commits, the whole job is done. It is also important that the dimension being split on is as close to the 1st index as possible, to minimize the size and time of each commit.
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.