Hi
I have just found an issue whereby CellIncrementN behaves like CellPutN when you are in BatchUpdate mode.
This is present in at least versions 10.2 IF1 and 10.2 FP1 IF1.
This has been reported to IBM.
Regards
Paul Simon
CellIncrementN and BatchUpdate Mode
-
- MVP
- Posts: 600
- Joined: Wed Aug 17, 2011 1:19 pm
- OLAP Product: TM1
- Version: 9.5.2 10.1 10.2
- Excel Version: 2003 2007
- Location: York, UK
Re: CellIncrementN and BatchUpdate Mode
I'm not sure that I would have accepted that as a bug if I had been IBM, given that the stated behaviour of BatchUpdateMode is that values written to cells are not available to the TI until the transaction is committed, so you would always be incrementing the value that was there at the start of the transaction.
Is that in fact what is happening? If the TI starts with a value in the cell, is the value at the end the last value applied or is it the sum of the original value and the last value applied?
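One way to check which of the two it is would be a throwaway TI process along these lines (cube, element, and value names here are placeholders, not from the original report):

```
# Seed a known starting value, then increment twice inside a batch update
CellPutN( 100, 'TestCube', 'Elem1', 'Jan' );
BatchUpdateStart;
CellIncrementN( 10, 'TestCube', 'Elem1', 'Jan' );
CellIncrementN( 20, 'TestCube', 'Elem1', 'Jan' );
BatchUpdateFinish( 0 );
# If CellIncrementN is acting like CellPutN, the cell ends up as 20;
# if the increments stack on the original value, it ends up as 130.
```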
-
- Community Contributor
- Posts: 341
- Joined: Wed Nov 03, 2010 9:16 pm
- OLAP Product: tm1
- Version: 10 2 2 - 2.0.5
- Excel Version: From 2007 to 2013
- Location: Earth
Re: CellIncrementN and BatchUpdate Mode
I was just about to report this bug to the forum when I saw that Paul had already done so; I ran into it just today.
The reason I would think this is a bug is that batch update mode works fine with the old logic:
vOldValue = CellGetN( ... );
CellPutN( Value + vOldValue, ... );
I would expect CellIncrementN to do the same.
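Spelled out with placeholder cube and element names (mine, not from the post above), the get-then-put workaround looks like this:

```
# Read the committed value, add the new amount, and write the sum back
vOldValue = CellGetN( 'TargetCube', 'Elem1', 'Jan' );
CellPutN( Value + vOldValue, 'TargetCube', 'Elem1', 'Jan' );
```

Worth noting: inside a batch update, CellGetN still returns the value as at the start of the transaction, so even this pattern only accumulates correctly if each cell is written once per batch.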
-
- MVP
- Posts: 170
- Joined: Fri Dec 10, 2010 4:07 pm
- OLAP Product: TM1
- Version: [2.x ...] 11.x / PAL 2.0.9
- Excel Version: Excel 2013-2016
- Location: Germany
Re: CellIncrementN and BatchUpdate Mode
I thought it was more or less the TI version of the deferred update in the GUI: a kind of ancestor of sandboxing that gives you a private version of the cube without consolidations and rules. N-element values should be accessible at that point.
-
- Posts: 12
- Joined: Thu Dec 07, 2017 11:17 am
- OLAP Product: IBM Planning Analytics
- Version: 2.0
- Excel Version: 2016
Re: CellIncrementN and BatchUpdate Mode
Sorry for performing necromancy...
Does anyone have an update on this? I just encountered exactly the same issue as Paul on PA 2.0.
BR,
Simon
-
- MVP
- Posts: 3663
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: CellIncrementN and BatchUpdate Mode
Why are you using BatchUpdate? This is something I haven't used since ParallelInteraction was introduced, and I really don't see a need for it in PA. If you think I'm wrong, convince me otherwise.
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
-
- Posts: 12
- Joined: Thu Dec 07, 2017 11:17 am
- OLAP Product: IBM Planning Analytics
- Version: 2.0
- Excel Version: 2016
Re: CellIncrementN and BatchUpdate Mode
How else can you move a largish amount of data between cubes, or within a cube, in a reasonable time? I'm asking because I sincerely have no idea. Switching on batch update just gives me much faster processing times.
Thanks
-
- MVP
- Posts: 3663
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: CellIncrementN and BatchUpdate Mode
Cube to cube processing operations are typically very fast (say >40K cells/second).
Where it is slower than this the reasons are usually
1. dynamic subsets
2. where source & target are contained in the same cube slice
3. data input triggering heavy feeding
Solve these as follows:
1. Never use dynamic subsets in data processing.
2. Use an intermediate export/import. It sounds counter-intuitive, but it is often many times faster to export to a CSV and read it back in a separate process than to do everything in a single step. Better still, use a background staging cube rather than a file; that is faster again.
3. Use RuleLoadFromFile to detach the rule, load the data, then reattach the rule. Especially now with MTQ.Feeders=T this can be much faster.
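The detach/load/reattach pattern in point 3 can be sketched roughly like this (cube and file names are placeholders; my understanding is that an empty second argument makes RuleLoadFromFile reload the cube's own .rux from the data directory):

```
# Prolog: swap in an empty rule file so no rules or feeders fire during the load
RuleLoadFromFile( 'TargetCube', 'empty.rux' );

# ... the data tab then runs its CellPutN / CellIncrementN load rule-free ...

# Epilog: reload the cube's own rule file; feeders are reprocessed on reattach
# (with MTQ.Feeders=T that re-feeding is multi-threaded and much faster)
RuleLoadFromFile( 'TargetCube', '' );
```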
And there's always the strategy of splitting data loads into multiple chunks and running them in parallel: in the latest versions of PA with the RunProcess function, or in earlier versions with TM1RunTI.exe. You can often get a near-linear improvement in throughput by scaling over multiple cores rather than using a single core. It does depend on the ratio between commit time and processing time, which in turn depends on dimension index order versus the dimension used for each slice (a complicated topic to explain). And of course it depends on having the correct licensing and the CPUs available to do it.
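As a rough sketch of the parallel approach, a master process might fan a load out by month (the child process name and parameter here are hypothetical):

```
# Fire twelve asynchronous child runs, one month each; RunProcess launches
# each child on its own thread, and each child commits independently
nMonth = 1;
WHILE( nMonth <= 12 );
  RunProcess( 'Load.SalesSlice', 'pMonth', NumberToString( nMonth ) );
  nMonth = nMonth + 1;
END;
```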
-
- Posts: 12
- Joined: Thu Dec 07, 2017 11:17 am
- OLAP Product: IBM Planning Analytics
- Version: 2.0
- Excel Version: 2016
Re: CellIncrementN and BatchUpdate Mode
Hi Lotsaram,
I'm definitely not getting the ~40k cells/s processing speed, so I will try to implement point 3 instead of batch update (whose purpose is the same: to stop rules and feeders being processed while the process runs).
I have no experience using RunProcess, but definitely will give it a shot when there is some free time on my hands.
Thanks a lot for the suggestions!