Asynchronous processing

AmbPin
Regular Participant
Posts: 173
Joined: Sat Mar 20, 2010 3:03 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2007-10

Asynchronous processing

Post by AmbPin »

Hello,
I have read with interest that it is possible to mimic asynchronous processing in TM1 by executing a cmd file that in turn calls a TM1 process. This afternoon I have been trying this and found that I can get asynchronous activity as long as I don't write to the cube. Am I getting something wrong, or is this what people would expect?

My Process Code:-
--------------------------------------------------
Prolog
--------------------------------------------------
# Global variables shared with any child processes in the same chain
NumericGlobalVariable('nDebug');
nDebug = 0;
StringGlobalVariable('sCubeName');
sCubeName = 'jiSales1';
StringGlobalVariable('sViewName');
sViewName = 'sys-jiSales1-' | psNamePrefix;
StringGlobalVariable('sDimName');
sDimName = 'Customer';
StringGlobalVariable('sSubsetName');
sSubsetName = 'sys-Customer-' | psNamePrefix;

# Destroy the view first (a subset cannot be destroyed while a view uses it),
# then the subset, before recreating both for this prefix
if(ViewExists(sCubeName, sViewName) = 1);
    ViewDestroy(sCubeName, sViewName);
endif;
if(SubsetExists(sDimName, sSubsetName) = 1);
    SubsetDestroy(sDimName, sSubsetName);
endif;

# Level-0 Customer elements whose names start with the prefix
SubsetCreateByMDX(sSubsetName, '{TM1FILTERBYPATTERN( {TM1FILTERBYLEVEL( {TM1SUBSETALL( [Customer] )}, 0)}, "' | psNamePrefix | '*")}');

ViewCreate(sCubeName, sViewName);
ViewSubsetAssign(sCubeName, sViewName, sDimName, sSubsetName);
ViewExtractSkipZeroesSet(sCubeName, sViewName, 0);
ViewExtractSkipRuleValuesSet(sCubeName, sViewName, 0);
ViewExtractSkipCalcsSet(sCubeName, sViewName, 0);

DatasourceCubeview = sViewName;

--------------------------------------------------
Data
--------------------------------------------------
# Log the start time, increment the cell, then log the written value and end time
sFileName = 'ji-' | psNamePrefix | '.txt';
ASCIIOutput(sFileName, TimSt(Now, '\H:\i:\s'));

nValue = CellGetN(sCubeName, vsScenario, vsCustomer, vsMeasure) + 1;
CellPutN(nValue, sCubeName, vsScenario, vsCustomer, vsMeasure);
ASCIIOutput(sFileName, vsCustomer, vsScenario, vsMeasure, NumberToStringEx(nValue, '0.00', '.', ','));
ASCIIOutput(sFileName, TimSt(Now, '\H:\i:\s'));

# Busy-wait purely to make the process run long enough to observe any overlap
nIndex = 0;
while(nIndex < 1000000);
    nIndex = nIndex + 1;
end;

--------------------------------------------------
CMD File
--------------------------------------------------
cd "C:\Program Files\Cognos\TM1\bin"
tm1runti.exe /adminhost tbs0660 /server ji-tm1dev-01 /user johni /pwd piffle101 /process jiMulti-Process psNamePrefix=%1

--------------------------------------------------
Process to call the CMD file
--------------------------------------------------
# Wait flag 0 = do not wait for the command to finish, so both launch immediately
ExecuteCommand('D:\Documents\bsmi\MDX Samples\Tm1RunTi.bat a', 0);
ExecuteCommand('D:\Documents\bsmi\MDX Samples\Tm1RunTi.bat d', 0);
lotsaram
MVP
Posts: 3703
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: Asynchronous processing

Post by lotsaram »

In 9.5.2 you should have no problem doing simultaneous writes to the same cube, provided you have parallel interaction switched on.

In earlier 9.5 and 9.4, pre parallel interaction, it should also be doable provided the different TIs are writing to different cubes or are using batch update mode.
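For reference, batch update mode in TI looks something like the sketch below. The BatchUpdateFinish argument is the SaveChanges flag; 0 is the value I've normally seen used to commit the buffered writes, but check the reference guide for your version.

--------------------------------------------------
Prolog
--------------------------------------------------
# Enter batch update mode: writes from this process are buffered
# and committed together rather than cell by cell
BatchUpdateStart;

--------------------------------------------------
Epilog
--------------------------------------------------
# SaveChanges flag - 0 to commit the buffered writes (my assumption, see above)
BatchUpdateFinish(0);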
AmbPin
Regular Participant
Posts: 173
Joined: Sat Mar 20, 2010 3:03 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2007-10

Re: Asynchronous processing

Post by AmbPin »

Hello,
Thanks for your reply. I do have parallel interaction switched on (ParallelInteraction=T in tm1s.cfg).

Here are the outputs when the write operation is included which show synchronous activity:-
Output1.PNG
And here are the outputs when the write operation is NOT included which show asynchronous activity:-
Output2.PNG
Andy Key
MVP
Posts: 352
Joined: Wed May 14, 2008 1:37 pm
OLAP Product: TM1
Version: 2.5 to PA
Excel Version: Lots
Location: Sydney
Contact:

Re: Asynchronous processing

Post by Andy Key »

Try replacing your ViewCreate and SubsetCreateByMDX with references to objects that already exist - so you can also get rid of the ViewDestroy and SubsetDestroy. These objects will need to be different for each parameter that you pass in, so keep the concatenation.

Even with PI you can still get locks on objects by creating metadata associated with them.

PI isn't quite the panacea that IBM would like you to think.
Andy Key
AmbPin
Regular Participant
Posts: 173
Joined: Sat Mar 20, 2010 3:03 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2007-10

Re: Asynchronous processing

Post by AmbPin »

Thanks for the tip Andy,

It seems that I can leave the ViewDestroy in the Epilog but must, as you suggest, reference objects that already exist. To do this I can move the Prolog ViewCreate code into a new process, which I call with ExecuteProcess. That means it ends up a bit of a mess, looking something like this:-

1) TM1 process to shell BAT file
2) BAT file launches TM1 process (P1) with the prefix parameter
3) P1 Executes another TM1 process (P2) with the prefix parameter
4) P2 Destroys views/subsets if they exist then creates views/subsets
5) P1 assigns DatasourceCubeview then performs data operations
6) P1 destroys the views/subsets created by P2 (rough sketch below)
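For anyone following along, here is a rough sketch of what steps 3 to 6 look like in TI. The process name 'jiMulti-Setup' stands in for P2 and is made up; the object names carry over from my earlier post.

--------------------------------------------------
P1 Prolog
--------------------------------------------------
# P2 builds the prefix-specific subset and view, so P1 itself creates no metadata
ExecuteProcess('jiMulti-Setup', 'psNamePrefix', psNamePrefix);
sViewName = 'sys-jiSales1-' | psNamePrefix;
DatasourceCubeview = sViewName;

--------------------------------------------------
P1 Epilog
--------------------------------------------------
# Tidy up the objects P2 created for this prefix
ViewDestroy('jiSales1', sViewName);
SubsetDestroy('Customer', 'sys-Customer-' | psNamePrefix);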

What a faff! I now get asynchronous results as shown below:-
Output1.PNG
I know the example I have used is completely unrealistic, but I have found it useful for working out how parallel interaction is supposed to work. Has anyone found that it speeds up large data write operations in the real world?
bradohare
Posts: 26
Joined: Wed Jul 23, 2008 3:11 pm

Re: Asynchronous processing

Post by bradohare »

Hi AmbPin,

In looking at your post I have one suggestion regarding the way you are approaching your asynchronous processing. Now, I'm on 9.5.1 and this may well no longer be an issue in 9.5.2, but I thought it was worth mentioning. Specifically, I'm referring to this section in your Data tab:

nIndex = 0;
while(nIndex < 1000000);
nIndex = nIndex + 1;
end;

Now, I'm assuming you are doing this so you don't fire off all of your processes at once and would like a buffer in between calls. I would suggest avoiding this approach. We initially did something very similar. However, we found that if users were performing certain operations during these loops we ran into issues. For example, if a user was exporting a small report to Excel, the export would stay "stuck" in a "Commit" state (if you were watching in Top) until all of these loops had completed running. The result was that none of your asynchronous processes really ran until all of the calls had finished executing.

Instead, I would suggest creating a BAT file that creates a VBS script that "sleeps", and calling it via ExecuteCommand instead of constantly running loops within TI. I have one that takes a parameter for the number of seconds to "sleep" if you're interested.
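Something along these lines should do it, assuming a helper called sleep.bat that takes the number of seconds as its first argument (the file names and paths are just illustrative):

@echo off
rem sleep.bat - pause for %1 seconds by generating and running a tiny VBS script
set /a ms=%1*1000
echo WScript.Sleep %ms% > "%TEMP%\tm1sleep.vbs"
cscript //nologo "%TEMP%\tm1sleep.vbs"
del "%TEMP%\tm1sleep.vbs"

From TI you would then call something like ExecuteCommand('C:\scripts\sleep.bat 5', 1); with the wait flag set to 1 so the calling process actually pauses (the path is made up, of course).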

thanks
brad
AmbPin
Regular Participant
Posts: 173
Joined: Sat Mar 20, 2010 3:03 pm
OLAP Product: TM1
Version: 9.5.2
Excel Version: 2007-10

Re: Asynchronous processing

Post by AmbPin »

Hello Brad,

Thank you for your reply. I know the while loop is bad and would never do that for real; it is simply there because my example had so little data and I wanted the process to last more than a second.

Have you used parallel processing? I am really keen to know whether anyone has actually seen a performance increase specifically during data write operations.
lotsaram
MVP
Posts: 3703
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: Asynchronous processing

Post by lotsaram »

AmbPin wrote:Hello Brad,

Thank you for your reply. I know the while loop is bad and would never do that for real; it is simply there because my example had so little data and I wanted the process to last more than a second.

Have you used parallel processing? I am really keen to know whether anyone has actually seen a performance increase specifically during data write operations.
I assume by parallel processing you mean loading to cubes (or the same cube) in parallel with TI? For user-driven manual data updates there's nothing special you need to do with parallel interaction. For the main high-concurrency user input application that my team manages, PI has been great for us and has significantly improved per-user performance by more or less eliminating locks and wait queues.

For TI-driven loads, yes, there is more to it to allow simultaneous loads, as you need to be careful to ensure there is no locking from metadata actions like the creation of subsets and views. But if the TIs are loading from external sources such as flat files, then there is no special setup, special care or watchouts; it should "just work". If you also need to clear out sections of cubes prior to the load, in my experience it is often easier to manage that part in serial and then switch to parallel for the actual loading. Whether this leads to a performance increase, well, what do you think? If you have one 1 GB flat file to load versus ten 100 MB files loaded in parallel, the latter is much faster end to end. Not 10x faster, mind you, as the commit phase can't run in parallel and there seems to be some noticeable overhead from running multiple operations, but if you have a lot of transaction volume to process and need to stay within batch processing windows then it is definitely worthwhile.
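In case it's useful, splitting the load can be as simple as a CMD file that fires off one TM1RunTI instance per file chunk. The switch style below just mirrors the CMD file earlier in the thread; the host, server, user, process and file names are all placeholders.

cd "C:\Program Files\Cognos\TM1\bin"
rem "start" launches each instance in its own window without waiting, so the loads run in parallel
start "" tm1runti.exe /adminhost myadminhost /server myserver /user loader /pwd secret /process LoadSales pFileName=sales_part1.csv
start "" tm1runti.exe /adminhost myadminhost /server myserver /user loader /pwd secret /process LoadSales pFileName=sales_part2.csv
start "" tm1runti.exe /adminhost myadminhost /server myserver /user loader /pwd secret /process LoadSales pFileName=sales_part3.csv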