"The server method 'CommitSandbox' failed" error

Jaan
Posts: 5
Joined: Wed Jan 23, 2013 3:05 pm
OLAP Product: TM1, MS Analysis Services
Version: 10.1
Excel Version: Office2010

"The server method 'CommitSandbox' failed" error

Post by Jaan »

Hi,

My client is using Cognos TM1 10.1. They have built an Application in Performance Modeler, and all the end users are supposed to commit and submit data over the web. When they try to commit more than about 15 records at a time, they get the following error: "The server method 'CommitSandbox' failed". The error appears only when entering data over the web; in Architect everything works fine and the performance is good.

When they commit only 2-5 records over the web, the process is very slow but does eventually finish. Committing roughly 15 records is already too much: after a minute or so TM1 throws the error. I have a feeling it might have something to do with the server's network settings, but I am not sure. It looks as if there is some kind of timeout after which TM1 returns the error message.
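
The first thing I was planning to check is the timeout configuration on the web tier. As far as I understand, TM1 Web 10.1 is an ASP.NET application, so its web.config carries the standard ASP.NET request and session timeouts. The snippet below is only a sketch with placeholder values, and I have not confirmed that these particular settings are what cuts off the CommitSandbox call:

<system.web>
  <!-- executionTimeout is in seconds; the ASP.NET default is 110
       and it is only enforced when compilation debug="false" -->
  <httpRuntime executionTimeout="3600" />
  <!-- sessionState timeout is in minutes -->
  <sessionState timeout="60" />
</system.web>

If the real limit sits somewhere else (the TM1 server itself, the TM1 Application Server, or a proxy in between), raising these would obviously not help, which is part of why I am asking here.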

Has anybody come across a similar situation?

Thanks,
Jaan

Re: "The server method 'CommitSandbox' failed" error

Post by Jaan »

Just to update my post: the function causing the delay is ServerSandboxMerge. After a few minutes the server sends an error message because of a timeout. If I let the server work for as long as it needs, the function finishes after about 10 minutes. I could temporarily work around the problem by drastically reducing the number of time periods in the time dimension; now the function takes about 30 seconds. That is still too long, given that the system only goes into production in October and therefore doesn't hold much data yet. The time periods I deleted contained only zero values, so I was surprised that removing them affected performance at all.
I couldn't find any reasonable explanation of what the function actually does beyond "merging the sandbox data into the base data", which doesn't help if I want to reduce the run time. Does it build some internal indexes before merging? Does it check every cell of every dependent cube regardless of zero values? I also saw it performing some disk writes during the merge; does that mean it logs the merges as it applies them? If I work with the base data only, all the calculations perform just fine; the performance issues only start when committing the sandbox. Maybe there's a simple design issue I wasn't aware of.
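
To illustrate what I suspect is going on, here is a purely conceptual sketch in Python. It has nothing to do with how TM1 actually implements ServerSandboxMerge; it just shows that if the merge visits every sandbox cell for every time period regardless of value, the cost grows with the size of the time dimension even when almost all the deltas are zero, which would match what I'm seeing:

def merge_sandbox(base, sandbox_deltas, periods):
    """Toy merge: visit every (cell, period) pair, write only the non-zero deltas."""
    lookups = 0
    writes = 0
    for cell in sandbox_deltas:
        for period in periods:
            lookups += 1  # even a zero delta costs a lookup in this model
            delta = sandbox_deltas[cell].get(period, 0)
            if delta != 0:
                base[(cell, period)] = base.get((cell, period), 0) + delta
                writes += 1  # only non-zero deltas cost a write
    return lookups, writes

# Two changed cells, first against a small time dimension, then a large one.
deltas = {"Account1": {"Jan": 5}, "Account2": {"Feb": 3}}
few_periods = ["Jan", "Feb", "Mar"]
many_periods = ["Jan", "Feb", "Mar"] + ["P%d" % i for i in range(300)]
print(merge_sandbox({}, deltas, few_periods))   # (6, 2)
print(merge_sandbox({}, deltas, many_periods))  # (606, 2) - same writes, ~100x the lookups

If the real merge behaves anything like this, it would explain why deleting the zero-valued periods helped so much, but that is only my guess at the mechanism, not something I found documented.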

So, if anybody has a good reference or has come across a similar situation, I would still appreciate the feedback.

Regards,
Jaan