When to use SaveDataAll?
Posted: Mon Jul 29, 2013 7:20 pm
by dilip
Hi,
Is SaveDataAll required in each TI process, or is it optional?
Thanks
Re: TM1 Basic Question
Posted: Mon Jul 29, 2013 7:23 pm
by Steve Rowe
lol
Optional, depends if you want to save data or not...
If you don't do it at the end of your data loads, you should probably schedule it in a chore that runs, say, every 4 hours.
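A minimal sketch of the first approach, in TurboIntegrator script (the cube and process names here are illustrative, not from the thread):

```
# Epilog tab of the data load process (sketch)
# Once the load has finished and committed, flush all in-memory
# changes to disk so a server crash doesn't lose the loaded data.
SaveDataAll;
```

The alternative is a standalone chore containing only a SaveDataAll process, scheduled every few hours.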
Re: When to use SaveDataAll?
Posted: Mon Jul 29, 2013 8:08 pm
by dilip
Thank you, Steve.
Re: When to use SaveDataAll?
Posted: Mon Jul 29, 2013 11:00 pm
by macsir
The more often, the better, in case the service crashes, but you need to take care of the size of your log files.
Usually, I would do it manually after a massive load. And the chore is scheduled to run in the early morning after the automated loading, another one at midday, and the last one in the evening.
Re: When to use SaveDataAll?
Posted: Mon Jul 29, 2013 11:12 pm
by Alan Kirk
macsir wrote:The more often, the better, in case the service crashes
Not always, IMHO. Much depends on how big your cubes are and how fast your hard disks are. A full save on our legacy system takes 2 to 3 minutes (assuming changes have been made in all of the larger cubes, which is normally a safe bet) and that can hack users off if you do it on a regular basis. (The newer 9.5.2 servers that we're migrating to are a lot faster, but I still wouldn't want to do it too regularly.) Also, the new CubeSaveData() function in 10.1 may make a difference, though I obviously haven't used that operationally to date.
My rule of thumb is to do a data save after doing a
non-reproducible, unlogged data upload. For example, when I do an hourly pull from the GL it's unlogged to save time (and log size), but I can run the chore again and the data is fully refreshed. There's no need to have it saved to disk. Conversely with most of our "one-off" data uploads I turn logging on so that I don't have to worry if the server crashes. (And so that I can audit exactly what was uploaded and in which sequence in case anything screwy shows up.) That only leaves loads that are too huge to log, and too problematic to re-do.
Those, I do a data save after (as well as regular early morning ones) even if it irritates the users a bit.
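The logging side of that rule of thumb can be sketched in TurboIntegrator as follows (the cube name is illustrative; the real function is CubeSetLogChanges, which toggles transaction logging per cube):

```
# Prolog tab (sketch): a reproducible hourly GL pull.
# Turn transaction logging off for this cube - if the server crashes,
# we simply re-run the chore rather than replay the log.
CubeSetLogChanges('GL_Actuals', 0);

# ... data load happens in the Data tab ...

# Epilog tab (sketch): restore logging for normal user activity,
# then optionally persist the unlogged changes to disk.
CubeSetLogChanges('GL_Actuals', 1);
SaveDataAll;
```

For one-off loads you'd instead leave logging on and skip the SaveDataAll, as the post above describes.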
Re: When to use SaveDataAll?
Posted: Tue Jul 30, 2013 3:59 am
by Kyro
I'm with Alan on this one.
We had a really... really bizarre situation where we had a large data load (45min +) with a SaveDataAll in its epilog.
We also had a 4 hourly SaveDataAll Chore Scheduled.
When the two overlapped (the chore kicking off while historic data was loading), the loading TI would not commit and instead the whole process would kick off a second time. It took a long time to find out that this was the cause of sporadic double execution of TIs.
I think it's only really a good idea to have it scheduled more often than once a day if you have a lot of manual user data entry.
Re: When to use SaveDataAll?
Posted: Tue Jul 30, 2013 9:53 am
by David Usherwood
We had a very similar issue last year at a client, running (then) CX9.0. We have a copy/calculate/freeze process on a very very large model which can take many hours. The process has a server restart at the end so (IMO) required a SaveDataAll. This ran fine for rather over a year - there was the standard CX timed chore as well but no issues were encountered. Then, for no apparent reason, we started to get locks and rollbacks. IBM's advice was strange:
a) a SaveDataAll should be in its own chore to avoid contention (we had no other users or processes running apart from the overnight SDA, which we disabled);
b) we should enable logging on the freeze and let the server process TM1S.LOG on startup (the log files would have had billions of rows).
In the end we disabled the overnight SDA as mentioned and the problem did not recur. The client has not yet moved to CX10 (we'd like to move to using CubeSaveData) as we asked IBM whether the issue with PI fixed in TM1 10.1 HF4 (
http://www.tm1forum.com/viewtopic.php?f=3&t=8453 ) was in the CX10.1 codebase and they could not give an answer. I'm hoping that an autumn release, if there is one, may finally address this so they can move forward.