Hi,
Is SaveDataAll required in each TI process, or is it optional?
Thanks
When to use SaveDataAll?
- Steve Rowe
- Site Admin
- Posts: 2456
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: TM1 Basic Question
lol
Optional, depends on whether you want to save data or not...
If you don't do it at the end of your data loads, you should probably schedule a chore that runs, say, every 4 hours.
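A minimal sketch of what that looks like in practice, in the Epilog tab of the load process (everything here is illustrative):

# Epilog tab of the data load process
# Commit all in-memory changes for every cube to the .cub files on disk
SaveDataAll;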
Technical Director
www.infocat.co.uk
-
- Posts: 128
- Joined: Thu Dec 15, 2011 8:22 pm
- OLAP Product: TM1
- Version: 9.4
- Excel Version: 2003
Re: When to use SaveDataAll?
Thank you, Steve.
- macsir
- MVP
- Posts: 785
- Joined: Wed May 30, 2012 6:50 am
- OLAP Product: TM1
- Version: PAL 2.0.9
- Excel Version: Office 365
- Contact:
Re: When to use SaveDataAll?
The more often, the better, in case the service crashes, but you need to keep an eye on the size of your log files.
Usually I do it manually after a massive load. A chore is also scheduled to run in the early morning after the automated loading, another one at midday and the last one in the evening.
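For what it's worth, the scheduled version is usually just a tiny process whose only job is the save, e.g. (process name made up):

# Prolog tab of a stand-alone save process (e.g. 'zSys.Admin.SaveDataAll')
# Add this process to a chore and schedule it for the morning, midday
# and evening runs described above
SaveDataAll;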
-
- Site Admin
- Posts: 6647
- Joined: Sun May 11, 2008 2:30 am
- OLAP Product: TM1
- Version: PA2.0.9.18 Classic NO PAW!
- Excel Version: 2013 and Office 365
- Location: Sydney, Australia
- Contact:
Re: When to use SaveDataAll?
macsir wrote: The more often, the better, in case the service crashes
Not always, IMHO. Much depends on how big your cubes are and how fast your hard disks are. A full save on our legacy system takes 2 to 3 minutes (assuming changes have been made in all of the larger cubes, which is normally a safe bet), and that can hack users off if you do it on a regular basis. (My newer servers that we're migrating to on 9.5.2 are a lot faster, but I still wouldn't want to do it too regularly.) Also, the new CubeSaveData() function in 10.1 may make a difference, though I obviously haven't used that operationally to date.
My rule of thumb is to do a data save after doing a non-reproducible, unlogged data upload. For example, when I do an hourly pull from the GL it's unlogged to save time (and log size), but I can run the chore again and the data is fully refreshed; there's no need to have it saved to disk. Conversely, with most of our "one-off" data uploads I turn logging on so that I don't have to worry if the server crashes (and so that I can audit exactly what was uploaded, and in which sequence, in case anything screwy shows up). That only leaves loads that are too huge to log and too problematic to re-do. Those I do a data save after (as well as the regular early morning ones), even if it irritates the users a bit.
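A rough sketch of that unlogged-but-reproducible pattern, assuming a hypothetical 'GL Actuals' cube (not the actual process described above):

# Prolog tab: switch transaction logging off for the target cube,
# since this load can simply be re-run if the server crashes
CubeSetLogChanges('GL Actuals', 0);

# ... the Metadata/Data tabs then do the actual load from the GL ...

# Epilog tab: switch logging back on for normal user input;
# no SaveDataAll here because the data is fully reproducible
CubeSetLogChanges('GL Actuals', 1);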
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
-
- Community Contributor
- Posts: 126
- Joined: Tue Nov 03, 2009 7:46 pm
- OLAP Product: MODLR - The CPM Cloud
- Version: Always the latest.
- Excel Version: 365
- Location: Sydney, Australia
- Contact:
Re: When to use SaveDataAll?
I'm with Alan on this one.
We had a really... really bizarre situation where we had a large data load (45 min+) with a SaveDataAll in its Epilog.
We also had a SaveDataAll chore scheduled every 4 hours.
When the two overlapped (the chore kicking off while historic data was loading), the loading TI would not commit and instead the whole process would kick off a second time. It took a long time to find out that this was the cause of the sporadic double execution of TIs.
I think it's only really a good idea to have it scheduled more often than once a day if you have a lot of manual user data entry.
-
- Site Admin
- Posts: 1458
- Joined: Wed May 28, 2008 9:09 am
Re: When to use SaveDataAll?
We had a very similar issue last year at a client, running (then) CX9.0. We have a copy/calculate/freeze process on a very, very large model which can take many hours. The process has a server restart at the end, so (IMO) it required a SaveDataAll. This ran fine for rather over a year - there was the standard CX timed chore as well, but no issues were encountered. Then, for no apparent reason, we started to get locks and rollbacks. IBM's advice was strange:
a) SaveDataAll should be in its own chore to avoid contention (we had no other users or processes running apart from the overnight SDA, which we disabled);
b) We should enable logging on the freeze and let the server process TM1S.LOG on startup (the log files would have had billions of rows).
In the end we disabled the overnight SDA as mentioned and the problem did not recur. The client has not yet moved to CX10 (we'd like to move to using CubeSaveData) as we asked IBM whether the issue with PI fixed in TM1 10.1 HF4 ( http://www.tm1forum.com/viewtopic.php?f=3&t=8453 ) was in the CX10.1 codebase and they could not give an answer. I'm hoping that an autumn release, if there is one, may finally address this so they can move forward.
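For anyone reading later: the CubeSaveData() route would mean saving only the one big cube in the Epilog rather than doing a full SaveDataAll, along the lines of the sketch below (cube name illustrative, and it needs TM1 10.1 or later):

# Epilog tab of the freeze process
# Write just this cube's data to disk rather than saving every cube
CubeSaveData('Reporting Freeze');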