My goal is to copy "the" important cube from the production environment to both the test and development instances once a week. Unfortunately, IT doesn't allow me to stop and restart the services from a TI routine, so I finally came up with the idea of unloading the cube, copying the production version over it and reloading it with ViewConstruct.
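In rough outline the TI looks something like this (cube name, file paths and the view name are just placeholders, and it assumes a saved view exists for ViewConstruct to use):

# sketch only - cube name, paths and view name are placeholders
sCube   = 'Sales';
sSource = '\\prodserver\TM1Data\' | sCube | '.cub';
sTarget = 'D:\TM1\TestData\' | sCube | '.cub';

# drop the cube from memory so the .cub file on disk can be overwritten
CubeUnload(sCube);

# copy the production .cub over the local file (wait flag 1 = wait for the copy to finish)
ExecuteCommand('cmd /c copy /y "' | sSource | '" "' | sTarget | '"', 1);

# touching the cube again forces the server to load it back into memory
ViewConstruct(sCube, 'Default');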
This works well, but it takes unexpectedly huge amounts of memory, so I only do this at weekends as the services are restarted by IT every Sunday morning anyway. Actually, it would be great to have a solution that works at the push of a button whenever it's needed, but since it requires so much RAM, I'm reluctant to make it available to the power users.
Any ideas on how I could make it work better?
RAM usage exploding with CubeUnload and ViewConstruct
- Steve Rowe
- Site Admin
- Posts: 2455
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: RAM usage exploding with CubeUnload and ViewConstruct
If the method you have picked for keeping the cubes aligned is not working, have you considered other methods? A couple come to mind; there are probably others:
- If you have logging enabled you could consider replication.
- Export and import of the active periods.
Cheers,
Technical Director
www.infocat.co.uk
-
- Community Contributor
- Posts: 311
- Joined: Fri Feb 15, 2013 5:49 pm
- OLAP Product: TM1
- Version: PA 2.0.9.1
- Excel Version: 365
- Location: Minneapolis, USA
Re: RAM usage exploding with CubeUnload and ViewConstruct
How big is the view in your ViewConstruct? You should be able to construct a small view just to trigger the cube load back into memory. Also, when a cube is reloaded into memory, feeders are reprocessed, which takes time and memory. The server should theoretically be reusing the garbage memory from the previous unload, but perhaps that isn't happening.
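Something along these lines would do it (all names are placeholders; only one dimension is restricted, the others default to everything):

# placeholder names throughout
sCube = 'Sales';
sView = 'zTinyReload';
sDim  = 'Version';
sSub  = 'zTinyReload';

IF(ViewExists(sCube, sView) = 1);
  ViewDestroy(sCube, sView);
ENDIF;
IF(SubsetExists(sDim, sSub) = 1);
  SubsetDestroy(sDim, sSub);
ENDIF;

SubsetCreate(sDim, sSub);
SubsetElementInsert(sDim, sSub, 'Actual', 1);

ViewCreate(sCube, sView);
ViewSubsetAssign(sCube, sView, sDim, sSub);

# this is the call that pulls the cube back into memory
ViewConstruct(sCube, sView);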
IT won't let you restart a DEV TM1 service, but would they even know, and would you get in trouble somehow? If it's not PROD, whatever you can accomplish with TI and ExecuteCommand is fair game imo.

-
- Posts: 131
- Joined: Tue May 17, 2011 10:04 am
- OLAP Product: TM1
- Version: Planning Analytics 2.0
- Excel Version: 2016
- Location: Freiburg, Germany
Re: RAM usage exploding with CubeUnload and ViewConstruct
Good idea to open only a very small view, but this still takes 30 GB of RAM even though the cube usually takes not much more than 18 GB in production. However, it works fine to simply unload and copy the cube shortly before the service is restarted anyway, so I will stick to that.
Thank you!
Holger
- gtonkin
- MVP
- Posts: 1254
- Joined: Thu May 06, 2010 3:03 pm
- OLAP Product: TM1
- Version: Latest and greatest
- Excel Version: Office 365 64-bit
- Location: JHB, South Africa
- Contact:
Re: RAM usage exploding with CubeUnload and ViewConstruct
I have noticed similar behaviour when using MTFeeders=T and MTFeeders.AtStartup=T
Are you using these in your TM1s.cfg?
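For reference, that would just be these two lines in tm1s.cfg:

MTFeeders=T
MTFeeders.AtStartup=T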
-
- MVP
- Posts: 3698
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: RAM usage exploding with CubeUnload and ViewConstruct
Why don't you export the data to CSV on production and load the CSV in Test/Dev? It seems to me that this would be a better solution:
- more scope for automation, since at the end of the export the load of the file could be chained via tm1runti or a REST call (rough sketch below)
- presumably the whole history doesn't change each week, only the latest time partition, so the data saved to disk should be small compared to the whole cube. This would also mean a much smaller memory impact on the targets.
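As an illustration only (server, process, path and variable names below are all made up), the tail end of the export process on PROD could hand over to the target instance like this:

# Data tab (sketch): one record per cell of a view restricted to the active period;
# vPeriod, vAccount, vRegion, vValue stand for the view variables of this hypothetical cube
ASCIIOutput('D:\Transfer\Sales_current.csv', vPeriod, vAccount, vRegion, NumberToString(vValue));

# Epilog tab (sketch): once the file is complete, trigger the load process on the target server
sCmd = '"C:\Program Files\ibm\cognos\tm1_64\bin64\tm1runti.exe" -server TEST -adminhost tm1admin';
sCmd = sCmd | ' -user automation -passwordfile D:\Transfer\pwd.dat -process Load.Sales.FromCSV';
ExecuteCommand(sCmd, 0);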
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.