Garbage Memory issue

Jodha
Posts: 62
Joined: Tue Mar 13, 2012 4:34 am
OLAP Product: TM1
Version: 9.5.2 10.1 10.2
Excel Version: 2007 2010 SP1

Garbage Memory issue

Post by Jodha »

Recently we upgraded from 10.1 to 10.2.2. After the upgrade we are noticing that TM1 garbage memory is increasing drastically, sitting at 5 GB to 6 GB at any given time (I got this number from the }StatsForServer cube).
We haven't modified any rules, feeders or configurations on the server recently.

Total memory consumed by TM1 cubes is about 25 GB (I got this number from }StatsByCube). This extra garbage memory consumed by the TM1 server is causing frequent server outages.

We are on a 64-bit Windows Server 2008 R2 machine with 32 GB of RAM and plenty of hard disk space.

Below are my config file parameters .



ServerLogging=T
LoggingDirectory=XXXXX
SecurityPackageName=NTLM
IntegratedSecurityMode=2
GroupsCreationLimit=200
UseSSL=T
ServerName= XXXXXX
DataBaseDirectory=XXXXXX
AdminHost=XXXXXXXXXXXX
MTQ=4
PortNumber=12345
ClientMessagePortNumber=
Savetime=
Downtime=
ProgressMessage=True
AuditLogOn=F
AuditLogMaxFileSize= 100 MB
AuditLogUpdateInterval=60
PersistentFeeders=F
ParallelInteraction=T
IPVersion=ipv4
ServerCAMIPVersion=ipv4
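
For reference, the stats cubes quoted above are populated by the performance monitor; one extra tm1s.cfg line (PerformanceMonitorOn is a standard parameter, shown here just for completeness, since the monitor can also be started from Server Explorer) starts it automatically with the server:

PerformanceMonitorOn=T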


Can someone please let us know ways to reduce garbage memory? Thanks in advance for your replies.
Last edited by Jodha on Fri Jul 25, 2014 10:28 pm, edited 1 time in total.
declanr
MVP
Posts: 1828
Joined: Mon Dec 05, 2011 11:51 am
OLAP Product: Cognos TM1
Version: PA2.0 and most of the old ones
Excel Version: All of em
Location: Manchester, United Kingdom

Re: Garbage Memory issue

Post by declanr »

Despite its name, "garbage memory" is far from garbage; it is where stargate views are stored (these will account for the majority of the value), allowing quick, automatic view retrieval on future requests. If it's there, it is because a view has been used before, and if a view has been used before it will probably be used again... as such, you don't want to get rid of it.

If you are running out of memory there is not really any magic config setting that will help you; turning parallel interaction off can normally save you a significant chunk, but that rarely outweighs the benefits. You have 2 real options:
1/ Reduce the size of your model (archive data, improve feeders etc. - especially improve feeders; see the sketch after this list)
2/ Add more RAM
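
As a minimal illustration of what "improve feeders" means (the cube and measure names here are invented, not taken from your model), feed the calculated cell from the sparser driver only, rather than from every measure in the rule, so you don't flag cells that will never hold data:

SKIPCHECK;

# Hypothetical calculation: Sales is derived from Units and Price.
['Sales'] = N: ['Units'] * ['Price'];

FEEDERS;

# Feed only from Units (the sparse, transactional measure).
# Feeding from Price as well would also flag combinations that
# never have any Units, inflating memory for no benefit.
['Units'] => ['Sales'];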

If the model is the sort of size you are suggesting (note that the stats cubes don't tell the whole story but are an indicator) I would both do a full system cleanup (even the best models can be and should be constantly improved/tweaked for performance) and add more RAM. I don't like any server I work on to get anywhere near 80% RAM usage; being an in-memory tool, TM1 is pretty much redundant if you don't purchase the RAM to go with it.
Declan Rodger
Alan Kirk
Site Admin
Posts: 6647
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia

Re: Garbage Memory issue

Post by Alan Kirk »

I read your post before I read Declan's, but I have to agree with everything he said. In particular the thing that struck me was when you said that you had only 32 gig for 25 gig of cubes. My home desktop has 32 gig; it's not really a lot for a modern server. You also have to consider that you won't be getting to use the whole 32 gig for TM1 anyway; the O/S will be demanding some, as will the web server if you're running TM1 Web.

Even if you were getting away with it on 10.1, it would only be a matter of time before your data usage would push the boundaries. In upgrading, you'll certainly come across situations where the newer versions will consume more memory. I remember that when we were on 32 bit we could run our server on 9.0 without a problem but it wouldn't even get out of bed on 9.1 or later because the locking model changes increased memory consumption. To be fair to IBM when they do this it's not for the heck of it but rather to try to improve performance, even at the cost of greater resource consumption.

Put another way, that server was always going to cause you problems at some point.

If the server is capable of having its memory upgraded you should definitely do that; double it if possible. RAM is relatively cheap these days (server memory maybe a bit less cheap, but for a business it should be claimable as a tax deduction which lowers the effective cost) and clearly would be a good investment in your case. Of course if it's an older machine there may be a limit to the memory that you can put on the motherboard. In such a case it's time to either look at a bit of extra capex to get a whole new server, or see whether your IT department is going the virtualisation route in which case they may be able to give you a more flexible setup via a virtualised server.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
Jodha
Posts: 62
Joined: Tue Mar 13, 2012 4:34 am
OLAP Product: TM1
Version: 9.5.2 10.1 10.2
Excel Version: 2007 2010 SP1

Re: Garbage Memory issue

Post by Jodha »

Thanks Declanr, Alan, for your valuable suggestions.
Our IT team confirmed that the nearest deadline to increase system RAM is 2015, so it is a distant option.

Next month we have a planned migration of 2 more cubes into production. I am looking at possible options to reduce TM1 cube sizes with the cube optimizer and at ways to reduce garbage memory; it is the only option I have to keep the production server from crashing. :)
We have a decent user base of 80 users interacting with the prod server every day.
In our model TM1 rule usage is very light - just two cubes with small rule files, and the feeders are optimized.

Can someone please suggest ways to reduce TM1 cube sizes and garbage memory?
Are there any disadvantages to dimension reordering using the cube optimizer?
tomok
MVP
Posts: 2836
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA

Re: Garbage Memory issue

Post by tomok »

You can try the cube optimizer. The only drawback I can see is that you might not have enough memory on your box for the job to run, as it needs to consume RAM while it tries the different scenarios. You may need to do the optimizing in DEV and then upload the final changes to PROD. Assuming you have minimal rules and you are not overfeeding (I'm taking you at your word :roll: on this one), then the only options you have left to reduce memory usage are to 1) reduce the size of your cubes, 2) lower the VMM and VMT settings so that almost nothing gets cached, or 3) a combination of 1 and 2. There is no magic bullet. Running a production TM1 server of the size you are talking about on 32 GB of RAM is shameful in this day and age.
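
For what it's worth, VMM and VMT are per-cube properties held in the }CubeProperties control cube (as strings), so they can be changed in Server Explorer or from a TurboIntegrator process. A rough sketch, assuming a hypothetical cube called 'Sales' and illustrative values only:

# VMM = RAM (in KB) the server may reserve for cached stargate views of this cube.
# VMT = calculation time (in seconds) a view must exceed before it is eligible for caching.
CellPutS( '100', '}CubeProperties', 'Sales', 'VMM' );
CellPutS( '30', '}CubeProperties', 'Sales', 'VMT' );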
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/
RJ!
Community Contributor
Posts: 219
Joined: Mon Jul 23, 2012 8:31 am
OLAP Product: TM1
Version: 10.2.2
Excel Version: 2010

Re: Garbage Memory issue

Post by RJ! »

We had similar issues with our environment; the server was running 52 GB of RAM and was using 100% of it according to Task Manager.
We tried a lot of different things such as archiving data plus deleting and unloading cubes; they made minimal impact...

In the end we removed 2 lines of feeders and, ta-dah, clawed back 20 GB of RAM...
If you have time, it's definitely worth your while to review your cube rules again.

Seeing as you're limited until 2015, I'm assuming you are building your cubes around what you "must have" rather than what would be "good to have"?
Alan Kirk
Site Admin
Posts: 6647
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia

Re: Garbage Memory issue

Post by Alan Kirk »

Jodha wrote: Our IT team confirmed that the nearest deadline to increase system RAM is 2015, so it is a distant option.
I think you may need to put some pressure on your IT team to bring that forward. You report to someone, and so do they. You need to punt it up the line and have your bosses tell their bosses until someone understands the concept of "look, we need this". Either that, or you just put up with the crashes.
Jodha wrote: Next month we have a planned migration of 2 more cubes into production.
If I may put it as diplomatically as I can, that would seem to be a sub-optimal tactical decision if you can't even run the cubes that you already have in a reliable fashion.
Jodha wrote: I am looking at possible options to reduce TM1 cube sizes with the cube optimizer and at ways to reduce garbage memory; it is the only option I have to keep the production server from crashing. :)
We have a decent user base of 80 users interacting with the prod server every day.
In our model TM1 rule usage is very light - just two cubes with small rule files, and the feeders are optimized.

Can someone please suggest ways to reduce TM1 cube sizes and garbage memory?
The only way to reduce the cube size is to reduce the amount of stored or calculated data in it. As RJ! mentioned, overfeeding can be a problem, though you're claiming that it isn't. I'll share Tomok's "take you at your word" approach on that, though I'd still take another look at the number of cells that are being fed. You can never look at that too often.

But you do need to get out of the mindset that "garbage memory is just garbage" and therefore available to be reduced. Tomok told you how to reduce it a bit, Declan told you why you really shouldn't.
Jodha wrote: Are there any disadvantages to dimension reordering using the cube optimizer?
I'd watch out for this issue. I'd hope that it is no longer an issue in 10.x, but given IBM's failure to realise that it's a problem, and why it's a problem, it's not impossible that it'll show up again at some point. Test it on the version that you're on to make sure that you're safe before committing the reordered cube to production.

One other thing to consider is whether you have any cubes which are rarely if ever used. Setting those to "Load On Demand" through the Cube Properties will limit them to loading only when people query them. They'll take a performance hit when they do that, but if it's a cube that might only be used every couple of months it's likely that people won't care very much. I did that on a couple of cubes when we were still on 32 bit and scrounging for every byte we could get.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
RJ!
Community Contributor
Posts: 219
Joined: Mon Jul 23, 2012 8:31 am
OLAP Product: TM1
Version: 10.2.2
Excel Version: 2010

Re: Garbage Memory issue

Post by RJ! »

Possible dumb question, but do "Private" Subsets & Views gobble up your RAM?

We've let loose with allowing Users to share a number of Private Subsets & Views amongst themselves via a Process to move the .sub & .vue files about.

I can't actually pinpoint if doing this has contributed to our once again continually expanding Cube RAM usage...
Alan Kirk
Site Admin
Posts: 6647
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia

Re: Garbage Memory issue

Post by Alan Kirk »

RJ! wrote:Possible dumb question, but do "Private" Subsets & Views gobble up your RAM?

We've let loose with allowing Users to share a number of Private Subsets & Views amongst themselves via a Process to move the .sub & .vue files about.

I can't actually pinpoint if doing this has contributed to our once again continually expanding Cube RAM usage...
No, it's not a dumb question at all. Indeed it's one that I've been meaning to try to find an answer to at some point as well, but it won't be easy, since any single view or subset would probably consume only a tiny amount of RAM by itself.

The reason I suspect that they don't (unless the user is logged in) is that they have one characteristic that public views and subsets don't: you can copy them from another server (say, a dev server to a prod server) and the user will be able to see them simply by logging out and back in. You can't do that with public views and subsets, which require a server restart. This tends to suggest that private views and subsets are only loaded when the user logs in. At a guess, they would be released back to garbage memory when the user logs out.

Unless you're talking about thousands and thousands of views and subsets I'm not sure you'd be able to detect the exact memory consumption over the background noise of calculations being cached. However I'd also be interested to know whether anyone has done some less metaphysical analysis of this issue.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
RJ!
Community Contributor
Posts: 219
Joined: Mon Jul 23, 2012 8:31 am
OLAP Product: TM1
Version: 10.2.2
Excel Version: 2010

Re: Garbage Memory issue

Post by RJ! »

In our example I'd estimate about 50 private subs & views being shared between 20 users or so... not exactly mind-boggling numbers, but our cubes do seem to keep growing until a restart.
BariAbdul
Regular Participant
Posts: 424
Joined: Sat Mar 10, 2012 1:03 pm
OLAP Product: IBM TM1, Planning Analytics, P
Version: PAW 2.0.8
Excel Version: 2019

Re: Garbage Memory issue

Post by BariAbdul »

According to the Developer Guide:
"When a remote server has both a public and a private Default subset for a dimension, your private Default subset takes precedence over the public Default subset."
Does this have any impact on performance, as the private subset has to be loaded first? Thanks
"You Never Fail Until You Stop Trying......"
Alan Kirk
Site Admin
Posts: 6647
Joined: Sun May 11, 2008 2:30 am
OLAP Product: TM1
Version: PA2.0.9.18 Classic NO PAW!
Excel Version: 2013 and Office 365
Location: Sydney, Australia

Re: Garbage Memory issue

Post by Alan Kirk »

BariAbdul wrote: According to the Developer Guide:
"When a remote server has both a public and a private Default subset for a dimension, your private Default subset takes precedence over the public Default subset."
Does this have any impact on performance, as the private subset has to be loaded first? Thanks
I don't think the question of whether it has the same name as a public subset is relevant. Bear in mind that a private subset would need to be loaded into memory regardless of its name. You'd always have one private and one public, which would use more memory than a public one alone. Same thing with speed, since loading a subset will take more time than not needing to, but (a) you'd take that hit on logging in and (b) it's probably too small to mention.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.