
Tm1 Model - Limit Overall RAM Usage

Posted: Wed Nov 08, 2023 2:35 pm
by chewza
Hi there

We have inherited a complex model (which is not business critical) that, on rare occasions (every few months), spikes the RAM to the point where the RAM for the entire server maxes out and all models go down. Obviously we need to get to the root cause of what is causing this (initial investigation points towards unexpected parameters being captured for a TI), but until then, do you know whether you can limit the overall RAM usage of a model?

Regards
Chris

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Wed Nov 08, 2023 6:21 pm
by lotsaram
You can't.

TM1 will eat up all available memory until it runs out.

And then it is paging, hanging and crashing time.

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Wed Nov 08, 2023 8:53 pm
by ardi
lotsaram wrote: Wed Nov 08, 2023 6:21 pm You can't.

TM1 will eat up all available memory until it runs out.

And then it is paging, hanging and crashing time.
And the worst part is that TM1 does not release the memory back until the next restart, even if you zero out some of the data.

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Wed Nov 08, 2023 8:55 pm
by ardi
chewza wrote: Wed Nov 08, 2023 2:35 pm Hi there

We have inherited a complex model (which is not business critical) that, on rare occasions (every few months), spikes the RAM to the point where the RAM for the entire server maxes out and all models go down. Obviously we need to get to the root cause of what is causing this (initial investigation points towards unexpected parameters being captured for a TI), but until then, do you know whether you can limit the overall RAM usage of a model?

Regards
Chris
Check the cube dimension ordering. Dimension ordering is a huge factor in memory consumption; a common guideline is smallest sparse to largest sparse, then smallest dense to largest dense. A quick way to review the current order is sketched below.
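For example, here is a minimal TI prolog sketch to log a cube's dimension order for review. 'General Ledger' and 'dimorder.txt' are placeholder names, and note that TabDim reports the declared order, which is not necessarily the internal order the server has optimised to:

Code:

# Hypothetical prolog sketch: log the declared dimension order of a cube.
# 'General Ledger' and 'dimorder.txt' are example names only.
sCube = 'General Ledger';
i = 1;
sDim = TabDim(sCube, i);
While(sDim @<> '');
  AsciiOutput('dimorder.txt', sCube, NumberToString(i), sDim);
  i = i + 1;
  sDim = TabDim(sCube, i);
End;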

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Thu Nov 09, 2023 6:19 am
by gtonkin
So there actually seems to be a way, though it is not official/documented.

If you add PooledMemoryMaxLimit=<threshold in MB> to your tm1s.cfg, that instance will basically just crash when the threshold is reached.
Not ideal normally, but it may be what you need in your specific case.
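For illustration, a minimal tm1s.cfg fragment might look like the following; 3500 is just an example threshold in MB, and the ServerName/PortNumber entries are placeholders for whatever your instance already uses:

Code:

ServerName=tm1dev
PortNumber=12345
PooledMemoryMaxLimit=3500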

My TM1server.log shows the following:

Code:

12304   []   INFO   2023-11-09 06:13:18.548   TM1.Server   TM1CubeImpl::ProcessFeeders: Exception, done computing feeders for base cube 'General Ledger'.
12224   []   ERROR   2023-11-09 06:13:18.568   TM1.Server.Memory   Memory Max Limit Exceeded: PooledMemoryMaxLimit = 3500 Mb, Total Memory Allocated = 3493 Mb, Total Garbage Memory = 2 Mb, Allocating = 8388608 B
12224   []   WARN   2023-11-09 06:13:18.568   TM1.Server.Memory   CreateNewGarbageBlocks() outOfMemory Exception <<< MEMORY_FATAL_LEVEL >>> -  apifunc# "0"
So you have an option. I'm not sure about the risks etc., so use at your own peril; all the other fine print applies with undocumented features...

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Sat Nov 11, 2023 1:20 am
by babytiger
gtonkin wrote: Thu Nov 09, 2023 6:19 am So there actually seems to be a way, though it is not official/documented.

If you add PooledMemoryMaxLimit=<threshold in MB> to your tm1s.cfg, that instance will basically just crash when the threshold is reached.
Not ideal normally, but it may be what you need in your specific case.

My TM1server.log shows the following:

Code:

12304   []   INFO   2023-11-09 06:13:18.548   TM1.Server   TM1CubeImpl::ProcessFeeders: Exception, done computing feeders for base cube 'General Ledger'.
12224   []   ERROR   2023-11-09 06:13:18.568   TM1.Server.Memory   Memory Max Limit Exceeded: PooledMemoryMaxLimit = 3500 Mb, Total Memory Allocated = 3493 Mb, Total Garbage Memory = 2 Mb, Allocating = 8388608 B
12224   []   WARN   2023-11-09 06:13:18.568   TM1.Server.Memory   CreateNewGarbageBlocks() outOfMemory Exception <<< MEMORY_FATAL_LEVEL >>> -  apifunc# "0"
So you have an option. I'm not sure about the risks etc., so use at your own peril; all the other fine print applies with undocumented features...
Yes, agreed. Depending on the case, crashing an instance would not be an ideal solution (even temporarily).
Is it possible that your TI has triggered a massive rule-calculated cube slice, hence the feeders? Apart from potentially incorrect parameters being entered when running the TI, I would also check the source feeders.

In the past, I have used the VMT and VMM configuration flags to control the size of views stored in memory; they typically aim to minimise the memory used by views created by users. I cannot remember what impact those two flags have on TI.
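For anyone who wants to try them, VMM and VMT are held per cube in the }CubeProperties control cube, so they can be set from a TI process. A minimal sketch, with an example cube name and illustrative values (my understanding is that VMM is the view cache memory per cube in KB, and VMT the calculation time in seconds above which a view gets cached):

Code:

# Hypothetical TI sketch - cube name and values are examples only.
# VMM: view cache memory for the cube, in KB.
# VMT: seconds of calculation time above which a view is cached.
CellPutS('100000', '}CubeProperties', 'General Ledger', 'VMM');
CellPutS('5', '}CubeProperties', 'General Ledger', 'VMT');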

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Sun Nov 12, 2023 9:35 am
by gtonkin
I had some time to play around a bit more with this parameter. It looks like my initial server crashes were due to too little memory at startup.
Bumping the limit up a bit allowed the server to start as normal.

This parameter appears to be static - changes made to it while the server was running seemed to have no effect.

When opening a large cube view that would exceed the threshold, I received this error in Architect: Out of Memory - Cube Viewer
No server crash - I could go to a smaller view and open that instead.

Code:

2852   [32]   ERROR   2023-11-12 08:07:15.657   TM1.Server.Memory   Memory Max Limit Exceeded: PooledMemoryMaxLimit = 3500 Mb, Total Memory Allocated = 4146 Mb, Total Garbage Memory = 143 Mb, Allocating = 238487 B
2852   [32]   WARN   2023-11-12 08:07:15.657   TM1.Server.Memory   AllocateBigBlock() outOfMemory Exception <<< MEMORY_FATAL_LEVEL >>> -  apifunc# "1"
I ran a process to reprocess feeders on all cubes and it failed with the following message: Unexpected return in function: COrionTreeView::OnProcessExecute
Again, no server crash; the process just seems to have been terminated.
A TM1ProcessError log was generated:

Code:

Error: Data procedure line (0):  Exception Occurred during process execution: TM1MemoryException: Fatal level
The TM1server.log had:

Code:

4744   [2]   ERROR   2023-11-12 09:01:36.676   TM1.Server.Memory   Memory Max Limit Exceeded: PooledMemoryMaxLimit = 4200 Mb, Total Memory Allocated = 4198 Mb, Total Garbage Memory = 0 Mb, Allocating = 4194304 B
4744   [2]   WARN   2023-11-12 09:01:36.677   TM1.Server.Memory   CreateNewGarbageBlocks() outOfMemory Exception <<< MEMORY_FATAL_LEVEL >>> -  apifunc# "196"
4744   [2]   INFO   2023-11-12 09:01:36.677   TM1.Server   TM1CubeImpl::ProcessFeeders: Exception, done computing feeders for base cube 'GDT Expense'.
With the various tests performed: as long as sufficient memory is available at startup, I do not seem to be able to crash the server; only the function demanding the resources, whether a view or a process, gets terminated. Note, however, that depending on the resources available, you may not be able to shut down the server, as shutdown may require memory to commit changes.

It would be great if others needing to apply this to one or more models could perform some tests and feed back on their experiences.

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Tue Nov 14, 2023 9:25 am
by lotsaram
Not being able to shut down the instance would seem to be a fatal flaw in using this!

What would happen to a long-running TI process that was processing rule-calculated cells and consequently growing the calculation cache? If the memory limit is exceeded, does the TI process just error? Or does it do something "intelligent" like dumping the cache and then continuing? (My guess would of course be the former.)

Re: Tm1 Model - Limit Overall RAM Usage

Posted: Tue Nov 14, 2023 9:32 am
by gtonkin
More fiddling is needed - I have not had time to look at running DebugUtility 125 to clear the cache, unloading certain cubes etc. before attempting a shutdown; a rough sketch of the idea is below.
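For anyone testing along, something like this TI sketch before a shutdown attempt is what I have in mind. DebugUtility is undocumented, the 125 action and its arguments are as commonly shared in the community, and 'General Ledger' is an example cube, so treat it as a sketch rather than a recipe:

Code:

# Hypothetical pre-shutdown cleanup sketch - undocumented, use at your own risk.
# Unload an example cube from memory, then ask the server to release cached memory.
CubeUnload('General Ledger');
DebugUtility(125, 0, 0, '', '', '');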

Still not a perfect solution by any stretch, but if this model and others were going to simply crash anyway, limiting the damage to the one instance that may then need to be killed may be preferable.

Coincidentally, I had a Prod server's databases all crash over the weekend because IT decided to set the page file size to zero. I know which model caused the issue as, like the OP's, it too can spike from time to time. Had I limited it to something reasonable, the other three models would not have been impacted.