
Understanding TM1 memory usage a bit more

Posted: Tue Oct 04, 2016 9:30 am
by Hippogriff
Hi all,

I'm still educating myself on all this stuff... every now and again I come across something that I feel is right for the forum to guide me on.

I observed that the memory used by a model in Windows Task Manager only ever increases with usage, right up until the server is shut down. I understand that's by design - TM1 never releases memory back to the OS, but it does have the concept of garbage memory and garbage collection. It's probably not exactly analogous to what I'm used to from my previous life - a JVM heap (which can also be bounded by command-line options) - whereas TM1 will just grow and grow if allowed / necessary, and there may be no concept of compacting the heap? Obviously I'm no expert - I'm making a few assumptions.

So, I wanted to know if there is a Logger that writes to tm1server.log when an allocation is requested and 'placed' into a block that is being re-used. I cannot see anything giving me that kind of information from the TM1.Server.Memory Loggers - unless I'm missing something, or I simply don't have a way to test the hypothesis?

Therefore, I also wanted to know if there was a way I could simulate and test this by having a new allocation use an existing 'gap' rather than requesting more memory from the OS - maybe with a sample like GO_New_Stores and some TI, or even some Java (I had a look at that and I can create simple programs)? Is it possible for me to create, say, a 100MB request and observe the model's memory usage increase by ~100MB; then do whatever is necessary for TM1 to consider that memory garbage; then make another, similar request, but this time for only 50MB, and observe that the memory reported by Windows Task Manager does not increase by ~50MB - because we know the 50MB request has just been slotted neatly into the 100MB garbage gap?

I hope this thread is OK because I do get the fact that it's a bit theoretical in nature...

Re: Understanding TM1 memory usage a bit more

Posted: Tue Oct 04, 2016 10:52 am
by Steve Rowe
Hi Hippo,
You're probably going into far more detail than you'll ever need (or at least I've ever needed).

I think you are basically asking, "How do I get TM1 to consume memory, then release it to garbage, then use that memory again, so I can monitor the impact?"

I'm assuming (dangerous, I know...) that you know you can monitor the value for RAM in garbage by having the TM1 server's performance monitor turned on and looking in the cube }StatsForServer? Turn it on by selecting the option on the right-click menu.
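If you want to track it over time rather than eyeballing the cube, a minimal TI sketch (Prolog tab) along these lines would log the figures to a file. I'm assuming the usual layout of }StatsForServer here (}TimeIntervals by }StatsStatsForServer, a 'LATEST' interval and 'Memory Used' / 'Memory in Garbage' measures), so check the element names in your version first:

# Prolog of a tiny TI process: log the current server memory stats to a file.
# Assumes Performance Monitor is running and that the element names below
# ('LATEST', 'Memory Used', 'Memory in Garbage') match your TM1 version.
nMemUsed = CellGetN('}StatsForServer', 'LATEST', 'Memory Used');
nMemGarbage = CellGetN('}StatsForServer', 'LATEST', 'Memory in Garbage');
ASCIIOutput('memory_stats.txt', NumberToString(nMemUsed), NumberToString(nMemGarbage));

Run it (or schedule it as a chore) whenever you want a snapshot of the two numbers.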

Anyway, to get TM1 to consume memory you need to make it work post start-up. A simple way would be to export a large view using a TI process: the view gets built in RAM, the data is not retained afterwards, and the RAM is not released back to the OS. This should work because a TI view export does not use stargate views; if you do a large slice instead, you may find the memory is not released into garbage, since TM1 will retain the data used to generate the view. (I'm simplifying for brevity.)
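For example, something like this rough, untested sketch - 'Sales' is a placeholder cube name, and the process needs to have been created with a cube-view data source so the value variable exists on the Variables tab:

# Prolog: build a throwaway view over the whole cube and point the data source at it.
sCube = 'Sales';              # placeholder - use any reasonably large cube you have
sView = 'zMemoryTestExport';
IF(ViewExists(sCube, sView) = 1);
  ViewDestroy(sCube, sView);
ENDIF;
ViewCreate(sCube, sView);
ViewExtractSkipZeroesSet(sCube, sView, 1);
ViewExtractSkipCalcsSet(sCube, sView, 1);
ViewExtractSkipRuleValuesSet(sCube, sView, 0);
DatasourceType = 'VIEW';
DatasourceNameForServer = sCube;
DatasourceCubeview = sView;
# Data tab: just write each record out, e.g.
#   ASCIIOutput('view_export.txt', NumberToString(nValue));
# where nValue is whatever you called the value variable.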

Another way would be to unload a large cube (off the right-click menu).

HTH

Re: Understanding TM1 memory usage a bit more

Posted: Tue Oct 04, 2016 10:55 am
by qml
I get where you're coming from, and knowing is usually (but probably not always) much better than not knowing. However, I have to warn you that it is quite hard in practice to figure out the algorithms of TM1's memory management via the black-box approach (i.e. generating some inputs and examining the outputs). The TM1 Server definitely does not manage memory in the most conservative way - often priority is put on performance. There are certain actions that will always result in new memory being requested from the OS, even if there is garbage memory available for reuse. For other actions the server will happily use garbage memory first. You are also almost guaranteed to have some memory leaks lurking in the code - some are constantly patched while others are being added. It is a very complex subject and, to make things worse, different versions of TM1 will display different behaviour, so what you learn in one version is not guaranteed to apply to other versions.

If, considering the above caveats, you still want to experiment and expand your own knowledge (and, by extension, our collective knowledge), then by all means do so. Maybe someone will suggest good loggers to use, or maybe you can find one by using this method.

Re: Understanding TM1 memory usage a bit more

Posted: Tue Oct 04, 2016 10:57 am
by Hippogriff
Thank you both... I'm going to spend some time working on this, for sure. I will report back anything I think I find out that is useful.

Re: Understanding TM1 memory usage a bit more

Posted: Tue Oct 04, 2016 11:03 am
by Hippogriff
qml wrote:There are certain actions that will always result in new memory being requested from the OS, even if there is garbage memory available for reuse.
This was kind of another crux behind my query too... I'm guessing you don't mean just by virtue of size, but possibly by type? So, if there is a 500MB 'gap' in the memory allocated to the tm1sd.exe process and the next action you perform would result in an allocation of 250MB, you'd expect it to go in that gap... but you're implying it might not. I have no understanding of what might and what might not.

I turn the MetaLogger Logger on as a matter of course now... I always thought it would be nice if it echoed out any in-flight changes to the tm1s-log.properties too.

Re: Understanding TM1 memory usage a bit more

Posted: Tue Oct 04, 2016 1:01 pm
by lotsaram
If you really want to do this as an experiment, here's an approach that *should* work (there's a rough TI sketch of the mechanical steps after the list).
- create a largish cube. To get to 100 MB of memory at about 16-32 bytes/cell (or more if the dimensions aren't well ordered) you're going to need to load about 4 million cells
- in the measure dimension have 2 measures, "input" and "rule", with the rule ['rule'] = N: ['input']; and the feeder ['input'] => ['rule'];
- turn on performance monitor
- load your 4M cells to intersections of the input measure
- calculate the topmost consolidated cell on all dimensions of the "rule" measure
- this should cause a large calculation cache in the cube, confirm by looking in }StatsByCube
- check in }StatsForServer for Memory in Garbage
- clear the calculation cache by changing some random value of the input measure
- load data again to the input measure, but this time only half as much, and calculate the top-level view again
- check again in }StatsForServer. Hopefully (and theoretically) total memory will have stayed the same since the last check, but memory in garbage will have decreased.
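For the mechanical steps above, something along these lines in a TI Prolog should build the test cube and load the input cells. It's an untested sketch: the cube and dimension names are made up, the rule and feeder still have to be attached by hand afterwards (rules can't easily be written from TI), and the }StatsForServer element names are from memory, so verify them first.

# Prolog: build a 2000 x 2000 x measures test cube (~4 million leaf cells)
# and fill the 'input' measure with random values.
sDim1 = 'zMemTest Dim1';
sDim2 = 'zMemTest Dim2';
sMeas = 'zMemTest Measure';
sCube = 'zMemTest';

DimensionCreate(sDim1);
DimensionCreate(sDim2);
DimensionCreate(sMeas);
i = 1;
WHILE(i <= 2000);
  DimensionElementInsert(sDim1, '', 'D1 ' | NumberToString(i), 'N');
  DimensionElementInsert(sDim2, '', 'D2 ' | NumberToString(i), 'N');
  i = i + 1;
END;
DimensionElementInsert(sMeas, '', 'input', 'N');
DimensionElementInsert(sMeas, '', 'rule', 'N');

CubeCreate(sCube, sDim1, sDim2, sMeas);
CubeSetLogChanges(sCube, 0);   # keep the transaction log out of the picture

i = 1;
WHILE(i <= 2000);
  j = 1;
  WHILE(j <= 2000);
    CellPutN(RAND() * 100, sCube, 'D1 ' | NumberToString(i), 'D2 ' | NumberToString(j), 'input');
    j = j + 1;
  END;
  i = i + 1;
END;

# The rule and feeder from the steps above still have to be attached to the cube manually:
#   ['rule'] = N: ['input'];
#   ['input'] => ['rule'];

# Snapshot the server stats (element names may differ by version):
ASCIIOutput('zMemTest_stats.txt', NumberToString(CellGetN('}StatsForServer', 'LATEST', 'Memory Used')), NumberToString(CellGetN('}StatsForServer', 'LATEST', 'Memory in Garbage')));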

Re: Understanding TM1 memory usage a bit more

Posted: Tue Oct 04, 2016 1:10 pm
by Hippogriff
I'm going to need to digest that, but thank you for taking the time. It doesn't have to be 100MB... 1MB (or anything) would do, just as long as I could reliably observe what's going on - that would fulfill my experimental desire just fine.

Re: Understanding TM1 memory usage a bit more

Posted: Tue Apr 12, 2022 4:36 pm
by Emixam
Hello,

Is there an easy formula to understand TM1 memory usage?

I thought it should be something like:
Memory Used (}StatsForServer) = Total Memory Used (}StatsByCube) + Memory in Garbage (}StatsForServer)

Am I missing something?

Re: Understanding TM1 memory usage a bit more

Posted: Tue Apr 12, 2022 6:21 pm
by declanr
Emixam wrote: Tue Apr 12, 2022 4:36 pm Hello,

Is there an easy formula to understand TM1 memory usage?

I thought it should be something like:
Memory Used (}StatsForServer) = Total Memory Used (}StatsByCube) + Memory in Garbage (}StatsForServer)

Am I missing something?
I haven't tried it, but I doubt you would ever manage to match the value you calculate precisely to what is reported in resource monitor or similar server-side tools. That being said, I expect with some effort you could probably get pretty close.

In terms of things you could be missing, dimensions can take up a lot of memory (even ones with no attributes) - I am looking in Architect's Properties pane at a dimension that has 11.8m elements and apparently uses approximately 2.2 GB of memory.
Admittedly, most applications won't have particularly large dimensions in terms of element numbers - but even with a large number of smaller dimensions, it could start making a noticeable difference.

Oh - and subsets use memory as well. I assume that the memory used by subsets is not included in the reported memory used by the dimension, but I also wouldn't be surprised if it was. These will mostly be in the KB or low MB range, but again a lot of subsets could add up to a not-insignificant memory requirement.
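If you do want to see how close the formula gets, a rough Prolog sketch like the one below would sum 'Total Memory Used' over }StatsByCube and line it up against the server-level figures. I haven't run it, and the dimension order and element names are from memory, so check them against your own control cubes first.

# Prolog sketch: add up 'Total Memory Used' across }StatsByCube and compare it
# with the server-level numbers. Dimension order and element names are from
# memory - verify them against your own control cubes before trusting this.
nCubeTotal = 0;
i = 1;
WHILE(i <= DIMSIZ('}PerfCubes'));
  sCube = DIMNM('}PerfCubes', i);
  IF(DTYPE('}PerfCubes', sCube) @= 'N');   # skip consolidations, if any
    nCubeTotal = nCubeTotal + CellGetN('}StatsByCube', sCube, 'LATEST', 'Total Memory Used');
  ENDIF;
  i = i + 1;
END;
nServerUsed = CellGetN('}StatsForServer', 'LATEST', 'Memory Used');
nServerGarbage = CellGetN('}StatsForServer', 'LATEST', 'Memory in Garbage');
# Whatever is left over (nServerUsed - nCubeTotal - nServerGarbage) is roughly
# what dimensions, subsets, views, security objects etc. are consuming.
ASCIIOutput('memory_reconciliation.txt', NumberToString(nServerUsed), NumberToString(nCubeTotal), NumberToString(nServerGarbage));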