TM1 Virtual Memory discussion

John Hammond
Community Contributor
Posts: 300
Joined: Mon Mar 23, 2009 10:50 am
OLAP Product: PAW/PAX 2.0.72 Perspectives
Version: TM1 Server 11.8.003
Excel Version: 365 and 2016
Location: South London

TM1 Virtual Memory discussion

Post by John Hammond »

I understand that TM1 should ideally have enough RAM to hold the entire model. The simple solution is to have no swap file at all, which ensures that none of Windows' virtual memory swapping algorithms (whose complexities I don't pretend to understand) come into play.

However, in the real world TM1 has to co-exist with other applications, and the model itself may be bigger than the available RAM. The fallback is virtual memory. I appreciate that the answer may simply be to buy more RAM, but the discussion I want to start here is how TM1 interacts with virtual memory.

There are a couple of points I would share:

The critical Windows setting for VM seems to be whether you give priority to background services or to foreground programs. If you give foreground programs priority then services tend to get paged out, and vice versa if TM1 is running as a user process rather than as a service.

TM1, unlike a relational database, has no way of telling Windows which pages it should keep in its working set. (That said, a database's memory management is not really integrated with Windows either, but a database at least manages its own movement of data between RAM and disk, whereas TM1 attempts no memory management between the two at all.)



I have a question for those who may have tried really massive models:

Has anyone tried putting the swap file on a RAID 0 array, with multiple parallel striped reads and writes, as a cheap alternative to expanding memory? Certainly the cost of n HDDs and a controller might be considerably less than the equivalent RAM, but is RAM simply so much faster that this isn't worthwhile?

Another question:

Could IBM easily improve TM1's VM performance by telling Windows which pages to retain in memory and which can be swapped out? Does such an interface exist in Windows, or does it (as usual) exist but only for Microsoft's own programmers?
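From a quick look, Windows does at least document calls like SetProcessWorkingSetSize and VirtualLock that any application can use to influence or pin its working set, so the interface seems to exist; whether TM1 could sensibly use it is another matter. A rough ctypes sketch from Python, just to show the calls exist (the buffer and working-set sizes are invented for the example, and both calls can fail without the right privileges):

Code:

import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCurrentProcess.restype = ctypes.c_void_p
kernel32.SetProcessWorkingSetSize.argtypes = [
    ctypes.c_void_p, ctypes.c_size_t, ctypes.c_size_t]

# Hint to Windows about this process's working set
# (illustrative sizes only: 256 MB minimum, 1 GB maximum).
if not kernel32.SetProcessWorkingSetSize(
        kernel32.GetCurrentProcess(), 256 * 1024**2, 1024**3):
    raise ctypes.WinError(ctypes.get_last_error())

# Pin one specific buffer so the memory manager will not page it out.
buf = ctypes.create_string_buffer(64 * 1024**2)   # hypothetical 64 MB block
if not kernel32.VirtualLock(buf, ctypes.c_size_t(len(buf))):
    raise ctypes.WinError(ctypes.get_last_error())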
lotsaram
MVP
Posts: 3706
Joined: Fri Mar 13, 2009 11:14 am
OLAP Product: TableManager1
Version: PA 2.0.x
Excel Version: Office 365
Location: Switzerland

Re: TM1 Virtual Memory discussion

Post by lotsaram »

Hi John,

The point about TM1's architecture is that it has to be in memory. TM1 is cell based and not relational. Each cell in the model could potentially be dependent on any other cell in any other cube in the entire model. The only way to address this efficiently is in RAM. If there is insufficient physical RAM and parts of the model need to be swapped to disk then performance will suffer, and not just incrementally but by orders of magnitude.

I don't think there is any quick fix, but I agree it would be an improvement if a TM1 administrator could mark areas of the model as lower priority, so that they could be swapped out and held in a paging file (at present it is only possible to mark a cube as "load on demand", but once loaded it stays in RAM until unloaded). However, this would make TM1's internal engine and its management of calculation trees much more complicated, and server configuration and administration would become more complicated as well.

It might be a worthwhile improvement, but the counter-argument is simply "buy more RAM", so I don't expect this is a current development priority.

As models get larger, the real flaw in TM1's present architecture IMO is that there is no capacity to do incremental saves: if a single cell in a 100 GB cube is changed then the entire 100 GB cube must be written to disk. Clearly this is inefficient, and because the save is single-threaded it is a real bottleneck.
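Just to be clear about what I mean by "incremental", here is a toy sketch of the idea and nothing like how TM1 actually serialises cubes: track which blocks of the value store were touched since the last save and rewrite only those blocks. All names, the block size and the file layout are invented for the example.

Code:

import os
import struct

BLOCK = 4096   # invented block size, in values per block

class IncrementalStore:
    """Toy delta save: only blocks touched since the last save get rewritten."""

    def __init__(self, path, n_values):
        self.path = path
        self.values = [0.0] * n_values
        self.dirty_blocks = set()                # indices of changed blocks

    def set_cell(self, index, value):
        self.values[index] = value
        self.dirty_blocks.add(index // BLOCK)    # remember only the block

    def save(self):
        first_save = not os.path.exists(self.path)
        with open(self.path, "w+b" if first_save else "r+b") as f:
            if first_save:
                f.truncate(len(self.values) * 8)                      # pre-size once
                self.dirty_blocks = set(range(len(self.values) // BLOCK + 1))
            for block in sorted(self.dirty_blocks):                   # delta only
                start = block * BLOCK
                chunk = self.values[start:start + BLOCK]
                f.seek(start * 8)
                f.write(struct.pack(f"<{len(chunk)}d", *chunk))
        self.dirty_blocks.clear()

store = IncrementalStore("cube.bin", 1_000_000)   # hypothetical file and size
store.save()                                      # first save writes everything
store.set_cell(42, 99.5)
store.save()                                      # second save rewrites one block

With something like that, changing one cell would cost one block write instead of rewriting the whole cube, which is the gap I'm complaining about.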
LoadzaGrunt
Posts: 72
Joined: Tue May 26, 2009 2:23 am
Version: LoadzaVersions
Excel Version: LoadzaVersions

Re: TM1 Virtual Memory discussion

Post by LoadzaGrunt »

If there is insufficient physical RAM and parts of the model need to be swapped to disk then performance will suffer, and not just incrementally but by orders of magnitude.
Access to main memory is around 10^2 nanoseconds, while for magnetic disk it is around 10^7 nanoseconds - quite a difference. An interesting analogy I heard is that it is the difference between having money in your pocket and having money sent to you through the post.
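If you want to put rough numbers on your own hardware, a crude probe like the one below gives a feel for it (Python; the file name and sizes are invented, Python overhead and the OS file cache blur the result, and a genuinely cold read from a spinning disk is orders of magnitude worse again):

Code:

import mmap
import os
import random
import time

PROBES = 200_000
data = bytearray(os.urandom(1 << 20)) * 256        # 256 MB buffer held in RAM
idx = [random.randrange(len(data)) for _ in range(PROBES)]

# Random access to plain RAM.
t0 = time.perf_counter()
total = sum(data[i] for i in idx)
ram_ns = (time.perf_counter() - t0) / PROBES * 1e9

# Random access to a memory-mapped file, a crude stand-in for paged-out data.
path = "probe.bin"                                 # hypothetical scratch file
with open(path, "wb") as f:
    f.write(data)
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    t0 = time.perf_counter()
    total = sum(m[i] for i in idx)
    file_ns = (time.perf_counter() - t0) / PROBES * 1e9
os.remove(path)

print(f"RAM  : ~{ram_ns:.0f} ns per random access")
print(f"File : ~{file_ns:.0f} ns per random access (warm cache; cold disk is far slower)")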

Most of the time I have found that when you look at the real cost of RAM as a proportion of the total cost of ownership of your TM1 system, arguments against buying extra RAM become difficult to make.
John Hammond
Community Contributor
Posts: 300
Joined: Mon Mar 23, 2009 10:50 am
OLAP Product: PAW/PAX 2.0.72 Perspectives
Version: TM1 Server 11.8.003
Excel Version: 365 and 2016
Location: South London

Re: TM1 Virtual Memory discussion

Post by John Hammond »

Thanks for your replies, Loadza and Lotsa (am I detecting a pattern here?).
As models get larger, the real flaw in TM1's present architecture IMO is that there is no capacity to do incremental saves: if a single cell in a 100 GB cube is changed then the entire 100 GB cube must be written to disk. Clearly this is inefficient, and because the save is single-threaded it is a real bottleneck.
Hence the slowness of SaveDataAll in a VM-based system, although (ironically) it is a guaranteed way to ensure that your entire model is in RAM, provided sufficient RAM is available!
I don't think there is any quick fix, but I agree it would be an improvement if a TM1 administrator could mark areas of the model as lower priority, so that they could be swapped out and held in a paging file (at present it is only possible to mark a cube as "load on demand", but once loaded it stays in RAM until unloaded). However, this would make TM1's internal engine and its management of calculation trees much more complicated, and server configuration and administration would become more complicated as well.
The idea I have is that TM1, when it needs to recalc a higher-level element, would first test the 'component tree' to see whether any of the components have changed, and only then decide whether to recalculate.
My guess is that TM1 has to recalculate most of the time, since it cannot determine which elements have changed when calculated elements depend on the current time. E.g. if you have a TIME()- or DATE()-based calculation, you just don't know whether its value has changed significantly.

IBM would probably have to introduce something along the lines of feeders, where the programmer tells TM1 that an item might vary with time. I was also thinking about a STET command that frees up cells for input instead of calculation once the period changes, and how this would be handled by TM1; the only way I can see is to recalc/reconsolidate every time, which may be quite inefficient.
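Just to make the component-tree test I described above concrete, this is the sort of dirty-flag scheme I have in mind (a toy sketch, not TM1's internals; all names are invented): writes to a leaf mark its ancestors dirty, and a consolidation only recalculates on the next read if something beneath it actually changed.

Code:

class Cell:
    """Toy consolidation tree with dirty-flag invalidation (not TM1's internals)."""

    def __init__(self, value=0.0, children=None):
        self.children = children or []        # empty list => leaf cell
        self.parents = []
        self._value = value
        self._dirty = bool(self.children)     # consolidations start unevaluated
        for child in self.children:
            child.parents.append(self)

    def set(self, value):
        # Writing a leaf marks every ancestor dirty but triggers no recalculation.
        self._value = value
        self._invalidate_parents()

    def _invalidate_parents(self):
        for p in self.parents:
            if not p._dirty:                  # stop climbing once already dirty
                p._dirty = True
                p._invalidate_parents()

    def value(self):
        # A consolidation recalculates only if something beneath it has changed.
        if self._dirty:
            self._value = sum(c.value() for c in self.children)
            self._dirty = False
        return self._value

jan, feb, mar = Cell(10), Cell(20), Cell(30)
q1 = Cell(children=[jan, feb, mar])
print(q1.value())    # 60: first read computes and caches
feb.set(25)          # marks q1 dirty, no work done yet
print(q1.value())    # 65: recomputed lazily, jan and mar untouched

The time-based functions are exactly what break this, of course, because a TIME() or DATE() dependency can go stale without any cell being written.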
Access to main memory is around 10^2 nanoseconds and for magnetic disk it is around 10^7 nanoseconds - some difference. An interesting analogy that I heard was it is the difference between having money in your pocket and having money sent to you through the post.
Great analogy.

I have another question.

Do you, as a TM1 admin, disable the swap file to avoid the vagaries of VM memory management (Windows will swap pages out even when enough memory is available), or do you define a small swap file to allow a model a bit of 'overflow'? I would be interested to hear what people have their Windows virtual memory set to.
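For what it's worth, something like the sketch below (using the psutil package; the 10% threshold is arbitrary) makes it easy to watch how much of the page file is actually in use while experimenting:

Code:

import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"Physical RAM : {vm.total / 2**30:6.1f} GB, {vm.percent:.0f}% used")
print(f"Page file    : {sw.total / 2**30:6.1f} GB, {sw.percent:.0f}% used")

# Arbitrary threshold for the example: sustained paging on a TM1 box is
# usually a sign the model no longer fits comfortably in RAM.
if sw.percent > 10:
    print("Warning: significant paging - the TM1 model may not fit in RAM.")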
Steve Vincent
Site Admin
Posts: 1054
Joined: Mon May 12, 2008 8:33 am
OLAP Product: TM1
Version: 10.2.2 FP1
Excel Version: 2010
Location: UK

Re: TM1 Virtual Memory discussion

Post by Steve Vincent »

I don't disable the swap file because other things in Windows may need it. I keep a close eye on the RAM consumed by my models; I'm stuck in 32-bit land, so it's a key indicator for ensuring reliable running. My goal is always to keep within the RAM limits, which can mean either archiving data or stopping/limiting users' ability to run lots of scenarios. It has to be a trade-off between flexibility and reliability.
If this were a dictatorship, it would be a heck of a lot easier, just so long as I'm the dictator.
Production: Planning Analytics 64 bit 2.0.5, Windows 2016 Server. Excel 2016, IE11 for t'internet
Steve Vincent
Site Admin
Posts: 1054
Joined: Mon May 12, 2008 8:33 am
OLAP Product: TM1
Version: 10.2.2 FP1
Excel Version: 2010
Location: UK

Re: TM1 Virtual Memory discussion

Post by Steve Vincent »

<BUMP>

I'm having issues with certain parts of 9.5.2, and IBM have asked me to provide debug files from when my server crashes. I can replicate the issue every time (open a view and it sits there forever trying to recalc when a particular config parameter is set), but not the server crash itself. The service consumes as much RAM as it can, but the page file also keeps growing as the view tries to return values. I have reduced the page file to a maximum of 500MB (which keeps other programs happy), yet Task Manager is showing page file usage of up to 3GB.

I want the server to hit all the RAM and then crash so I can capture the debug report, but my production servers have lasted nearly a week on max RAM before something finally tipped them over the edge. Problem is, I have no idea what that something is, and the view itself takes 20 minutes before the client finally cries enough; it isn't bringing the server down though. Any ideas why the PF figure is reporting so high when I told Windows to use no more than 500MB? And how might I stop that, so as to get this server to fall over more easily?
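Or is the answer simply that the 'PF usage' counter in Task Manager is really the commit charge (all committed memory, whether it currently lives in RAM or in the page file) rather than actual page-file consumption, so it can legitimately run far past a 500MB page file as long as physical RAM covers the rest? A quick dump of the raw counters via the documented GlobalMemoryStatusEx call would show the distinction; a Python/ctypes sketch, where the "PageFile" fields really describe the commit limit rather than the file itself:

Code:

import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit = RAM + page file
        ("ullAvailPageFile", ctypes.c_ulonglong),   # remaining commit, not free page-file space
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

GB = 2**30
commit_used = (status.ullTotalPageFile - status.ullAvailPageFile) / GB
print(f"Physical RAM : {status.ullTotalPhys / GB:.1f} GB")
print(f"Commit limit : {status.ullTotalPageFile / GB:.1f} GB (RAM + page file)")
print(f"Commit charge: {commit_used:.1f} GB  <- roughly what Task Manager calls 'PF Usage'")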
If this were a dictatorship, it would be a heck of a lot easier, just so long as I'm the dictator.
Production: Planning Analytics 64 bit 2.0.5, Windows 2016 Server. Excel 2016, IE11 for t'internet