Memory for Feeders in Large Sparse Cubes
- mce
- Community Contributor
- Posts: 352
- Joined: Tue Jul 20, 2010 5:01 pm
- OLAP Product: Cognos TM1
- Version: Planning Analytics Local 2.0.x
- Excel Version: 2013 2016
- Location: Istanbul, Turkey
Memory for Feeders in Large Sparse Cubes
Hi,
I have a problem with the memory allocated to feeders.
Here is some info:
- I have a very sparse cube with 14 dimensions.
- The total number of leaf-level elements across all of the cube's dimensions is more than 300K.
- The memory per stored input value is around 40 bytes (the total number of stored numeric data records is around 120M).
- I added some feeders where the source and the target of the feeder are in the same cube (the total number of fed cells is around 360M).
The problem is that each feeder (one feeder per fed cell) uses around 35 bytes of memory, which is too much.
This makes it too expensive to use rules and feeders in this cube.
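To put those figures side by side (rough arithmetic from the numbers above):
- stored data: 120M records x ~40 bytes ≈ 4.8 GB
- feeders: 360M fed cells x ~35 bytes ≈ 12.6 GB
So the feeder flags take roughly 2.6 times the memory of the data they relate to.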
I am surprised to see feeders using so much memory in this cube. Here is my explanation of the situation:
I assume a feeder needs to know the coordinates of both the source and the target cells, and hence needs to use memory for both.
A data record in a cube needs to know only one coordinate (for the cell) and the value of the cell.
Having such a sparse cube with so many dimensions makes it too expensive to store the coordinates.
Therefore the memory for a feeder becomes relatively more expensive, compared to the memory for data, in such a sparse cube than it would be in a smaller and denser cube.
I am not sure whether this explanation is right, or whether something is going wrong with this cube.
But I would appreciate any comment or help from experts on this.
Regards,
Last edited by mce on Sun Sep 18, 2011 9:41 pm, edited 3 times in total.
- tomok
- MVP
- Posts: 2836
- Joined: Tue Feb 16, 2010 2:39 pm
- OLAP Product: TM1, Palo
- Version: Beginning of time thru 10.2
- Excel Version: 2003-2007-2010-2013
- Location: Atlanta, GA
- Contact:
Re: Memory for Feeders in Large Sparse Cubes
mce wrote: Having such a sparse cube with so many dimensions makes it too expensive to store the coordinates. Therefore the memory for a feeder becomes relatively more expensive, compared to the memory for data, in such a sparse cube than it would be in a smaller and denser cube.
That's not even close to being correct. The size of the dimensions has nothing to do with how much memory a feeder takes, unless of course you are overfeeding. If your feeders are taking up this much memory then that is exactly what is happening. I recommend you look at your feeder logic and make sure you aren't feeding a bunch of cells that don't need to be fed.
- mce
- Community Contributor
- Posts: 352
- Joined: Tue Jul 20, 2010 5:01 pm
- OLAP Product: Cognos TM1
- Version: Planning Analytics Local 2.0.x
- Excel Version: 2013 2016
- Location: Istanbul, Turkey
Re: Memory for Feeders in Large Sparse Cubes
tomok wrote: That's not even close to being correct. The size of the dimensions has nothing to do with how much memory a feeder takes, unless of course you are overfeeding. If your feeders are taking up this much memory then that is exactly what is happening. I recommend you look at your feeder logic and make sure you aren't feeding a bunch of cells that don't need to be fed.
Actually this one is not close to being correct. My feeder is of the simplest possible type, which is far from overfeeding. My calculation is just a simple currency conversion, where every leaf-level value in the base currency needs to feed the corresponding value in the target currency.
Moreover, I am not talking here about having too many feeders. The number of feeders is correct; the problem is the memory that is used per feeder.
- tomok
- MVP
- Posts: 2836
- Joined: Tue Feb 16, 2010 2:39 pm
- OLAP Product: TM1, Palo
- Version: Beginning of time thru 10.2
- Excel Version: 2003-2007-2010-2013
- Location: Atlanta, GA
- Contact:
Re: Memory for Feeders in Large Sparse Cubes
mce wrote: Actually this one is not close to being correct. The number of feeders is correct; the problem is the memory that is used per feeder.
So, your assertion is that feeding 1,000 actual intersections in a cube that has 10,000 potential intersections uses more memory than feeding 1,000 actual intersections in a cube with 100,000 potential intersections??? That's just not correct.
- David Usherwood
- Site Admin
- Posts: 1458
- Joined: Wed May 28, 2008 9:09 am
Re: Memory for Feeders in Large Sparse Cubes
@mce, I think it would be a good idea if you posted the relevant rules and feeders so we can give you better feedback.
- paulsimon
- MVP
- Posts: 808
- Joined: Sat Sep 03, 2011 11:10 pm
- OLAP Product: TM1
- Version: PA 2.0.5
- Excel Version: 2016
- Contact:
Re: Memory for Feeders in Large Sparse Cubes
mce wrote: My calculation is just a simple currency conversion, where every leaf-level value in the base currency needs to feed the corresponding value in the target currency.
Perhaps it is just a language barrier, but I am not clear as to whether you have several input currencies and you want to feed these to a single reporting currency, or whether you are feeding from a single input currency to multiple reporting currencies.
If you can post the rules and feeders, and a list of dimensions with number of elements, it would be easier to help.
Regards
Paul Simon
- Martin Ryan
- Site Admin
- Posts: 1989
- Joined: Sat May 10, 2008 9:08 am
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2010
- Location: Wellington, New Zealand
- Contact:
Re: Memory for Feeders in Large Sparse Cubes
tomok wrote: So, your assertion is that feeding 1,000 actual intersections in a cube that has 10,000 potential intersections uses more memory than feeding 1,000 actual intersections in a cube with 100,000 potential intersections??? That's just not correct.
You sound very confident - have you run a couple of tests? I haven't, so I'm only speculating, but I could imagine that being the case. It depends on how those 10,000/100,000 intersections are made up. I'd have thought that the more important thing is not the number of potential intersections but the number of dimensions. I would expect that a value in a cube that has 256 dimensions would take up more disk space than a value in a cube with 16 dimensions. Whether that translates to space taken in RAM, and whether that can then be extrapolated to rules and feeders, may be a different matter, but it might be worth investigating.
Martin
Please do not send technical questions via private message or email. Post them in the forum where you'll probably get a faster reply, and everyone can benefit from the answers.
Jodi Ryan Family Lawyer
- mce
- Community Contributor
- Posts: 352
- Joined: Tue Jul 20, 2010 5:01 pm
- OLAP Product: Cognos TM1
- Version: Planning Analytics Local 2.0.x
- Excel Version: 2013 2016
- Location: Istanbul, Turkey
Re: Memory for Feeders in Large Sparse Cubes
tomok wrote: So, your assertion is that feeding 1,000 actual intersections in a cube that has 10,000 potential intersections uses more memory than feeding 1,000 actual intersections in a cube with 100,000 potential intersections??? That's just not correct.
Tomok, obviously there might be a language barrier. I do not know how you derived this assertion from my notes above.
paulsimon wrote: Perhaps it is just a language barrier, but I am not clear as to whether you have several input currencies and you want to feed these to a single reporting currency, or whether you are feeding from a single input currency to multiple reporting currencies.
Actually it does not matter; for my problem statement above it does not even matter what type of rule or calculation I have. I was just trying to convince Tomok that this problem has nothing to do with overfeeding, and my rule in this cube does not have an overfeeding problem.
While it is not relevant to my issue definition in the first post, just to answer your question: in this example, my data is loaded into the cube in one reporting currency and I calculate numbers for the other reporting currencies, local currencies and transaction currencies based on rates in a lookup cube. So I need to feed all values from the base currency to the other reporting currencies and currency types (local, transactional, etc.). I do not have any overfeeding problem.
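Just for illustration, a minimal sketch of the kind of rule and feeder I am describing (the cube, dimension and element names below are made up, not my real model):

SKIPCHECK;
# convert the loaded base reporting currency (say 'USD') into another reporting currency
['EUR'] = N: ['USD'] * DB('FX Rates', !Year, !Month, 'USD', 'EUR', 'Rate');
FEEDERS;
# one feeder flag per stored base-currency value, marking the corresponding target-currency cell
['USD'] => ['EUR'];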
David Usherwood wrote: @mce, I think it would be a good idea if you posted the relevant rules and feeders so we can give you better feedback.
David, my problem is not specific to any rule in this cube. It has to do with the memory usage per fed cell (memory used by feeders divided by the number of fed cells).
- Gregor Koch
- MVP
- Posts: 263
- Joined: Fri Jun 27, 2008 12:15 am
- OLAP Product: Cognos TM1, CX
- Version: 9.0 and up
- Excel Version: 2007 and up
Re: Memory for Feeders in Large Sparse Cubes
Somehow my post went into oblivion, and in the meantime mce has clarified what my guess was anyway: the question is about the difference in memory usage PER single feeder.
Further, I think mce is looking at the }StatsByCube cube.
Either the measures
- Number of Fed Cells (OP 9.4.1: "This is the number of cells in the cube targeted by feeders.")
- Memory Used for Feeders ("This metric measures the amount of memory used to feed cells through TM1 rules.")
in }StatsByCube are not well understood (myself included) or not very well documented, or }StatsByCube doesn't show the reality and is not reliable.
I tend to believe the latter, as I have some cubes in which there are "Fed Cells" but apparently no memory is used for them. I could understand this, sort of, if the feeders came from another cube, but in the cubes I mean this is not the case.
Because of this I couldn't really find a correlation between the number of dimensions and the memory used per feeder. At least not without creating several test cubes with different numbers of dimensions but the same recorded "Number of Fed Cells".
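For reference, the per-fed-cell figure being discussed is just the ratio of those two measures. A sketch of how it could be read, assuming the performance monitor is running, the usual }StatsByCube layout (}PerfCubes, }TimeIntervals, }StatsStatsByCube) and the 'LATEST' time interval - the 'Sales' cube name and the target element are hypothetical:

# rough bytes per fed cell for a cube named 'Sales', from the latest performance sample
['Bytes per Fed Cell'] = N:
    DB('}StatsByCube', 'Sales', 'LATEST', 'Memory Used for Feeders')
    \ DB('}StatsByCube', 'Sales', 'LATEST', 'Number of Fed Cells');
# the \ operator returns 0 instead of an error when 'Number of Fed Cells' is zero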
- mce
- Community Contributor
- Posts: 352
- Joined: Tue Jul 20, 2010 5:01 pm
- OLAP Product: Cognos TM1
- Version: Planning Analytics Local 2.0.x
- Excel Version: 2013 2016
- Location: Istanbul, Turkey
Re: Memory for Feeders in Large Sparse Cubes
Gregor Koch wrote: Either the measures Number of Fed Cells and Memory Used for Feeders in }StatsByCube are not well understood (myself included) or not very well documented, or }StatsByCube doesn't show the reality and is not reliable. I tend to believe the latter, as I have some cubes in which there are "Fed Cells" but apparently no memory is used for them.
I have also observed that sometimes }StatsByCube does not show reality, especially for feeders, but restarting the TM1 service seems to make it work OK.
Gregor Koch wrote: Because of this I couldn't really find a correlation between the number of dimensions and the memory used per feeder. At least not without creating several test cubes with different numbers of dimensions but the same recorded "Number of Fed Cells".
Here is my non-empirical explanation for this:
A feeder is a pointer from a source to a target, isn't it? Hence it needs to know the coordinates of both the source cell and the target cell. Coordinates are defined by one element from each dimension, so the more dimensions there are, the more elements it takes to define a coordinate. Therefore more dimensions may potentially mean more memory per fed cell.
- tomok
- MVP
- Posts: 2836
- Joined: Tue Feb 16, 2010 2:39 pm
- OLAP Product: TM1, Palo
- Version: Beginning of time thru 10.2
- Excel Version: 2003-2007-2010-2013
- Location: Atlanta, GA
- Contact:
Re: Memory for Feeders in Large Sparse Cubes
Martin Ryan wrote: You sound very confident - have you run a couple of tests?
No, I haven't run any tests. I am going off what I was told by an Applix developer a number of years ago at an annual users conference. We had a very lengthy discussion of feeders one afternoon. He told me that each fed cell takes up one bit of memory for the pointer. Granted, this was a number of years ago, but I don't believe they have changed the underlying scheme for how feeders work.
- tomok
- MVP
- Posts: 2836
- Joined: Tue Feb 16, 2010 2:39 pm
- OLAP Product: TM1, Palo
- Version: Beginning of time thru 10.2
- Excel Version: 2003-2007-2010-2013
- Location: Atlanta, GA
- Contact:
Re: Memory for Feeders in Large Sparse Cubes
mce wrote: A feeder is a pointer from a source to a target, isn't it?
A feeder is not a pointer between a source and a target cell; it's a one-bit marker in a cube intersection that tells the TM1 consolidation algorithm not to skip it when performing a consolidation.
- mce
- Community Contributor
- Posts: 352
- Joined: Tue Jul 20, 2010 5:01 pm
- OLAP Product: Cognos TM1
- Version: Planning Analytics Local 2.0.x
- Excel Version: 2013 2016
- Location: Istanbul, Turkey
Re: Memory for Feeders in Large Sparse Cubes
tomok wrote: A feeder is not a pointer between a source and a target cell; it's a one-bit marker in a cube intersection that tells the TM1 consolidation algorithm not to skip it when performing a consolidation.
I heard the same from an IBMer. But this does not mean that only one bit of memory is used per fed cell:
- Consider a leaf-level cell combination in a cube that does not store any value. Does TM1 hold any definition of this cell in the cube's memory? No. But if you add a feeder targeting this cell, TM1 needs to acknowledge at least the coordinates of this cell, and hence uses memory for the coordinates before it uses memory for the one-bit marker.
- I do not know if you check the stats cubes at all for your feeders, but if you do, you will see that the memory allocated per fed cell is never as little as one bit (see the rough arithmetic below).
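As rough arithmetic on my own figures above: 360M fed cells at literally one bit each would be only about 45 MB, whereas 360M fed cells at the ~35 bytes per fed cell reported by the stats cube is roughly 12.6 GB, i.e. nearly 300 times more.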
- Gabor
- MVP
- Posts: 170
- Joined: Fri Dec 10, 2010 4:07 pm
- OLAP Product: TM1
- Version: [2.x ...] 11.x / PAL 2.0.9
- Excel Version: Excel 2013-2016
- Location: Germany
Re: Memory for Feeders in Large Sparse Cubes
Please recheck the dimension order; it should follow the sequence: first the sparse dimensions from smallest to largest, then the dense dimensions from smallest to largest.
I have seen memory explosions due to feeders similar to what is known from populating cubes with physical values. I could not detect this with physical values alone, and I could not use the automatic feature for dimension reordering; only trial and error helped.
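As a purely hypothetical illustration of that ordering (invented dimension names and sizes): a cube with Version (5 elements, sparse), Customer (50,000 elements, sparse), Month (12 elements, dense) and Account (30 elements, dense) would be ordered Version, Customer, Month, Account - sparse dimensions first from smallest to largest, then dense dimensions from smallest to largest.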
- qml
- MVP
- Posts: 1096
- Joined: Mon Feb 01, 2010 1:01 pm
- OLAP Product: TM1 / Planning Analytics
- Version: 2.0.9 and all previous
- Excel Version: 2007 - 2016
- Location: London, UK, Europe
Re: Memory for Feeders in Large Sparse Cubes
Martin Ryan wrote: I would expect that a value in a cube that has 256 dimensions would take up more disk space than a value in a cube with 16 dimensions. Whether that translates to space taken in RAM, and whether that can then be extrapolated to rules and feeders, may be a different matter, but it might be worth investigating.
The number of dimensions has a big impact on the size of a cube in memory, as shown by jstrygner in this thread: http://www.tm1forum.com/viewtopic.php?f ... 933#p21933.
I would not rule out that it has a similar effect on feeders, as from my experience I can say that the memory used up by feeders is always much more than one bit per fed cell. I would be inclined to think that every feeder flag has to be addressed in the in-memory data structure that stores it, and this addressing unavoidably takes up space. One way of thinking about what feeders are is: one-bit values stored in a shadow cube.
I think running some tests is the only way to see for sure.
Kamil Arendt
- lotsaram
- MVP
- Posts: 3703
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Memory for Feeders in Large Sparse Cubes
Tomok is correct: a feeder is a marker in the FED cell; it does not store the coordinates of the cell that it was fed from, nor does it need to, as the only thing that matters is whether the cell is fed or not. However, it is correct that the number of dimensions does have an impact on the memory used to store the feeder (but the order of dimensions in the cube is generally a far more important factor).
Gabor wrote: Please recheck the dimension order; it should follow the sequence: first the sparse dimensions from smallest to largest, then the dense dimensions from smallest to largest. I have seen memory explosions due to feeders similar to what is known from populating cubes with physical values. I could not detect this with physical values alone, and I could not use the automatic feature for dimension reordering; only trial and error helped.
Here I agree 100% with Gabor - optimal dimension ordering is critical to the economy (or the opposite) of both data and feeder storage. This is the reason why server sizing is an inexact science and only a (quite broad) range of bytes per cell can be given: the memory footprint of a cube is hugely dependent on the dimension order. I also agree about not using the "suggested optimal order" feature; I don't know what the algorithm behind it is, but I do know that if you know what you are doing you can beat the suggested order more than 9 times out of 10, and by quite a margin. In an x64 model the guideline for memory consumption per feeder flag is 8 bytes/cell to 40 bytes/cell. If you are at the upper limit then this suggests your dimension order is far from optimal and you could probably be using 2x or 3x less memory per feeder.
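As rough arithmetic against the figures in the first post: 360M fed cells at the reported ~35 bytes/cell is about 12.6 GB, while the same fed cells at the 8 bytes/cell end of the guideline would be under 3 GB, so there is a lot of headroom between the two ends of that range.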
qml wrote: The number of dimensions has a big impact on the size of a cube in memory, as shown by jstrygner in this thread: http://www.tm1forum.com/viewtopic.php?f ... 933#p21933.
It is definitely true that the number of dimensions has an impact on the memory required per cell, and jstrygner's test shows this. It makes sense too: if there are more dimensions then it stands to reason that the address pointer or cell vector identifier must be larger and therefore consume more memory. What I would dispute is how significant this is. Yes, it will take more memory per cell to store data or feeders in a cube with 10 dimensions versus a cube with 8 identical dimensions and 2 fewer, but the 10-20% additional memory may well be worth it for the utility provided by the extra dimensions. Also, relative to other OLAP tools, TM1 is incredibly efficient at accommodating additional dimensions, as the increase in memory footprint is far, far less than the relative increase in sparsity from adding the extra dimensions. (Which is the point I was trying to make in that thread. Therefore I would see additional dimensions as "cheap" and not as "expensive" in terms of additional memory requirement.)
Coming back to the other point, dimension order is the much more important factor in cube memory consumption versus just the actual count of dimensions. If you fix this I think you'll fix your problem.
(I'd like to know if jstrygner optimized the dimension order in the test he did; if not, it would be interesting to see the impact of additional dimensions on memory per cell in optimized vs. non-optimized cubes.)
- mce
- Community Contributor
- Posts: 352
- Joined: Tue Jul 20, 2010 5:01 pm
- OLAP Product: Cognos TM1
- Version: Planning Analytics Local 2.0.x
- Excel Version: 2013 2016
- Location: Istanbul, Turkey
Re: Memory for Feeders in Large Sparse Cubes
Thanks for the comments.
Here are a few notes:
- The cube I mentioned in the first post is already optimized for dimension order, and the figures given above are for the optimized cube.
- I added some additional info to the first post: the total number of data records loaded into that sample cube, which is real data.
- I checked the memory consumption per fed cell and observed that optimizing the cube's memory consumption for stored numeric cells does not necessarily optimize the memory consumption for feeders. In my example, reducing the memory consumption for stored numeric cells to one third of the original did not make any considerable improvement in the memory consumption for feeders.
lotsaram wrote: In an x64 model the guideline for memory consumption per feeder flag is 8 bytes/cell to 40 bytes/cell. If you are at the upper limit then this suggests your dimension order is far from optimal and you could probably be using 2x or 3x less memory per feeder.
I think such a statement cannot be made regardless of the cube's size in terms of number of dimensions and the sparsity in the cube. Wrong dimension order is not the source of all memory issues in TM1. Please do not assume that the dimension order is not optimized whenever there is a sizing issue.
Not scientifically proven, but as per my observations in the real data cubes that I have dealt with, I feel that memory consumption per stored numeric cell in a dimension-order-optimized cube is impacted by the following main factors:
- the number of dimensions in the cube;
- how sparse the cube data is, which can be measured as the total number of possible leaf-level cells in the cube divided by the total number of records loaded into the cube (the bigger the ratio, the higher the sparsity - see the small worked example at the end of this post);
- in addition to the above, the sparsity within each dimension and its place in the cube order should also play a role in the memory consumption per stored numeric cell.
However, I do not have enough observations to list similar factors with confidence for the memory consumption per fed cell. It would be good to hear more on this from feeder experts.
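As a small worked example of that sparsity ratio (hypothetical dimension sizes, not my real cube): a cube whose dimensions have 5 x 12 x 100 x 10,000 leaf elements has 60,000,000 possible leaf-level cells; if 300,000 records are loaded, the ratio is 200, i.e. only about 0.5% of the possible leaf cells are populated.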
- lotsaram
- MVP
- Posts: 3703
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Memory for Feeders in Large Sparse Cubes
If you have a particular need to calculate many reporting currencies from a given input currency, then using consolidations in the currency dimension and doing away with feeders for the currency calculation altogether can be a much more effective solution than using feeders and conventional N-level rules. However, this does mean using ConsolidateChildren on the month dimension, which does have some performance implications, but they are usually not that severe if you don't need to do many year-bridging type calculations.
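As a minimal sketch of the ConsolidateChildren part only (the 'Total Year' consolidation and the 'Month' dimension name are placeholders, not a complete currency design):

# make the year total roll up its month children even where the month values are
# rule-calculated and nothing feeds them
['Total Year'] = ConsolidateChildren('Month');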
Can be a good option, have you tried this?
- mce
- Community Contributor
- Posts: 352
- Joined: Tue Jul 20, 2010 5:01 pm
- OLAP Product: Cognos TM1
- Version: Planning Analytics Local 2.0.x
- Excel Version: 2013 2016
- Location: Istanbul, Turkey
Re: Memory for Feeders in Large Sparse Cubes
lotsaram wrote: If you have a particular need to calculate many reporting currencies from a given input currency, then using consolidations in the currency dimension and doing away with feeders for the currency calculation altogether can be a much more effective solution than using feeders and conventional N-level rules. Can be a good option, have you tried this?
I have already moved away from feeders due to the issue mentioned in this post, and have already implemented the ConsolidateChildren option, although there are some disadvantages of ConsolidateChildren that we know about. However, I wanted to continue this investigation around feeders to find out if I missed anything, and the issue that I mentioned in this post is a more generic issue with feeders, not necessarily limited to those that can be replaced with ConsolidateChildren.
- jstrygner
- MVP
- Posts: 195
- Joined: Wed Jul 22, 2009 10:35 pm
- OLAP Product: TM1
- Version: 9.5.2 FP3
- Excel Version: 2010
Re: Memory for Feeders in Large Sparse Cubes
lotsaram wrote: (I'd like to know if jstrygner optimized the dimension order in the test he did; if not, it would be interesting to see the impact of additional dimensions on memory per cell in optimized vs. non-optimized cubes.)
Hi, I was not on the forum for a while.
Responding to your question: I did NOT optimize the dimension order, which I can now see would have been a good idea. Nice discussion, guys

Unfortunately I do not have that model and environment any more, so checking it is not just a matter of a free 15 minutes for me.