
over feeding

Posted: Wed Jul 08, 2009 7:41 pm
by wissew
I'm trying to determine the efficiency of the rules and feeders on a cube. I was not the builder or the architect of the monster. Without doing a deep dive into the rule file, I was hoping to look to }StatsByCube for some guidance on this. It shows the following data:
LATEST
Memory Used for Views 0
Number of Stored Views 0
Number of Stored Calculated Cells 3567
Number of Populated String Cells 0
Number of Populated Numeric Cells 3975442
Number of Fed Cells 86442485
Memory Used for Calculations 193280
Memory Used for Feeders 0
Memory Used for Input Data 1533871104
Total Memory Used 1534064384

My instinct tells me this is being way over-fed. For 3,567 stored calculated cells it is feeding 86+ million cells, a ratio of roughly 24,000 to 1. Is there a general rule of thumb as to the ratio of fed cells to calcs? Also, why would there be no memory used for feeders?

Any feedback would be greatly appreciated.

Re: over feeding

Posted: Wed Jul 08, 2009 8:16 pm
by Martin Ryan
Sadly another case of "how long is a piece of string". Generally fed cells shouldn't outnumber calculated cells by more than a factor of, say, 10. But there are so many caveats that go with that, it's almost not even worth paying any attention to that number.

If you're having issues with memory, you'd be better off taking out feeders line by line, bouncing the server each time to see what impact each line is having, then seeing whether you can make improvements to that line.
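To make that concrete, here's a rough sketch of the kind of line-by-line test I mean (the cube and measure names below are made up, not from your model). Comment out one feeder at a time, save the rule, bounce the server, and compare "Number of Fed Cells" and the memory figures in }StatsByCube before and after:

SKIPCHECK;

['Gross Margin'] = N: ['Sales'] - ['Cost'];

FEEDERS;

['Sales'] => ['Gross Margin'];
# commented out for this pass of the test:
# ['Cost'] => ['Gross Margin'];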

Taking another look at your numbers, the ratio of memory used for calculations vs memory used for input data is very low, so I don't think you're going to yield much improvement from rules or feeders. You may simply need to look at re-ordering the dimensions in the cube (right-click on the cube -> Re-order Dimensions).

Martin

Re: over feeding

Posted: Thu Jul 09, 2009 9:01 am
by belair22
I take Martin's point on this one. }StatsByCube has rarely given me much to go by in the past.

As was alluded to, you're better off getting a feel for where the performance bottlenecks are and removing feeders to pinpoint them. The good news is that once you've found the culprit(s) there are potentially a few ways to optimize the feeders, depending on the rule and how it fits into the model (i.e. remove the rule and use TI to populate the values instead, or use conditional feeders).
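For example, a conditional feeder along these lines only feeds the cells that can actually produce a value instead of feeding everything; all the cube, dimension, attribute and measure names here are invented, so treat it as a sketch rather than something to drop straight into your model:

SKIPCHECK;

['Sales Amount'] = N: ['Units'] * ['Price'];

FEEDERS;

# Only feed 'Sales Amount' for products whose (hypothetical) Type
# attribute is 'Stocked'; returning an empty cube name feeds nothing.
['Units'] => DB(IF(ATTRS('Product', !Product, 'Type') @= 'Stocked', 'Sales', ''),
   !Version, !Period, !Product, 'Sales Amount');

The usual caveat applies: a conditional feeder is only re-evaluated when the feeding cell changes, so if the attribute changes you need to re-save the rule (or re-process the feeders via TI) to pick it up.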

I've always steered clear of pushing rules across huge datasets (Daily Sales, etc.) - there are plenty of options available depending on the source/granularity of the data.