It helps me to remember that memory is just a great (organised) pile of bits which are either on or off. Program instructions point to selected bits to see if they are on or off and act accordingly. What follows is my (quite probably wrong) understanding of the process:
Here is an imaginary TM1 cell with a value, as laid out in memory (I invented all the addresses):
- Addresses 1-100 contain the dimension element names that define the area.
- Addresses 102-200 contain the value.
- Address 201 contains the feeder flag (maybe set to 1).
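The layout above could be sketched as a tiny Python model. This is only an illustration of the idea, not TM1's actual in-memory format, and all the field names are invented, just like the addresses in the text:

```python
# Toy model of one TM1 cell in memory (names invented for illustration,
# mirroring the made-up addresses above).
cell = {
    # dimension element names that define the area (addresses 1-100)
    "coordinates": ("Actual", "2024", "Sales"),
    # the raw stored number (addresses 102-200)
    "value": 1234.5,
    # the feeder flag (address 201); 1 = fed, 0 = not fed
    "feeder": 1,
}
```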
When you write ['a']=>['b'];
and save the rule, the program looks for every cell whose area coordinates include ['a'] and, if there is a value in it, turns on the feeder bit for the corresponding ['b'] cell.
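That feeder pass might be sketched like this. This is my hedged reading of the behaviour described above, not TM1's implementation; `apply_feeder`, the dict layout, and the element names are all invented for illustration:

```python
# Sketch of what saving the rule ['a'] => ['b'] does (as described above):
# scan every stored cell whose coordinates include 'a' and, if it holds a
# value, turn on the feeder flag of the corresponding 'b' cell.

def apply_feeder(cells, source, target):
    """cells maps coordinate tuples to {'value': ..., 'feeder': bool}."""
    for coords, cell in list(cells.items()):  # snapshot: don't re-scan new cells
        if source in coords and cell.get("value"):
            fed_coords = tuple(target if e == source else e for e in coords)
            fed = cells.setdefault(fed_coords, {"value": None, "feeder": False})
            fed["feeder"] = True  # turn on the feeder bit

cells = {
    ("a", "Jan"): {"value": 10.0, "feeder": False},
    ("a", "Feb"): {"value": 0,    "feeder": False},  # no value: feeds nothing
}
apply_feeder(cells, "a", "b")
# ("b", "Jan") now exists with its feeder flag on; ("b", "Feb") was not created.
```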
Before a user or process requests a view, the only data in a cube is:
- area co-ordinates
- raw numbers
- feeder flags
To meet the view request, TM1:
1. Calculates everything in the view according to the rules.
2. Consolidates the view along the dimension hierarchies. If the cell has a real (not calculated) value, it gets included. If the cell has a feeder flag, it gets included. If it has neither, it doesn't get included.
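The inclusion rule in step 2 can be sketched as a simple predicate. Again this is a toy model of the reasoning above (the dict layout and `contributes` are my invention), assuming a cell contributes to a consolidation only if it holds a real value or is fed:

```python
# Sketch of the consolidation inclusion rule described in step 2:
# a leaf cell contributes only if it holds a real value or its feeder
# flag is on; everything else is skipped.

def contributes(cell):
    return cell.get("value") is not None or cell.get("feeder")

leaves = [
    {"value": 5.0,  "feeder": False},  # real value: included
    {"value": None, "feeder": True},   # fed rule cell: included (calculated on demand)
    {"value": None, "feeder": False},  # neither: skipped entirely
]
included = [c for c in leaves if contributes(c)]
# two of the three leaves contribute to the consolidation
```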
Most of the 'cube' is still just sitting in memory as raw values and feeder flags. The view is probably a tiny part of the whole cube.
I think there must be two passes by the program: one to calculate the requested cells and another to consolidate. TM1 doesn't skip calculating un-fed cells, because you can see calculated values in un-fed view cells at the N level. And it wouldn't gain much by doing so (unless the rule were badly written): the millions of zeros skipped probably far outweigh the number of calculated cells.
TM1 is efficient by design: it stores only raw values and feeder flags, and it calculates only when a view is requested.