qml wrote:
Well, I don't have any proof for that, let alone a documentation extract (but just because something is not documented doesn't mean it's not true, especially in TM1's case).
I hear you on the latter part.
qml wrote:
However, it is obvious (and mentioned in the documentation) that rules precedence is defined by rules order in the rule file.
That is correct, provided the subsequent definitions cover the same range of cells, as only one rule can apply to a given cell at any point in time.
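Just to pin down what we seem to agree on, here is a tiny sketch (cube and element names are made up, of course):

# Both statements target the same area; the first one in the .rux wins,
# the second is effectively dead code for that area.
['Gross Margin'] = N: ['Sales'] - ['Cost of Sales'];
['Gross Margin'] = N: 0;   # never reached for any N-level Gross Margin cell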
qml wrote:
I never said TM1 opens the file from the disk every time a value is requested.
Apologies for interpreting what you wrote that way.
qml wrote:
But it can be inferred that there is some sort of parsing going on every time a value is requested from a cube with rules. Whether this happens using plain text search or some pre-parsed code, doesn't really matter. TM1 will find the first rule that can be applied and will ignore the ones below.
Let's just say something happens in memory, but we are not quite sure what it is and we are still speculating. Still, I'll follow your lead on the 'first rule' and 'ignore the ones below'.
Wouldn't you agree that when you request a rule-calculated cell, say at C level, and you have
['something'] = C:
and somewhere else
['something'] = N:
then once you have hit the 'first line' (are there any lines of this in memory, or is it just addresses, numbers and what not?), TM1 will stop looking because it has found the first applicable rule at C: level? End of story.
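Or, put into rule terms (with made-up names, including the 'PriceList' lookup cube), this is what I would expect:

# A request for a consolidated Price cell can only ever be answered by the
# C: statement; the N: statement is simply not applicable at C level,
# wherever it sits in the file.
['Price'] = C: ['Sales'] \ ['Units'];
['Price'] = N: DB('PriceList', !Product, !Month);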
If I follow your argument about 'scanning the rule file' further, this might be a problem if the rules are the other way around and thousands of lines of code apart, because TM1 would then have to scan every single rule in the 'file' and it would take time until it hit the proper definition. Right?
BTW, who would write the two lines far apart anyways?
If TM1 really does have to scan every single (line of) rule every time you retrieve a cell, wouldn't this mean that the last rule in the file always performs the worst (negating all other differences between the rules), because TM1 had to scan everything before it? Mmmmh. This would certainly change my way of writing rules quite a bit. I'd go: hey, let's put the rules we don't need as often at the end of the .rux file, with all that scanning going on.
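If that top-to-bottom scan theory were true, I would end up ordering my rules purely by query frequency, something like this (made-up names, and only worth doing if the theory actually held):

# Heavily requested areas first, so the scan for the common cells stays short...
['Sales'] = N: ['Price'] * ['Units'];
['Cost of Sales'] = N: ['Units'] * ['Unit Cost'];
# ...and the rarely requested stuff banished to the bottom of the .rux.
['Obscure Stat Measure'] = N: ['Sales'] * 0.0001;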
Maybe, and just maybe, TM1 is smart enough to attach a rule to a cell or range of cells and knows that it applies because, when compiling the rules, it collects all the LHSs of the rules for that range and puts them side by side in memory, or does something similarly magical, so that it does not have to do all that scanning. Probably just wishful thinking, but you get my drift.
qml wrote:
Also, it is testable (try it yourself in a big model) that using CONTINUE instead of STET slows rule calculations down because of the very fact that the parser needs to keep looking for any rule that can be applied instead of stopping at STET.
Is this really a test of performance with and without separate N: and C: rules? I am not quite sure. Wouldn't the test be to turn the setting on and off and compare two sets of (logically) identical rules, one with and one without separate N: and C: rules? Anyway, I am being very picky, and it sort of makes sense. But is it proof of how the parser works, or that the config setting leads to performance degradation? Again, I think not.
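For reference, this is the kind of construct I assume you mean (names made up): CONTINUE explicitly tells TM1 to carry on and look for the next statement covering the same area, whereas STET lets the cell fall back to its normal, un-ruled behaviour and nothing further needs to be considered for it.

# With CONTINUE, an 'Actual' request for this area forces TM1 to keep looking
# for the next applicable statement further down:
['Margin %'] = N: IF(!Version @= 'Actual', CONTINUE, ['Plan Margin %']);
['Margin %'] = N: ['Margin'] \ ['Sales'] * 100;

# With STET instead, the 'Actual' cell simply reverts to its stored value:
# ['Margin %'] = N: IF(!Version @= 'Actual', STET, ['Plan Margin %']);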
qml wrote:
Now, I have never actually tested what effect using separate N and C rules has on performance (it's probably measurable, but minuscule),
Again: because you previously mentioned that
qml wrote:
So - in most cases calculation performance is better when separating N and C rules is not allowed.
I was asking, purely out of curiosity and because it was new to me, whether there is any reference to this anywhere, or perhaps some personal experience.
qml wrote:
but I honestly cannot think of any other reason for that parameter's existence than to be able to sacrifice performance to gain rule writing flexibility or vice versa. Can you, Gregor?
First I would ask why the syntax was changed in the first place. Was it because the old syntax (separate N: and C: rules, pre-V7) allowed a way of writing rules that caused performance to degrade? Did rule performance improve with the new syntax? Possibly, but I don't know for sure.
Then I would ask why the config setting was introduced. Probably because of all the old models that used the old syntax, and to allow a more flexible way of writing rules.
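For anyone reading along, my understanding of the two styles in question is roughly this (made-up names again, and do check the combined form against the docs for your version):

# Old style / setting switched on: separate statements for the same area
['Sales'] = N: ['Price'] * ['Units'];
['Sales'] = C: ConsolidateChildren('Month');

# New style / setting switched off: N: and C: parts folded into one statement
['Sales'] = N: ['Price'] * ['Units']; C: ConsolidateChildren('Month');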
Your conclusion sort of makes sense to me, but it is just a theory and I think it is fair for me to question it or raise doubts.
Cheers