After working with Karin (who must certainly also clothe needy children and organize neighborhood spay/neuter programs for pets, and never the reverse, in her spare time), I think I understand what happened in my case. I was already suspicious that the .pro file contains dimension ordering, which is redundant since you're guaranteed to have access to the .cub file. It turns out that dim ordering is also stored in the .vue file (along with a TON of other redundant info, including displayed aliases for all elements in all dims in the view).
It's this redundant data that's the key. What I think happened is this: I was experimenting with cube reordering around the same time I created the Default view for the cube (circa 2007). The new view picked up the reordered dims from the cube and stored them. At some point later I got rid of the cube reordering, but the .vue still contained the old order. This data corruption (my opinion) never really mattered (and was never obvious) because this was an experimental cube that we never ended up using (but also never deleted).
Literally years later, I happened to use this view as the datasource. The view isn't actually used by the process; it just belongs to the cube with the largest number of dims, so I use it to force TI to generate the greatest number of variables (I swap in the real source with DatasourceCubeView in the Prolog, set by a parameter sent by the parent process calling this child; a sketch of that pattern follows). The .pro picked up the non-standard dim order from the .vue and applied that reordering to the variables.
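For anyone who hasn't used the pattern, here's a minimal sketch of what I mean. pCube and pView are hypothetical parameter names (mine differ), and the process name is made up:

```
# Child Prolog: redirect the datasource before the Metadata/Data tabs run.
# pCube and pView are string parameters supplied by the parent process.
DatasourceNameForServer = pCube;
DatasourceNameForClient = pCube;
DatasourceCubeView = pView;
```

The parent just calls the child with whatever cube/view pair it wants:

```
# Parent process: one call per cube/view combination.
nRet = ExecuteProcess( 'child.load.process', 'pCube', 'SomeCube', 'pView', 'SomeView' );
```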
So:
- Problem #1: Redundant data is stored in multiple places and not updated in every location when it changes. This ties nicely back to my database normalization rant on another thread; avoiding exactly this kind of update anomaly is probably the top reason for normalization (not to mention efficiency).
- Problem #2: Even though DatasourceCubeView alters the datasource, TI uses the dimension ordering from the original datasource (as stored in the .pro, not what's currently in the .cub) instead of reading it from the new datasource. (A way to at least surface a mismatch is sketched after this list.)
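Until that behavior changes, one option is to interrogate the cube's live dimension order in the Prolog instead of trusting whatever the .pro/.vue recorded. A rough sketch using the standard TABDIM loop (sCube is a hypothetical cube name):

```
# Prolog: log the cube's actual dimension order so a silent
# reorder shows up in tm1server.log instead of corrupting a load.
sCube = 'MyBigCube';
i = 1;
# TABDIM returns '' once i passes the cube's last dimension.
WHILE ( TABDIM( sCube, i ) @<> '' );
  LogOutput( 'INFO', 'Dim ' | NumberToString( i ) | ' of ' | sCube | ' is ' | TABDIM( sCube, i ) );
  i = i + 1;
END;
```

Comparing that output against the variable order you coded to won't fix the .pro, but it does turn a silent reordering into a visible one.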
Admittedly, this is a rare situation. But that's what best practices are all about: designing systems so that rare situations (which no one would anticipate and write a test case for) are covered. If this data were normalized, this problem would not have happened.
Frankly, I'm relieved that this is a file corruption issue, since it's a much easier fix than applying an HF to all nine of my servers. Plus, I got a little burned trying to go to 9.5.2 a few months ago, so I've frozen any updates until the budget is final. Should be a fun holiday season....
Matt