TI: Cube View Cached or Re-evaluated?
Posted: Tue Jan 29, 2013 11:56 am
I'm in the process of testing this myself, but it would be interesting to see what others think too.
If a TI process uses a cube view as its data source, does it re-evaluate that view for each record it processes in the Metadata/Data tabs, or is the view evaluated once up front?
I am looking to improve a section of code, and my current idea relies on the view not being cached. I want to base the view on a subset driven by MDX that is affected by the TI as it processes the Metadata tab: the code updates an attribute, and the MDX filters out elements flagged with that attribute. If the view is re-evaluated as the process runs, the MDX should shrink the view significantly, greatly reducing the amount of data the process has to read. If the view is cached, changing the attribute would have no effect and my process will take just as long as it does now.
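To illustrate what I mean (a rough sketch only; the dimension name "Product", the flag attribute "Processed", and the variable vProduct are just placeholders): the subset behind the view would use MDX along these lines,

    {FILTER( {TM1SUBSETALL( [Product] )},
             [Product].CURRENTMEMBER.PROPERTIES("Processed") <> "Y" )}

and the Metadata tab would flag each element as it is encountered, something like:

    # Mark this element as processed so the MDX filter drops it
    # (vProduct is the data source variable for the element on this record)
    IF( ATTRS( 'Product', vProduct, 'Processed' ) @<> 'Y' );
        AttrPutS( 'Y', 'Product', vProduct, 'Processed' );
    ENDIF;

Whether this actually helps obviously depends on the answer to the caching question above.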
I'm reading data from a fairly large cube (13 dimensions, several with more than 1,000 elements each), and working from top-level consolidations is taking a long time to process. My idea is to use this MDX approach to mark values as processed once the TI finds them, so they can be "skipped" rather than revisited. The current code takes 20 minutes on one server, but I may need to repeat this on a new server where it could take much, much longer, so I'm looking to improve what I have now to avoid that potential issue.