Hi all - it's been a while since I have posted on here, but I was wondering what the best way is to feed a reporting cube that is using [ ] =N: and [ ] =C: to pull over data?
We have a "Forecast Intermediate" cube where all of our rules are and calculations are handled. It pulls data from various cubes. Come Consolidated elements have rules to override the natural consolidation. This cube is not seen by end-users. The end-user sees the "Reporting" cube. In it, it's simply this:
SKIPCHECK;
[] =N: DB('Forecast Intermediate', !Version, !Actual and Forecast Types, !Products , !Current Time, !RAAF Measures) ;
[] =C: DB('Forecast Intermediate', !Version, !Actual and Forecast Types, !Products , !Current Time, !RAAF Measures) ;
FEEDERS ;
My question is: what is the best way to feed this so it's not overfed? The "Forecast Intermediate" cube is exactly the same except for the measures dimension, which has more measures than the one in the Reporting cube.
I've been looking around online - the developer guide, the rules guide, this forum, etc. - and I haven't really found the best way to do this. Or do you advise against doing it at all? Thanks a lot for your help! I appreciate any assistance you can provide, and let me know if you need more information!
Use of [ ] =N: in reporting cube
- Posts: 15
- Joined: Fri May 22, 2015 3:44 pm
- OLAP Product: TM1
- Version: 10.2.2 FP7
- Excel Version: 2016
- Location: California
- Steve Rowe
- Site Admin
- Posts: 2456
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Use of [ ] =N: in reporting cube
Without getting into the merits of what you are doing in the 'Forecast Intermediate' cube, put this feeder in that cube's rule file:
['M1' , 'M2' , ... list of common measures ... ] => DB('Reporting' , !Version, !Actual and Forecast Types, !Products , !Current Time, !FORECAST Measures) ;
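For what it's worth, a rough sketch of how that would sit in the 'Forecast Intermediate' rule file - 'M1' and 'M2' are placeholders for whichever measures the two cubes have in common, and 'FORECAST Measures' is assumed to be that cube's measure dimension:
SKIPCHECK;
# ... the existing calculation rules of 'Forecast Intermediate' ...
FEEDERS;
# Feed only the measures that also exist in the Reporting cube
['M1' , 'M2'] => DB('Reporting' , !Version, !Actual and Forecast Types, !Products , !Current Time, !FORECAST Measures) ;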
I can't help but ask: if the situation is as you describe, why not point the users at the Forecast cube and avoid the extra layer and duplication? I can't see any benefit from what you are doing...
Cheers,
Technical Director
www.infocat.co.uk
- MVP
- Posts: 3703
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Use of [ ] =N: in reporting cube
Yes, you can feed cell to cell in this way, but if the cubes have equivalent dimensionality except for one having additional measures, then there really is no point at all in doing this.
Just apply element security to the measure dimension so users only see the measures you want them to. This would be a better (and simpler) solution.
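As a rough sketch only (the group name, measure name, and measure dimension name below are placeholders, and this assumes the }ElementSecurity_ control cube for that measure dimension already exists, e.g. created via Security > Elements Security Assignments):
# TI: hide an intermediate-only measure from the reporting user group
CellPutS('NONE', '}ElementSecurity_FORECAST Measures', 'Some Intermediate Only Measure', 'Reporting Users');
# Pick up the change without restarting the server
SecurityRefresh;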
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
- Posts: 15
- Joined: Fri May 22, 2015 3:44 pm
- OLAP Product: TM1
- Version: 10.2.2 FP7
- Excel Version: 2016
- Location: California
Re: Use of [ ] =N: in reporting cube
Thanks guys! I appreciate your opinion and input!
You do raise a fair and valid question, and it now has me second-guessing.
I inherited this server instance. However, I am also rebuilding it and can change things. In our current instance, we have 4 reporting cubes, all four of them have rules, and it takes a long time to restart the server due to so many feeders, etc. So, on the new server I only have two reporting cubes. The Forecast Intermediate cube pulls data over from 4 actual input cubes (loaded via TI processes) and also from a forecast staging cube. The forecast staging cube only has forecast data and the actual cubes only have actual data, and then it all gets combined in the Forecast Intermediate cube.
The concept was to have the Forecast Intermediate cube be the "brains" behind the scenes and have all calcs in it, then simply flow the data over to the reporting cubes. But if I have to add a lot of feeders just to get the data over properly, then I question the Forecast Intermediate cube's existence. We were also running all the TI processes against the Forecast Intermediate cube rather than the reporting cubes (we export a lot of data to other groups), though that could be done from the reporting cube.
I am going to work on Steve's suggestion now just to see how much time it takes, but man, you really have me thinking now. Thanks again for your input!
- Steve Rowe
- Site Admin
- Posts: 2456
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Use of [ ] =N: in reporting cube
Obviously only working with a little bit of detail but....
Some questions
What is the difference between the 4 sets of actual data and why are they in different cubes?
What is the difference between the forecast data and the actual data? Is there some difference in granularity?
General Comments.
This feels like at most a two-cube system, perhaps only one - it depends on the answers to the above. Can't you just add a version dimension and get all the data into one cube?
If you are joining 5 cubes with rules to a master cube and then a reporting cube, remember that every time a number is changed anywhere in these 7 connected cubes the cache will be destroyed. Depending on data volumes and frequency of data change, you may find your response times are not what you want. Why not refresh the master cube(s) with the actual data from the TI processes and skip the rules altogether? Rules should (IMO) be for results that really need to be dynamic (usually based on user input), not for moving large volumes of data from one place to another.
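As a rough illustration of that approach (the view name and the variables below are placeholders, not anything from your model):
# Prolog tab: clear the slice that is about to be reloaded
ViewZeroOut('Forecast Intermediate', 'zActuals Reload');
# Data tab (source = a view on one of the actual input cubes):
# write each source cell straight into the master cube
CellPutN(vValue, 'Forecast Intermediate', vVersion, vType, vProduct, vTime, vMeasure);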
TM1 is much much less "locky" than it used to be, there are some historic design practices that are no longer relevant.
Just my 2p, enjoy!
Technical Director
www.infocat.co.uk