Alan Kirk wrote:
Provided that the server has the required performance capability (RAM, processor capacity, and did I mention RAM?), 30 to 40 users browsing different views of a cube is nothing. If I can manage that in a 32-bit environment you should certainly be able to in a 64-bit one, if indeed yours is... again, if the server is up to it. So that would be the first thing that I'd be looking at. Even if the calculation load is huge, you may get a freeze, but you should not get a crash. It might be an idea to provide some insight into your hardware specs and the size of this cube.
If the rules are indeed complex the next thing I'd be looking at is whether they can be simplified. Are they the most efficient they can be? Does all of the calculated data have to be calculated live, or could it be calculated within TI and stored as static values which are periodically refreshed? Just because you can do things by rules, it doesn't necessarily mean that you should in all cases.
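To illustrate the TI approach above, here is a minimal TurboIntegrator sketch (Data tab). The cube names ('Sales_Calc', 'Sales_Static') and the source variables are hypothetical, not names from this thread; it simply copies a rule-calculated value into a static cube that would then need no rules or feeders for that measure:

```
# Hypothetical TurboIntegrator Data-tab sketch: copy a calculated value
# from a rule-driven cube into a static snapshot cube.
# 'Sales_Calc', 'Sales_Static' and the v* variables are assumed names.
nValue = CellGetN('Sales_Calc', vVersion, vYear, vMonth, 'GWP');
CellPutN(nValue, 'Sales_Static', vVersion, vYear, vMonth, 'GWP');
```

A process like this can be scheduled via a chore so the static values are periodically refreshed, trading a little latency for much cheaper queries.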
But in answer to your question: though there is a parameter which deals with the size of any individual view, there's no parameter that does what you describe... simply because the crash that you describe shouldn't be happening in a healthy system anyway.
Hi Alan,
Yeah, you were right; I am looking for a way to simplify my rules.
This is a consolidation cube with only a few dimensions, so the values from all the source cubes are taken at every level and calculated here:
['All_Business','All_Insured', 'All_Sponsor','Total_Product','Booking Delay']=>DB('Consolidation',!Version,!Scenario,!Currency,!Business Type,!Country,!Operating Unit,!LOB,!Year,!Month,'GWP');
Feeders like this are present in the various cubes that feed into this consolidation cube.
Will this kind of feeder affect performance?
Thank You
Regards
Guru