I am sure I have had issues with this in the past so I will say No. Hopefully you will tell us the exact cases where it will and won't, as that would be helpful!
Elessar wrote: ↑ Fri Sep 20, 2024 5:10 pm
Question #38:
My data load process from a CSV file into a TM1 cube is very slow. What could the reason be, and what can I do about it?
There are many possible answers to this. I'd like to leave some points for others, so I'll post just one potential option:
I'm going to assume the server is v11. If logging is enabled, it adds overhead because each change is recorded, so you should disable logging before the load with the CubeSetLogChanges function. Assuming this isn't a read-only cube, re-enable logging afterwards so that changes made to the cube by end-users are preserved.
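As a minimal TurboIntegrator sketch of that toggle (the cube name 'Sales' is just a placeholder):

```
# Prolog: turn off transaction logging for the cube being loaded
CubeSetLogChanges( 'Sales', 0 );

# ... the Data tab performs the CellPutN calls from the CSV source ...

# Epilog: re-enable logging so end-user changes are recorded again
CubeSetLogChanges( 'Sales', 1 );
```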
For example:
1) There are many complex rules and feeders in the cube, and they are recalculated as you load data into it. You can switch the rules off (CubeRuleDestroy), load the data, and switch them back on (RuleLoadFromFile); this may be faster.
2) You are trying to write data to cells that are closed for input (by rules, for example). This generates long error logs and slows the process. You can check cells before updating with the CellIsUpdatable function.
3) The target cube may be locked by another process, leaving our process in a wait state until the target cube is unlocked and the data can be loaded from the CSV.
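A sketch of the CellIsUpdatable guard from point 2, for the Data tab of a TI process ('Sales', vProduct, vMonth and vValue are placeholders for your own cube and source variables):

```
# Data tab sketch: skip cells that are rule-derived or otherwise closed
# for input instead of letting CellPutN generate error-log entries.
IF( CellIsUpdatable( 'Sales', vProduct, vMonth ) = 1 );
  CellPutN( vValue, 'Sales', vProduct, vMonth );
ELSE;
  # Skip this source record; optionally write it to a reject file first
  ItemSkip;
ENDIF;
```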
Related to the feeders EP_explorer alluded to, I would also review model parameters.
In particular ForceReevaluationOfFeedersForFedCellsOnDataChange: Performance Modeler required this to be set to T, but the setting forces more feeder evaluations to occur when loading data.
The main reason for using the approach of switching rules off (CubeRuleDestroy), loading the data, and switching them back on (RuleLoadFromFile) is to circumvent this behaviour.
Instead we can set the parameter to F (false). If a feeder is data-dependent (conditional), we can re-trigger/recalculate the feeders when it is necessary to do so, such as after mapping or dimension updates.
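As a sketch: with ForceReevaluationOfFeedersForFedCellsOnDataChange=F in tm1s.cfg, a process that updates the mapping data a conditional feeder depends on can re-fire the feeders explicitly ('Sales' is a placeholder cube name):

```
# After changing the mapping data that the conditional feeders read,
# force TM1 to re-evaluate all feeder statements for the cube:
CubeProcessFeeders( 'Sales' );
```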
The most likely cause of slow performance is long feeder times. As no one is biting, here are some other avenues to explore:
- Other server activity/locking, or insufficient threads. Are other running threads creating a lock, or are no threads available, forcing the load to queue? (This could also include someone leaving TI debugger mode on.)
- Anti-virus scanning: is the anti-virus scanning too aggressively and throttling performance?
- Has the server exceeded physical RAM and started paging?
- Is the TI posting with CellPutProportionalSpread to a highly consolidated data point with no pre-existing data, causing a huge increase in the number of cells populated?
- Is the process using direct functions for mass creation of new elements? It may be better to use the Metadata tab, or to have a separate load for dimension updates.
- The cell being updated could be causing a stack overflow; if so, this is generally recorded in the server message log.
- The version of PA used has bugs or design changes causing slower-than-expected performance. I'm looking at versions 2.0.14 and 2.0.15, which had significant problems, particularly related to updating attributes.
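To illustrate the proportional-spread point: a single call like the sketch below (the cube and element names are placeholders) can populate every leaf cell under the targeted consolidations in one operation, which is where the cell-count explosion comes from:

```
# Spreading one value across an intersection of two consolidations
# can touch every leaf cell underneath them at once.
CellPutProportionalSpread( 1000000, 'Sales', 'Total Products', 'Total Months' );
```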
Not sure if this is allowed given we have moved on to Q39, but another answer for Q38 might be dimension ordering.
Q39 - You are going to tell me there is a cfg parameter that does this? I don't know it, so my answer would be to put element security everywhere to make all consolidations READ. It would be nice if there were an easier way...
First of all, question every requirement. Ask why consolidated input should be disabled; 99% of the time the answer will be something like "users will enter a value into a super-total zero cell and explode the server". In that case, the ProportionSpreadToZeroCells=F setting in tm1s.cfg should fulfil the requirement well. @ascheevel, @MarenC, you have mentioned it before!
Set Consolidation TypeIn Spreading = Deny in Capability Assignments
Use an IF(ISLEAF = 1, CONTINUE, 'READ') rule in the }CellSecurity cube
Use element security (preferred over cell security, because it is precalculated once with SecurityRefresh, while cell security is recalculated each time a user recalculates a cube view)
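A hedged sketch of the element-security route, assuming a dimension named 'Account', a group named 'Planners', and that the }ElementSecurity_Account control cube already exists:

```
# Set READ on every consolidated element so only leaves accept input.
sDim = 'Account';
i = 1;
WHILE( i <= DimSiz( sDim ) );
  sEl = DimNm( sDim, i );
  IF( ELLEV( sDim, sEl ) > 0 );
    CellPutS( 'READ', '}ElementSecurity_' | sDim, sEl, 'Planners' );
  ENDIF;
  i = i + 1;
ENDWHILE;

# Apply the new security without a server restart
SecurityRefresh;
```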
Winner of this round is MarenC!
I'm noticing that participants have become less active. Two months left, pull yourselves together, guys!
Question #40:
Which activity causes stargate view removal?
As an example, I am optimizing rules and want to measure calculation time, but cached views are spoiling the measurement. How can I purge this cache without a server restart?
It doesn't matter where in the cube. It can be an unused portion which doesn't actually flow or consolidate anywhere, just as long as some leaf-level data is changed. The caching mechanism in TM1 is very non-granular and doesn't track calculation dependency. As soon as any data in a cube is changed, all cached calculations in that cube are destroyed. It doesn't matter that the calculations are unchanged because they don't actually have any dependency on the changed input data.
Correct answer:
Stargates are very fragile. There are several options again:
Write some data to the cube, or even to any cube your cube depends on
Change the cube's rule (or a rule of another cube your cube depends on)
Change an alias of any element of any dimension in the cube
Add and then delete an element in any dimension of the cube
DebugUtility( 125, 0, 0, '', '', '' );
Server restart
Winners of this round are MarenC and Mark RMBC!
Question #41:
The cube and rule are shown in the screenshots below. B1 is not fed. What will be in C1? Will it take numbers from the un-fed calculated cells of B1?