Hi Holger
When it comes to dimension updates, we use a standard process which first builds an entirely new dimension in the shape we ultimately want the existing one to have. Only after that do we run DimensionDeleteAllElements on the "real" dimension and rebuild it from the previously created one, so that we do not unintentionally drop elements that carry values. The main reason we did this was that we wanted to re-order existing elements - interesting to read here that this may also help with reducing cube lock time.
Your method can still potentially delete elements that have data. However, I presume that you check that your temp version of the dimension has all the base level elements of the real dimension before you run DimensionDeleteAllElements on the real one.
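For anyone following along, a minimal TI sketch of that kind of safety check could look like the lines below (the dimension names and the ProcessQuit handling are purely illustrative, not a description of your actual process):

cRealDim = 'Account';
cTempDim = 'Account_Temp';

# Count base level elements of the real dimension that are missing from the temp copy
nMissing = 0;
nIdx = 1;
nMax = DIMSIZ( cRealDim );
WHILE( nIdx <= nMax );
  sEl = DIMNM( cRealDim, nIdx );
  IF( DTYPE( cRealDim, sEl ) @<> 'C' );
    IF( DIMIX( cTempDim, sEl ) = 0 );
      nMissing = nMissing + 1;
    ENDIF;
  ENDIF;
  nIdx = nIdx + 1;
END;

# Only wipe and rebuild the real dimension if no base element would be lost
IF( nMissing > 0 );
  ProcessQuit;
ENDIF;
DimensionDeleteAllElements( cRealDim );
# ... rebuild cRealDim from cTempDim here ...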
You said that you did that to control the Dimension Order. I usually just use the }DimensionProperties cube and put in BYHIERARCHY etc - the same parameters as DimensionSortOrder, but putting them into }DimensionProperties seems to work more reliably.
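For example, something along these lines (the measure names are the standard sort properties in }DimensionProperties; the dimension name is just an example):

# Order the elements of 'Account' by their position in the hierarchy
CellPutS( 'ByHierarchy', '}DimensionProperties', 'Account', 'SORTELEMENTSTYPE' );
CellPutS( 'Ascending', '}DimensionProperties', 'Account', 'SORTELEMENTSSENSE' );
# Order the components within each consolidation by element name
CellPutS( 'ByName', '}DimensionProperties', 'Account', 'SORTCOMPONENTSTYPE' );
CellPutS( 'Ascending', '}DimensionProperties', 'Account', 'SORTCOMPONENTSSENSE' );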
For things like Account dimensions I allocate a prefixed number for the consolidations to ensure that eg Revenue comes before Expenses, so you have N_3 with an Alias of Revenue and N_4 with an Alias of Expenses. As the Dimension Sort Order is based on the Element Names, this gives the right sort order.
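A minimal sketch of that naming pattern, assuming a dimension called Account and an alias attribute called Alias (both names are made up for the example):

# One-off setup: create the alias attribute (errors if it already exists)
AttrInsert( 'Account', '', 'Alias', 'A' );

# Metadata Tab: the numeric prefix on the consolidation names drives the name-based sort
DimensionElementInsert( 'Account', '', 'N_3', 'C' );
DimensionElementInsert( 'Account', '', 'N_4', 'C' );

# Data Tab or Epilog: the aliases are what users actually see
AttrPutS( 'Revenue', 'Account', 'N_3', 'Alias' );
AttrPutS( 'Expenses', 'Account', 'N_4', 'Alias' );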
Carrying out Dimension Updates on a Temp Copy is one way of reducing locking. However, if I were you I would avoid the use of DimensionDeleteAllElements. It is too easy to lose data if something goes wrong. I use a standard process that just breaks the consol links, leaving the base level elements and consols in place, which ensures that any static subsets etc are not upset by a failure. I have recently been working on some very large dimensions, eg over 400,000 elements, where my standard process was too slow. Instead I am using one that deletes consols, where the consols are selected by an MDX subset - this proved to be faster than looping over all elements and using DTYPE to select the consols.
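A rough sketch of the MDX subset variant, with illustrative dimension and subset names (the general idea rather than my exact process):

cDim = 'Account';
cSub = 'zTemp_Consols';

# Everything except the level 0 elements, i.e. the consolidations
cMDX = '{EXCEPT( {TM1SUBSETALL( [Account] )}, {TM1FILTERBYLEVEL( {TM1SUBSETALL( [Account] )}, 0 )} )}';
SubsetCreatebyMDX( cSub, cMDX );

# Loop backwards so that any re-indexing of the subset cannot make us skip elements
nIdx = SubsetGetSize( cDim, cSub );
WHILE( nIdx >= 1 );
  sEl = SubsetGetElementName( cDim, cSub, nIdx );
  DimensionElementDelete( cDim, sEl );
  nIdx = nIdx - 1;
END;

SubsetDestroy( cDim, cSub );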
When it comes to reducing locking from minor Dimension Updates that are needed just to add new elements encountered while loading data into a cube, I have found that the new DimensionElementInsertDirect statements still cause locking. The only benefit is that the locking only occurs during the Data Tab step, as the Metadata Tab pass over the data is no longer needed. However, it seems that the locking happens as soon as the IF( DIMIX( vDim, vElem ) = 0 ) test finds that a new element is needed and the DimensionElementInsertDirect statement is executed. That might be on the first record in the Data Tab step or the last, so there can still be a lot of locking.
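In other words the Data Tab ends up looking something like this (the dimension, cube, consolidation and variable names are illustrative), and the lock is taken the first time the insert actually fires:

# Data Tab sketch - vAccount and vValue come from the data source
IF( DIMIX( 'Account', vAccount ) = 0 );
  # This is the point at which the dimension (and dependent cube) lock is taken
  DimensionElementInsertDirect( 'Account', '', vAccount, 'N' );
  DimensionElementComponentAddDirect( 'Account', 'N_99_Unassigned', vAccount, 1 );
ENDIF;
CellPutN( vValue, 'GL', vAccount, 'Amount' );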
The only approach I have seen to minimise this is as follows:
Use the IF( DIMIX( ... ) = 0 ) test to identify new elements.
If a new element is needed, write the whole row to a file, set a flag and do an ItemSkip.
In the Epilog, if the flag is set, call a sub-process to read from the file, insert the elements, and put in the data.
Locking still happens, but if, let's say, you are loading 1,000,000 records and only 100 records are affected by new elements, then the sub-process only needs to process those 100 records, and therefore the duration of the locking is much shorter than it would be for the 1,000,000 records.
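A skeleton of that pattern, with hypothetical file, flag, cube and sub-process names (the sub-process itself would apply the same DIMIX and insert logic, but only to the much smaller file):

# Prolog
cNewElemFile = 'new_account_rows.csv';
nNewElemsFound = 0;

# Data Tab
IF( DIMIX( 'Account', vAccount ) = 0 );
  # Unknown element: park the whole source row in a file and skip it for now
  TextOutput( cNewElemFile, vAccount, NumberToString( vValue ) );
  nNewElemsFound = 1;
  ItemSkip;
ENDIF;
CellPutN( vValue, 'GL', vAccount, 'Amount' );

# Epilog
IF( nNewElemsFound = 1 );
  # Hypothetical sub-process that reads the file, inserts the missing elements
  # and then loads the handful of skipped rows
  ExecuteProcess( 'Load.GL.NewAccounts', 'pFile', cNewElemFile );
ENDIF;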
Regards
Paul Simon