Optimize While
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Optimize While
Hello everyone!
I am loading data from one cube into another.
The destination cube has a dimension that the source cube does not have. This dimension (02.Codigo_Funcional) has 6,800 elements.
I am using a WHILE loop to allocate the data in the destination cube.
But my process takes about two hours to run.
This is the code I am using:
vContador=1;
WHILE (vContador<=DIMSIZ('02.Codigo_Funcional'));
vContador= vContador+1;
vElemento = DIMNM ('02.Codigo_Funcional', vContador);
IF(ELLEV('02.Codigo_Funcional', DIMNM('02.Codigo_Funcional', vContador)) = 0);
CellPutN(vVALOR,'CMU.PARQUE_FUNCIONAL',
vVERSAO,
vCENARIO,
vMESES,
vANO,
vDATA_REFERENCIA,
vCOD_MARCA,
vDATA_FIM_GARANTIA,
vCOD_MERCADO,
vGRUPO_ECONOMICO,
vCOD_VERSAO,
vCOD_MODELO,
vCOD_FAMILIA,
vCOD_MVS,
vElemento,
'QTD_VEI');
ENDIF;
END;
Can anyone suggest anything so that I can optimize this code?
Thanks
Alessandro
PS: sorry for the poor English
- gtonkin
- MVP
- Posts: 1211
- Joined: Thu May 06, 2010 3:03 pm
- OLAP Product: TM1
- Version: Latest and greatest
- Excel Version: Office 365 64-bit
- Location: JHB, South Africa
- Contact:
Re: Optimize While
Two quick things you could try without going into why and what you are doing.
1) Have you turned off logging?
2) You could set nContadorMax=DIMSIZ('02.Codigo_Funcional'); Then use the new variable in your while instead of sizing it each time.
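A minimal sketch combining both suggestions, using the cube and dimension names from the original post (illustration only; the CellPutN body is elided here):

# Prolog: turn off transaction logging for the target cube while loading.
CubeSetLogChanges('CMU.PARQUE_FUNCIONAL', 0);

# Size the dimension once, outside the loop condition.
nContadorMax = DIMSIZ('02.Codigo_Funcional');
vContador = 1;
WHILE (vContador <= nContadorMax);
  # ... CellPutN as in the original code ...
  vContador = vContador + 1;
END;

# Epilog: re-enable logging once the load is committed.
CubeSetLogChanges('CMU.PARQUE_FUNCIONAL', 1);

Disabling logging avoids writing every one of the ~6,800 CellPutNs per record to the transaction log, which is usually the single biggest win for bulk loads.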
- Wim Gielis
- MVP
- Posts: 3128
- Joined: Mon Dec 29, 2008 6:26 pm
- OLAP Product: TM1, Jedox
- Version: PAL 2.0.9.18
- Excel Version: Microsoft 365
- Location: Brussels, Belgium
- Contact:
Re: Optimize While
3) create a static subset with level 0 elements. That’s at least taking out some of the overhead.
Is the value that each element receives the same?
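One way to build such a static level-0 subset once, e.g. in the Prolog (the subset name 'N0' is illustrative; SubsetCreateByMDX would also work but produces a dynamic subset unless converted):

IF (SubsetExists('02.Codigo_Funcional', 'N0') = 0);
  SubsetCreate('02.Codigo_Funcional', 'N0');
  nIdx = 1;
  nIns = 1;
  WHILE (nIdx <= DIMSIZ('02.Codigo_Funcional'));
    vEl = DIMNM('02.Codigo_Funcional', nIdx);
    # Keep only leaf (level-0) elements in the subset.
    IF (ELLEV('02.Codigo_Funcional', vEl) = 0);
      SubsetElementInsert('02.Codigo_Funcional', 'N0', vEl, nIns);
      nIns = nIns + 1;
    ENDIF;
    nIdx = nIdx + 1;
  END;
ENDIF;

The per-record loop can then skip the ELLEV test entirely, since every subset member is already level 0.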
Best regards,
Wim Gielis
IBM Champion 2024
Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 121 TM1 articles and a lot of custom code
Newest blog article: Deleting elements quickly
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
gtonkin wrote: ↑Tue Nov 12, 2019 3:51 pm
Two quick things you could try without going into why and what you are doing.
1) Have you turned off logging?
2) You could set nContadorMax=DIMSIZ('02.Codigo_Funcional'); Then use the new variable in your while instead of sizing it each time.
R: I deactivated logging after your suggestion. Based on your suggestion and wimgielis's, I created the subset and changed the code:
vContador = 1;
vMaxContador = SubsetGetSize('02.Codigo_Funcional');
WHILE (vContador <= vMaxContador);
vContador = vContador + 1;
vElemento = DIMNM('02.Codigo_Funcional', vContador);
IF (ELLEV('02.Codigo_Funcional', DIMNM('02.Codigo_Funcional', vContador)) = 0);
CellPutN(vVALOR, 'CMU.PARQUE_FUNCIONAL',
vVERSAO,
vCENARIO,
vMESES,
vANO,
vDATA_REFERENCIA,
vCOD_MARCA,
vDATA_FIM_GARANTIA,
vCOD_MERCADO,
vGRUPO_ECONOMICO,
vCOD_VERSAO,
vCOD_MODELO,
vCOD_FAMILIA,
vCOD_MVS,
vElemento,
'QTD_VEI');
ENDIF;
END;
The process is still slow.
Anyway thank you very much for the suggestions.
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
Wim Gielis wrote: ↑Tue Nov 12, 2019 4:07 pm
3) create a static subset with level 0 elements. That's at least taking out some of the overhead.
Is the value that each element receives the same?
R: I created the subset, and the elements get the same values.
The process is still slow.
Anyway thank you very much for the suggestions.
- Wim Gielis
- MVP
- Posts: 3128
- Joined: Mon Dec 29, 2008 6:26 pm
- OLAP Product: TM1, Jedox
- Version: PAL 2.0.9.18
- Excel Version: Microsoft 365
- Location: Brussels, Belgium
- Contact:
Re: Optimize While
Did you check your code for errors/inconsistencies?
- SubsetGetSize expects both a dimension and a subset; you supplied only one argument.
- You are looping from 1 to the number of elements of a subset, yet you pass this subset index as a dimension index.
- On top of that, you check whether the returned element (if any) is level 0.
Still, I agree that it will probably be slow after the changes.
You need to elaborate on what you are doing, why, what are the business requirements and so on.
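Putting the three points above together, a corrected sketch of the loop (assuming a level-0 subset named 'N0' exists in 02.Codigo_Funcional; variable and cube names are taken from the original post):

vMaxContador = SubsetGetSize('02.Codigo_Funcional', 'N0');
vContador = 1;
WHILE (vContador <= vMaxContador);
  # Read by subset index, not dimension index. No ELLEV check is needed,
  # because the subset contains only level-0 elements.
  vElemento = SubsetGetElementName('02.Codigo_Funcional', 'N0', vContador);
  CellPutN(vVALOR, 'CMU.PARQUE_FUNCIONAL',
    vVERSAO, vCENARIO, vMESES, vANO, vDATA_REFERENCIA, vCOD_MARCA,
    vDATA_FIM_GARANTIA, vCOD_MERCADO, vGRUPO_ECONOMICO, vCOD_VERSAO,
    vCOD_MODELO, vCOD_FAMILIA, vCOD_MVS, vElemento, 'QTD_VEI');
  vContador = vContador + 1;
END;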
Best regards,
Wim Gielis
IBM Champion 2024
Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 121 TM1 articles and a lot of custom code
Newest blog article: Deleting elements quickly
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
Wim Gielis wrote: ↑Tue Nov 12, 2019 5:10 pm
Did you check your code for errors/inconsistencies?
- SubsetGetSize expects both a dimension and a subset; you supplied only one argument.
- You are looping from 1 to the number of elements of a subset, yet you pass this subset index as a dimension index.
- On top of that, you check whether the returned element (if any) is level 0.
Still, I agree that it will probably be slow after the changes.
You need to elaborate on what you are doing, why, what are the business requirements and so on.
Sorry, the correct code is this:
vContador = 1;
vMaxContador = SubsetGetSize('02.Codigo_Funcional', 'N0');
WHILE (vContador <= vMaxContador);
vContador = vContador + 1;
vElemento = DIMNM('02.Codigo_Funcional', vContador);
IF (ELLEV('02.Codigo_Funcional', DIMNM('02.Codigo_Funcional', vContador)) = 0);
CellPutN(vVALOR, 'CMU.PARQUE_FUNCIONAL',
vVERSAO,
vCENARIO,
vMESES,
vANO,
vDATA_REFERENCIA,
vCOD_MARCA,
vDATA_FIM_GARANTIA,
vCOD_MERCADO,
vGRUPO_ECONOMICO,
vCOD_VERSAO,
vCOD_MODELO,
vCOD_FAMILIA,
vCOD_MVS,
vElemento,
'QTD_VEI');
ENDIF;
END;
As I am using a machine translator, some of the code was mangled in translation.
- luizg2019
- Posts: 41
- Joined: Thu Sep 12, 2019 11:02 pm
- OLAP Product: TM1 - PAX-PAW-Perspectiv-Arc
- Version: PA 2.0.9 - PAW 2.0.73
- Excel Version: office 2016
Re: Optimize While
aleferrer wrote: ↑Tue Nov 12, 2019 5:06 pm
I created the subset, and the elements get the same values.
The process is still slow.
Anyway thank you very much for the suggestions.
Do you have a lot of free memory (RAM)? If not, then any process will be slower than usual; in that case you should reboot your machine and then try running the process again.
I did something similar to what you're doing last month, but we used the 'Cube -> Export as text data' option and then created a process that reads this file as a data source, with the customized code (CellPutN) in the Data tab. It didn't take too much time to complete, but I don't know if it'll be any faster for you. Just giving you more options.
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
luizg2019 wrote: ↑Tue Nov 12, 2019 5:27 pm
Do you have a lot of free memory (RAM)? If not, then any process will be slower than usual; in that case you should reboot your machine and then try running the process again.
I did something similar to what you're doing last month, using the 'Cube -> Export as text data' option and a process that reads the file as a data source. Just giving you more options.
Yes, we have plenty of free memory. Thanks for the suggestion.
- lotsaram
- MVP
- Posts: 3667
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Optimize While
First.
You are multiplying the volume of data by a huge amount. Why are you taking the same piece of data and writing it to 6800 locations? (give or take, depending on how many N vs. C elements). First you need to answer this question. Do you need to do this? Could this be a reference rule instead? It doesn't usually make sense to multiply data like this.
Second.
Let's deal with some other basics. You haven't said ...
1. how many cells are you processing in the data source?
2. is the data source zero-suppressed?
If you are also processing null values then you could be processing a mind-bendingly large amount of cells. Perhaps 7K seconds isn't so bad? But to know for sure if 2 hours really is bad performance or not, we need to know how many cells are being processed.
Third.
Common causes for slow performance.
1. Overfeeding on data entry. Do the cells that are being written to in the destination cube cause feeding? If yes, of what and for what purpose?
2. Poorly performing rules on reads. Are the source values rule calculated? If yes, what is the performance of the rules?
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
lotsaram wrote: ↑Tue Nov 12, 2019 6:21 pm
First.
You are multiplying the volume of data by a huge amount. Why are you taking the same piece of data and writing it to 6800 locations? (give or take, depending on how many N vs. C elements). First you need to answer this question. Do you need to do this? Could this be a reference rule instead? It doesn't usually make sense to multiply data like this.
Second.
Let's deal with some other basics. You haven't said ...
1. how many cells are you processing in the data source?
2. is the data source zero-suppressed?
If you are also processing null values then you could be processing a mind-bendingly large amount of cells. Perhaps 7K seconds isn't so bad? But to know for sure if 2 hours really is bad performance or not, we need to know how many cells are being processed.
Third.
Common causes for slow performance.
1. Overfeeding on data entry. Do the cells that are being written to in the destination cube cause feeding? If yes, of what and for what purpose?
2. Poorly performing rules on reads. Are the source values rule calculated? If yes, what is the performance of the rules?
Lotsaram, as I do not speak English well, I will try to answer your questions, thanks in advance for your help.
First
I have two cubes. The first cube has 14 dimensions and I load it through a .txt file. The cube has no rules.
The second cube has 15 dimensions and its origin is my cube 01 which has 14 dimensions.
My problem dimension is 02.Codigo_Funcional.
I have a calculation in another cube that needs the park information (number of vehicles) at functional-code granularity. Since my cube 01 (the source) does not hold the information at this granularity, I need to allocate the park by functional code before performing the calculation. I agree with you that this makes little sense, but it is a business rule; I pushed back a lot before I was convinced of it.
I tried to do it with a rule, but the feeder got too big due to the size of the 02.Codigo_Funcional dimension.
Second
1) My source has 113770 records.
2) Yes, I'm ignoring zero and consolidated cells, as the attached screenshots show.
Third
1) No. Both cubes have no rules either between them or to another cube. I am doing all cube interactions via process.
2) Cubes have no rules between them or for other cubes.
Attached are some screenshots of the cube structure and the rule editors.
- Attachments
- Estructure Cubes.rar (87.34 KiB)
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
aleferrer wrote: ↑Tue Nov 12, 2019 7:18 pm [previous post quoted in full]
For the sake of illustration, as per the attachment: what I basically need is to have the park information in the same structure as VLR Total SG Liquidada.
The VLR Total SG Liquidada information I have in the functional-code structure; the park I do not.
I need to load the park the same way VLR Total SG Liquidada was loaded, that is, against the same functional codes.
VLR Total SG Liquidada comes from another source and is already in the functional-code structure.
- Attachments
-
- Cube final.png (43.64 KiB) Viewed 7210 times
- tomok
- MVP
- Posts: 2832
- Joined: Tue Feb 16, 2010 2:39 pm
- OLAP Product: TM1, Palo
- Version: Beginning of time thru 10.2
- Excel Version: 2003-2007-2010-2013
- Location: Atlanta, GA
- Contact:
Re: Optimize While
What type of allocation are you doing to come up with the Parque value to post? Is it a proportional spread based on some other amount in the cube? There is a TI function called CellPutProportionalSpread that can do this for you, and it is going to be light years faster than looping through a dimension to allocate.
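A hedged sketch of what that call might look like in this thread's cube, assuming a top consolidation exists in 02.Codigo_Funcional (the name 'Total_Codigo_Funcional' is hypothetical) and that driver values already exist in the leaves, since the spread distributes in proportion to existing leaf values:

# One call replaces the 6,800-iteration loop: write to the consolidated
# element and let TM1 spread the value down to the leaves proportionally.
CellPutProportionalSpread(vVALOR, 'CMU.PARQUE_FUNCIONAL',
  vVERSAO, vCENARIO, vMESES, vANO, vDATA_REFERENCIA, vCOD_MARCA,
  vDATA_FIM_GARANTIA, vCOD_MERCADO, vGRUPO_ECONOMICO, vCOD_VERSAO,
  vCOD_MODELO, vCOD_FAMILIA, vCOD_MVS, 'Total_Codigo_Funcional', 'QTD_VEI');

Note the caveat: leaves currently holding zero receive nothing, so this only fits when a proportional spread is actually the desired allocation.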
- gtonkin
- MVP
- Posts: 1211
- Joined: Thu May 06, 2010 3:03 pm
- OLAP Product: TM1
- Version: Latest and greatest
- Excel Version: Office 365 64-bit
- Location: JHB, South Africa
- Contact:
Re: Optimize While
Worth pointing out that on very large models (we were populating billions of cells: daily sales across 50k stores, 200k SKUs, etc.), we found it exponentially faster to calculate the ratio and save it to the cube on one layer, then multiply out the result to give the proportion and save that to another, i.e. mimic what the proportional spread does. The model effectively needed to break back annual forecasts to days, stores and SKUs based on sales history.
We cut the run time from somewhere over 8 hours to under 1 using the "long-hand" approach.
YMMV.
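A sketch of that long-hand approach on a deliberately simplified, hypothetical cube 'Sales' (dimensions 'Store' and a measures dimension; all element, subset and measure names here are illustrative, not from the thread):

# Pass 1: store each leaf's share of consolidated history as a ratio.
nTotal = CellGetN('Sales', 'All Stores', 'History');
nStores = SubsetGetSize('Store', 'N0');
vIdx = 1;
WHILE (vIdx <= nStores);
  vStore = SubsetGetElementName('Store', 'N0', vIdx);
  IF (nTotal <> 0);
    CellPutN(CellGetN('Sales', vStore, 'History') / nTotal,
             'Sales', vStore, 'Ratio');
  ENDIF;
  vIdx = vIdx + 1;
END;

# Pass 2: multiply the stored ratio out against the annual forecast.
nForecast = CellGetN('Sales', 'All Stores', 'Forecast Input');
vIdx = 1;
WHILE (vIdx <= nStores);
  vStore = SubsetGetElementName('Store', 'N0', vIdx);
  CellPutN(CellGetN('Sales', vStore, 'Ratio') * nForecast,
           'Sales', vStore, 'Forecast');
  vIdx = vIdx + 1;
END;

Persisting the ratio once and reusing it avoids recomputing the proportional spread for every target cell, which is where gtonkin's reported speed-up came from.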
-
- Community Contributor
- Posts: 292
- Joined: Tue Sep 06, 2016 7:55 am
- OLAP Product: TM1
- Version: 10.1.1
- Excel Version: Excel 2010
Re: Optimize While
I don't think gtonkin's long-hand method will work here, given that (I think) this is spreading the same value to multiple cells. If my understanding is correct, the problem looks as follows:
Source Cube:
2019 AreaA SectionD 100
2019 AreaA SectionE 200
Destination Cube:
2019 AreaA SectionD SubSectA 100
2019 AreaA SectionD SubSectB 100
2019 AreaA SectionD SubSectC 100
2019 AreaA SectionE SubSectA 200
2019 AreaA SectionE SubSectB 200
2019 AreaA SectionE SubSectC 200
In which case the question becomes: have you limited the data to the fewest possible CellPutNs? Maybe treble-check this.
And if so, how about Tomok's suggestion of CellPutProportionalSpread, in the above example spreading against a consolidation in the sub-section dimension?
Otherwise, it looks like it's 2 hours, give or take 10 minutes!
regards, Mark
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
Thank you all for the suggestion.
Since I don't speak English, I may not be able to explain my problem.
I have two cubes in different granularities:
Cube A
Dimension A
Dimension B
Cube B
Dimension A
And I have a third cube
Cube C
With dimensions:
Dimension A
Dimension B
This third cube has two measures:
VLR Total SG Liquidada
Park
The measure VLR Total SG Liquidada originates from cube A.
The measure Park originates from cube B.
What I need is to allocate the Park values from cube B to cube C.
This dimension B is 02.Codigo_Funcional, which has 6,863 items.
The following images illustrate cubes A, B, and C.
The two cubes have dimension A in common.
My target cube needs park information to do a simple calculation.
Ideally I would allocate the park only where there is VLR Total SG Liquidada information, but that would be the perfect world.
Doing the proportional calculation is not business-friendly. There is a dimension in the model, common to both cubes, called 02.Familia_Budget, which works somewhat like a cost center (not literally a cost center; I am just making an analogy to convey the granularity of the business). Each item of this dimension has a number of vehicles that must be calculated according to the functional codes in dimension 02.Codigo_Funcional. A functional code is a code linked to problems found in vehicles: as problems arise, a functional code is created. So when allocating the park, it has to respect the budget family, because there will be cases where vehicles in the park are new (recent releases) and no warranty-related problems have yet arisen for them.
Once again I apologize for the English and thank you for the answers.
- Attachments
-
- Target.png (43.64 KiB) Viewed 7148 times
-
- Source 2.png (28.99 KiB) Viewed 7148 times
-
- Source 1.png (32.27 KiB) Viewed 7148 times
- Steve Rowe
- Site Admin
- Posts: 2423
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Optimize While
Maybe generalising the problem might help?
I think you should probably be doing this with rules. Though you state you have already tried this and had an issue with the feeders, maybe your feeder was wrong?
Let's say your simple calculation is
Revenue = Volume * Price
Price is a "rate" and is like your park value in that it is typically stored in a cube with lower dimensionality than the volume.
So the rule for Revenue is
['Revenue'] = N:['Volume'] * DB('RateCube' , !D1 , etc, 'Price');
The feeder is
['Volume']=>['Revenue'];
At no point do you need to write a feeder from Price to Revenue.
If you want to show Price in your main cube then you back calculate it rather than writing a cross cube rule
This gives.
['Revenue'] = N:['Volume'] * DB('RateCube' , !D1 , etc, 'Price');
['Price'] = ['Revenue'] \ ['Volume'] ;
The feeder is
['Volume']=>['Revenue'], ['Price'];
aleferrer wrote: My target cube needs park information to do a simple calculation.
As Lotsaram implied, this does not mean that the main cube has to hold the number; it only has to be able to reference it.
Technical Director
www.infocat.co.uk
- aleferrer
- Posts: 32
- Joined: Wed Nov 14, 2012 5:37 pm
- OLAP Product: TM1
- Version: 10.1
- Excel Version: 2007 Standard
Re: Optimize While
Steve Rowe wrote: ↑Wed Nov 13, 2019 11:52 am [previous post quoted in full]
Steve Rowe
Thanks for the answer.
Doing it with rules does not work, because I have a prior step where, in a parameter cube, I select a date range, company and expense channel. Based on these parameters, my process goes to my source cubes and fetches the information. Once simulated, these values cannot change, so a rule would not work: the selection in the parameter cube is via picklist, and several simulations are made with different dates, companies and expense channels. With a rule I would lose the dynamic nature of the simulations, because the rule would always reflect whatever is currently set in the parameter cube.
Once again I appreciate the suggestions and apologize for not exposing the whole scenario of my problem.
- Steve Rowe
- Site Admin
- Posts: 2423
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Optimize While
Some testing you could do
1. The source with the 113770 records: how long does it take to export to a text file, and does the text file contain just the records you think it should?
2. Activate different bits of the code to see where the performance hit is.
If it is the while loop and there is no rule-based or MDX alternative, then you might be stuck. It does feel like something is wrong, but I'm not sure we have the whole picture.
One thought is it possible to use MDX to further limit the Level 0 subset of 02.Codigo_Funcional to exactly the intersections that are required?
FYI the ELLEV condition is no longer required since you have limited the loop to a Level 0 subset (I think); removing it should improve the performance. (If what you posted is really what you have written.) Note also that with the counter increase where it is, you are skipping the first element in the subset.
vContador = 1;
vMaxContador = SubsetGetSize('02.Codigo_Funcional', 'N0');
WHILE (vContador <= vMaxContador);
vContador = vContador + 1;
vElemento = DIMNM('02.Codigo_Funcional', vContador);
IF (ELLEV('02.Codigo_Funcional', DIMNM('02.Codigo_Funcional', vContador)) = 0);
CellPutN(vVALOR, 'CMU.PARQUE_FUNCIONAL',
vVERSAO,
vCENARIO,
vMESES,
vANO,
vDATA_REFERENCIA,
vCOD_MARCA,
vDATA_FIM_GARANTIA,
vCOD_MERCADO,
vGRUPO_ECONOMICO,
vCOD_VERSAO,
vCOD_MODELO,
vCOD_FAMILIA,
vCOD_MVS,
vElemento,
'QTD_VEI');
ENDIF;
END;
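To illustrate the off-by-one: with the increment at the top, the loop reads indices 2..N+1 instead of 1..N, skipping the first element and reading past the end. Moving the increment to the end of the loop, and reading by subset index, fixes both (sketch; CellPutN body elided):

vContador = 1;
WHILE (vContador <= vMaxContador);
  # Index 1..N of the level-0 subset; first element is no longer skipped.
  vElemento = SubsetGetElementName('02.Codigo_Funcional', 'N0', vContador);
  # ... CellPutN as above ...
  vContador = vContador + 1;
END;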
Technical Director
www.infocat.co.uk
- Wim Gielis
- MVP
- Posts: 3128
- Joined: Mon Dec 29, 2008 6:26 pm
- OLAP Product: TM1, Jedox
- Version: PAL 2.0.9.18
- Excel Version: Microsoft 365
- Location: Brussels, Belgium
- Contact:
Re: Optimize While
Those who post here are repeating themselves, since they cannot be sure about what you changed and what you did not.
Nevertheless, I’m pretty sure you can only marginally reduce the processing time with the suggestions made here, including my own suggestions. The real gain can come from processing data in a parallel fashion rather than sequentially. Plenty of examples here on the forum in other threads.
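For illustration, on TM1 11 / Planning Analytics the parallel fan-out can use RunProcess, which launches each child on its own thread (on 10.1 you would shell out to tm1runti via ExecuteCommand instead). The child process name 'load.parque.month' and its pMes parameter are hypothetical:

# Sketch: launch one child process per month. The children commit
# independently, so their write regions must not overlap.
vIdx = 1;
WHILE (vIdx <= 12);
  RunProcess('load.parque.month', 'pMes', NumberToString(vIdx));
  vIdx = vIdx + 1;
END;

Partitioning by month (or by any other dimension that cleanly splits the writes) lets the load use several cores instead of one.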
Best regards,
Wim Gielis
IBM Champion 2024
Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 121 TM1 articles and a lot of custom code
Newest blog article: Deleting elements quickly