Fitting of Data Based on Reference Curve

image2x
Posts: 125
Joined: Tue Jun 02, 2009 7:05 pm
OLAP Product: TM1, PAX, PAW, SPSS
Version: 2.0.916.10 on RHEL
Excel Version: 2016
Location: Minneapolis, MN

Fitting of Data Based on Reference Curve

Post by image2x »

Here's a challenge I've been struggling with for a while now...

In a nutshell, it is a fitting exercise involving two constraints and a reference curve. The reference curve is the prior year's actuals of product sales by location. The constraints relate to next year's budget, which is dictated top-down for:

1) total company sales by product
2) total store sales by location

The challenge is to best fit the two constraints above using last year's actuals as the starting point. The "best" answer is the one that minimizes the deviation from last year's curve in all directions (stores and products).
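(Stated formally, I think it amounts to something like the following, where a_ij is last year's sales of product j at location i, S_i the target store totals and P_j the target product totals. The quadratic penalty is just one plausible reading of "minimizes the deviation"; the example doesn't dictate it.)

$$\min_{x}\ \sum_{i,j}\frac{(x_{ij}-a_{ij})^2}{a_{ij}}
\quad\text{subject to}\quad
\sum_{j} x_{ij}=S_i\ \ \forall i,
\qquad
\sum_{i} x_{ij}=P_j\ \ \forall j.$$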

I can imagine an iterative hold/solve approach in TM1, but I'm concerned that this approach will by its nature "squeeze" the final hold/solve elements to a much greater degree than the beginning elements and thus not reflect the best overall solution.

The attached spreadsheet details the problem and has a small example to solve.

If this isn't possible within TM1, I assume I'll need to look for a statistical tool to do the work, and I'd value thoughts on how one might generally configure such a tool to tackle this problem. A rough sketch of what I have in mind is below.
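(For concreteness, here is a minimal sketch of posing it to a general-purpose optimizer, using scipy and the quadratic objective above. The toy numbers are illustrative, not the ones in the attached example.)

Code:

import numpy as np
from scipy.optimize import minimize

# Toy stand-in for last year's actuals (stores x products); the real
# figures would come from the attached spreadsheet.
actual = np.array([[10.0, 20.0],
                   [30.0, 40.0]])
store_tgt = np.array([33.0, 77.0])   # constraint 2: totals by location
prod_tgt = np.array([44.0, 66.0])    # constraint 1: totals by product

def deviation(v):
    # Quadratic penalty for departing from last year's curve
    x = v.reshape(actual.shape)
    return np.sum((x - actual) ** 2 / actual)

constraints = [
    {"type": "eq",
     "fun": lambda v: v.reshape(actual.shape).sum(axis=1) - store_tgt},
    {"type": "eq",
     "fun": lambda v: v.reshape(actual.shape).sum(axis=0) - prod_tgt},
]

res = minimize(deviation, actual.ravel(),
               constraints=constraints, method="SLSQP")
print(res.x.reshape(actual.shape))

(Note the two sets of targets must agree on the grand total, otherwise the equality constraints are infeasible.)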

Thanks,
-- John
Attachments
fit-example.xls
(23 KiB) Downloaded 229 times
John Hobson
Site Admin
Posts: 330
Joined: Sun May 11, 2008 4:58 pm
OLAP Product: Any
Version: 1.0
Excel Version: 2020
Location: Lytham UK

Re: Fitting of Data Based on Reference Curve

Post by John Hobson »

This type of problem is quite common in merchandise planning, John.

Typically we plan products at a high level of store, and stores at a high level of product, as per your example.

If we need to fill in the blanks, then we need to do some sort of iterative "multi-balancing".

I am not sure how you judge "best fit" here (or indeed why you'd get over-concerned by it).

Here's what I would do (not mathematically scientific, no regression calculations, but fairly practical, and it could be automated via a TI, I suppose):

Looking at your numbers, I first of all allocated the product total values to each store pro rata to last year.

I then spread the desired store values down, based on the values created in step 1.

I then spread the product total values again, based on the values from step 2.

By this stage I have less than 0.1 difference on any of your example values for the store/product totals.
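(This repeated rescaling is what statisticians call iterative proportional fitting, or RAS scaling. A minimal Python sketch of the loop, assuming the actuals sit in a stores-by-products array; all names here are illustrative:)

Code:

import numpy as np

def multi_balance(actuals, store_totals, product_totals,
                  tol=1e-6, max_iter=100):
    """Rescale last year's curve until both margins match.

    actuals        : (stores x products) array of prior-year sales
    store_totals   : target sales per store (length = stores)
    product_totals : target sales per product (length = products)
    """
    x = actuals.astype(float)
    for _ in range(max_iter):
        # Scale each product column to hit its company-wide target...
        x *= product_totals / x.sum(axis=0)
        # ...then each store row to hit its location target.
        x *= (store_totals / x.sum(axis=1))[:, None]
        # Done when the column sums have settled back within tolerance.
        if np.abs(x.sum(axis=0) - product_totals).max() < tol:
            break
    return x

(Again, this only converges exactly if the store targets and product targets agree on the grand total.)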

This, by the way, is what JDA call "multi-balancing" and is the way they approach this problem in their Arthur Planning suite (or, to be strictly accurate, is how they did it back in 2003 when they explained the process to me at a client's site).

I have this vision of that fairground game where you hit animals on the head and then another appears. The animals are the discrepancies.
Do this multi-balancing thing often enough and all the animals finally stay in their holes, and you have numbers that add up on both axes.

The quality of fit of each line to its comparative actual varies, but that's pretty unavoidable given the variations you have imposed (e.g. Location 6 being 23% higher overall but Location 7 being 9% lower). There is probably a mathematical way to weight the variation allowed, but why bother yourself?

If it were my budget I wouldn't lose sleep over it: if you are generally closer than 5% at store/month/product category level when you start to measure actual against plan, you deserve some sort of bonus!

I'd spend my time worrying about the quality of those higher-level numbers instead of getting over-concerned about theoretical alternative best fits (or am I missing something?) :D
John Hobson
The Planning Factory