Eric wrote: Once again, in my annoyance this weekend, I had another question. Currently I have two TM1 servers, DEV and Production. Obviously all of my live cubes are in the Production server. When TM1 loads a server, none of the cubes are available until the entire server is ready.
I am thinking about breaking my Production server into three servers: Finance, Marketing, and Supply Chain. This would be beneficial because if one server freezes, everyone else would be fine, and if I need to restart all of them, some would come up more quickly depending on the cubes within each server, even though the total time to reload should be the same.
No, the total load time (startup to availability) will probably be quicker. Obviously there are physical limits on how fast the server can read from its hard disk, but one instance can still be loading a Supply Chain cube from disk while another is (say) punching in the feeders of one of the Marketing cubes. With only a single production server, the Supply Chain cube may just have to wait in the queue until the Marketing feeding is done.
The price you pay for this is that your users will have to log in more times. Unless of course you're using LDAP.
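For what it's worth, each of the three would just be a separate TM1 server instance (its own Windows service) with its own data directory. In each instance's data directory the tm1s.cfg would look something like this (Finance shown; server name, path and port number are placeholders, not a recommendation, and Marketing and Supply Chain would be the same apart from ServerName, DataBaseDirectory and PortNumber):

[TM1S]
ServerName=Finance
DataBaseDirectory=D:\TM1Data\Finance
PortNumber=12346
AdminHost=localhost

Each instance needs a distinct ServerName and PortNumber so that all three can register with the same Admin Server without treading on each other.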
Eric wrote:
Also, for this to work I would need to be able to write rules and feeders referencing other cubes. This is allowed, isn't it?
DB(Server:Cube,elem1,elem2, etc)
Ah. No, it's not. Try it between your production and dev environments and see. Between cubes? Yes. Between servers? Not so much. Each server session is a self-contained "box" within the physical server. It don't know nuttin' about other sessions, and it don't want them to know nuttin' about it. Think about what a great gaping security hole there would be if you were somehow able to define a rule on a session you created which pulled any data you wanted from another server. Yes, I know that in the real world if you're an admin of one you'll be an admin of all, but there's still the POTENTIAL for a hole. That's why, even with replication, you need to furnish a set of logon credentials.
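For the record, the valid between-cube form looks roughly like this (cube and dimension names are made up for illustration, and both cubes are assumed to share the Region and Month dimensions). In the FinanceSummary cube's rules you'd have:

['Revenue'] = N: DB('SalesDetail', !Region, !Month, 'Revenue');

and the matching feeder goes in the SalesDetail cube's rule file, pointing back at the consumer:

FEEDERS;
['Revenue'] => DB('FinanceSummary', !Region, !Month, 'Revenue');

What you can't do is qualify the cube name with a server; there's no DB('Production:SalesDetail', ...) form. The first argument is always a cube on the same server as the rule.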
Replication is the only way I can see of getting around this problem, but it may pose problems of its own if the cubes are very large; you'd be chewing up twice the memory. We replicate a couple of cubes between servers, but they aren't big and don't get updated often.