
GPU - Palo at the forefront

Posted: Mon May 18, 2009 2:55 pm
by John Hobson
http://translate.google.com/translate?j ... ry_state0=

This is a link to a translation of a French article on using graphics cards to power multi-dimensional software. Jedox (Palo) is leading the pack here.

Re: GPU - Palo at the forefront

Posted: Wed May 20, 2009 4:05 pm
by David Usherwood

Re: GPU - Palo at the forefront

Posted: Thu May 21, 2009 9:38 am
by mattgoff
Kristian Raue mentioned this in his talk at the Palo seminar in London yesterday. Very interesting stuff. Makes sense as rules (and anything else non-procedural) should be very parallelizable.
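To illustrate why non-procedural rules parallelize so well (a hypothetical sketch, not Palo's or TM1's actual engine; the names `margin_rule` and `cells` are invented for illustration): each target cell is computed independently from its inputs, so the whole calculation is just a parallel map over cells with no shared state to lock.

```python
# Hypothetical sketch: a non-procedural rule computes each cell
# independently, so the calculation maps cleanly onto parallel workers.
from concurrent.futures import ThreadPoolExecutor

def margin_rule(cell):
    # e.g. Margin% = (Sales - Cost) / Sales, computed per cell
    sales, cost = cell
    return (sales - cost) / sales if sales else 0.0

# Each tuple is (Sales, Cost) for one cell of the cube slice.
cells = [(100.0, 60.0), (250.0, 200.0), (80.0, 20.0), (0.0, 5.0)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves order, so results line up with the input cells.
    margins = list(pool.map(margin_rule, cells))

print(margins)
```

The same shape of computation is what makes a GPU attractive: thousands of cells, one small pure function per cell, no dependencies between them.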

The problem I see for us moving off TM1 to Palo is that we have a pretty big investment in TM1. Not so much in the backend/model but in user training and reports. Even if we could save in maintenance fees, we'd have to retrain hundreds of users and rewrite hundreds of reports. It's a big hurdle.

An intriguing idea that came up as a question from the audience was to sync a Palo server with TM1, allowing the Palo Web server to be used as a reporting server instead of TM1 Web.

Matt

Re: GPU - Palo at the forefront

Posted: Thu May 21, 2009 10:24 am
by Alan Kirk
This is an interesting development but I can't see it being one that TM1 will follow. It's great for people who are doing stand-alone desktop analysis à la a local server running Perspectives. (That's IF the user has control over their hardware configuration, OR can convince the IT department that they need a bitchin' NVidia graphics card for something other than the rumoured PC release of Halo 3, and their SOE desktops have enough in their power supply to power the things.)

However in the collaborative OLAP market, where everyone is pulling data from huge honking great servers in the company data centre, there are a couple of problems. (a) Most HP / Dell / whoever servers have only the most rudimentary graphics card (because in most cases, that's all a server needs); and (b) while they, unlike their gutless corporate SOE desktop cousins, might have the power supply needed to run an nVidia, they generally won't be able to fit the card in (particularly with slim-line rack-style servers) and even if they did there would be issues with cooling the things. GPUs are generally worse heat demons than any other single component inside the box, and often need a fan the size of the propeller from a C-47 to prevent them from sending the data centre into its very own China syndrome.

So great for stand alone... not so sure about the server market.

Re: GPU - Palo at the forefront

Posted: Thu May 21, 2009 10:27 am
by Paul Segal
Yeah, that was me. It's going to be interesting to see the V3.0 Palo Web Server when it's available. A lot of it looks to be based on OnVision as far as I can see, which isn't too much of a surprise given the whole TM1 --> Alea --> Palo inheritance. It's always a bad idea to get too effusive based on vendor presentations, but it's certainly something I'd like to explore further.

Paul

Re: GPU - Palo at the forefront

Posted: Thu May 21, 2009 11:04 am
by Paul Segal
Alan Kirk wrote: However in the collaborative OLAP market, where everyone is pulling data from huge honking great servers in the company data centre, there are a couple of problems. (a) Most HP / Dell / whoever servers have only the most rudimentary graphics card (because in most cases, that's all a server needs); and (b) while they, unlike their gutless corporate SOE desktop cousins, might have the power supply needed to run an nVidia, they generally won't be able to fit the card in (particularly with slim-line rack-style servers) and even if they did there would be issues with cooling the things. GPUs are generally worse heat demons than any other single component inside the box, and often need a fan the size of the propeller from a C-47 to prevent them from sending the data centre into its very own China syndrome.

So great for stand alone... not so sure about the server market.
But the OLAP market isn't the only market that's interested. So, adding something like the NVidia Tesla S870 (http://www.hardwaresecrets.com/article/495) to an existing server might very well work. Not sure how it gets round the heat problem, mind.

Re: GPU - Palo at the forefront

Posted: Thu May 21, 2009 3:29 pm
by David Usherwood
mattgoff wrote:Kristian Raue mentioned this in his talk at the Palo seminar in London yesterday. Very interesting stuff. Makes sense as rules (and anything else non-procedural) should be very parallelizable.

<snip>
Matt
So....
Why then does TM1 _calculation_ not multi-thread?

Re: GPU - Palo at the forefront

Posted: Wed May 27, 2009 4:44 pm
by mattgoff
Alan Kirk wrote:This is an interesting development but I can't see it being one that TM1 will follow. It's great for people who are doing stand-alone desktop analysis a-la a local server running Perspectives. (That's IF the user has control over their hardware configuration, OR can convince the IT department that they need a bitchin' NVidia graphics card for something other than the rumoured PC release of HALO 3,and their SOE desktops have enough in their power supply to power the things.)

However in the collaborative OLAP market, where everyone is pulling data from huge honking great servers in the company data centre, there are a couple of problems. (a) Most HP / Dell / whoever servers have only the most rudimentary graphics card (because in most cases, that's all a server needs); and (b) while they, unlike their gutless corporate SOE desktop cousins, might have the power supply needed to run an nVidia, they generally won't be able to fit the card in (particularly with slim-line rack-style servers) and even if they did there would be issues with cooling the things. GPUs are generally worse heat demons than any other single component inside the box, and often need a fan the size of the propeller from a C-47 to prevent them from sending the data centre into its very own China syndrome.

So great for stand alone... not so sure about the server market.
Sounds like your organization has a bit of the tail wagging the dog. I've worked at those before; some IT departments forget that their job is to ENABLE everyone else. If they can standardize to make things faster and/or cheaper, great, but it shouldn't come at the expense of their users.

In our case, our standard servers in the data center (T-6 days till the move, so I'm officially refusing to spell it "centre" any more, this UK spellchecker be damned) are HP DL385s, so we have plenty of space for add-in cards. If we needed it, bigger chassis are an option from IT. Still, even 1U boxes typically have space for a PCIe card or two to handle things like dedicated SSL hardware, Fibre Channel, or RAID. Blades are another story....
David Usherwood wrote:So....
Why then does TM1 _calculation_ not multi thread?
Preaching to the choir, my friend. All we've managed to get so far is multi-threading at server launch (except conditional feeders) and spreading multiple users across the cores (assuming there are no writes going on). IBM/Cognos really need to get on the ball with true multi-threading if they want to move the product forward and not be overtaken by the likes of Jedox.
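For what it's worth, the basic shape of a multi-threaded consolidation is straightforward (a hypothetical sketch only; `consolidate` and its chunking scheme are invented here and bear no relation to TM1's internals): split the leaf cells of a consolidated element across workers, sum each chunk, then combine the partial totals. The hard part in a real server is doing this safely alongside concurrent writes and feeders, not the arithmetic itself.

```python
# Hypothetical sketch of a multi-threaded consolidation: partition the
# leaf values across workers, sum each partition, combine the partials.
# Not TM1's design -- just why consolidation is a natural fit for threads.
from concurrent.futures import ThreadPoolExecutor

def consolidate(leaves, workers=4):
    # Carve the leaf values into one contiguous chunk per worker.
    chunk = max(1, len(leaves) // workers)
    parts = [leaves[i:i + chunk] for i in range(0, len(leaves), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker sums its own chunk; no shared mutable state.
        partials = pool.map(sum, parts)
    return sum(partials)

print(consolidate(list(range(1, 101))))
```

Since addition is associative, the chunk boundaries don't affect the result, which is exactly the property that makes the work divisible in the first place.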

Matt