"Pulse" for TM1
-
- Posts: 19
- Joined: Wed Sep 28, 2016 1:43 pm
- OLAP Product: Cognos TM1
- Version: 10.1.1
- Excel Version: 2007
"Pulse" for TM1
Does anyone use this product for documentation/diagnostics/deployments etc? We are looking into purchasing it but would appreciate any reviews/experience if anyone uses it and would like to share their experience.
-
- Site Admin
- Posts: 6643
- Joined: Sun May 11, 2008 2:30 am
- OLAP Product: TM1
- Version: PA2.0.9.18 Classic NO PAW!
- Excel Version: 2013 and Office 365
- Location: Sydney, Australia
- Contact:
Re: "Pulse" for TM1
missspeedy23 wrote:Does anyone use this product for documentation/diagnostics/deployments etc? We are looking into purchasing it but would appreciate any reviews/experience if anyone uses it and would like to share their experience.
TL;DR version: if the cost is not a problem, I think you'll find it to be a useful tool. Some features are more useful than others, but on balance it has quite a few merits. I'll try to pad this out with specifics later.
The one caution I'd have is if your servers use IPv6 addresses. One of my servers was moved within the data centre a couple of months back and went over to an IPv6 address in the process. Since then I have not been able to get either the Pulse web client or the Windows application client to talk to the thing. I know that the server component is still running because it will still send out an e-mail if the delay threshold is exceeded, but I just can't get either client to talk to it (including the client on the server box itself). I don't rely on Pulse all that heavily so I haven't done an "all hands on deck, we need this fixed" thing, especially as all but one of my servers are still on IPv4 boxes. The vendor has told me that it's supposed to work with IPv6 boxes... but it ain't. At some point I'll get around to trying to find out what's going on.
What I do recommend, however, is that you approach the vendor and see whether they will give you a 30-day trial of it. I don't know whether they still do that, but I know it has happened in the past.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
-
- Posts: 38
- Joined: Thu Oct 11, 2012 6:15 am
- OLAP Product: TM1
- Version: 10.2.2.4
- Excel Version: 2010
- Location: Melbourne, Australia
Re: "Pulse" for TM1
Yes, we use Pulse heavily in our day-to-day work.
Specifically for:
1. Monitoring - Alerts, TM1Top actions and Instance restarts
2. User/Thread management - Basically replacing TM1Top
3. Development - Moving code from Dev to Test to Prod (Source control, with some limitations, but still very useful)
4. Various ad-hoc reporting - Process/Chore histories, User sessions
Documentation or Technical validation - Not really
-
- Site Admin
- Posts: 6643
- Joined: Sun May 11, 2008 2:30 am
- OLAP Product: TM1
- Version: PA2.0.9.18 Classic NO PAW!
- Excel Version: 2013 and Office 365
- Location: Sydney, Australia
- Contact:
Re: "Pulse" for TM1
In a nutshell, the thing works by installing two services on the TM1 server box. These hook into the server in a similar way to TM1Top.
To configure these services, you use local clients. These are:
(a) A thick Windows application client; and/or
(b) A Web client which, by default, communicates through port 8099 of the TM1 server box.
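To give you an idea (the host name here is made up, and this assumes the default port and that HTTPS hasn't been configured for it), the web client is just a URL that you can bookmark or set as a home page:

http://your-tm1-server:8099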
Thick Client
I rarely use the former; I find it clunky and annoying given the number of times you need to log into it to do anything, though it does have a good display. It provides you with a classic Windows Task Manager-style display of CPU, memory and client usage by server. And I mean classic, not the new, rubbish "see one thing at a time but gee it looks kewl" style in Windows 10. This layout gives you a pretty good view of what is happening server by server, though you need to expand the window to see more than one line of the session monitor (the equivalent of TM1Top). That may be a good enough reason to use it, but often I can't be bothered launching it when the Web one can be set as one of my home pages (and is).
Web Client
On the web side the dashboard gives you a high-level overview of the physical or virtual server that Pulse is running on: memory usage, total users, server and O/S versions, CPU cores and so on. The Active Users list is perhaps a tad misleading; it doesn't mean users who are logged on but rather users who are actively doing something. If you get a hang it is useful to be able to see at a glance who is actively doing something without needing to check server by server.
Below that is a message log, essentially a combined list of the message logs for all servers, but only for the last 20 entries, and it's not searchable as far as I can see. It's good for a "right now" view, but the more servers you have and the more processes you have running on them, the less useful it is. The Instance State gives you an "at a glance" view of which servers are up or down and how many users are on each. Donut charts show you memory and disk space usage. They're useful enough but I rarely pay much heed to them; I have e-mail alerts set up for offline servers and disk space issues. I'll come back to alerts shortly.
Under that is a Live Monitor for each server, showing CPU usage, memory usage, clients and wait time, as well as a TM1Top-style list of threads. The big upside of that list? If you are logged into Pulse (which you have to do annoyingly often) there are buttons that allow you to either disconnect a client or stop a process. There is also supposed to be a panel showing the current records in the server's message log, but this seems to be a bit glitchy; as I was typing this I noticed that mine had stopped working for all of my server instances. After doing a data save it came back.
Next down is the Model Spotlight and I must admit that I probably don't spend enough time here. You can assign a description of the server (and all of the objects on it) for internal documentation. It also reads a few key values from the config file settings, and there is a change history of views and subsets.
There is also a tab to generate a flow diagram. Now before you get too excited about that... I would have to say that I have never really seen any diagramming solution for TM1 do it exceptionally well or in a way that is meaningful, and this isn't truly an exception. Even on my least complex model you get an animated graphic that looks like a star exploding in a Brian Cox series, which then coalesces together into what looks like a string of Christmas lights which have been knotted into a ball by a feral tabby cat. It would probably be somewhat useful in a dev environment which has a limited subset of cubes and dimensions but, in short, I don't use it.
The panel down the left, however, can give you some useful information. You can select cubes, dimensions, processes or chores and get detailed information on them, including (for cubes in particular) their memory usage and performance over time. Some of the stats may be a little iffy (I'm looking at one cube right now which apparently has 0 public and 0 private views, despite me staring at several of them in Server Explorer) but the other values seem to gel with the Performance Monitor cubes. There's some incredibly clever code in this, and they've clearly put a lot of work into it. If you look at a process, for instance, you can see the syntax of the code highlighted in different colours (e.g. pink for keywords, blue for function names, green for comments etc.), but it can also summarise all of the objects that are affected by the code via CellGet or CellPut functions. It can even do this if you are using (say) a variable rather than a hard-coded cube name, e.g. CellPutN (SC_CUBE_DATA, V1, V2... etc). (Assuming that the variable (in this case SC_CUBE_DATA) is defined somewhere in the code and not loaded as a parameter, obviously.)
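To illustrate the sort of thing it can trace (this is just a sketch with made-up cube, variable and element names, not code from any real model), a TurboIntegrator snippet like the following gives a parser enough to work out which cube the CellGetN/CellPutN calls are hitting, because the cube name is assigned to the variable within the code itself rather than arriving as a parameter:

# The cube name is hard-coded into a variable inside the process...
SC_CUBE_DATA = 'Sales Scenario Data';
# ...so a tool parsing the code can still resolve which cube these calls touch.
nValue = CellGetN(SC_CUBE_DATA, vVersion, vMonth, 'Amount');
# Note that CellPutN takes the value first, then the cube, then the element names.
CellPutN(nValue, SC_CUBE_DATA, vVersion, vMonth, 'Amount');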
Change History lets you check when changes have been made to an object on the server. You can even click through to see what the changes were.
Migration Packages are something that I haven't actually needed to try to date, but I may do shortly. Or rather, I would have done had it not been for the fact that my dev server is the one with the IPv6 problem, so I can't connect to it.
Chore/Process history gives you an outline of when they last ran and whether there were any errors; it would probably be faster than scouring the server log for that.
Technical documentation is something that I've used once or twice without really having a big need for it, but it's very flexible and definitely worth exploring if you have documentation needs. The Flow Diagram link is just another path to the dialog above. The Validation report... I'm not a believer in. I wouldn't discount it as worthless, but as long as you have coding and naming conventions that you follow, I think it's a very heavyweight solution to a lightweight problem. The cube and dimension lists are things that everyone needs from time to time, and all of the additional information (such as which cubes a dim is used in) makes it a lot easier to cull dead wood.
Similarly, the cube and dimension matrix can save you some time. The export to CSV function is rather obscurely placed, though; you have to scroll all the way over to the right to find it.
The User Sessions report is almost good, and has the potential to be great. What's wrong with it? It always defaults to the top 25. You then have to manually select 100; it never remembers your preference. And I have between 120 and 140 users depending on the date; more if you count comings and goings over the period being searched. I therefore never get them all on one page. And when you export to Excel (again, a .csv) you only get as many records as are shown on the page, not all of them. Also, I don't need to see "Yesterday at 3:32pm". I would rather see a date that is convertible to an Excel date and time, plus an extra column showing the number of serial days since the last logon (20.25 for someone who logged on 20 days and 6 hours ago, for instance). That could then be sorted to show the people who have been inactive for the longest period of time. "Last Monday" does not lend itself to such sorting.
There's an option for tracking open Excel workbooks, but it only works if you also licence the vendor's Excel add-in, the name of which escapes me at the moment.
Alerts
This is worth a heading in itself because it's probably the single most useful feature. The alerts are outstandingly customisable. You can set warnings for things like the server being offline, the amount of memory usage, the amount of free disk space, wait times, chore run times, certain text popping up in the server log, or a bunch of other things. You can specify how often the warning should repeat if the condition persists, and you can specify that it only happen on certain days and at certain times (so if you have an overnight reboot of the server, for instance, you can tell it not to generate alerts during that reboot window). Essentially, you can have an e-mail in your inbox warning you that there's a problem before you even know there is one.
Conclusion
If you have the budget then in my opinion it's worth having for the alerts alone. However, writing this article has reminded me of exactly how much I've under-utilised this thing. Properly utilised, it can be a very good tool to have in your locker.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.