
TM1 and IBM Cloud

Posted: Tue Mar 01, 2022 9:10 am
by Houps
Hello everyone.

I need some really urgent help.
Case: I work with TM1 in conjunction with IBM Cloud.
We have a TM1 Web application. Users enter values on consolidations and spread values. Recently they reported that performance has become very low, but we haven't made any updates. We also found the following message on the bucket in IBM Cloud:
"Incomplete Upload: Part of your upload was not completed. Incomplete objects will not appear in the object list but will occupy billable space in storage."
We also found that the bucket size is constantly increasing: in the morning it was 4 TB, and a few hours later it was already 7 TB.

We assume that users write data to the cube for consolidation and something is going wrong.
Can anyone help figure out what the problem is?

Re: TM1 and IBM Cloud

Posted: Tue Mar 01, 2022 9:20 am
by Steve Rowe
You'll first need to identify which of the services on the box is causing the RAM spike. It might not even be a TM1 problem; can you get to Task Manager or similar on the box?

I'm going to assume it is the TM1 DB. You'd have to make some significant changes to the config of TM1Web to allow it to hit that much RAM without it failing.

Do you know what size you would consider normal for your TM1 DB?
Turn on the performance monitor of TM1 and identify which cube is consuming the most RAM.
Diagnose what might have changed.
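Roughly, the performance monitor is switched on with a single parameter in tm1s.cfg (you can also start it at run time from Server Explorer); a minimal excerpt, assuming the standard parameter name:

    [TM1S]
    # excerpt only - start the performance monitor when the server starts
    PerformanceMonitorOn=T

Once it's running, the }StatsByCube control cube should show memory consumption per cube, which is where I'd look first.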

Use TM1Top or the admin page of PAW to identify any long-running threads and understand what they are doing, e.g. if you have a very large cube and a user tried to ASCII export all of it, this might hit the RAM hard.

Unless your application is already consuming TBs of RAM in its normal state, something is clearly very wrong. Since the overage cost of this RAM might cost your company significant £££, I'd be inclined to stop all the applications / restart the box. This may of course prevent you from understanding what is going wrong.

Good luck!

Re: TM1 and IBM Cloud

Posted: Tue Mar 01, 2022 9:36 am
by Houps
Yesterday it was 8 GB, and I think that is OK.

Re: TM1 and IBM Cloud

Posted: Tue Mar 01, 2022 9:50 am
by Steve Rowe
Just to follow up.

Is the bucket size referring to the RAM or the disc consumption?

If it is disc consumption rather than RAM, then you need to look at and understand how the logging is configured on TM1.
Areas to check:
  • Make sure you understand how the disc is used; can it only be TM1 causing the issue?
  • Are you loading a lot of data frequently, and do you have logging turned on during the data load? (See the sketch after this list.)
  • Is the performance monitor turned on, and is the transaction logging of the performance monitor cubes turned on? This can generate very large log files very quickly.
  • The audit log can generate a lot of records on a busy server.
  • Look in the logging directory for TM1 and see if there are any very large log files. You'll then need to analyse them to try and determine what is causing the log records.
  • It's possible but very unlikely this is being caused by a TI running at a high frequency and generating errors. There are controls in place to prevent a TI generating very long error logs, but if you have a lot of TIs running in a chore at high frequency then this could eat up your disc too.
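On the data-load point above: if logging during loads is the culprit, a common pattern is to switch transaction logging off for the target cube inside the TI process and switch it back on afterwards. A minimal sketch ('SalesPlan' is just a placeholder cube name):

    # Prolog tab of the load process - stop writing this cube's changes to the transaction log
    CubeSetLogChanges('SalesPlan', 0);

    # Epilog tab - switch transaction logging back on once the load has finished
    CubeSetLogChanges('SalesPlan', 1);

The audit log and the performance monitor themselves are controlled in tm1s.cfg (AuditLogOn and PerformanceMonitorOn), and per-cube logging, including for the }Stats control cubes, can be checked in the }CubeProperties control cube.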
If this is new behaviour, something must have changed; you've just not found it yet. Ask around: are there any other admins of the TM1 application?

Cheers,

Re: TM1 and IBM Cloud

Posted: Tue Mar 01, 2022 2:37 pm
by gtonkin
I would recommend logging in to the Rich Environment and mapping a drive so that you can view the data and logs folders in Windows Explorer.
At least this way you are likely to see whether it is one large file or multiple smaller files, e.g. logs etc.

Re: TM1 and IBM Cloud

Posted: Wed Mar 09, 2022 11:21 am
by Houps
The problem was with the transaction log.
I created a chore that does a "Save Data", and after that everything is working well again.
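For anyone who hits the same thing: the chore just needs to run a small TI process. A minimal version of such a process, assuming it uses the standard SaveDataAll function, is simply:

    # Prolog tab - write all in-memory changes to disk;
    # TM1 then starts a fresh tm1s.log, so the live transaction log stops growing between saves
    SaveDataAll;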

Re: TM1 and IBM Cloud

Posted: Thu Mar 10, 2022 11:54 am
by John Hammond
Watch out with spreading, especially auto-spread. When you have loads of dimensions you can inadvertently generate a massive query. We had this and turned spreading off.