Some of us are busy packing up and preparing for the upcoming IBM Think conference in Boston, May 20-23. So excited to be there!
In this post I want to take you through a custom, generic Turbo Integrator (TI) process for managing Excel files and blobs in TM1. When authoring Excel files in IBM Planning Analytics for Excel (PAfE) or IBM TM1 Perspectives, you can upload the files to the TM1 Content Store. These files can then be used in IBM Planning Analytics Workspace (PAW) or IBM Planning Analytics Spreadsheet Services (PASS, TM1 Web). Excel files do not need to be uploaded, of course; you can use them standalone. Uploaded files are physically stored in the }Externals subfolder of the TM1 Data directory. A related blob file is created in the }Applications subfolder of the TM1 Data directory, mimicking the navigation of Application entries in TM1. The TM1 dimension '}ApplicationEntries' maintains these structures.
While working on a recent move to cloud, I noticed a whole lot of files in the }Externals subfolder of the customer's TM1 Data directory, as well as thousands of application entries. I was pretty sure that a good number of files had become orphaned and obsolete by now, given the 10+ years of life of the TM1 model. But how do you know which Excel files are referenced and which are not? You could interrogate the *.blob files with Notepad++ or similar, or with a programming language like Python or PowerShell, but it's not very obvious. Given the existence of the up-to-date dimension }ApplicationEntries, I preferred to come up with a decent, generic TI process. It will be part of a larger library of custom TM1 processes that I can easily tap into when needed.
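To give an idea of what that manual interrogation would look like, here is a minimal Python sketch. It assumes only what the post states: the blob is readable as text and contains a reference to the source Excel file. The regular expression for spotting Excel file names is my own assumption, not the actual blob format.

```python
import re

def excel_refs_in_blob(blob_path):
    """Return anything that looks like an Excel file name inside a TM1 .blob file.

    Assumption: the blob can be read as text and the reference to the
    source workbook appears as a plain .xls/.xlsx/.xlsm file name.
    """
    with open(blob_path, "r", encoding="utf-8", errors="ignore") as f:
        text = f.read()
    # File names may contain spaces, dots, hyphens; path separators end a match.
    return re.findall(r"[\w.\- ]+\.xls[xm]?", text, flags=re.IGNORECASE)
```

This is only the "grep one file" step; the TI process below does the equivalent at scale, driven by }ApplicationEntries instead of a raw directory scan.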
The attached TI process is the result of my automation. Basically:
- I loop over all the blob }ApplicationEntries in TM1
- I then recursively call the same process and open each blob file as a text file to compile the needed information: the blob text file contains the reference to the Excel source file it uses
- I update a text attribute to store that link
- In the Epilog of the main process, after all the information is gathered, the code deletes the source Excel files that are NOT referenced in any blob file
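The steps above can be sketched in Python rather than TI, for readers who want to see the shape of the logic. The }Externals and }Applications folder names come from the post; everything else (reading references straight from the blob text, the regex, the function name) is an assumption, and the function only reports orphan candidates rather than deleting them.

```python
import os
import re

def find_orphaned_excel_files(data_dir):
    """Sketch: list Excel files in }Externals that no .blob file
    under }Applications appears to reference."""
    externals = os.path.join(data_dir, "}Externals")
    applications = os.path.join(data_dir, "}Applications")

    # 1) Collect every Excel file name mentioned in any blob file.
    referenced = set()
    for root, _dirs, files in os.walk(applications):
        for name in files:
            if not name.lower().endswith(".blob"):
                continue
            with open(os.path.join(root, name), encoding="utf-8",
                      errors="ignore") as f:
                text = f.read()
            for ref in re.findall(r"[\w.\- ]+\.xls[xm]?", text, re.IGNORECASE):
                referenced.add(os.path.basename(ref).lower())

    # 2) Anything in }Externals not referenced is an orphan candidate.
    return [name for name in sorted(os.listdir(externals))
            if name.lower().endswith((".xls", ".xlsx", ".xlsm"))
            and name.lower() not in referenced]
    # Dry run: inspect the list before deleting anything with os.remove().
```

The TI process does this more robustly by driving off the }ApplicationEntries dimension instead of a raw folder walk.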
Needless to say, this is quite advanced TI programming. The process features things like:
- recursive process call without setting a useless parameter (you would not want a user to use that parameter and pull down your beautiful process)
- setting and passing global variables to manage the process iterations
- checking whether the called process ran fine and what error it produced (if any, hopefully none); we take appropriate action depending on the outcome
- changing the data source dynamically: from a subset to a blob text file to NULL. A subset on the }Clients dimension is a very convenient source, since this dimension exists in any TM1 model
- counting elements based on a temporary, dynamic subset. That way, I avoid extra hierarchy loops. Temporary subsets are automatically cleaned up by TM1 at the end of the process.
- looping over Excel files in rather short TI code
- using relative folder and file paths. In fact, we do not need to know the absolute paths to work with the files; the location relative to the TM1 Data directory is what matters.
- calling a generic process of mine, to find and/or create a custom logging folder in the TM1 Logging directory (in the absence of that process, I simply retrieve the TM1 Logging directory)
- generating custom info/error messages using the Expand() function
- escaping single quote characters by doubling them
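The quote-escaping idea is the same in any language: double each single quote before embedding a value inside a quoted string literal, for instance when building a dynamic statement with Expand(). A minimal Python illustration (the surrounding statement is purely indicative):

```python
def escape_quotes(s):
    """Double every single quote so the value can sit safely
    inside a single-quoted string literal."""
    return s.replace("'", "''")

# Building a dynamic statement around a file name containing a quote:
file_name = "John's report.xlsx"
statement = "AsciiDelete('" + escape_quotes(file_name) + "');"
```

Without the doubling, the embedded quote would terminate the string literal early and break the generated statement.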
As a matter of fact, even seasoned TM1 developers and administrators do not just write down such a generic process in one go. Much thought, rewriting, and testing goes into it. For me, initially, I thought that I needed to use the functions AsciiOutputOpen() and AsciiOutput().
I am confident that inspecting this well-structured process will teach you new things in TI. In TM1 v12 we will need to revise parts of this process, since the file system will be out. But that's a story for later. Enjoy!