Greetings everyone.
I'd really appreciate it if any expert could share their thoughts, or past experience with the same scenario:
I have a requirement to load more than 15 different text files into one TM1 cube. The data differs between files, but the data headers of all the files are the same. While loading, I need to load each file into a different business type (this is one of my dimensions). Each time I receive files, the number of files increases or decreases, and the names of the files also vary with each version.
Right now I am accomplishing this by creating that many TI processes and loading each one into a different business type (in this dimension I created 15 elements). Is there any way to make this dynamic in TM1, so I can load each file into one business type for however many files there are?
TM1 Thoughts appreciated
-
- Regular Participant
- Posts: 155
- Joined: Fri May 20, 2011 8:17 am
- OLAP Product: Applix, Cognos TM1
- Version: Applix 9.0, Cognos TM1 9.5.1
- Excel Version: 2010, 2007
-
- Site Admin
- Posts: 6645
- Joined: Sun May 11, 2008 2:30 am
- OLAP Product: TM1
- Version: PA2.0.9.18 Classic NO PAW!
- Excel Version: 2013 and Office 365
- Location: Sydney, Australia
- Contact:
Re: TM1 Thoughts appreciated
This one would be tricky to create a generic solution for. Not impossible, but tricky.

mincharug.shulft wrote:
I have a requirement to load more than 15 different text files into one TM1 cube. The data differs between files, but the data headers of all the files are the same. While loading, I need to load each file into a different business type (this is one of my dimensions). Each time I receive files, the number of files increases or decreases, and the names of the files also vary with each version.
Right now I am accomplishing this by creating that many TI processes and loading each one into a different business type (in this dimension I created 15 elements). Is there any way to make this dynamic in TM1, so I can load each file into one business type for however many files there are?
If I understand you correctly when you say "data is different but data headers of all the files are same", you mean that the columns all represent the same characteristics of the value(s), and it's only the values on each row that vary. If that's the case, you don't need to have many, many upload TI processes; you can just have one which you call from a master process passing it parameters for the file name to use, and the business type that you will be loading to. You then obviously assign the filename parameter to the DatasourceNameForServer variable on the Prolog tab, which allows your common upload Process to set its data source. You use the business type to create conditional blocks in your Metadata and Data tabs (as the case may be) to act on the data received.
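As a sketch of that mechanism (the process name 'Load.Upload', the cube 'SalesCube', and the variable names are all placeholders for your own model), the common upload process might look like this, taking pFileName and pBusinessType as parameters:

```
# Prolog tab of a hypothetical common process 'Load.Upload'
# (parameters: pFileName, pBusinessType)
DatasourceNameForServer = pFileName;
DatasourceNameForClient = pFileName;

# Data tab: write each record against the business type passed in as
# a parameter, rather than hard-coding it in 15 separate processes.
CellPutN( vValue, 'SalesCube', pBusinessType, vPeriod, vMeasure );
```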
Having only a single centralised upload process will help enormously with maintenance since things need be changed in only one place rather than 15 different places.
Now the question is how your master process knows which business type and file name to use.
The key to that is usually in having consistent filenames. They don't have to be identical in every run, they just need to follow a consistent pattern. Some part of the filename should represent the business type that you need to upload to. Even something like 20160328_Data_ABC.csv, 20160328_Data_DEF.csv etc where "ABC" or "DEF" etc represents the business type would suffice. The date part of the filename will obviously be different for each one but that's irrelevant. If this isn't the case then you should talk to the admin of the source system and get that to become the case.
Having done that, your master process can use the WildcardFileSearch function to look in the relevant folder to see what files are there. You can then use the SubSt() function to parse out the business type from the filename. From there you can use the ExecuteProcess() function to call your common upload process, passing the filename and business type as parameters.
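Putting those pieces together, the Prolog of a hypothetical master process might look something like this (the folder, the filename pattern 20160328_Data_ABC.csv, and the process name 'Load.Upload' are all assumptions):

```
# Prolog tab of the master process
sDir  = 'D:\TM1Data\Inbound\';
sFile = WildcardFileSearch( sDir | '*_Data_*.csv', '' );
WHILE( sFile @<> '' );
   # e.g. '20160328_Data_ABC.csv' -> business type 'ABC'.
   # 9 = the 5 remaining characters of '_Data_' plus the 4 of '.csv'.
   nPos  = SCAN( '_Data_', sFile );
   sType = SUBST( sFile, nPos + 6, LONG( sFile ) - nPos - 9 );
   ExecuteProcess( 'Load.Upload',
      'pFileName',     sDir | sFile,
      'pBusinessType', sType );
   # Get the next matching file by passing the previous name back in
   sFile = WildcardFileSearch( sDir | '*_Data_*.csv', sFile );
END;
```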
If the filenames can't be made consistent in that way? Ideally someone needs a kick up the backside until they are. But I will admit that I did once come across a situation like this; the files that I received from a legacy system could not have the required data included in the name. (It was some really obscure little third party system that was created with low budget and high coding eccentricity, and not very customisable.) However they could be identified by the first couple of rows of the file contents. I ended up writing a batch file to crack the thing open, read the first couple of lines to find out what it was and rename the file accordingly. The master process called that batch file, then after it completed summoned the common upload process to do the upload using the renamed file as the source. You don't want to go down that path if you don't need to, though. It's easier with Powershell than with ye olde scripting but still ugly as anything.
However as a general rule of thumb for any situation where you have an unknown number of data source files, consistent naming of the files and the WildcardFileSearch function are the key to handling it.
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
-
- Regular Participant
- Posts: 152
- Joined: Sat May 25, 2013 10:32 am
- OLAP Product: TM1
- Version: 9.5.2; 10.2.2
- Excel Version: 2007
Re: TM1 Thoughts appreciated
Hi Mincharug,
Alan's suggestions are helpful.
I would have tried the approach below.
If you have the same file format, and all of the files are placed in the same folder, here is what I would try, assuming a CSV file format:
1. Create a CSV file listing the file names you want to process (using the command prompt, scheduled via the Windows Task Scheduler).
2. Use that CSV file as a datasource and pass the file names as parameters to an internal process.
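A sketch of step 2 (names are hypothetical; assume FileList.csv holds one file name and its business type per row, mapped to the TI variables vFileName and vBusinessType):

```
# Data tab of a driver process whose datasource is FileList.csv:
# call the common upload process once per listed file
ExecuteProcess( 'Load.Upload',
   'pFileName',     'D:\TM1Data\Inbound\' | vFileName,
   'pBusinessType', vBusinessType );
```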
Regards,
Deepak Jain
- paulsimon
- MVP
- Posts: 808
- Joined: Sat Sep 03, 2011 11:10 pm
- OLAP Product: TM1
- Version: PA 2.0.5
- Excel Version: 2016
- Contact:
Re: TM1 Thoughts appreciated
Hi
I am not sure whether you have control over the file names.
If not, you can have a top-level process that, on the Prolog, uses ExecuteCommand to run a DIR with an appropriate pattern to pick up the file(s), using > to redirect the output to a file. Then read that file in to get the list of files to be processed, and call a sub-process with the name of each file to be loaded. Once a file has been processed successfully, in the Epilog the process should use another ExecuteCommand to move the file to another folder, or rename it, so that it won't be picked up for processing again.
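A sketch of that approach (the paths and the process name 'Load.DriveFromList' are assumptions; note that depending on your TM1 version, the datasource file may still be held open during the Epilog, in which case the move is better done from the calling process):

```
# Prolog tab of a hypothetical top-level process: list matching files,
# redirecting the DIR output to a text file. The second argument 1
# makes TI wait for the command to finish before continuing.
ExecuteCommand( 'cmd /c dir /b "D:\TM1Data\Inbound\*.csv" > "D:\TM1Data\filelist.txt"', 1 );

# A second process then uses filelist.txt as its datasource and calls
# the common upload process once per record.
ExecuteProcess( 'Load.DriveFromList' );

# After a file has been loaded: archive it so it isn't picked up
# again on the next run.
ExecuteCommand( 'cmd /c move "' | pFileName | '" "D:\TM1Data\Processed\"', 1 );
```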
Ideally you want the business unit as a column in the file, but if not then you need it to appear in the file name.
You need to zero out the cube for just that business unit before loading data into it.
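A sketch of that clearing step, using a temporary view restricted to the one business unit being reloaded ('SalesCube' and the 'Business Type' dimension are placeholder names):

```
# Prolog tab of the upload process: clear existing data for this
# business type only, before the new load.
sView = 'zTempZeroOut';
IF( ViewExists( 'SalesCube', sView ) = 1 );
   ViewDestroy( 'SalesCube', sView );
ENDIF;
IF( SubsetExists( 'Business Type', sView ) = 1 );
   SubsetDestroy( 'Business Type', sView );
ENDIF;
ViewCreate( 'SalesCube', sView );
SubsetCreate( 'Business Type', sView );
SubsetElementInsert( 'Business Type', sView, pBusinessType, 1 );
ViewSubsetAssignSet( 'SalesCube', sView, 'Business Type', sView );
ViewZeroOut( 'SalesCube', sView );
ViewDestroy( 'SalesCube', sView );
SubsetDestroy( 'Business Type', sView );
```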
Regards
Paul Simon