Text File TI reads is retained in memory...?
Posted: Fri Aug 17, 2018 8:42 pm
Has anyone seen this problem? It is the first time I've ever seen it.
We have a TI that reads a text file and updates a cube. It just updates a calendar dimension with American- and TM1-formatted dates, using an integer as the primary element name. The text file has three columns: the key, the American date, and the TM1 date.
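For context, the rows look roughly like this. Here is a minimal sketch of generating such a file, assuming the "TM1 date" column is a serial day number counted from 1 Jan 1960 (the file name, key scheme, and leading-zero date format are illustrative assumptions, not the actual process):

```python
import csv
from datetime import date, timedelta

TM1_EPOCH = date(1960, 1, 1)  # assumed serial-date base; adjust if your model differs

def tm1_serial(d):
    """Days since the assumed TM1 epoch (1960-01-01)."""
    return (d - TM1_EPOCH).days

def write_rows(path, start, end):
    """Write key, American date, TM1 serial date for each day in [start, end]."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        key = 1
        d = start
        while d <= end:
            w.writerow([key, d.strftime("%m/%d/%Y"), tm1_serial(d)])
            key += 1
            d += timedelta(days=1)

# First batch, as in the post
write_rows("calendar.csv", date(1920, 1, 1), date(1959, 12, 31))
```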
I broke this up into two groups; the first covered 1/1/1920 to 12/31/1959. I created the file with a spreadsheet, saved it as a CSV file, and it loaded just fine.
Then I used the spreadsheet to create the second group, 1/1/1960 to 5/31/2014, and saved it to the same file, just overwriting it. Ran the TI: nothing updated. I added an ASCIIOutput and, lo, the same 1920-1959 records are what is being read! Opening the file with a text editor shows the second group of records is what is in there. The Preview on the Data Source tab shows the expected 1960 group of records, but that is not what the process is reading. The Data Source Name and Data Source Name on Server entries match.
I have double- and triple-checked the Data Source tab. I have stopped and restarted the server. I have searched the sources folder, and no files contain the 1920 set of data. They do contain the 1960 set of data, which, again, is what pops up in the preview on the Data Source tab.
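One way to pin down which version of the file is actually on disk on the server (versus what the TI appears to be reading) is to fingerprint it. A small illustrative check, assuming nothing about the real file name or path:

```python
import hashlib

def fingerprint(path):
    """Return (md5 hexdigest, first line) of a file,
    to confirm which version of the data is on disk."""
    with open(path, "rb") as f:
        data = f.read()
    first = data.splitlines()[0].decode() if data else ""
    return hashlib.md5(data).hexdigest(), first

# Demo on a throwaway file; point it at the real data source path instead
with open("demo.csv", "w") as f:
    f.write("1,01/01/1960,0\n")
print(fingerprint("demo.csv"))
```

Comparing the hash before and after re-saving the CSV confirms whether the overwrite actually reached the file the server reads.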
I have a user who has reported this a number of times over the last couple of years, but I never saw it until now. Just thought it was user error.
It seems to be holding the old source file in memory. I've never seen anything like this. Has anyone seen this behavior, and how can it be prevented?