Archiving Subsets
- jim wood
- Site Admin
- Posts: 3961
- Joined: Wed May 14, 2008 1:51 pm
- OLAP Product: TM1
- Version: PA 2.0.7
- Excel Version: Office 365
- Location: 37 East 18th Street New York
- Contact:
Archiving Subsets
Guys,
We currently have a process that archives cubes, including the dimensions, attributes and data. What we're missing are the dimension public subsets. I can't seem to find a way of getting a list of the subsets apart from going through the file list in the subset folder. The current archiving process for the dimensions is generic and I'd like to keep it that way. A file trawl of the subset folder, while being something we could do, isn't a path I'd like to go down,
Jim.
Struggling through the quagmire of life to reach the other side of who knows where.
Go Build a PC
Jimbo PC Builds on YouTube
OS: Mac OS 11 PA Version: 2.0.7
- Alan Kirk
- Site Admin
- Posts: 6667
- Joined: Sun May 11, 2008 2:30 am
- OLAP Product: TM1
- Version: PA2.0.9.18 Classic NO PAW!
- Excel Version: 2013 and Office 365
- Location: Sydney, Australia
- Contact:
Re: Archiving Subsets
jim wood wrote: We currently have a process that archives cubes, including the dimensions, attributes and data. What we're missing are the dimension public subsets. I can't seem to find a way of getting a list of the subsets apart from going through the file list in the subset folder. The current archiving process for the dimensions is generic and I'd like to keep it that way. A file trawl of the subset folder, while being something we could do, isn't a path I'd like to go down,
OK, I may be missing something here, but I'm not sure what.
When you say "archive", you're talking about taking the files in the data directory (.cub files, .dim files, etc.) and moving them somewhere else, are you?
In that case you'd just be taking the entire DimName}subs folder (if it exists) from the data directory (which you'd have to know already to be able to archive the cubes, etc.), since all that should contain is the Public subsets. Or did you mean the Private subset files, which take a little more searching to find?
Qualification: it is of course possible to split the server's data files over multiple folders. I've never seen anyone do it, and I can't think of a good reason for doing it since all it's likely to lead to is confusion, but the facility still exists (unfortunately, IMHO), so it depends on whether you're factoring that into your process.
How is the archiving being done? Did you mean a TI process, or some other kind of coding?
If it's TI, and if you meant private subsets, you'd still need to skim through to find the right folders, but you could at least iterate the }Clients dimension to find any client sub-folders which may in turn contain subset folders.
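Something like this rough Prolog sketch, say. (pDataDir and pDim are hypothetical parameters, and it assumes the standard DataDir\ClientName\DimName}subs layout with folder names matching the }Clients element names.)

c = 1;
WHILE(c <= DIMSIZ('}Clients'));
  sClient = DIMNM('}Clients', c);
  sFolder = pDataDir | '\' | sClient | '\' | pDim | '}subs';
  # walk any private .sub files in this client's subset folder
  sFile = WildcardFileSearch(sFolder | '\*.sub', '');
  WHILE(sFile @<> '');
    # ... archive sFolder | '\' | sFile here ...
    sFile = WildcardFileSearch(sFolder | '\*.sub', sFile);
  END;
  c = c + 1;
END;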
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
- jim wood
- Site Admin
- Posts: 3961
- Joined: Wed May 14, 2008 1:51 pm
- OLAP Product: TM1
- Version: PA 2.0.7
- Excel Version: Office 365
- Location: 37 East 18th Street New York
- Contact:
Re: Archiving Subsets
Good point, Alan. We're creating archive versions of the cube within the same service, so the process first creates a date-stamped version of all the dimensions, then it creates a date-stamped version of the cube. Next we export all data at level 0 (including rule-based numbers) and then import it into the new cube. (As part of the dimension creation, the process spools through all attributes and applies them to the archive dimension.) Next, we have a base set of C-level rules in a file. We have a batch process that does a find and replace in the file to add the date stamp to all dimension references, then this file is loaded as the rules for the archive cube. This leaves us with a direct copy of the current cube which, because it has no level-0 rules, takes less space. (Just in case you're worried: we have a process that checks for any archive cubes older than 6 months and removes them.)
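For illustration, the dimension-copy step is roughly this shape in TI; all of the names below are made up, and the attribute copy is shown only in outline:

# Prolog sketch: pDim is the source dimension, pStamp the date stamp
sNew = pDim | '_' | pStamp;
IF(DimensionExists(sNew) = 0);
  DimensionCreate(sNew);
ENDIF;
# first pass: copy every element with its type
i = 1;
WHILE(i <= DIMSIZ(pDim));
  sEl = DIMNM(pDim, i);
  DimensionElementInsert(sNew, '', sEl, DTYPE(pDim, sEl));
  i = i + 1;
END;
# second pass: rebuild the consolidation structure with weights
i = 1;
WHILE(i <= DIMSIZ(pDim));
  sEl = DIMNM(pDim, i);
  j = 1;
  WHILE(j <= ELCOMPN(pDim, sEl));
    sChild = ELCOMP(pDim, sEl, j);
    DimensionElementComponentAdd(sNew, sEl, sChild, ELWEIGHT(pDim, sEl, sChild));
    j = j + 1;
  END;
  i = i + 1;
END;
# attributes: create each attribute on sNew with AttrInsert, then spool
# through the elements copying values with AttrPutS / AttrPutN as appropriate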
Now, as things stand, we create a new default subset for each of the dimensions via a separate, non-generic process. We would like to be able to copy all the subsets over to the new archive. As you'd imagine, to do so we'd need a list of all the subsets to spool through, hence my question,
Jim.
PS. I hope my original question makes more sense now.

Struggling through the quagmire of life to reach the other side of who knows where.
Go Build a PC
Jimbo PC Builds on YouTube
OS: Mac OS 11 PA Version: 2.0.7
- lotsaram
- MVP
- Posts: 3706
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Archiving Subsets
Jim, it sounds like an awful lot of trouble to go to. (Why not just archive the data directory per date stamp, ready to fire up an exact copy of the cube data, dimensions and rules at a moment's notice as needed?) What exactly is the use case? I'm intrigued. It sounds analogous to the real-time backup thread, i.e. although the solution may be technically impressive, is it really necessary? Sometimes it's better to question the requirements to get to the bottom of the actual need as opposed to the perceived need. Not saying that's the case here, but like I said, I'm intrigued.
Given the amount of coding going into the date stamping of all dimensions, the copy of data into date-stamped cubes, the find/replace in rules, ... some extra code to trawl the data directory for the }subs folders doesn't actually seem like much extra, all things considered.
I think you have 2 approaches (there might be more if the API is in play beyond TI and Windows scripting):
- get the names of the subsets by trawling the file system, then iterate the subsets and create static duplicates in the time-stamped dimensions (see the sketch below)
- copy the }subs folders and rename them to match the time-stamped dimensions
I would go with the 2nd option as it seems slightly less hassle, but it would require a server restart to register the subsets. You might also need to watch out for any dynamic subsets, as an incorrect dimension reference in a dynamic subset can cause the server not to load, or to crash.
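For what it's worth, the copy step behind the 1st option might look something like this. (A minimal sketch only: sDim is the source dimension, sSub the subset name recovered from the file name, and the time-stamped dimension sNewDim is assumed to exist already; SubsetExists is available in recent versions.)

IF(SubsetExists(sNewDim, sSub) = 1);
  SubsetDestroy(sNewDim, sSub);
ENDIF;
SubsetCreate(sNewDim, sSub);
i = 1;
WHILE(i <= SubsetGetSize(sDim, sSub));
  # reading element by element flattens a dynamic subset into a static list
  SubsetElementInsert(sNewDim, sSub, SubsetGetElementName(sDim, sSub, i), i);
  i = i + 1;
END;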
Please place all requests for help in a public thread. I will not answer PMs requesting assistance.
- Alan Kirk
- Site Admin
- Posts: 6667
- Joined: Sun May 11, 2008 2:30 am
- OLAP Product: TM1
- Version: PA2.0.9.18 Classic NO PAW!
- Excel Version: 2013 and Office 365
- Location: Sydney, Australia
- Contact:
Re: Archiving Subsets
jim wood wrote: Now, as things stand, we create a new default subset for each of the dimensions via a separate, non-generic process. We would like to be able to copy all the subsets over to the new archive. As you'd imagine, to do so we'd need a list of all the subsets to spool through, hence my question,
OK, I see where you're coming from now. The "generic" bit is the stumbling block.
I think you may be slightly screwed on creating a completely generic process to be run entirely from within the server, but you may be able to come moderately close. The missing piece is of course the data directory. If you have that then WildcardFileSearch can do the rest, but it needs a starting point.
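By way of example, the trawl itself is only a few lines once you have the path (pDataDir and pDim being hypothetical parameters):

sFolder = pDataDir | '\' | pDim | '}subs';
sFile = WildcardFileSearch(sFolder | '\*.sub', '');
WHILE(sFile @<> '');
  # the file name minus the .sub extension is the public subset name
  sSub = SUBST(sFile, 1, LONG(sFile) - 4);
  # ... recreate sSub against the archive dimension here ...
  sFile = WildcardFileSearch(sFolder | '\*.sub', sFile);
END;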
There's no native TI function which will return the data path, as far as I know; the API can do it, but a TI can't. You have the GetProcessErrorFileDirectory function, which returns the output path; by default that's the data directory, but I don't know of anyone other than maybe one or two newbies who would allow log files to be mixed in with their data files. (On the other hand, some of the tm1s.cfg files for the sample data directories seem to lack a LoggingDirectory parameter, as a result of which the log files go into the data directory. Just one more instance showing that there are some, by no means all but some, at IBM who have not the first bleeping idea of how to run a TM1 instance out in the wild.)
Anyhow, the point is that there seems to be no function that will return that. I can understand this in a way since, as noted previously, you can in theory have multiple data folders for a server, but an error logging path for a process must by definition be a single path. The absence of a GetServerDataFileDirectory function is therefore less surprising than it might seem.
Nor can I see a way of getting the path to the tm1s.cfg file, which you could parse to obtain that information.
Some, maybe many, sites might have a system cube which stores this information, and indeed that's a good practice... but the odds are that no two system cubes will be designed alike, so again, it's not a generic option. If Excel is installed on the server then it might have been possible to search the tm1p.ini file, however the presence of Excel (or even Architect) on the server is never a given, and even if it were, I've seen instances where the Perspectives data folder points to a different location from the actual data files. (Where the .xdis and .xrus are stored separately, for instance.)
What I'd do is to give the generic process a parameter which receives the data directory. In theory that would require the user to enter it, but in practice you could write a wrapper process which calls your generic one. The wrapper process could pull the relevant path from the site's system cube, as discussed above. (It may of course require you to implement one if the site doesn't have one.) This is as close as I think you can get to a truly generic method.
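A sketch of that wrapper's Prolog, with the cube, element and process names all invented for the example:

# pull the data directory from the site's system cube...
sDataDir = CellGetS('SYS_Config', 'DataDirectory', 'Value');
# ...and hand it to the generic process as a parameter
ExecuteProcess('Archive.Subsets', 'pDataDir', sDataDir, 'pDim', pDim);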
The last thing that you need to be aware of is the aliases on the subsets. There's a SubsetAliasSet function, but no corresponding SubsetAliasGet. Obviously, if the .sub files were copied to another folder they would come across with the alias in place, but that won't work as a real-time thing because the dimensions will have been renamed under your method.
Once you know where the files are you could parse them to get the alias, but that's a significant amount of effort. Otherwise, though, there's no way to read and recreate the subsets exactly as they were, alias and all.
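If you do go to the trouble of parsing the alias out of the .sub file, re-applying it is at least trivial (sNewDim, sSub and sAlias as per the earlier sketches):

# sAlias as recovered by parsing the .sub file; there's no SubsetAliasGet to do this natively
IF(sAlias @<> '');
  SubsetAliasSet(sNewDim, sSub, sAlias);
ENDIF;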
"To them, equipment failure is terrifying. To me, it’s 'Tuesday.' "
-----------
Before posting, please check the documentation, the FAQ, the Search function and FOR THE LOVE OF GLUB the Request Guidelines.
- jim wood
- Site Admin
- Posts: 3961
- Joined: Wed May 14, 2008 1:51 pm
- OLAP Product: TM1
- Version: PA 2.0.7
- Excel Version: Office 365
- Location: 37 East 18th Street New York
- Contact:
Re: Archiving Subsets
Guys,
Thanks for your responses. Lotsa, you're right about it being a pain. The client involved recently (12 months ago) went through an internal merger, and the reporting solution I've been working on is the result of 2 previous TM1 systems. The archiving of cubes is something that was introduced very early on, at pace, and due to development restrictions replacing the whole thing is seen as a low-priority project at this time. We've had to settle for refining it, which is why it's turned into the technical monster it is now. It's a problem with working with such a large company and internal lead resources. On top of this, the red tape involved in making a minor change in production is insane.
Both of you have kind of confirmed what I thought. Lotsa, I like your option 2, but to stop the production service we have to put in a request with a lead time of at least a week, so option 1 is the only viable option for a generic solution. I'm now leaning more towards a less generic solution: kicking off a generic subset copy process 4 times, passing it the name of the source dimension and subset and the name of the destination subset, along the lines of the sketch below,
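(Process and parameter names invented for illustration; pStamp would be the date stamp.)

# one call per dimension; repeat for the other three
ExecuteProcess('Copy.Subset',
  'pSrcDim', 'Product', 'pSrcSub', 'Default',
  'pTgtDim', 'Product_' | pStamp, 'pTgtSub', 'Default');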
Jim.
Struggling through the quagmire of life to reach the other side of who knows where.
Go Build a PC
Jimbo PC Builds on YouTube
OS: Mac OS 11 PA Version: 2.0.7