Best practice for versioning

Joris
Posts: 21
Joined: Wed Sep 30, 2015 2:19 pm
OLAP Product: TM1
Version: 10_2_2
Excel Version: 2010

Best practice for versioning

Post by Joris »

Hi

I'm looking for some good practices on project versioning in TM1. I really want to avoid ending up with objects like "cube.somestuff.version7.revised.january.bis" (ok, I'm slightly exaggerating the issue, but I think you'll get the point).

I have no exotic requirements, not even change tracking. I just want to keep different versions of a project so as not to delete or overwrite important history. My idea is to copy/paste the whole directory containing the database etc. every time I want to start a new version. The disadvantage, however, would be the disk space every version takes.

Any suggestions? How do you handle it?
paulsimon
MVP
Posts: 808
Joined: Sat Sep 03, 2011 11:10 pm
OLAP Product: TM1
Version: PA 2.0.5
Excel Version: 2016
Contact:

Re: Best practice for versioning

Post by paulsimon »

Hi

First of all decide what you want to version. You probably don't want to version cubes as these contain data. If you need to recover cubes then conventional backups are probably the best bet. If you want versioning within a cube, consider adding a version dimension.

However, the area where you probably need versioning is on the code parts of the system rather than the data parts. These tend to be the processes, rules, subsets, etc.

In the past I have used a tool called Subversion. This creates text deltas, so you can see all previous versions, but in practice it starts with a base version and only stores what has changed, so the repository does not get too huge. We did not apply it directly to the \Cubes folder but instead copied the .pro files, etc., into a separate folder to which Subversion was applied. Part of the release mechanism was that we only released what was checked in to Subversion. You could therefore be certain that the version you released was in there (or at least reasonably certain - there is nothing to stop developers editing things directly in Prod other than cutting off their access rights to Prod, but then support becomes an issue, and most TM1 sites are not big enough to have separate Development and Support teams).
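The copy-then-commit step described here can be scripted. Below is a minimal Python sketch, with purely hypothetical folder paths, that stages the code-type files into a Subversion (or Git) working copy before check-in:

```python
import shutil
from pathlib import Path

# Hypothetical locations - adjust to your own server layout.
DATA_DIR = Path(r"C:\TM1Data\MyServer")      # the TM1 database folder
VC_DIR = Path(r"C:\TM1Versioning\MyServer")  # the version-control working copy

# Only code-type objects are staged; cube data stays out of version control.
CODE_EXTENSIONS = {".pro", ".rux"}

def stage_code_files(data_dir, vc_dir):
    """Copy process and rule files into the working copy and
    return the names of the staged files."""
    data_dir, vc_dir = Path(data_dir), Path(vc_dir)
    vc_dir.mkdir(parents=True, exist_ok=True)
    staged = []
    for f in sorted(data_dir.iterdir()):
        if f.is_file() and f.suffix.lower() in CODE_EXTENSIONS:
            shutil.copy2(f, vc_dir / f.name)
            staged.append(f.name)
    return staged
```

After staging, a commit in the working copy records only the delta, which is what keeps the repository small.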

I found Subversion's ability to show changes to a process over time useful, and it made it easy to compare two versions to see what had changed. The fact that you can tell who checked the code in also gives a clue as to who changed it.

Regards

Paul Simon
Edward Stuart
Community Contributor
Posts: 247
Joined: Tue Nov 01, 2011 10:31 am
OLAP Product: TM1
Version: All
Excel Version: All
Location: Manchester
Contact:

Re: Best practice for versioning

Post by Edward Stuart »

This link contains the IBM Backup and Recovery Guide which is a useful starting point:

http://www.ibm.com/developerworks/data/ ... ge439.html

I generally follow the path of least resistance between usability and functionality. The projects I work on usually have only a few developers, so I can easily get by using WinMerge; otherwise I will produce packages to deploy out of hours.

I prefer to take full backups where possible and run a process to archive/delete backups on a set schedule to prevent disk space blowing out.
qml
MVP
Posts: 1094
Joined: Mon Feb 01, 2010 1:01 pm
OLAP Product: TM1 / Planning Analytics
Version: 2.0.9 and all previous
Excel Version: 2007 - 2016
Location: London, UK, Europe

Re: Best practice for versioning

Post by qml »

For a while now I have been using the following approach and it works well:

- Change your development to always be code-driven (it sounds like a tautology, I know). Basically the only objects you want to version and promote between environments are the TI processes plus any other 'code' you might have (rules, batch files, worksheets, Java etc.). All the other objects (dimensions, cubes, subsets, views etc.) should be created by your deployment TIs only, never manually.

- Once you have moved to that code-driven setup and mindset you can set up any version control software and use it to version your changes - Git or Subversion, for example.

It really changes how you think about your projects and has some clear advantages.
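As an illustration of this code-driven split, here is a small Python sketch that decides which files in a TM1 data directory belong in version control and which should be rebuilt by deployment TIs. The extension lists are assumptions based on common TM1 file types, not a definitive mapping:

```python
from pathlib import Path

# Assumed extension lists - the split follows the approach above: code is
# versioned and promoted, everything else is rebuilt by deployment TIs.
VERSIONED = {".pro", ".rux", ".bat", ".xlsx"}
GENERATED = {".dim", ".cub", ".sub", ".vue"}

def classify(filename):
    """Return how a file in the TM1 data directory should be treated."""
    suffix = Path(filename).suffix.lower()
    if suffix in VERSIONED:
        return "version-control"
    if suffix in GENERATED:
        return "generated"
    return "review"  # anything unexpected gets looked at by hand
```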
Kamil Arendt
Joris
Posts: 21
Joined: Wed Sep 30, 2015 2:19 pm
OLAP Product: TM1
Version: 10_2_2
Excel Version: 2010

Re: Best practice for versioning

Post by Joris »

Thank you guys, this is very interesting!

@qml: We already follow a code-driven setup for a large part, so it would be possible to do everything with that approach. After that, I guess you only need to keep track of changes in *.PRO (process) and *.RUX (rule) files, right?
Paul Segal
Community Contributor
Posts: 306
Joined: Mon May 12, 2008 8:11 am
OLAP Product: TM1
Version: TM1 11 and up
Excel Version: Too many to count

Re: Best practice for versioning

Post by Paul Segal »

Interesting, Kamil. Could you expand? What would you do with, say, a dimension that you can't create from another system?
Paul
Wim Gielis
MVP
Posts: 3105
Joined: Mon Dec 29, 2008 6:26 pm
OLAP Product: TM1, Jedox
Version: PAL 2.0.9.18
Excel Version: Microsoft 365
Location: Brussels, Belgium
Contact:

Re: Best practice for versioning

Post by Wim Gielis »

Paul Segal wrote:Interesting, Kamil. Could you expand? What would you do with, say, a dimension that you can't create from another system?
I *guess* Kamil means that dimensions should not be updated manually, but rather sourced from a text file (CSV file). Please correct me if I'm wrong.
So users would update their dimensions in CSV files or other text files, and TI picks up the files.
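To illustrate that suggestion, here is a minimal Python sketch of the parsing a dimension-update TI would do on such a CSV file. The parent/child/weight column layout is an assumption for the example:

```python
import csv
import io

def read_hierarchy(csv_text):
    """Parse parent,child,weight rows the way a dimension-update TI
    would consume them (column names assumed for illustration)."""
    edges = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        weight = float(row.get("weight") or 1)  # default weight 1
        edges.append((row["parent"], row["child"], weight))
    return edges
```

In TM1 itself the equivalent loop would call DimensionElementInsert and DimensionElementComponentAdd for each row.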
Best regards,

Wim Gielis

IBM Champion 2024
Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 121 TM1 articles and a lot of custom code
Newest blog article: Deleting elements quickly
George Regateiro
MVP
Posts: 326
Joined: Fri May 16, 2008 3:35 pm
OLAP Product: TM1
Version: 10.1.1
Excel Version: 2007 SP3
Location: Tampa FL USA

Re: Best practice for versioning

Post by George Regateiro »

qml wrote:For a while now I have been using the following approach and it works well:

- Change your development to always be code-driven (it sounds like a tautology, I know). Basically the only objects you want to version and promote between environments are the TI processes plus any other 'code' you might have (rules, batch files, worksheets, Java etc.). All the other objects (dimensions, cubes, subsets, views etc.) should be created by your deployment TIs only, never manually.
We follow a similar logic. We use Git to track process and rule changes, and this is all that is ever migrated. Every dimension (and cube) is built through a TI that adds elements from ODBC or CSV sources, never manually. We have taken it one step further and started using branching to manage separate projects, which lets us switch between tasks easily without having to rebuild environments. This has also worked well for code reviews and testing, since I can just boot up another dev's branch while they are working on something else in their own environment.

Once you get comfortable with source control there are a lot of additional things you can do that make it really nice. We had to get over some initial fear of the additional overhead to our dev process, but it has worked out well.
jim wood
Site Admin
Posts: 3951
Joined: Wed May 14, 2008 1:51 pm
OLAP Product: TM1
Version: PA 2.0.7
Excel Version: Office 365
Location: 37 East 18th Street New York
Contact:

Re: Best practice for versioning

Post by jim wood »

qml wrote:For a while now I have been using the following approach and it works well:

- Change your development to always be code-driven (it sounds like a tautology, I know). Basically the only objects you want to version and promote between environments are the TI processes plus any other 'code' you might have (rules, batch files, worksheets, Java etc.). All the other objects (dimensions, cubes, subsets, views etc.) should be created by your deployment TIs only, never manually.

- Once you have moved to that code-driven setup and mindset you can set up any version control software and use that to version your changes. GIT or Subversion, for example.

It really changes how you think about your projects and has some clear advantages.
We follow a similar approach. We have directories for source files. Within the rules directory we have .rule file versions of all rules, plus a rules.txt which lists the rules that need to be deployed. We take the same approach for dimensions. All of the directory locations are held in a Server control cube. This means that not only can we deploy multiple objects with only a couple of processes, we can also flex the deployment of any TI processes for different environments and directory structures.
Struggling through the quagmire of life to reach the other side of who knows where.
OS: Mac OS 11 PA Version: 2.0.7
qml
MVP
Posts: 1094
Joined: Mon Feb 01, 2010 1:01 pm
OLAP Product: TM1 / Planning Analytics
Version: 2.0.9 and all previous
Excel Version: 2007 - 2016
Location: London, UK, Europe

Re: Best practice for versioning

Post by qml »

Clearly a lot of developers are converging on similar solutions independently. What Jim, George and Wim are saying is all close to what I have been doing.

For static dimensions I do one of the following (in order - from simplest to the most complex) depending on circumstances:
a) Hardcode the elements directly in the TI that creates the dim - this is often good enough for really static dims that will never change like value_type that will always have two elements: 'string' and 'number'.
b) Use a csv file as the data source for the TI that creates the dim. All such csv files are then versioned too, together with the code.
c) Use a relational DB to store all metadata definitions, even the structure of cubes and all that jazz. The TIs are then quite generic and create all the objects on the basis of the metadata definitions read from the DB.

Just like Jim, I have rules in separate text files (also versioned) that get attached to cubes after the cubes are built by the deployment code. Specifically, the code cycles through all cubes and searches if there is a matching *.RULE file - if yes, then it attaches it to the respective cube, at which point a *.RUX file is created.
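That matching step can be sketched outside TM1 as well. Below is a minimal Python illustration (the file layout is assumed) of pairing cubes with their *.RULE files:

```python
from pathlib import Path

def match_rules(cube_names, rules_dir):
    """Pair each cube with its <cube>.rule file if one exists
    (case-insensitive stem match), mirroring the attachment step
    described above. Returns None for cubes without a rule file."""
    rule_files = {f.stem.lower(): f for f in Path(rules_dir).glob("*.rule")}
    return {cube: rule_files.get(cube.lower()) for cube in cube_names}
```

The deployment TI would then attach each matched file to its cube, at which point TM1 writes the *.RUX.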
Kamil Arendt
paulsimon
MVP
Posts: 808
Joined: Sat Sep 03, 2011 11:10 pm
OLAP Product: TM1
Version: PA 2.0.5
Excel Version: 2016
Contact:

Re: Best practice for versioning

Post by paulsimon »

Hi

My development approach is pretty much in line with the above. I am perhaps not quite so purist as to create every small static dimension in a TI process. However, in most cases when promoting changes to an existing system, promoting a dimension or cube from Dev to Prod is not an option, as you would overwrite Production structures/data with Dev. Therefore the only viable approach is to use a TI to make the change.

I tend to have separate processes for setting up the dimension from those that maintain it. The setup process creates a common set of subsets with a standard naming convention, such as subsets of base level elements, a Default subset, etc.

Similarly I have a cube setup process which creates the cube and defines views. I often create the views using the cube viewer, and then I have a utility which reverse engineers the TI statements needed to define the View. I can just paste those into a setup TI process, to create the same views in other environments. It is easier than having to code each view creation by hand. Obviously here I am talking about permanent views that are used for display/entry rather than views that are used as data sources or for zeroing out. Those I would generally create/modify in the TI. Hopefully IBM will soon sort out the issue of not being able to use the new Temp Views as Data Sources.

For dimension building, I have used csv files for a long time, but due to recent experiences I am now moving more towards holding the dimension element and hierarchy information in cubes. This is because we have a lot of dimensions that are user maintained. Using a cube I can deploy a view / sheet via TM1 Web to allow the users to maintain the dimension information, and the users can update this. They can then run a process to update the dimension from the information in the cube.

When they are happy with the dimension there is a routine to export the part of the cube that defines the dimension (or the user group's part of it) to a file. The csv file can then be promoted up to the next environment and imported into the cube there, so csv files are still used. The reason for using cubes to capture the dimension information is that it was not practical to allow the users to edit csv files directly. Many of the users are on different networks and cannot access the same domain as the TM1 server; even for those that can, it is difficult to get IT to apply the appropriate security to the folders and files. A cube is more structured than a csv file and less prone to user error, so keeping the maintenance of the dimension-defining information in TM1 and deploying it via TM1 Web seemed to be a solution. The exported csv file can still be subject to a versioning tool like Subversion. The update process also produces an audit trail log of what got changed, i.e. elements inserted, links created, etc., which is another aspect of versioning.

In practice certain approaches work well for an initial deployment. However, most of us are probably maintaining existing systems and applying changes to them, usually triggered by some sort of Change Request or Bug Report. One of the challenges I am struggling with at the moment is that we have several developers working on the system in different geographic locations. Due to changing priorities there can be several Change Requests waiting for promotion. We define a folder of the TM1 objects that were changed for each Change Request - usually TI processes, but also web sheets, rules, etc.

The inevitable problem you then get is two developers both having to change the same process, so that the process appears in two different Change Request folders. I have not come up with a better way of dealing with this than merging all the Change Request folders for a release into a single folder and investigating any case where a file from a Change Request already exists in the merged folder. In most cases it is just a matter of ensuring that the file with the latest date goes into the merged folder. However, there is nothing in TM1 that prevents two developers opening the same TI for editing, so it is always possible that the earlier TI has changes that are not in the later one, e.g. Developer A changed the Prolog and Developer B changed the Data tab. The only real solution is to read the code and manually merge the TIs.

Avoiding this is partly a matter of how the work is divided up, but that alone is not enough, as one cube can be used for many purposes. Using a version control system like Subversion to check out code can help show another developer that someone is working on that code, but as no such system integrates with TM1, it is up to the discipline of the developers to remember to check out before editing the process.
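The merge-by-latest-date step can be sketched in Python. This illustrates only the folder merge, not the manual TI merge, and it flags any file that appears in more than one Change Request folder so it gets a human review:

```python
import shutil
from pathlib import Path

def merge_change_requests(cr_folders, release_dir):
    """Merge per-Change-Request folders into one release folder, keeping
    the newest copy of each file. Returns the names of files that appeared
    in more than one folder and therefore need a manual code review."""
    release = Path(release_dir)
    release.mkdir(parents=True, exist_ok=True)
    newest = {}      # filename -> (mtime, source path)
    clashes = set()  # files present in more than one CR folder
    for folder in map(Path, cr_folders):
        for f in folder.iterdir():
            if not f.is_file():
                continue
            if f.name in newest:
                clashes.add(f.name)
                if f.stat().st_mtime <= newest[f.name][0]:
                    continue  # an earlier CR already has the newer copy
            newest[f.name] = (f.stat().st_mtime, f)
    for name, (_, path) in newest.items():
        shutil.copy2(path, release / name)
    return sorted(clashes)
```

Any name in the returned list still needs a manual diff, since the latest file may lack the other developer's changes (the Prolog/Data tab case above).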

We are trying to adopt the approach of updating a Change History comment at the start of each TI, and commenting the particular lines that were changed, both with a date and change request number.

Regards

Paul Simon
hiits100rav
Posts: 8
Joined: Fri Feb 26, 2016 8:22 pm
OLAP Product: TM1
Version: 10.2.2
Excel Version: Excel 2013

Re: Best practice for versioning

Post by hiits100rav »

Hi everyone,

I am a newbie trying to understand the concepts of TM1. We are struggling to manage versions of our TI processes and found this post to be very helpful. However, it would be great if I could get a guide on integrating our TM1 environment with a version control system (preferably Git).

Thanks!
st2000
Posts: 62
Joined: Mon Aug 15, 2016 8:48 am
OLAP Product: TM1 (Windows) & SSAS 2014 Ent.
Version: 10.2.0 FP3
Excel Version: Excel 2013
Location: Hamburg, DE, EU
Contact:

Re: Best practice for versioning

Post by st2000 »

paulsimon wrote: a utility which reverse engineers the TI statements
I guess this is nothing out of the open source community somewhere? If so, I'd be happy for a reference to download :mrgreen:
holding the dimension element and hierarchy information in cubes. This is because we have a lot of dimensions that are user maintained. Using a cube I can deploy a view / sheet via TM1 Web to allow the users to maintain the dimension information, and the users can update this. They can then run a process to update the dimension from the information in the cube.
Is this a publicly promoted best practice (e.g. like Bedrock)? Is there an example available?
Or moreover a private asset to be kept as such? :)
-----------------------------------
Best regards,
Stefan
paulsimon
MVP
Posts: 808
Joined: Sat Sep 03, 2011 11:10 pm
OLAP Product: TM1
Version: PA 2.0.5
Excel Version: 2016
Contact:

Re: Best practice for versioning

Post by paulsimon »

Hi

Unfortunately neither of those items is open source. I will send you a private message, since this forum prefers to restrict commercial posts to the commercial section.

Regards

Paul Simon