schlemiel29 wrote: ↑Tue Sep 19, 2023 9:22 am
I'm using SQL Server, so for importing I can use a TI process that reads a SQL Server table and transfers the data into the cube.
When I inspect the Bedrock functions, there are only file-based operations for importing and exporting data. Is there a reason for that, e.g. performance?
I can't speak for the authors of Bedrock but I would say that there are a couple of reasons.
One is that it's easy to provide a sample text data file as a working example. You can't do the same with an ODBC data source, because it will depend on the connection, the access level, the schema(s), the table names and so on.
The second is that there is no NEED to provide examples of an ODBC connection, since the same principles apply anyway. With an ODBC connection the data comes across in rows and columns just as it does from a text file, so anything that Bedrock shows you how to do with a text file source is equally valid for an ODBC source. The only real differences are the Data Source tab itself and the Prolog, where you can set the source specifics.
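For what it's worth, if you do want to set the source specifics in the Prolog rather than through the Data Source tab, it looks roughly like this (the DSN, credentials, table and column names are just placeholders for the sake of the example; you wouldn't normally hard code a password either):

# Prolog: point the process at an ODBC source rather than a text file.
# 'TM1SQLServer' is whatever DSN is defined on the TM1 server machine.
DatasourceType = 'ODBC';
DatasourceNameForServer = 'TM1SQLServer';
DatasourceNameForClient = 'TM1SQLServer';
DatasourceUsername = 'tm1_reader';
DatasourcePassword = 'NotARealPassword';
# Pull only the columns the cube needs; the variables on the Variables tab
# then map to these columns in order.
DatasourceQuery = 'SELECT Period, CostCentre, Account, Amount FROM dbo.GLExtract WHERE FiscalYear = 2023';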
schlemiel29 wrote: ↑Tue Sep 19, 2023 9:22 am
What's best practice using a SQL Server as data source?
There isn't one; it will depend on the circumstances.
Broadly speaking you can pull data from SQL Server in one of two ways:
(a) Have SQL Server generate a text file that TM1 can then ingest. This could be done entirely on the SQL Server side, or by connecting via ODBC and triggering a sproc (stored procedure) to generate the file (there's a rough sketch of that after this list); or
(b) Pull the data directly from either the tables or a view across an ODBC connection.
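For (a), if you want TM1 to be the one pulling the trigger rather than relying on a scheduled job on the SQL Server side, the Prolog of a small TI process can fire the sproc over ODBC, something along these lines (the DSN, credentials and procedure name are invented for the example):

# Prolog: ask SQL Server to (re)generate the extract file; the sproc writes
# it to a location that the main loading process can then read from.
ODBCOpen('TM1SQLServer', 'tm1_reader', 'NotARealPassword');
ODBCOutput('TM1SQLServer', 'EXEC dbo.usp_GenerateTM1Extract');
ODBCClose('TM1SQLServer');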
As a GENERAL rule, which one you use will depend on how fresh you need the data to be, and how much of an impact the query will have on the SQL Server side. You want that impact to be as small as possible, and it's in everyone's interest to keep ODBC connections open for the shortest time possible.
(Ten or 20 years ago the speed of your network would also have been a factor in deciding which path to take, though it's less of an issue now.)
It will also depend on which tables you're hitting on the SQL Server side. If you're hitting live transaction (OLTP) tables, there's the potential for a greater impact than if you're hitting an SQL Server data warehouse which uses denormalised fact and dimension (OLAP) tables.
In the former case in particular you don't want your live queries locking up the server for an extended period, because you may well be locking up both the SQL Server users and the TM1 users. (As with network speed, on a modern implementation of SQL Server on modern hardware this is likely to be less of an issue than it once might have been.) See also "it's in everyone's interest to keep ODBC connections open for the shortest time possible". If that's happening then you may want to look at using generated text files as a data source (since they can be generated as and when SQL Server has time), though I would also look at other factors such as:
(a) Whether you're ONLY pulling the data you need, or whether you're just doing a grab-fest and pulling everything regardless of whether you need it (see the sketch after this list);
(b) Whether the data source on the SQL Server side is really optimised in terms of joins, view definitions, etc.
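On point (a), the easiest win is usually to push the filtering and summarising back to SQL Server so that only the rows and columns you actually need come across the connection. In TI terms that just means a tighter DatasourceQuery, something like this (table and column names illustrative only):

# Prolog: let SQL Server do the filtering and aggregation before anything
# crosses the ODBC connection, rather than dragging the whole table over.
DatasourceQuery = 'SELECT Period, CostCentre, Account, SUM(Amount) AS Amount '
                | 'FROM dbo.GLTransactions '
                | 'WHERE FiscalYear = 2023 '
                | 'GROUP BY Period, CostCentre, Account';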
These days I'm less of a fan of using text files if it can be avoided (it can't always be, of course), because they clutter up storage drives and need to be secured and/or cleaned up and purged periodically. On the other hand, they do provide you with a built-in audit trail of what came across from the source.
In short, I would argue that there is no single best practice; you just need to weigh up the pros and cons of using each approach in terms of the issues above to find the one that works best for you.