Advanced Techniques for TM1 TI Scripting

TurboIntegrator is a component of IBM Cognos TM1 that is used to create cubes and dimensions from data sources such as flat files and Open Database Connectivity (ODBC) sources. With TurboIntegrator, a developer can create and schedule chores, implement ETL capabilities, and use the advanced scripting features of Cognos TM1.

ETL

Most applications built with Cognos TM1 require some way to perform data ETL, which is an acronym for extract, transform, and load. It is a process that involves extracting data from an outside source, transforming it into some form to fit an operational need, and then loading that newly formatted data into a target.

TM1 TurboIntegrator Review

TM1 TurboIntegrator is the programming or scripting tool that allows you to automate data importation, metadata management, and many other tasks.

Within each process, there are five sections or tabs. They are as follows:

  • Data Source Tab
  • Variables Tab
  • Maps Tab
  • Advanced Tab
  • Schedule Tab

The Data Source Tab

You can use the Data Source tab to identify the source from which you want to import data to TM1. The fields and options available on the Data Source tab vary according to the data source type that you select.

TurboIntegrator local variables:

  • DatasourceNameForServer: This variable lets you set the name of the data source to be used. This value can (and should) include a fully qualified path or other specific information that qualifies the data source.
  • DatasourceNameForClient: It is similar to DatasourceNameForServer and in most cases will be the same value.
  • DatasourceType: It defines the type or kind of data source to be used, e.g., view or character delimited.
  • DatasourceUsername: When connecting to an ODBC data source, this is the name used to connect to it.
  • DatasourcePassword: It is used to set the password when connecting to an ODBC data source.
  • DatasourceQuery: It is used to set the query or SQL string used when connecting to an ODBC data source.
  • DatasourceCubeview: It is used to set the name of the cube view when reading from a TM1 cube view. This can be the name of an existing cube view or a view that the process itself defines.
  • DatasourceDimensionsubset: It is used to set the name of the subset when reading from a TM1 subset. This can be the name of an existing subset or a subset that the process itself defines.
  • DatasourceASCIIDelimiter: This can be used to set the ASCII character to be used as a field delimiter when DatasourceType is character delimited.
  • DatasourceASCIIDecimalSeparator: This TI local variable sets the decimal separator to be used in any conversion from a string to a number or a number to a string.
  • DatasourceASCIIThousandSeparator: This TI local variable sets the thousands separator to be used in any conversion from a string to a number or a number to a string.
  • DatasourceASCIIQuoteCharacter: This TI local variable sets the ASCII character used to enclose the fields of the source file when DatasourceType is character delimited.
  • DatasourceASCIIHeaderRecords: It specifies the number of records to be skipped before processing the data source.

Two additional variables to be aware of are:

  • OnMinorErrorDoItemSkip: This TI local variable instructs TI to skip to the next record when a minor error is encountered while processing a record.
  • MinorErrorLogMax: This TI local variable defines the number of minor errors that will be written to the TM1ProcessError.log file during process execution. If this variable is not defined in the process, the default number of minor errors written to the log file is 1000.
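
As a minimal sketch, both can simply be assigned in the Prolog (the cap of 500 below is an arbitrary illustration):

# Skip bad records instead of failing, and cap the error log at 500 entries
OnMinorErrorDoItemSkip = 1;
MinorErrorLogMax = 500;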

Variables Tab (Using TurboIntegrator Variables)

An information cube may be defined with two dimensions: application names and application measures. These two dimensions define data points in the cube where specific applications can store and retrieve information, as in the following example. Here, a TI process reads the name of the file to be loaded and the location of that file:

datasourcePath = CellGetS( InformationCubeName, MeasureNameForCategory, MeasureNameForFilePathName );
dataFileName = CellGetS( InformationCubeName, MeasureNameForCategory, MeasureNameForFileName );

Additionally, the process may also read a location to write exceptions or errors that may occur during the processing:

exceptionFilePath = CellGetS( systemVariableCubeName, systemVariableCategory, systemVariableLogsFileName );

Then, you can build your exception variable:

exceptionFileName = exceptionFilePath | 'my process name' | '_Exceptions.txt';

Finally, you can use some of the previously mentioned process variables to set up your data source (here, an ASCII text file is used as a data source):

# Set the data source information for this process

DatasourceNameForServer = datasourcePath | dataFileName;
DatasourceType = 'CHARACTERDELIMITED';
DatasourceASCIIDelimiter = ',';

Dynamic Definitions

Another advanced technique is to programmatically define a view in a TI process and then set that process’s data source to that view name. It is not that uncommon to define a view to zero-out cells in a cube to which the process will load the incoming data, but it is a little more interesting to define a view that will be read as input by the process itself.
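
As a rough sketch (the cube and view names here are illustrative, and the subset assignments are elided), the Prolog of such a process might build the view and then point the process's own data source variables at it:

# Build a view and use it as this process's data source
sCube = 'Sales';
sView = 'TI_Input_View';
If ( ViewExists( sCube, sView ) = 1 );
   ViewDestroy( sCube, sView );
Endif;
ViewCreate( sCube, sView );
# ... assign subsets to the view here, e.g., with ViewSubsetAssign ...
DatasourceNameForServer = sCube;
DatasourceType = 'VIEW';
DatasourceCubeview = sView;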

Important TurboIntegrator Functions

ExecuteProcess

It allows you to execute a TI process from within a TI process. The format is as follows:

ExecuteProcess(ProcessName, [ParamName1, ParamValue1, ParamName2, ParamValue2]);

The most important return values are as follows:

  • ProcessExitNormal(): If ExecuteProcess returns this, it indicates that the process executed normally.
  • ProcessExitMinorError(): If ExecuteProcess returns this, it indicates that the process executed successfully but encountered minor errors.
  • ProcessExitByQuit(): If ExecuteProcess returns this, it indicates that the process exited because of an explicit quit command.
  • ProcessExitWithMessage(): If ExecuteProcess returns this, it indicates that the process exited normally, with a message written to Tm1smsg.log.
  • ProcessExitSeriousError(): If ExecuteProcess returns this, it indicates that the process exited because of a serious error.
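
A minimal sketch of checking the return value (the process name 'LoadSales', its parameter 'pMonth', and the exception file are illustrative):

nRet = ExecuteProcess( 'LoadSales', 'pMonth', 'Jan' );
If ( nRet <> ProcessExitNormal() );
   ASCIIOutput( exceptionFileName, 'LoadSales did not exit normally' );
Endif;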

ItemSkip

The ItemSkip function can be used to skip a record or row of data.

If ( record_is_invalid = 1 );
   ASCIIOutput( exceptionFile, recordId );
   ItemSkip;
Endif;

ProcessBreak, ProcessError, and ProcessQuit

It is worth mentioning the following functions available in TI:

  • ProcessBreak: Stops all processing and forces control to the Epilog
  • ProcessError: Terminates the process immediately, with the process flagged as having ended in error
  • ProcessQuit: Terminates the process immediately

View Handling

ViewZeroOut

This function sets all data points (all cells) in a named view to zero. It is most often used after defining a very specific view.
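
A minimal sketch (the cube and view names are illustrative):

# Zero out the slice defined by a previously built view before reloading it
ViewZeroOut( 'Budget', 'CurrentYear_Load_View' );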

PublishView

This function is provided to publish a private view to the TM1 Server so that other TM1 Clients can see and use the view. This was not possible in early versions of TM1 unless you were a TM1 Administrator. The format of this function is:

PublishView(Cube, View, PublishPrivateSubsets, OverwriteExistingView);

The arguments (parameters) passed to the function are important:

  • PublishPrivateSubsets: Controls whether private subsets used by the view are published along with it
  • OverwriteExistingView: Controls whether an existing public view with the same name is overwritten
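
For example, in this hedged sketch (the cube and view names are illustrative), passing 1 for both flags publishes the view's private subsets and overwrites any existing public view of the same name:

PublishView( 'Sales', 'MyPrivateView', 1, 1 );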

CubeClearData

The importance of this function is that if you want to clear all the data in a cube, it is extremely fast, much faster than zeroing out the cube through a view.
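
A minimal example (the cube name is illustrative):

CubeClearData( 'SalesReport' );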

CellIsUpdateable

This is an important function because you can use it to avoid runtime TI-generated errors. To use it, you guard each cell write with lines such as the following:

If ( CellIsUpdateable( CubeName, DimName1, DimName2, DimName3 ) = 1 );
   CellPutN( myValue, CubeName, DimName1, DimName2, DimName3 );
Else;
   ASCIIOutput( ExceptionFile, RecordID, ErrorMsg );
Endif;

SaveDataAll

This function saves all TM1 data from server memory to disk and restarts the transaction log file. If the server crashes, any changes made since the last save have to be recovered from the transaction log, which can take a long time.

To avoid this problem, you can use the SaveDataAll function. However, it is important to use this function appropriately: if it is used incorrectly, it can cause server locks and crashes.

SubsetGetSize

SubsetGetSize is a useful function that returns a count of the total elements that are present in a given subset.
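
A minimal sketch (the dimension and subset names are illustrative):

nElements = SubsetGetSize( 'Region', 'All Regions' );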

Security functions:

  • AddClient and DeleteClient: The AddClient function creates and the DeleteClient function deletes clients on the TM1 Server. These functions must be used in the metadata section of a TI process.
  • AddGroup and DeleteGroup: The AddGroup function creates and the DeleteGroup function deletes groups on the TM1 Server. These functions must be used in the metadata section of the TI process.
  • AssignClientToGroup and RemoveClientFromGroup: The AssignClientToGroup function will assign and the RemoveClientFromGroup will remove an existing client from an existing group.
  • ElementSecurityGet and ElementSecurityPut: The ElementSecurityPut function is used to assign, and the ElementSecurityGet function is used to retrieve, the security level of an existing group for a specific dimension element (see the sketch after this list). The security levels can be None, Read, Write, Reserve, Lock, and Admin.
  • SecurityRefresh: The SecurityRefresh function reads all of the currently set-up TM1 security information and applies it to all of the TM1 objects on the TM1 Server. However, depending on the complexity of the TM1 model and of the security that has been set up, this function may take a very long time to execute, and during this time, all users are locked out.
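
As a sketch of the element security functions mentioned above (the dimension, element, and group names are illustrative):

# Give the 'Planners' group WRITE access to element 'France' of dimension 'Region'
ElementSecurityPut( 'WRITE', 'Region', 'France', 'Planners' );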

Rules and feeders management functions:

The following functions can be used to maintain cube rules and feeders:

CubeProcessFeeders

This function becomes important if you are using conditional feeders. Whenever you edit and save a cube rule file, feeders get automatically reprocessed by TM1. You can use this function to ensure that all of the conditional feeders are reprocessed. However, keep in mind that when using this function, all of the feeders for the cube will be reprocessed:

CubeProcessFeeders(CubeName);

DeleteAllPersistentFeeders

To improve performance, you can define feeders as persistent. TM1 then saves these feeders into a .feeder file. These feeder files will persist (remain) until they are physically removed. You can use the DeleteAllPersistentFeeders function to clean up these files.

ForceSkipCheck

This function can be placed in the Prolog section of a TI process to force TM1 to behave as if the cube that the process queries has SKIPCHECK in its rules, i.e., the process will only see cells with values rather than every single cell.

RuleLoadFromFile

This function will load a cube’s rule file from a properly formatted text file. If you leave the text file argument empty, TI looks for a source file with the same name as the cube (but with a .rux extension) in the server’s data directory.

RuleLoadFromFile(Cube, TextFileName);

SubsetCreateByMDX

This function creates a subset based on a properly formatted MDX expression.

MDX is a powerful way to create complicated lists. However, TM1 only supports a small number of MDX functions (not the complete MDX list). The format of this function is as follows:

SubsetCreatebyMDX(SubName, MDX_Expression);
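
A minimal sketch (the subset name is illustrative; the expression selects the level-0, i.e., leaf, elements of the Product dimension):

SubsetCreatebyMDX( 'Leaf Products', '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [Product] )}, 0 )}' );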

ExecuteCommand

This is a function that you can use to execute a command line. The great thing about this is that you can do all sorts of clever things from within a TI process. The most useful among them is to execute an external script or MS Windows command file. The format of this function is as follows:

ExecuteCommand(CommandLine, Wait);

Here, CommandLine is the command line that you want to execute, and the Wait parameter will be set to either 1 or 0 to indicate if the process should wait for the command to complete before proceeding.
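
A minimal sketch (the batch file path is illustrative; passing 1 makes the process wait for the command to finish):

ExecuteCommand( 'C:\Scripts\archive_exports.bat', 1 );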

Maps Tab

The Maps tab in TM1 is used to specify how data from the data source maps to cubes, dimensions, attributes, and consolidations in the database. It consists of a series of sub-tabs, where each sub-tab allows you to map your data variables to existing TM1 metadata structures. The sub-tabs that are available vary according to the type of values your source data holds.

The Map tab consists of the following sub-tabs:

  • Cube: The Cube sub-tab specifies how data imports to TM1 cubes through TurboIntegrator maps. It has the following options:
    Cube Action: Used to create, update, or recreate a cube
    Cube Name: When creating a new cube, enter the cube name in the entry field; the selected action is applied to that cube
    Zero Out Portion: Select this box if, when updating a cube, you want to set a portion of the data points to zero
    View Name: Allows you to select or define the view that identifies the data points you want to zero out
    Data Action: Determines how the processed data is stored: overwrite the existing values or add the incoming values to the existing values in the cube
    Enable Cube Logging: Check box that controls whether changes to the cube are written to the tm1s.log file
  • Dimensions: The Dimensions sub-tab is used to map element variables to dimensions. Each grid on this sub-tab has the following columns:
    Element Variable: Stores the name of each variable whose Contents value is set to Element on the Variables tab
    Sample Value: A value from the first record of the data source, which helps you identify the variable being mapped
    Dimension: The dimension to which the element variable is mapped; the list shows all dimensions on the server
    Order in Cube: When the cube is created, lets you specify the position of each dimension in the cube
    Action: The action to apply (for example, Create when mapping a new dimension)
    Element Type: The element type, either string or numeric
    Element Order: Specifies how the elements of the dimension are ordered: by hierarchy, by level, by name, or by order of input
  • Data: The Data sub-tab is used to map data variables to specific elements. It includes the following columns:
    Data Variable: Stores the name of each variable whose Contents value is set to Data on the Variables tab
    Element: The element to which the variable should be mapped; you can choose it with the Subset Editor
    Element Type: Specifies the element type
    Sample Value: A value from the first record of the data source, which helps you identify the variable being mapped
  • Consolidations: The Consolidations sub-tab is used to map child elements to their parent (consolidated) elements. Each consolidation grid includes the following columns:
    Cons. Variable: Stores the name of each variable whose Contents value is set to Consolidation
    Dimension: The dimension to which the consolidation is mapped
    Child Variable: The variable from which the immediate children of the consolidation are taken
    Weight: The weight assigned to the specified child variables

Advanced Tab

Based on the options selected in the TurboIntegrator editor, the Advanced tab has several sub-tabs that display the statements generated by TurboIntegrator. In the Advanced tab, users can also define parameters for the processes they want to execute. The sub-tabs, namely Parameters, Prolog, Metadata, Data, and Epilog, provide specific information about each process.

Below is a description of each of these sub-tabs, along with the items used to perform certain operations:

Parameters sub-tab:
    Insert: Inserts a new parameter into the sub-tab
    Delete: Deletes the selected parameter
    Parameter: Names a new parameter
    Type: Defines the type (string or numeric) of a new or existing parameter
    Default Value: The value used as the default when the TurboIntegrator process runs
    Prompt Question: The question used to prompt for the parameter when the TI process runs

Prolog, Metadata, Data, and Epilog sub-tabs:
    Statement text box: Displays the generated statements that define the actions performed at that stage of processing (for example, the Prolog statements run before the data source is processed)
    Goto Line button: Select the button, type in the line you want to go to, and click OK to jump directly to that line of code in the statement text box

Schedule Tab

The Schedule tab is used to schedule one or more processes to execute at regular intervals. There are three main items that help users define a schedule for the process:

  • Schedule the Process as a Chore: A process can be executed as a chore at regular intervals. By default, the chore has the same name as the process, but you can change the chore name by typing in the entry field.
  • Chore Start Date and Time: This TI feature allows you to schedule the chore to begin on a specific date and at a specific time.
  • Chore Execution Frequency: Specific time frames can be assigned to chores based on resource availability and priority. For this, you just need to fill in the appropriate fields to create the time schedule.

Advanced TM1 TI Scripting

The Advanced tab in TM1 TurboIntegrator helps a user create or change parameters and pass them to various processes, thereby making the TI process more flexible. These parameters can be supplied at runtime or while editing the process procedures.

To edit a process, the user has to create scripts that incorporate both TI functions and TM1 rules functions. There are three main steps for scripting a process in Cognos TM1 TurboIntegrator:

  1. Editing Prolog, Metadata, Data, and Epilog Procedures
  2. Creating Subsets
  3. Creating Attributes

Editing Prolog, Metadata, Data, and Epilog Procedures

For scripting, there are four main procedures that need to be executed in sequence. Within each procedure, several statements are created based on the options selected in the TurboIntegrator window.

These procedures can be edited by adding custom statements incorporated by TurboIntegrator functions and rules functions. The procedures contained within each process are Prolog, Metadata, Data, and Epilog.

Order of Operations within a TurboIntegrator Process

  • When you run a TI process, the procedures are executed in the following sequence (basically, from left to right as they are displayed):
  1. Prolog
  2. Metadata
  3. Data
  4. Epilog
  • Prolog is executed one time only.
  • If the data source for the process is None, the Metadata and Data tabs are disabled and are not executed.
  • If the data source for the process is None, Epilog executes one time immediately after Prolog finishes processing.
  • If the process has a data source, Metadata and Data execute once for each record in the data source.
  • Code to build or modify a TM1 dimension resides in Metadata.
  • The Metadata tab is used to build TM1 subsets.
  • All lines of code in the Metadata procedure are sequentially executed for each record in the data source.
  • All lines of code in the Data procedure are sequentially executed for each record in the data source.
  • Because the Data procedure is executed for each row or record in the data source, an error in the procedure may be raised multiple times, once per record.
  • The Data procedure is the procedure used to write/edit code that is used to load data into a cube.
  • The data source is closed after the Data procedure is completed.
  • The Epilog procedure is always the last procedure to be executed.
  • Not all TM1 functions will work as expected in every procedure.

Creating Subsets

After editing the Prolog, Metadata, Data, and Epilog of a procedure, you need to move forward to the next step in Advanced TM1 TI Scripting, i.e., creating subsets. You can create subsets for new dimensions from the Subsets process in the TurboIntegrator window.

You can change the parameters and apply filters to your data sources based on the requirements and the result you want to achieve. If you want to keep all the variables but change the data source, you need to specify how your variables should be handled. After that, you can select all the variables and execute the process. The newly created subset will be reflected in the newDim dimension in the Server Explorer.
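
Subsets can also be created directly in script. A minimal sketch (the subset and element names are illustrative):

# Create the subset if it does not exist, then insert an element at position 1
If ( SubsetExists( 'newDim', 'MySubset' ) = 0 );
   SubsetCreate( 'newDim', 'MySubset' );
Endif;
SubsetElementInsert( 'newDim', 'MySubset', 'Element1', 1 );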

Creating Attributes

The AttrPutS function in TurboIntegrator assigns a string value to an element attribute. Following is the syntax for assigning a string to an attribute of an element in a specific dimension:

AttrPutS( 'YourString', 'NewDim', 'ElementName', 'AttributeName' );

Below is the procedure to create an Attribute in TurboIntegrator Scripting:

  • Open Attributes in Cognos TI, load the file from the TI_data directory, and edit your data source if required. If you want to change the data source, you need to specify how these process variables should be handled by selecting Keep All Variables
  • Next, select the Variables tab, and check the formula for the attributes. Then, click on the Maps tab, and the Attributes sub-tab to define an attribute as an alias
  • Finally, click on the Advanced tab and then the Data sub-tab to see the generated statements, with two additional ones. Here is an example of the generated statements before executing the Attributes process:
AttrPutS( V4, 'newdim', V1, 'continent' );

You can view your assignment after assigning the attribute values as follows:

  • Double-click on the newDim dimension in the Server Explorer, and open the Subset Editor
  • Select Subset All, and choose the desired filters such as Edit, Filter by, and Attribute from the menu to display them
  • Finally, select the value name from the filter’s drop-down list in the dialog box, and display all the regions for a specific attribute in the Subset Editor

Aliases in TurboIntegrator Functions

Suppose that a company does forecasting every month. Each month, a new version of the working forecast is created. For example, in January the forecast consists of 12 months of forecasted sales (January through December). In February, the working forecast consists of one month (January) of actual sales data and 11 months of forecasted sales (February through December). In March, the working forecast consists of two months (January and February) of actual sales and 10 months of forecasted sales—and so on.

In this example, you can define an alias attribute on the version dimension and use it to refer to whatever version of the forecast is currently running (or is the working version). Then, TI processes can refer to a version as the working forecast (the alias) and always connect to the correct version.
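
As a sketch of maintaining such an alias from TI (the dimension, element, and alias attribute names are illustrative, and the alias attribute is assumed to already exist):

# Point the 'WorkingAlias' alias at this month's forecast version
AttrPutS( 'Working Forecast', 'Version', 'Forecast V3', 'WorkingAlias' );

Because TM1 accepts alias values wherever element names are expected, processes that refer to 'Working Forecast' will keep resolving to the correct version as the alias is reassigned each month.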

CubeSetLogChanges

In TurboIntegrator, you can turn cube logging on or off using the CubeSetLogChanges function. However, you have to understand when and why to turn cube logging on or off. If your TI process makes data changes to a cube and you need to be able to recover those changes in the event of a server crash, you need to ensure that logging is turned on. If it is not important to recover the changes made by the TI process, it is better to turn cube logging off to improve performance.

For example, when you copy a version of the forecast from the source cube to a reporting cube, logging the changes to the reporting cube is usually not important, so you can turn it off first:

CubeName = 'SalesReport';
CubeSetLogChanges( CubeName, 0 );

Revisiting Variable Types

In the Contents column of the TurboIntegrator Variables tab, you can indicate what you want to do with the data in that variable. The options are as follows:

  • Ignore: If you have a variable set to Ignore, TI will not be able to see or use it at all, and if you refer to it in any part of your code, you will get an error message when you try to save the process.
  • Element, Consolidation, Data, or Attribute: These are only used when you have TI generate the code via the GUI. They tell TI that the contents of the variable should be one of the following:
  1. An element name or consolidation name (which TI can use to update the relevant dimension)
  2. A piece of data, which TI can write into a cube
  3. An attribute, which is also used to update a dimension
  • Other: This is the most common selection. You will set the value to this if you want to write the code yourself.

