New Git and HP adapter videos available on YouTube

I’m pleased to announce that, at long last, we have an introductory video on the Rational Lifecycle Integration Adapter for Git available on YouTube. This video covers the architecture and primary use case of the adapter, and also provides a demo of Deb the developer automatically associating her Git commit with her RTC work item via the adapter.

We also have a new HP adapter video available, demonstrating the suspect traceability and update features of the HP adapter. This is a great demo of the end-to-end integration between Rational Requirements Composer and HP Quality Center.

This video supplements the existing videos that introduce use of the HP adapter for integrating HP ALM with Rational Team Concert and Rational Requirements Management tools.

Posted in Git, HP, Rational Lifecycle Integration Adapters, Uncategorized | Leave a comment

Wrapping the Rational Adapter for Git pre-receive hook

Toto, I’ve a feeling we’re not in Kansas anymore.

If you’re wondering why you can’t find the “z” in this post, I’ve taken on a new challenge as the development lead for the Rational Lifecycle Integration Adapters for Git and HP ALM. So, while this blog will still maintain its Jazz-y theme, I’ll no longer be focusing on the Rational Team Concert Enterprise Extensions. Hopefully by now you’ve found other blogs to follow for all the latest and greatest EE news.

What are the Rational Lifecycle Integration Adapters? You can find a good introduction to our three Standard Edition (OSLC-based) adapters, as well as an announcement of our latest V1.1 release. 60-day trial versions of all three adapters are available on the Downloads page.

Today I’d like to address a question we’ve received on more than one occasion regarding the Git adapter pre-receive hook. The Git adapter’s primary function is to create a link between a Git commit and an RTC work item when a developer pushes his changes to the shared repository. The Git adapter provides a pre-receive hook that parses the commit message for pre-defined (and configurable) keywords such as “bug” or “task” and automatically establishes the link to the RTC work item. If the developer forgets to include the work item reference in his commit message, he can view his commit in Gitweb and manually add a link to his RTC work item using the banner provided by the Git adapter.
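As a rough illustration of the keyword convention, the matching can be approximated with a grep pattern. This is only a stand-in for the adapter's actual matching, which is configurable and considerably more elaborate (see the shipped pre-receive hook):

```shell
# Rough approximation of the commit-message convention: a keyword such as
# "bug" followed by a work item number. The adapter's real pattern is
# configurable and more complex; this is illustration only.
check_message() {
  echo "$1" | grep -Eq '(^|[^[:alnum:]_])bug[: ] *[0-9]+'
}

check_message "bug 123: fix login NPE" && echo "would link to work item"
check_message "fix login NPE" || echo "no work item referenced"
```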

Git Adapter banner in Gitweb

Some users would rather see the push fail if there is no work item referenced in the commit message. If you fall into this category, or if you have other custom validation you need to perform prior to establishing the link, you can write your own custom pre-receive hook that in turn calls the Git adapter’s hook after any custom logic is performed.

I tested this out by creating a simple Perl script based on a sample provided in Scott Chacon’s Pro Git book. It tests the commit message for the word “bug” before calling the Git adapter’s pre-receive hook. If “bug” is not found, the hook ends in error. I named this script pre-receive, saved it in my Git repository’s hooks directory, and renamed the Git adapter’s pre-receive symbolic link as lia-pre-receive.
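The renaming step looks roughly like the following. The repository path here is hypothetical (and the stand-in file simulates the adapter's symlink) — substitute your own bare repository's hooks directory:

```shell
# Hypothetical layout for illustration: replace /tmp/myrepo.git with your
# bare repository. The adapter's original pre-receive (a symlink in a real
# installation) is preserved under a new name, and the custom wrapper is
# dropped in as pre-receive.
HOOKS=/tmp/myrepo.git/hooks
mkdir -p "$HOOKS"
: > "$HOOKS/pre-receive"            # stands in for the adapter's symlink

mv "$HOOKS/pre-receive" "$HOOKS/lia-pre-receive"
printf '#!/usr/bin/env perl\n# custom validation goes here\n' > "$HOOKS/pre-receive"
chmod +x "$HOOKS/pre-receive" "$HOOKS/lia-pre-receive"
ls "$HOOKS"
```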

A few things to note about this solution:

  1. Like any other sample on this blog, this code is in no way, shape, or form a supported solution. It is only intended to get you started.
  2. This sample doesn’t account for things like excluded branches and customized work item tags (like “task”, “defect”, etc).
  3. If the regular expression in this sample is giving you nightmares, you can pop open the pre-receive hook shipped with the Git adapter for a nice explanation of all the complexity. Fun!
  4. We have a reprocess script available for re-attempting the links if something goes wrong on the push. This reprocess script refers to the Git adapter’s pre-receive hook by name, and as such would need to be appropriately updated to refer to the original Git adapter pre-receive hook and not your new custom hook.
  5. If you thought adding your custom checks to the update hook would be an easier solution, think again. The update hook runs after pre-receive, so at that point it’s too late.
  6. Lastly, if you have a whole bunch of logic to perform during your pre-receive, a quick google will turn up some much sexier options for chaining your hooks.
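On point 6, the general shape of a hook-chaining dispatcher is simple enough to sketch. Note that the pre-receive.d directory is purely a convention of this sketch, not a Git feature:

```shell
# Generic chaining sketch (a pre-receive.d directory is a convention of
# this sketch, not something Git provides): run each executable hook in
# order on the same stdin, stopping at the first failure.
run_hooks() {
  dir=$1
  input=$(cat)
  for hook in "$dir"/*; do
    [ -x "$hook" ] || continue
    printf '%s\n' "$input" | "$hook" || return 1
  done
}

# demo: one trivial hook that accepts everything
d=$(mktemp -d)
printf '#!/bin/sh\ncat >/dev/null\n' > "$d/10-accept"
chmod +x "$d/10-accept"
echo "oldrev newrev refs/heads/master" | run_hooks "$d" && echo "all hooks passed"
```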

So, with no further ado, I give you my sample script. Enjoy! And as always, feel free to comment back with your own, better ideas for handling custom hook logic.

#!/usr/bin/env perl

use strict;
use diagnostics;
use File::Basename;
use IPC::Run3;

sub main {
    my @list = <STDIN>;
    my ($name, $path, $suffix) = File::Basename::fileparse($0);
    my $cmd = $path . 'lia-pre-receive';

    foreach my $line (@list) {
        validate($line);
        run3($cmd, \$line);
    }
}

sub validate {
    my ($line) = @_;
    my ($oldrev, $newrev, $refname) = split(/\s+/, $line);

    my $tag = "bug";

    my $revs = `git rev-list $oldrev..$newrev`;

    foreach my $rev (split("\n", $revs)) {
        # strip the commit headers; the message body follows the first blank line
        my $sed = q(sed '1,/^$/d');
        my $message = `git cat-file commit $rev | $sed`;
        unless ($message =~ m/(?<!\w)$tag(?:\s*(\(.*?\)))?[:\s]\s*(\d+)/) {
            print "No work item was specified for commit $rev. Exiting...\n";
            exit 1;
        }
    }
}

main();
exit 0;
Posted in Git, Rational Lifecycle Integration Adapters | Leave a comment

RTC V4.0 Enterprise Extensions Build Administration Workshop now available!

My colleague, Jorge Díaz, and I have been hard at work the last several months preparing a workshop for System z build administrators to learn the concepts and steps involved in migrating and maintaining source control and build infrastructure using Rational Team Concert Enterprise Extensions. We are excited to announce that the Rational Team Concert 4.0 Enterprise Extensions Build Administration Workshop is now available for download! Follow the link to find everything you need to run through this workshop, including installation and setup instructions, a lab book, a sample application, and a supporting slide deck that you can refer to for additional information on the concepts you’re applying. We hope you find this workshop both educational and easy to follow, and we invite you to submit your feedback through the discussion section at the bottom of the article. Enjoy!

Posted in Enterprise Extensions, Rational Team Concert, System z | Leave a comment

More RTC EE videos available

Back in August, I shared some new RTC EE 3.0.1 videos. Since then, several additional videos have been published.

I hope you enjoy these videos and find them educational. Feel free to comment back with additional topics you would like to see covered in the video library!

Posted in Enterprise Extensions, Rational Team Concert, System z | Leave a comment

Delivering your dependency build outputs back to the stream

Several months back I started on an effort to create an Ant task that would deliver outputs of a dependency build back into the SCM. I did a couple quick tests to make sure that the underlying support was there by zimporting a hello world application, zloading it back out, and confirming it could still run. Full success. Then I coded up a quick prototype Ant task to create and run an IShareOperation to confirm that I could share the build outputs using available Java API rather than zimport. Again, full success. Great! I delivered the good news that we would be able to provide this sample Ant task.

Unfortunately the devil (many devils, in fact) was in the details, as I discovered recently when I finally sat down again to properly implement this Ant task. This story really doesn’t have a happy ending, but it is worth sharing given everything I’ve learned while trying to implement this sample.

The first thing I realized was that, in order for this task to really be useful, I would need to not just store the outputs in the SCM, but also create and assign data set definitions to the folders the outputs would be stored in. Otherwise, you wouldn’t be able to load the outputs back out to MVS. It became quickly apparent that (a) I would basically be copy-pasting the entire zimport tool and (b) I would need to use undocumented APIs to get this done. So, I threw away the idea of creating an Ant task and decided that I would invoke zimport from an Ant task, and that I would configure a post-build script in my dependency build definition to call this task. I would also use the scm command line deliver operation to deliver from the zimport workspace to the stream. I used the Using the SCM Command Line Interface in builds article as a starting point.

With this new approach in mind, I started coding my post-build Ant script. It was then that I started seeing failures when I tried to zimport programs of any meaningful size. It turns out I’d found a new bug: Hash mismatch TRE when zimporting load modules (244317). Unfortunately, there is as of this posting no fix for this bug, and therefore the solution I’m presenting is not currently a working option. You can however test out this sample on very small applications in the meantime, which is what I did to continue my work.

The next thing I realized was that in order to deliver the change sets created by zimport, I would need to add a comment or associate a work item in order to pass the commonly used “Descriptive Change Sets” precondition on deliver. Unfortunately (there’s that word again!), zimport does not tell you what change sets it created, or provide you with a way to annotate them. So, this would require some additional scripting and use of the SCM CLI. My sample adds comments; I leave it as an exercise for you, dear reader, to associate a work item (and potentially use curl to create a new one) should you so desire.

An important piece of the sample was to show how we could figure out what output needed to be delivered. So, I had created as part of my initial Ant task a utility to take the dependency build report as input and generate a zimport mapping file. I figured that piece at least was salvageable, even if my main solution was not going to be an Ant task. I discovered something interesting though during my testing: the build report is actually not created until AFTER the post-build script executes. Rats! So, I had to change my approach from using a post-build Ant script to adding a Post Build Command Line to my dependency build configuration and implementing everything as a shell script. I then converted my Java utility to some javascript that would generate my zimport mapping file.

There is much more to this painful saga, but I will spare you the details and share my solution here. Remember that as usual this sample is JUST a sample to get you started. It is not guaranteed or supported in any way. It has been only minimally tested. You will also see quickly that I am not an expert at shell scripting, perl, or javascript. My code is not sexy by any stretch of the imagination, nor is it robust. But, it works (at least for me!) and is hopefully enough to get you well on your way to your own full solution.

First, here is the main script that you will invoke from your dependency build:

#set environment variables required by SCM tools
export JAVA_HOME=/var/java16_64/J6.0_64
export SCM_WORK=/u/jazz40

#specify locations of the SCM CLI, jrunscript, perl, and the javascript used in this script
#some of these could be passed in as build properties set on the build engine
#(jrunscript/perl/script paths below are example values; adjust for your environment)
SCM="/u/ryehle/rtcv401/usr/lpp/jazz/v4.0.1/scmtools/eclipse/scm --non-interactive"
LSCM="/u/ryehle/rtcv401/usr/lpp/jazz/v4.0.1/scmtools/eclipse/lscm --non-interactive"
JRUNSCRIPT=$JAVA_HOME/bin/jrunscript
PERL=perl
PARSE_SCRIPT=/u/ryehle/scripts/parseBuildReport.js

#function to clean up temporary files from previous run
function remove_temp_file {
	if [ -f $1 ]; then
	    echo "Deleting $1"
	    rm $1
	fi
}

echo "Running $0 as $(whoami)"

#specify port for scm daemon. this could also be passed in,
#or let it auto assign and parse out the value
PORT=5555

#gather the arguments and echo for log
REPOSITORYADDRESS=$1
HLQ=$2
FETCH=$3
SYSDEFPROJECTAREA=$4
BLDWKSP=$5
LABEL=$6
PERSONAL_BUILD=$7
echo "Repository address[$REPOSITORYADDRESS] HLQ[$HLQ] Fetch directory[$FETCH]
System definition project area[$SYSDEFPROJECTAREA] Build workspace UUID[$BLDWKSP]
Build label[$LABEL] Personal build[$PERSONAL_BUILD]"

#temporary files used by this script
MAPPING=$FETCH/outputMappingFile.txt
FLOW=$FETCH/flowtargets.txt
CS_LIST=$FETCH/changesets.txt
CS_UPDATE=$FETCH/updatechangesets.sh
SCM_DAEMON=$FETCH/scmdaemon.txt

#do not store outputs if this is a personal build
if [ "$PERSONAL_BUILD" = "personalBuild" ]; then
	echo "Personal build...exiting."
	exit 0
fi

#delete any existing temporary files
remove_temp_file $MAPPING
remove_temp_file $FLOW
remove_temp_file $CS_LIST
remove_temp_file $CS_UPDATE
remove_temp_file $SCM_DAEMON

#create the zimport mapping file from the build report
echo "$(date) Creating the zimport mapping file from the build report"
$JRUNSCRIPT $PARSE_SCRIPT $FETCH/buildReport.xml $HLQ Output CompOutput > $MAPPING

#check to see if there is anything to be zimported
if [ ! -s $MAPPING ]; then
	echo "Nothing to zimport...exiting."
	exit 0
fi

#start the scm daemon process in the background and wait for it
echo "$(date) Starting the SCM daemon at port $PORT"
$SCM daemon start --port $PORT --description "zimport and deliver" >  $SCM_DAEMON 2>&1 &
while true
do
	[ -f $SCM_DAEMON -a -s $SCM_DAEMON ] && break
	sleep 2
done
$PERL -ne 'if (/^Port: /) {exit 0;} exit 1;' $SCM_DAEMON
if [ $? -eq 1 ]; then
	echo "$(date) The SCM daemon failed to start...exiting."
	exit 1;
fi

#display the running daemons
$LSCM ls daemon

#calculate the stream being built and where outputs will be stored. this could be hardcoded to save time.
echo "$(date) Calculating stream being built: $LSCM list flowtargets $BLDWKSP -r $REPOSITORYADDRESS"
$LSCM list flowtargets $BLDWKSP -r $REPOSITORYADDRESS > $FLOW
cat $FLOW
PERL_SCRIPT="if (/\"(.*)\".*\(current\)/)
	{ print qq(\$1); exit 0;}
	die(qq(Could not find current flow for build workspace));"
STREAM=$($PERL -ne "$PERL_SCRIPT" $FLOW)

#create the zimport workspace
#(verify the exact create workspace syntax with "scm help" for your CLI version)
ZIMP_WKSP=zimport_$(date +%Y%m%d-%H%M%S)
echo "$(date) Creating workspace for zimport named $ZIMP_WKSP flowing to $STREAM"
$LSCM create workspace $ZIMP_WKSP -r $REPOSITORYADDRESS --stream "$STREAM"

#perform the zimport (zimport is not available from lscm, so use scm here)
echo "$(date) Starting zimport"
$SCM zimport --binary -r $REPOSITORYADDRESS --hlq $HLQ --mapfile "$FETCH//outputMappingFile.txt" --projectarea "$SYSDEFPROJECTAREA" --workspace $ZIMP_WKSP

#gather list of change sets created from zimport and add a comment
#note: does not annotate new components
echo "$(date) Adding comment to generated change sets"
$LSCM compare workspace $ZIMP_WKSP stream "$STREAM" -r $REPOSITORYADDRESS -p c -f o > $CS_LIST
cat $CS_LIST
PERL_SCRIPT="if (/    \((\d+)\)/)
	{ print qq($LSCM changeset comment \$1 \\\"Change set created by zimport from build $LABEL\\\" -r $REPOSITORYADDRESS \n); }"
$PERL -ne "$PERL_SCRIPT" $CS_LIST > $CS_UPDATE
chmod 777 $CS_UPDATE
$CS_UPDATE

#we can deliver everything since we just created the workspace. otherwise we could have delivered the individual change sets.
echo "$(date) Delivering the changes"
$LSCM deliver -r $REPOSITORYADDRESS -s $ZIMP_WKSP

#delete the zimport workspace
#(verify the exact workspace deletion syntax with "scm help" for your CLI version)
echo "$(date) Deleting the zimport workspace $ZIMP_WKSP"
$LSCM workspace delete $ZIMP_WKSP -r $REPOSITORYADDRESS

#stop the daemon
echo "$(date) Stopping the daemon at port $PORT"
$SCM daemon stop --port $PORT

echo "$(date) Done"

And here is the parseBuildReport.js javascript that takes a build report and generates a zimport mapping file:

Array.prototype.contains = function(object) {
	var i = this.length;
	while (i--) {
		if (this[i] === object) {
			return true;
		}
	}
	return false;
};

var doc = new XMLDocument(arguments[0]);
var hlq = String(arguments[1]);
var output_project_suffix = arguments[2];
var output_component_suffix = arguments[3];
var componentList = doc.getElementsByTagName('bf:component');
var outputArray = [];
for ( var i = 0; i < componentList.length; i++) {
	var component = componentList.item(i);
	var componentName = component.getAttribute('bf:name');
	var projectList = component.getElementsByTagName('bf:project');
	for ( var j = 0; j < projectList.length; j++) {
		var project = projectList.item(j);
		var projectName = project.getAttribute('bf:name');
		var fileList = project.getElementsByTagName('bf:file');
		for (var k = 0; k < fileList.getLength(); k++) {
			var file = fileList.item(k);
			var reason = file.getAttribute('bf:reason');
			if (reason != 0) {
				var outputList = file.getElementsByTagName('outputs:file');
				for (var l = 0; l < outputList.getLength(); l++) {
					var output = outputList.item(l);
					var member = output.getElementsByTagName('outputs:buildFile').item(0).getTextContent();
					var dataset = output.getElementsByTagName('outputs:buildPath').item(0).getTextContent();
					var outputModel = new OutputModel(dataset, member, componentName, projectName);
					outputArray.push(outputModel);
				}
			}
		}
	}
}

//emit one P: mapping line per output
var projectsArray = [];
var componentsArray = [];
for ( var i = 0; i < outputArray.length; i++) {
	var output = outputArray[i];
	var member = output.dataset.substr(hlq.length + 1) + "." + output.member;
	var project = output.project + output_project_suffix + ":" + output.dataset.substr(hlq.length + 1);
	println("P:" + member + "=" + project);

	//stash the zComponent project and component.. we only want one entry per project
	if (!projectsArray.contains(output.project)) {
		projectsArray.push(output.project);
		componentsArray.push(output.component);
	}
}

//emit one C: mapping line per zComponent project
for (var i = 0; i < projectsArray.length; i++) {
	println("C:" + projectsArray[i] + output_project_suffix + "=" + componentsArray[i] + output_component_suffix);
}

function OutputModel (dataset,member,component,project) {
	this.dataset = dataset;
	this.member = member;
	this.component = component;
	this.project = project;
}

To try this out, you will need to:

  1. Store the two sample scripts above on your build machine.
  2. Add a Post Build Command Line to your dependency build definition. Specify a command that invokes the script, passing in all of the required parameters: ${repositoryAddress} ${team.enterprise.scm.resourcePrefix} ${team.enterprise.scm.fetchDestination} “Common Build Admin Project” ${teamz.scm.workspaceUUID} ${buildLabel} ${personalBuild}
  3. Log in to the build machine as the user under whom the build will run, and run the “scm login” command to cache your jazz credentials. E.g., scm login -r https://your_host:9443/ccm -u builder -P fun2test -c. This allows you to run your scm commands from the script without hardcoding a password. By default, the credentials are cached in ~/.jazz-scm. Unfortunately, the scm command line does not allow you to specify a password file as you can for the build agent. Note that the --non-interactive flag passed to the SCM CLI ensures the CLI does not ask for a password and hang the build.

Now you should be able to run your dependency build and see that your outputs are stored back in the source stream. This sample script creates and leaves behind several temporary files for easier reading and debugging. You could certainly refactor the script to not use the temporary files once your solution is fully implemented and tested.

Some additional things to note about this sample:

  1. Notice in the shell script that we start an SCM daemon at the beginning and stop it at the end. We use “lscm” rather than “scm” to leverage this started daemon and reduce the overhead time of these commands. You could consider hard coding some things that don’t need to be calculated to save additional time, such as the current stream with which the build workspace flows. Note also that the zimport subcommand is not supported from lscm, so we use scm for that command. There is an open enhancement to support zimport from lscm.
  2. This script checks the ${personalBuild} build property and exits if it is true. Outputs should likely only be stored in the SCM if they were generated by a team build.
  3. This sample zimports all outputs as binary. You will need to expand the sample if you want to import generated source as text.
  4. This sample uses a convention to create new zComponent projects and RTC components to store the outputs. We do not store the outputs in the .zOSbin folder of the source zComponent projects because there is no way to load files in that folder back out to MVS. We also would not want to run the risk of developers accidentally loading the outputs to their sandbox, nor would we want to potentially cause issues with the dependency build by intermingling the source and outputs.
  5. This sample requires RTC V4.0.1 for the scm list flowtargets command, and for a fix that allows you to specify a workspace for zimport.

Hopefully this sample is useful to you in some capacity, even without a working zimport. Feel free to comment back with your suggested improvements. Lastly, I would be remiss if I did not say THANK YOU to the many folks who helped me stumble through my lack of SCM CLI and scripting skills (Eric, Nicolas, John…) for this post.

Posted in Enterprise Extensions, Rational Team Concert, System z | 2 Comments

Specifying compiler options on a per file basis using translator variables

When building your mainframe applications with Rational Team Concert, you capture your compiler options in a translator. The translator represents one step in your build process, such as a compile or a link-edit. The translator belongs to a language definition, which gathers and orders all of the steps necessary to build a file. Each file that needs to be built is associated with an appropriate language definition. So, you’d have one language definition for your main COBOL programs, another for your subroutines, yet another for your BMS maps, and so on. But what do you do if not all of your files of any given type require the same compile or link options? Or if you want to use different options depending on if you are building at DEV or TEST?

Starting in RTC V4.0, you can use variables in your translator when you specify your compile options. You provide a default value for the variable in the translator, and then can override that value at the file or build definition level. The order of precedence for resolving the variable value at build time is:

  1. File property
  2. Build property
  3. Default value in translator
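The precedence above can be illustrated with a toy resolver. This is only a sketch of the rule; RTC performs this resolution internally at build time:

```shell
# Toy illustration of the precedence rules (not RTC code): the first
# non-empty value wins — file property, then build property, then the
# translator's default value.
resolve_option() {
  file_prop=$1; build_prop=$2; translator_default=$3
  for v in "$file_prop" "$build_prop" "$translator_default"; do
    if [ -n "$v" ]; then echo "$v"; return; fi
  done
}

resolve_option "" "NOLIST" "LIST"   # no file property, so the build property wins
```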

For a simple example, this PL/I compile translator uses a variable to indicate whether to generate compile listings:

Translator with variable


The default value is LIST. If you want to specify NOLIST for a particular file, you provide the override in the file properties like so:

File level compiler option override


If you want to specify NOLIST for all files, you can use a build property that you add to the build definition or the build request. You create the build property by adding a special prefix to the variable name. So, in this example, we would create a build property with the prefixed variable name and give it a value of NOLIST to override the translator default value.

Variables can be used in the “Command/member” field for ISPF/TSO command or exec translators, and in the “Default options” field for called program translators. For more information, visit the Information Center.

Posted in Uncategorized | 1 Comment

Pre-processing your promotion build

Suppose you want to run some checks on the build outputs you are going to promote, for example to ensure they were not compiled with the debug option on. You’ve written a custom REXX script that parses the generated promotionInfo.xml file that contains the name of the build outputs to be promoted and then checks each output. Now what?

Your promotion definition can be configured to run a pre-build and/or post-build command. So, you can just call your REXX from the pre-build command, right? Wrong. The problem is that at the time the pre-build command executes, none of the intermediate files produced on the server to serve as input to the promotion have been transferred to the build machine yet. This means you can’t access promotionInfo.xml to see which outputs to check. Rats!

Rather than use a pre-build command, you will need to use a custom Ant script to perform the promotion. Perform the following steps:

  1. Save the generatedBuild.xml from one of your successful promotion build results to a USS directory on the host where your build agent is running.
  2. Edit the build script and add an additional target that executes your REXX.
  3. Update the “all” target in the build script to execute your target before performing the promote.
  4. In the Promotion definition, on the z/OS Promotion tab, choose “Use an existing build file”.
  5. Specify the build file you created in step 1.

Note that the dependency build offers more flexibility in this area than promotion, in that you can specify a pre or post build script to be executed right before or right after the main build script. This gives you the ability to inject additional Ant tasks while still generating the build script on each run. This capability was added in version 4.0 and can be found on the z/OS Dependency Build tab of your build definition.

Posted in Enterprise Extensions, Rational Team Concert, System z | Leave a comment

Migrating your mainframe application to Rational Team Concert for SCM and build

Your organization has successfully been using RTC for work items and planning, and you’re now ready to move your source control and build as well to take advantage of the full capabilities available to you and reap the benefits of an all-in-one tool. Where do you begin?

Two of the biggest design decisions you will face when migrating your code will be how to logically organize your source into components, and how many streams you will need to properly flow your changes from development to production. You will need to consider things like common code, access control, and the recommended best practice of limiting the number of files in your component to approximately 1000 (500 for earlier releases). You may decide to enlist the help of IBM or a business partner to assist you in devising your strategy.

Once you’ve devised your component and stream strategy, you’ll need to organize your source data sets and prepare for zimport. You can read a bit about zimport in my earlier post, Getting my MVS files into the RTC repository (and getting them back out again). At this point, best practice dictates that you do the following:

1. Import your source to your highest level stream (i.e., production). You may choose to do this as a series of zimports in order to create a history of your major releases to be captured in RTC.

2. Perform a dependency build at the production level. Dependency build creates artifacts known as build maps, one for each program, to capture all of the inputs and outputs involved in building a program. These maps are used in subsequent builds to figure out what programs need to be re-built based on what has changed. This initial build at the production level will build all of your programs, serving two purposes: (1) to prove you have successfully imported your source and you are properly configured to build everything and avoid surprises down the road and (2) to create build maps for all of your programs so that going forward during actual development you will only build based on what has changed.

3. Component promote your source and outputs down through your hierarchy (e.g., production -> QA -> test -> development). This will populate your streams and propagate your build maps down through each level to seed your development-level dependency build. Note that regardless of whether you are going to build at a given level (e.g., test), you still need the build maps in place at that level for use in the promotion process.
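The build-map bookkeeping described in step 2 can be sketched in miniature. This is emphatically not RTC's real build-map format, just an illustration of the "record inputs, rebuild only on change" idea:

```shell
# Toy sketch of the build-map idea (NOT RTC's real format): keep a
# per-program record of input checksums, and rebuild only when one changes.
src=$(mktemp); echo 'MOVE A TO B.' > "$src"
map=$(mktemp)

needs_rebuild() {
  sum=$(cksum < "$1" | cut -d' ' -f1)
  grep -q "^$1 $sum$" "$2" && return 1         # input unchanged: skip
  { grep -v "^$1 " "$2" || true; } > "$2.tmp"  # record the new checksum
  echo "$1 $sum" >> "$2.tmp"; mv "$2.tmp" "$2"
  return 0
}

needs_rebuild "$src" "$map" && echo "initial build: program is built"
needs_rebuild "$src" "$map" || echo "no change: program is skipped"
```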

Once these steps are complete, actual development can begin. Your developers can start delivering changes at your lowest (development) level, build only what’s changed, and use work item promotion to propagate your changes up through your hierarchy to the production level. Each time you begin work on a new release, you will again use component promotion to seed that new release (source code and build artifacts) from the production level.

Great! Except, if you’re like most users, one sentence above has left you reeling: This initial build at the production level will build all of your programs. You want me to do WHAT?! Re-build EVERYTHING?? Yep. For the reasons stated above. But the reality is that this may not be practical or even feasible for a number of reasons. So let’s talk about your options.

Hopefully your biggest objection here is that you don’t want a whole new set of production-level modules, when your current production modules are already tested and proven. No problem! Simply perform the production dependency build to prove out your build setup and generate your build maps, and then throw away all of the build outputs and replace them with your current production modules. This is actually the recommended migration path. You will simply need to use the “Skip timestamp check when build outputs are promoted” option when you are component promoting down (but don’t skip it when you work item promote back up). Also ensure that your dependency builds are configured to trust build outputs. This is the default behavior, and allows the dependency build to assume that the outputs on the build machine are the same outputs that were generated by a previous build. When this option is turned off, the dependency build checks for the presence of the build outputs and confirms that the timestamp on each output matches the timestamp in the build map. A non-existent build output or a mismatched timestamp will cause the program to be rebuilt.

Ideally you are satisfied and can follow the recommended path of building everything and replacing the outputs with your production modules. However, this may not be the case, so let’s explore a few other possible scenarios and workarounds:

1. Issue: Some of my programs need changes before they can be built, and it’s not feasible to do all of that work up-front before the migration.

Workaround: Assign your unbuildable programs a language definition with no translator. These programs are not considered buildable and will be ignored during dependency build. When you are ready to update the programs, assign them a proper language definition at that time. You can also use NO language definition on your unbuildable programs if you’re not using default language definitions (i.e., language definitions assigned based on file extension). In this case, the files will also not be scanned. Note: The approach of adding a language definition after the initial build is broken in V4, and a fix is currently targeted for 4.0.2. See the defect “new file is not built if Lang def is assigned after 1st build (243516)” for details.

2. Issue: All those copies of outputs at each level in my hierarchy are just taking up space. I don’t want them there.

Workaround: You can modify the promotion script to promote the build maps but not copy the outputs themselves. Again, ensure that trust build outputs is true (default) in your dependency build definitions. If you are building at a level where you don’t have outputs, ensure that your production libraries are included in the SYSLIB in your translators.

Follow these steps to utilize this workaround:
1. Copy generatedBuild.xml from a promotion result to the host.
2. In the Promotion definition, on the z/OS Promotion tab, choose “Use an existing build file”.
3. Specify the build file you created in step 1.
4. For build targets, specify “init, startFinalizeBuildMaps, runFinalizeBuildMaps” without the quotes.

3. Issue: I refuse to build all of my programs. That’s ridiculous and way too expensive.

Workaround: Seed the dependency build by creating IEFBR14 translators. This will give you the build maps you need without actually building anything. Then switch to real translators. There is a major caveat here: Indirect dependencies are not handled automatically until the depending program is actually built. For example, if you have a BMS map that generates a copybook that is included by a COBOL program, the dependency of the COBOL program on the BMS map is not discovered until the COBOL program actually goes through a real build. If you can accept this limitation, one approach to this workaround is as follows:
1. Create two sets of translators: your real translators, and one matching IEFBR14 translator for each, so that the SYSLIBs do not change when you switch from the IEFBR14 translators to the real ones.
2. Use build properties in your language definition to specify translators, and set those properties in the build definition.
3. Request the build with the properties pointing to the IEFBR14 translators. Everything “builds” but no outputs are generated.
4. Change all of the translator properties in the build definition to point at the real translators.
5. Request another build and see that nothing is built.
This approach again requires that trust build outputs be enabled, so that we don't rebuild simply because none of the load modules listed in the build maps actually exist.
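Step 2 above can be sketched as a set of substitutable build properties. The property names below (e.g. ``) are hypothetical, chosen purely for illustration; the point is that each language definition refers to its translator indirectly through a property, so flipping the values in the build definition switches every program between the seed and real translators:

```properties
# Hypothetical build properties, set in the dependency build definition.
# Each language definition names its translator as ${},
# ${}, etc., rather than a concrete translator.

# Seeding pass: every program "builds", but no outputs are produced.=iefbr14.cobol.translator=iefbr14.link.translator

# Real pass (step 4): repoint the properties and request another build.
# =real.cobol.translator
# =real.link.translator
```

Because trust build outputs is enabled and the build maps already exist after the seeding pass, the second request builds nothing.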

With any of these approaches, it's essential that you test the full cycle on a small subset of your programs before importing and building your full collection: build at production, component promote down, deliver various changes at development (e.g., a main program, a copybook, a BMS map, an ignored change), build the changes, and work item promote the changes up. This ensures that your solution works for your environment and situation.

Posted in Enterprise Extensions, Rational Team Concert, System z | 1 Comment

New Enterprise Extensions blog to follow!

Two of the RTC-EE developers have started blogging! Check out their first post here on the great improvements made to the IBM i capability in RTC V4. I am especially excited about the Useful Links page they’ve included on their site. Enjoy!

Posted in Enterprise Extensions, Rational Team Concert | Leave a comment

A simple build chaining example

Imagine the case where your mainframe application is not purely COBOL, but also leverages Java, web services, etc. In this scenario, you will not be able to build with dependency build alone. What can you do to coordinate the build of your entire application? In this post, we’ll look at a simple way to coordinate the builds of a COBOL application and a Java application, such that they run one after the other.

We will build our example into Money that Matters, so you can easily try it out too. Out of the box, Money that Matters includes a Java build and a COBOL dependency build. Let's chain those two together.

First, we will create a third parent build definition to control the two child builds. It's possible instead to add the request for the Java build directly to the end of the COBOL dependency build, but with the parent-child approach you can easily add more children and also easily run the children on their own. Our third build will be an Ant – Jazz Build Engine build. You will need to use the Jazz Source Control Pre-Build option if you want to store your build script in the repository; otherwise, you can maintain it on the system where the build engine will execute and reference it directly in the build definition. This build definition will simply invoke the Ant script, build.xml.

The build script will simply request the mortgage (COBOL) build, wait for it to complete, and then request and wait for the JKE (Java) build. This is done using Ant tasks included with the build system toolkit, described here. It will look something like the following:

<?xml version="1.0" encoding="UTF-8"?>
<project default="all" name="Cross platform build">
	<property name="userId" value="builder2" />
	<property name="password" value="rtc9fun" />
	<taskdef name="requestTeamBuild" classname="" />
	<taskdef name="startBuildActivity" classname="" />
	<taskdef name="waitForTeamBuild" classname="" />
	<taskdef name="buildResultRetriever" classname="" />
	<taskdef name="linkPublisher" classname="" />
	<target description="Trigger build and wait" name="trigger_build">
		<startBuildActivity autoComplete="true" buildResultUUID="${buildResultUUID}" label="Requesting ${chainedBuildDefinitionId} build" password="${password}" repositoryAddress="${repositoryAddress}" userId="${userId}" />
		<!--request the child build-->
		<requestTeamBuild buildDefinitionId="${chainedBuildDefinitionId}" requestUUIDProperty="buildRequestUUID" resultUUIDProperty="childBuildResultUUID" failOnError="true" password="${password}" repositoryAddress="${repositoryAddress}" userId="${userId}" />
		<startBuildActivity autoComplete="true" buildResultUUID="${buildResultUUID}" label="Waiting for ${chainedBuildDefinitionId} build" password="${password}" repositoryAddress="${repositoryAddress}" userId="${userId}" />
		<!--wait for the child build-->
		<waitForTeamBuild repositoryAddress="${repositoryAddress}" userId="${userId}" password="${password}" requestUUID="${buildRequestUUID}" statesToWaitFor="COMPLETED,CANCELED,INCOMPLETE" buildStatusProperty="buildStatus" verbose="true" interval="5" />
		<!--retrieve the label for the child build-->
		<buildResultRetriever repositoryAddress="${repositoryAddress}" userId="${userId}" password="${password}" buildResultUUID="${childBuildResultUUID}" labelProperty="childBuildLabel" failonerror="false" />
		<!--publish link to child build result in parent-->
		<linkPublisher label="${chainedBuildDefinitionId}:  ${childBuildLabel}" url="${repositoryAddress}/resource/itemOid/${childBuildResultUUID}" buildResultUUID="${buildResultUUID}" repositoryAddress="${repositoryAddress}" userId="${userId}" password="${password}" failOnError="false" />
		<echo message="${chainedBuildDefinitionId} build status: ${buildStatus}" />
		<fail message="${chainedBuildDefinitionId} failed. Exiting.">
			<condition>
				<equals arg1="${buildStatus}" arg2="ERROR" />
			</condition>
		</fail>
	</target>
	<target description="Trigger the build" name="build_mortgage">
		<antcall target="trigger_build">
			<param name="chainedBuildDefinitionId" value="" />
		</antcall>
	</target>
	<target description="Trigger the build" name="build_jke">
		<antcall target="trigger_build">
			<param name="chainedBuildDefinitionId" value="" />
		</antcall>
	</target>
	<target depends="build_mortgage,build_jke" description="Cross-platform build" name="all" />
</project>

Notice the use of the buildResultRetriever and linkPublisher tasks to include labeled links to your child builds in the parent build result.

The Java build is supported by a Jazz Build Engine, likely running on a distributed platform, while the COBOL dependency build is supported by a Rational Build Agent running on the mainframe. We need to add a second Jazz Build Engine to support our new parent build. We can't reuse the existing engine, since we need it available to service the request for the Java child build while the parent build is still running. Simply create a new Jazz Build Engine and configure it to support the parent build definition.
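Starting that second engine from the command line might look like the sketch below. The `jbe` launcher ships with the build system toolkit; the engine ID and repository URL here are placeholders, and the credentials are the same sample values used in the build script above:

```shell
# Start a second Jazz Build Engine dedicated to the parent build.
# Engine ID and repository URL are placeholders -- substitute your own.
jbe -repository https://your-server:9443/ccm \
    -engineId chained.build.engine \
    -userId builder2 -pass rtc9fun \
    -sleeptime 5
```

Register the matching build engine in the project area and add it as a supporting engine on the parent build definition.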

Now, building your COBOL assets and your Java assets in order is simply a matter of requesting your parent build. A more complicated scenario might require the use of an additional tool, such as Rational Build Forge, to orchestrate the build.

Posted in Enterprise Extensions, Rational Team Concert, System z | Leave a comment