New Git and HP adapter videos available on YouTube

I’m pleased to announce that, at long last, we have an introductory video on the Rational Lifecycle Integration Adapter for Git available on YouTube. This video covers the architecture and primary use case of the adapter, and also provides a demo of Deb the developer automatically associating her Git commit with her RTC work item via the adapter.



We also have a new HP adapter video available, demonstrating the suspect traceability and update features of the HP adapter. This is a great demo of the end-to-end integration between Rational Requirements Composer and HP Quality Center.



This video supplements the existing videos that introduce use of the HP adapter for integrating HP ALM with Rational Team Concert and Rational Requirements Management tools.

Posted in Git, HP, Rational Lifecycle Integration Adapters, Uncategorized | Leave a comment

Wrapping the Rational Adapter for Git pre-receive hook

Toto, I’ve a feeling we’re not in Kansas anymore.

If you’re wondering why you can’t find the “z” in this post, I’ve taken on a new challenge as the development lead for the Rational Lifecycle Integration Adapters for Git and HP ALM. So, while this blog will still maintain its Jazz-y theme, I’ll no longer be focusing on the Rational Team Concert Enterprise Extensions. Hopefully by now you’ve started following jorgediazblog.wordpress.com and rtcee.wordpress.com for all the latest and greatest EE news.

What are the Rational Lifecycle Integration Adapters? You can find a good introduction to our three Standard Edition (OSLC-based) adapters, as well as an announcement of our latest V1.1 release, on jazz.net. 60-day trial versions of all three adapters are available for download on the jazz.net Downloads page.

Today I’d like to address a question we’ve received on more than one occasion regarding the Git adapter pre-receive hook. The Git adapter’s primary function is to create a link between a Git commit and an RTC work item when a developer pushes his changes to the shared repository. The Git adapter provides a pre-receive hook that parses the commit message for pre-defined (and configurable) keywords such as “bug” or “task” and automatically establishes the link to the RTC work item. If the developer forgets to include the work item reference in his commit message, he can view his commit in Gitweb and manually add a link to his RTC work item using the banner provided by the Git adapter.

Git Adapter banner in Gitweb
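
For example, assuming the default keyword configuration, a developer could associate a commit with RTC work item 123 simply by mentioning it in the commit message (the message text and work item number here are made up):

git commit -m "bug 123: correct the interest calculation"
git push origin master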

Some users would rather see the push fail if there is no work item referenced in the commit message. If you fall into this category, or if you have other custom validation you need to perform prior to establishing the link, you can write your own custom pre-receive hook that in turn calls the Git adapter’s hook after any custom logic is performed.

I tested this out by creating a simple Perl script based on a sample provided in Scott Chacon’s Pro Git book. It tests the commit message for the word “bug” before calling the Git adapter’s pre-receive hook. If “bug” is not found, the hook ends in error. I named this script pre-receive, saved it in my Git repository’s hooks directory, and renamed the Git adapter’s pre-receive symbolic link as lia-pre-receive.

A few things to note about this solution:

  1. Like any other sample on this blog, this code is in no way, shape, or form a supported solution. It is only intended to get you started.
  2. This sample doesn’t account for excluded branches or customized work item tags (such as “task”, “defect”, etc.).
  3. If the regular expression in this sample is giving you nightmares, you can pop open the pre-receive hook shipped with the Git adapter for a nice explanation of all the complexity. Fun!
  4. We have a reprocess script available for re-attempting the links if something goes wrong on the push. This reprocess script refers to the Git adapter’s pre-receive hook by name, and as such would need to be appropriately updated to refer to the original Git adapter pre-receive hook and not your new custom hook.
  5. If you thought adding your custom checks to the update hook would be an easier solution, think again. The update hook runs after pre-receive, so at that point it’s too late.
  6. Lastly, if you have a whole bunch of logic to perform during your pre-receive, a quick Google search will turn up some much sexier options for chaining your hooks; a bare-bones sketch of the general idea follows this list.
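
For the curious, most of those chaining approaches boil down to a dispatcher pre-receive that feeds the same stdin to every executable in a hooks directory and fails fast on the first non-zero exit. Here is a purely illustrative sh sketch of the idea (the pre-receive.d directory name is my own convention, not something the Git adapter provides):

#!/bin/sh
#read the old-rev/new-rev/refname lines once so every hook sees the same input
input=$(cat)
hookdir="$(dirname "$0")/pre-receive.d"
for hook in "$hookdir"/*
do
	[ -x "$hook" ] || continue
	printf '%s\n' "$input" | "$hook" || exit 1
done
exit 0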

So, with no further ado, I give you my sample script. Enjoy! And as always, feel free to comment back with your own, better ideas for handling custom hook logic.

#!/usr/bin/env perl

use strict;
use diagnostics;
use File::Basename;
use IPC::Run3;

sub main {
    # each line on stdin is "<old-rev> <new-rev> <refname>"
    my @list = <STDIN>;
    # the Git adapter's renamed hook (lia-pre-receive) lives in the same hooks directory as this wrapper
    my ($name,$path,$suffix) = File::Basename::fileparse($0);
    my $cmd = $path . 'lia-pre-receive';

    foreach my $line (@list) {
        # reject the push if the commit messages don't reference a work item
        validate($line);
        # forward the ref update line to the Git adapter's hook on its stdin
        run3($cmd, \$line);
    }
}

sub validate {
    #print "validate: @_\n";
    my ($line) = @_;
    my @inputs = split(/\s+/, $line);
    my $oldrev = $inputs[0];
    my $newrev = $inputs[1];
    my $refname = $inputs[2];

    my $tag = "bug";

    # list the commits being pushed (note: this does not handle brand-new refs, where $oldrev is all zeros)
    my $revs = `git rev-list $oldrev..$newrev`;

    my @missed_revs = split("\n", $revs);
    foreach my $rev (@missed_revs) {
        my $sed = q(sed '1,/^$/d');
        my $message = `git cat-file commit $rev | $sed`;
        # matches e.g. "bug 123", "bug: 123", or "bug (component) 123"
        if ($message =~ m/(?<!\w)$tag(?:\s*(\(.*?\)))?[:\s]\s*(\d+)/) {
            #Nothing to do here...
        } else {
            print "No work item was specified for commit $rev. Exiting...\n";
            exit 1;
        }
    }
}

main();
exit 0;
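
For reference, the installation described above amounts to something like the following on the Git server (the paths are illustrative; adjust them for your own repository and wherever you keep the wrapper script):

cd /path/to/your/repo.git/hooks
#keep the Git adapter's hook around under the name the wrapper expects
mv pre-receive lia-pre-receive
#install the wrapper as the new pre-receive hook
cp /path/to/wrapper/pre-receive .
chmod +x pre-receive
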
Posted in Git, Rational Lifecycle Integration Adapters | Leave a comment

RTC V4.0 Enterprise Extensions Build Administration Workshop available on jazz.net!

My colleague, Jorge Díaz, and I have been hard at work for the last several months preparing a workshop to help System z build administrators learn the concepts and steps involved in migrating and maintaining source control and build infrastructure using Rational Team Concert Enterprise Extensions. We are excited to announce that the Rational Team Concert 4.0 Enterprise Extensions Build Administration Workshop is now available for download on jazz.net! Follow the link to find everything you need to run through this workshop, including installation and setup instructions, a lab book, a sample application, and a supporting slide deck that you can refer to for additional information on the concepts you’re applying. We hope you find this workshop both educational and easy to follow, and we invite you to submit your feedback through the discussion section at the bottom of the article. Enjoy!

Posted in Enterprise Extensions, Rational Team Concert, System z | Leave a comment

More RTC EE videos available on jazz.net

Back in August, I shared some New RTC EE 3.0.1 videos available on jazz.net. Since then, several additional videos have been published.

I hope you enjoy these videos and find them educational. Feel free to comment back with additional topics you would like to see covered in the video library!

Posted in Enterprise Extensions, Rational Team Concert, System z | Leave a comment

Delivering your dependency build outputs back to the stream

Several months back I started on an effort to create an Ant task that would deliver the outputs of a dependency build back into the SCM. I did a couple of quick tests to make sure that the underlying support was there by zimporting a hello world application, zloading it back out, and confirming it could still run. Full success. Then I coded up a quick prototype Ant task to create and run an IShareOperation, confirming that I could share the build outputs using the available Java API rather than zimport. Again, full success. Great! I delivered the good news that we would be able to deliver this sample Ant task.

Unfortunately the devil (many devils, in fact) was in the details, as I discovered recently when I finally sat down again to properly implement this Ant task. This story really doesn’t have a happy ending, but it is worth sharing given everything I’ve learned while trying to implement this sample.

The first thing I realized was that, in order for this task to really be useful, I would need to not just store the outputs in the SCM, but also create and assign data set definitions to the folders the outputs would be stored in. Otherwise, you wouldn’t be able to load the outputs back out to MVS. It quickly became apparent that (a) I would basically be copy-pasting the entire zimport tool and (b) I would need to use undocumented APIs to get this done. So, I threw away the idea of creating an Ant task and decided that I would instead invoke zimport from an Ant script, configured as a post-build script in my dependency build definition. I would also use the scm command line deliver operation to deliver from the zimport workspace to the stream. I used the Using the SCM Command Line Interface in builds article as a starting point.

With this new approach in mind, I started coding my post-build Ant script. It was then that I started seeing failures when I tried to zimport programs of any meaningful size. It turns out I’d found a new bug: Hash mismatch TRE when zimporting load modules (244317). Unfortunately, as of this posting there is no fix for this bug, and therefore the solution I’m presenting is not currently a working option. You can, however, test out this sample on very small applications in the meantime, which is what I did to continue my work.

The next thing I realized was that in order to deliver the change sets created by zimport, I would need to add a comment or associate a work item in order to pass the commonly used “Descriptive Change Sets” precondition on deliver. Unfortunately (there’s that word again!), zimport does not tell you what change sets it created, or provide you with a way to annotate them. So, this would require some additional scripting and use of the SCM CLI. My sample adds comments; I leave it as an exercise for you, dear reader, to associate a work item (and potentially use curl to create a new one) should you so desire.

An important piece of the sample was to show how to figure out which outputs need to be delivered. So, as part of my initial Ant task I had created a utility that takes the dependency build report as input and generates a zimport mapping file. I figured that piece at least was salvageable, even if my main solution was not going to be an Ant task. I discovered something interesting during my testing, though: the build report is actually not created until AFTER the post-build script executes. Rats! So, I had to change my approach from using a post-build Ant script to adding a Post Build Command Line to my dependency build configuration and implementing everything as a shell script. I then converted my Java utility to some JavaScript that would generate my zimport mapping file.

There is much more to this painful saga, but I will spare you the details and share my solution here. Remember that, as usual, this sample is JUST a sample to get you started. It is not guaranteed or supported in any way, and it has been only minimally tested. You will also quickly see that I am not an expert at shell scripting, Perl, or JavaScript. My code is not sexy by any stretch of the imagination, nor is it robust. But it works (at least for me!) and is hopefully enough to get you well on your way to your own full solution.

First, here is the main script zimport_and_deliver.sh that you will invoke from your dependency build:

#!/bin/sh
#set environment variables required by SCM tools
export JAVA_HOME=/var/java16_64/J6.0_64
export SCM_WORK=/u/jazz40

#specify locations of jrunscript, perl, and javascript used in this script
#some of these could be passed in as build properties set on the build engine
JRUNSCRIPT="/var/java16_64/J6.0_64/bin/jrunscript"
SCM="/u/ryehle/rtcv401/usr/lpp/jazz/v4.0.1/scmtools/eclipse/scm --non-interactive"
LSCM="/u/ryehle/rtcv401/usr/lpp/jazz/v4.0.1/scmtools/eclipse/lscm --non-interactive"
PERL=/usr/lpp/perl/bin/perl
PARSE_SCRIPT="/u/ryehle/parseBuildReport.js"

#function to clean up temporary files from previous run
function remove_temp_file {
	if [ -f $1 ]
	then
	    echo "Deleting $1"
	    rm $1
	fi
}

echo "Running $0 as $(whoami)"

#specify port for scm daemon. this could also be passed in,
#or let it auto assign and parse out the value
PORT=15869

#gather the arguments and echo for log
REPOSITORYADDRESS=$1
HLQ=$2
FETCH=$3
SYSDEFPROJECTAREA=$4
BLDWKSP=$5
LABEL=$6
PERSONAL_BUILD=$7
echo "Repository address[$REPOSITORYADDRESS] HLQ[$HLQ] Fetch directory[$FETCH]
System definition project area[$SYSDEFPROJECTAREA] Build workspace UUID[$BLDWKSP]
Build label[$LABEL] Personal build[$PERSONAL_BUILD]"

#do not store outputs if this is a personal build
if [ "$PERSONAL_BUILD" != "personalBuild" ]
then
	echo "Personal build...exiting."
	exit 0
fi

#delete any existing temporary files
MAPPING=$FETCH/outputMappingFile.txt
FLOW=$FETCH/flow.tmp
CS_LIST=$FETCH/cslist.tmp
CS_UPDATE=$FETCH/csupdate.sh
SCM_DAEMON=$FETCH/daemon.tmp
remove_temp_file $MAPPING
remove_temp_file $FLOW
remove_temp_file $CS_LIST
remove_temp_file $CS_UPDATE
remove_temp_file $SCM_DAEMON

#create the zimport mapping file from the build report
echo "$(date) Creating the zimport mapping file from the build report"
$JRUNSCRIPT $PARSE_SCRIPT $FETCH/buildReport.xml $HLQ Output CompOutput > $MAPPING

#check to see if there is anything to be zimported
if [ ! -s $MAPPING ]
then
	echo "Nothing to zimport...exiting."
	exit 0
fi

#start the scm daemon process in the background and wait for it
echo "$(date) Starting the SCM daemon at port $PORT"
$SCM daemon start --port $PORT --description "zimport and deliver" >  $SCM_DAEMON 2>&1 &
while true
do
	[ -f $SCM_DAEMON -a -s $SCM_DAEMON ] && break
	sleep 2
done
cat $SCM_DAEMON
$PERL -ne 'if (/^Port: /) {exit 0;} exit 1;' $SCM_DAEMON
if [ $? -eq 1 ]
then
	echo "$(date) The SCM daemon failed to start...exiting."
	exit 1;
fi

#display the running daemons
$LSCM ls daemon

#calculate the stream being built and where outputs will be stored. this could be hardcoded to save time.
echo "$(date) Calculating stream being built: $LSCM list flowtargets $BLDWKSP -r $REPOSITORYADDRESS"
$LSCM list flowtargets $BLDWKSP -r $REPOSITORYADDRESS > $FLOW
cat $FLOW
PERL_SCRIPT="if (/\"(.*)\".*\(current\)/)
	{ print qq(\$1); exit 0;}
	die(qq(Could not find current flow for build workspace));"
STREAM=`$PERL -ne "$PERL_SCRIPT" $FLOW`

#create the zimport workspace
ZIMP_WKSP=zimport_$(date +%Y%m%d-%H%M%S)
echo "$(date) Creating workspace for zimport named $ZIMP_WKSP flowing to $STREAM"
$LSCM create workspace -r $REPOSITORYADDRESS -s "$STREAM" $ZIMP_WKSP

#perform the zimport
echo "$(date) Starting zimport"
$SCM zimport --binary -r $REPOSITORYADDRESS --hlq $HLQ --mapfile "$MAPPING" --projectarea "$SYSDEFPROJECTAREA" --workspace $ZIMP_WKSP

#gather list of change sets created from zimport and add a comment
#note: does not annotate new components
echo "$(date) Adding comment to generated change sets"
$LSCM compare workspace $ZIMP_WKSP stream "$STREAM" -r $REPOSITORYADDRESS -p c -f o > $CS_LIST
cat $CS_LIST
PERL_SCRIPT="if (/    \((\d+)\)/)
	{
	print qq($LSCM changeset comment \$1 \\\"Change set created by zimport from build $LABEL\\\" -r $REPOSITORYADDRESS \n);
	}"
$PERL -ne "$PERL_SCRIPT" < $CS_LIST > $CS_UPDATE
chmod 777 $CS_UPDATE
$CS_UPDATE

#we can deliver everything since we just created the workspace. otherwise we could have delivered the individual change sets.
echo "$(date) Delivering the changes"
$LSCM deliver -s $ZIMP_WKSP -r $REPOSITORYADDRESS

#delete the zimport workspace
echo "$(date) Deleting the zimport workspace $ZIMP_WORKSPACE"
$LSCM workspace delete $ZIMP_WKSP -r $REPOSITORYADDRESS

#stop the daemon
echo "$(date) Stopping the daemon at port $PORT"
$SCM daemon stop --port $PORT

echo "$(date) Done"

And here is the parseBuildReport.js JavaScript that takes a build report and generates a zimport mapping file:

//helper to check whether an array already contains the given object
Array.prototype.contains = function(object) {
	var i = this.length;
	while (i--) {
		if (this[i] === object) {
			return true;
		}
	}
	return false;
};

//arguments: path to buildReport.xml, build HLQ, suffix for output zComponent projects, suffix for output components
var doc = new XMLDocument(arguments[0]);
var hlq = String(arguments[1]);
var output_project_suffix = arguments[2];
var output_component_suffix = arguments[3];
var componentList = doc.getElementsByTagName('bf:component');
var outputArray = [];
for ( var i = 0; i < componentList.length; i++) {
	var component = componentList.item(i);
	var componentName = component.getAttribute('bf:name');
	var projectList = component.getElementsByTagName('bf:project');
	for ( var j = 0; j < projectList.length; j++) {
		var project = projectList.item(j);
		var projectName = project.getAttribute('bf:name');
		var fileList = project.getElementsByTagName('bf:file');
		for (var k = 0; k < fileList.getLength(); k++) {
			var file = fileList.item(k);
			var reason = file.getAttribute('bf:reason');
			//only gather outputs for files that were actually built (non-zero reason)
			if (reason != 0) {
				var outputList = file.getElementsByTagName('outputs:file');
				for (var l = 0; l < outputList.getLength(); l++) {
					var output = outputList.item(l);
					var member = output.getElementsByTagName('outputs:buildFile').item(0).getTextContent();
					var dataset = output.getElementsByTagName('outputs:buildPath').item(0).getTextContent();
					var outputModel = new OutputModel(dataset, member, componentName, projectName);
					outputArray.push(outputModel);
				}
			}
		}
	}
}
var projectsArray = [];
var componentsArray = [];
for ( var i = 0; i < outputArray.length; i++) {
	var output = outputArray[i];
	//println(output.dataset + output.member + output.component + output.project);
	var member = output.dataset.substr(hlq.length + 1) + "." + output.member;
	var project = output.project + output_project_suffix + ":" + output.dataset.substr(hlq.length + 1);
	println("P:" + member + "=" + project);

	//stash the zComponent project and component.. we only want one entry per project
	if (!projectsArray.contains(output.project)) {
		projectsArray.push(output.project);
		componentsArray.push(output.component);
	}
}
for (var i = 0; i < projectsArray.length; i++) {
	println("C:" + projectsArray[i] + output_project_suffix + "=" + componentsArray[i] + output_component_suffix);
}

function OutputModel (dataset,member,component,project) {
	this.dataset = dataset;
	this.member = member;
	this.component = component;
	this.project = project;
}
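
To make the mapping file format concrete, here is what a generated outputMappingFile.txt might look like for a single load module. The project, component, and member names are made up; only the P:/C: line layout comes from the script above, together with the Output and CompOutput suffixes passed in by zimport_and_deliver.sh:

P:LOAD.EPSCMORT=MortgageApplicationOutput:LOAD
C:MortgageApplicationOutput=MortgageCompOutput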

To try this out, you will need to:

  1. Store the two sample scripts above on your build machine.
  2. Add a Post Build Command Line to your dependency build definition. Specify a command to invoke the zimport_and_deliver.sh script and pass in all of the required parameters. I specify the following command: zimport_and_deliver.sh ${repositoryAddress} ${team.enterprise.scm.resourcePrefix} ${team.enterprise.scm.fetchDestination} "Common Build Admin Project" ${teamz.scm.workspaceUUID} ${buildLabel} ${personalBuild}
  3. Logged in to the build machine as the user under whom the build will run, run the “scm login” command to cache your Jazz credentials. E.g., scm login -r https://your_host:9443/ccm -u builder -P fun2test -c. This allows you to run your scm commands from the zimport_and_deliver.sh script without hardcoding a password. By default, the credentials are cached in ~/.jazz-scm. Unfortunately, the scm command line does not allow you to specify a password file as you can for the build agent. Note that the --non-interactive flag passed to the SCM CLI ensures the CLI does not prompt for a password and hang the build.

Now you should be able to run your dependency build and see that your outputs are stored back in the source stream. This sample script creates and leaves behind several temporary files for easier reading and debugging. You could certainly refactor the script to not use the temporary files once your solution is fully implemented and tested.

Some additional things to note about this sample:

  1. Notice in the shell script that we start an SCM daemon at the beginning and stop it at the end. We use “lscm” rather than “scm” to leverage this started daemon and reduce the overhead time of these commands. You could consider hard coding some things that don’t need to be calculated to save additional time, such as the current stream with which the build workspace flows. Note also that the zimport subcommand is not supported from lscm, so we use scm for that command. The open enhancement to support zimport from lscm can be seen here.
  2. This script checks the ${personalBuild} build property and exits if it is true. Outputs should likely only be stored in the SCM if they were generated by a team build.
  3. This sample zimports all outputs as binary. You will need to expand the sample if you want to import generated source as text.
  4. This sample uses a convention to create new zComponent projects and RTC components to store the outputs. We do not store the outputs in the .zOSbin folder of the source zComponent projects because there is no way to load files in that folder back out to MVS. We also would not want to run the risk of developers accidentally loading the outputs to their sandbox, nor would we want to potentially cause issues with the dependency build by intermingling the source and outputs.
  5. This sample requires RTC V4.0.1 for the scm list flowtargets command, and for a fix that allows you to specify a workspace for zimport.

Hopefully this sample is useful to you in some capacity, even without a working zimport. Feel free to comment back with your suggested improvements. Lastly, I would be remiss if I did not say THANK YOU to the many folks who helped me stumble through my lack of SCM CLI and scripting skills (Eric, Nicolas, John…) for this post.

Posted in Enterprise Extensions, Rational Team Concert, System z | 2 Comments

Specifying compiler options on a per file basis using translator variables

When building your mainframe applications with Rational Team Concert, you capture your compiler options in a translator. The translator represents one step in your build process, such as a compile or a link-edit. The translator belongs to a language definition, which gathers and orders all of the steps necessary to build a file. Each file that needs to be built is associated with an appropriate language definition. So, you’d have one language definition for your main COBOL programs, another for your subroutines, yet another for your BMS maps, and so on. But what do you do if not all of your files of any given type require the same compile or link options? Or if you want to use different options depending on whether you are building at DEV or TEST?

Starting in RTC V4.0, you can use variables in your translator when you specify your compile options. You provide a default value for the variable in the translator, and can then override that value at the file or build definition level. The order of precedence for resolving the variable value at build time is:

  1. File property
  2. Build property
  3. Default value in translator

For a simple example, this PL/I compile translator uses a variable to indicate whether to generate compile listings:

Translator with variable

The default value is LIST. If you want to specify NOLIST for a particular file, you provide the override in the file properties like so:

File level compiler option override

If you want to specify NOLIST for all files, you can use a build property that you add to the build definition or the build request. You create the build property by adding the prefix team.enterprise.build.var to the variable name. So, in this example, we would create a build property named team.enterprise.build.var.LISTING and give it the value NOLIST to override the translator default value.

Variables can be used in the “Command/member” field for ISPF/TSO command or exec translators, and in the “Default options” field for called program translators. For more information, visit the Information Center.

Posted in Uncategorized | 1 Comment

Pre-processing your promotion build

Suppose you want to run some checks on the build outputs you are going to promote, for example to ensure they were not compiled with the debug option on. You’ve written a custom REXX script that parses the generated promotionInfo.xml file, which lists the build outputs to be promoted, and then checks each output. Now what?

Your promotion definition can be configured to run a pre-build and/or post-build command. So, you can just call your REXX from the pre-build command, right? Wrong. The problem is that at the time the pre-build command executes, none of the intermediate files produced on the server to serve as input to the promotion have been transferred to the build machine yet. This means you can’t access promotionInfo.xml to see which outputs to check. Rats!

Rather than use a pre-build command, you will need to use a custom Ant script to perform the promotion. Perform the following steps:

  1. Save the generatedBuild.xml from one of your successful promotion build results to a USS directory on the host where your build agent is running.
  2. Edit the build script and add an additional target that executes your REXX.
  3. Update the “all” target in the build script to execute your target before performing the promote.
  4. In the Promotion definition, on the z/OS Promotion tab, choose “Use an existing build file”.
  5. Specify the build file you created in step 1.

Note that the dependency build offers more flexibility in this area than promotion, in that you can specify a pre- or post-build script to be executed right before or right after the main build script. This gives you the ability to inject additional Ant tasks while still generating the build script on each run. This capability was added in version 4.0 and can be found on the z/OS Dependency Build tab of your build definition.

Posted in Enterprise Extensions, Rational Team Concert, System z | Leave a comment