Migrating your mainframe application to Rational Team Concert for SCM and build

Your organization has successfully been using RTC for work items and planning, and you’re now ready to move your source control and build as well to take advantage of the full capabilities available to you and reap the benefits of an all-in-one tool. Where do you begin?

Two of the biggest design decisions you will face when migrating your code will be how to logically organize your source into components, and how many streams you will need to properly flow your changes from development to production. You will need to consider things like common code, access control, and the recommended best practice of limiting the number of files in your component to approximately 1000 (500 for earlier releases). You may decide to enlist the help of IBM or a business partner to assist you in devising your strategy.

Once you’ve devised your component and stream strategy, you’ll need to organize your source data sets and prepare for zimport. You can read a bit about zimport in my earlier post, Getting my MVS files into the RTC repository (and getting them back out again). At this point, best practice dictates that you do the following:

1. Import your source to your highest level stream (i.e., production). You may choose to do this as a series of zimports in order to create a history of your major releases to be captured in RTC.

2. Perform a dependency build at the production level. Dependency build creates artifacts known as build maps, one for each program, to capture all of the inputs and outputs involved in building that program. These maps are used in subsequent builds to figure out which programs need to be rebuilt based on what has changed. This initial build at the production level will build all of your programs, serving two purposes: (1) to prove you have successfully imported your source and are properly configured to build everything, avoiding surprises down the road, and (2) to create build maps for all of your programs so that going forward, during actual development, you will only build based on what has changed.

3. Component promote your source and outputs down through your hierarchy (e.g., production -> QA -> test -> development). This will populate your streams and propagate your build maps down through each level to seed your development-level dependency build. Note that regardless of whether you are going to build at a given level (e.g., test), you still need the build maps in place at that level for use in the promotion process.

Once these steps are complete, actual development can begin. Your developers can start delivering changes at your lowest (development) level, build only what’s changed, and use work item promotion to propagate your changes up through your hierarchy to the production level. Each time you begin work on a new release, you will again use component promotion to seed that new release (source code and build artifacts) from the production level.

Great! Except, if you’re like most users, one sentence above has left you reeling: “This initial build at the production level will build all of your programs.” You want me to do WHAT?! Re-build EVERYTHING?? Yep. For the reasons stated above. But the reality is that this may not be practical or even feasible for a number of reasons. So let’s talk about your options.

Hopefully your biggest objection here is that you don’t want a whole new set of production-level modules, when your current production modules are already tested and proven. No problem! Simply perform the production dependency build to prove out your build setup and generate your build maps, and then throw away all of the build outputs and replace them with your current production modules. This is actually the recommended migration path. You will simply need to use the “Skip timestamp check when build outputs are promoted” option when you are component promoting down (but don’t skip it when you work item promote back up). Also ensure that your dependency builds are configured to trust build outputs. This is the default behavior, and allows the dependency build to assume that the outputs on the build machine are the same outputs that were generated by a previous build. When this option is turned off, the dependency build checks for the presence of the build outputs and confirms that the timestamp on each output matches the timestamp in the build map. A non-existent build output or a mismatched timestamp will cause the program to be rebuilt.
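To make the rebuild decision concrete, here is a minimal sketch of the per-program check described above. This is not the product’s actual code; the BuildMapEntry type and its fields are hypothetical stand-ins for the information a build map and the build machine provide.

public class RebuildCheckSketch {

    /** Hypothetical stand-in for what a build map records about one program. */
    static class BuildMapEntry {
        boolean inputsChangedSinceLastBuild;   // detected from accepted SCM changes
        boolean outputExistsOnBuildMachine;    // e.g., the load module member is present
        long outputTimestamp;                  // timestamp of the member on the build machine
        long timestampRecordedInBuildMap;      // timestamp captured when the program was last built
    }

    static boolean needsRebuild(BuildMapEntry program, boolean trustBuildOutputs) {
        if (program.inputsChangedSinceLastBuild) {
            return true;                       // changed programs are always rebuilt
        }
        if (trustBuildOutputs) {
            return false;                      // assume prior outputs are still valid (the default)
        }
        // Trust is off: verify the outputs are really there and untouched.
        return !program.outputExistsOnBuildMachine
                || program.outputTimestamp != program.timestampRecordedInBuildMap;
    }
}

Viewed this way, it should be clear why you leave trust build outputs on after swapping in your existing production modules: the members on the build machine no longer carry the timestamps recorded during the initial build, and you don’t want that alone to trigger a full rebuild.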

Ideally you are satisfied and can follow the recommended path of building everything and replacing the outputs with your production modules. However, this may not be the case, so let’s explore a few other possible scenarios and workarounds:

1. Issue: Some of my programs need changes before they can be built, and it’s not feasible to do all of that work up-front before the migration.

Workaround: Assign your unbuildable programs a language definition with no translator. We will not consider these programs buildable and they will be ignored during dependency build. When you are ready to update the programs, assign them a proper language definition at that time. You can also use NO language definition on your unbuildable programs if you’re not using default language definitions (i.e., language definitions assigned based on file extension). In this case, the files will also not be scanned. Note: The approach of assigning a language definition after the initial build is broken from V4 on, and a fix is currently targeted for 4.0.2. See the defect “new file is not built if Lang def is assigned after 1st build” (243516) for details.

2. Issue: All those copies of outputs at each level in my hierarchy are just taking up space. I don’t want them there.

Workaround: You can modify the promotion script to promote the build maps but not copy the outputs themselves. Again, ensure that “trust build outputs” is enabled (the default) in your dependency build definitions. If you are building at a level where you don’t have outputs, ensure that your production libraries are included in the SYSLIB in your translators.

Follow these steps to utilize this workaround:
1. Copy generatedBuild.xml from a promotion result to the host.
2. In the Promotion definition, on the z/OS Promotion tab, choose “Use an existing build file”.
3. Specify the build file you created in step 1.
4. For build targets, specify “init, startFinalizeBuildMaps, runFinalizeBuildMaps” without the quotes.

3. Issue: I refuse to build all of my programs. That’s ridiculous and way too expensive.

Workaround: Seed the dependency build by creating IEFBR14 translators. This will give you the build maps you need without actually building anything. Then switch to real translators. There is a major caveat here: indirect dependencies are not handled automatically until the dependent program is actually built. For example, if you have a BMS map that generates a copybook that is included by a COBOL program, the dependency of the COBOL program on the BMS map is not discovered until the COBOL program actually goes through a real build. If you can accept this limitation, one approach to this workaround is as follows:
1. Create two sets of translators: your real translators, plus one IEFBR14 translator per real translator, so there are no issues with SYSLIBs changing when you switch from IEFBR14 to the real translators.
2. Use build properties in your language definition to specify translators, and set those properties in the build definition.
3. Request the build with the properties pointing to the IEFBR14 translators. Everything “builds” but no outputs are generated.
4. Change all of the translator properties in the build definition to point at the real translators.
5. Request another build and see that nothing is built.
This approach again requires that we trust build outputs; otherwise, everything would be rebuilt, because none of the load modules listed in the build maps actually exist.

With any of these approaches, it’s essential that you test out the full cycle (build at production, component promote down, deliver various changes — e.g., a main program, a copybook, a BMS map, an ignored change — at development, build the changes, work item promote the changes up) on a small subset of your programs to ensure that your solution works for your environment and situation before importing and building your full collection of programs.


New Enterprise Extensions blog to follow!

Two of the RTC-EE developers have started blogging! Check out their first post here on the great improvements made to the IBM i capability in RTC V4. I am especially excited about the Useful Links page they’ve included on their site. Enjoy!


A simple build chaining example

Imagine the case where your mainframe application is not purely COBOL, but also leverages Java, web services, etc. In this scenario, you will not be able to build with dependency build alone. What can you do to coordinate the build of your entire application? In this post, we’ll look at a simple way to coordinate the builds of a COBOL application and a Java application, such that they run one after the other.

We will build our example into Money that Matters, so you can easily try it out too. Out of the box, Money that Matters has a Java build (jke.dev) and a COBOL dependency build (mortgage.dev). Let’s chain those two together.

First, we will create a third parent build definition to control the two child builds. It’s possible instead to add the request for the Java build directly to the end of the COBOL dependency build, but with the parent-child approach you can easily add more children and also easily run the children on their own. Our third build will be an Ant – Jazz Build Engine build, and we’ll call it crossplatform.dev. You will need to use the Jazz Source Control Pre-Build option if you want to store your build script in the repository. Otherwise, you can maintain it on the system where crossplatform.dev will execute and reference it directly in the build definition. This build definition will simply invoke the Ant script, build.xml:

The build script will simply request the mortgage.dev build, wait for it to complete, and then request and wait for the jke.dev build. This is done using Ant tasks included with the build system toolkit, described here. It will look something like the following:

<?xml version="1.0" encoding="UTF-8"?>
<project default="all" name="Cross platform build">
	<property name="userId" value="builder2" />
	<property name="password" value="rtc9fun" />
	<taskdef name="requestTeamBuild" classname="com.ibm.team.build.ant.task.RequestBuildTask" />
	<taskdef name="startBuildActivity" classname="com.ibm.team.build.ant.task.StartBuildActivityTask" />
	<taskdef name="waitForTeamBuild" classname="com.ibm.team.build.ant.task.WaitForTeamBuildTask" />
	<taskdef name="buildResultRetriever" classname="com.ibm.team.build.ant.task.BuildResultRetrieverTask" />
	<taskdef name="linkPublisher" classname="com.ibm.team.build.ant.task.LinkPublisherTask" />
	<target description="Trigger build and wait" name="trigger_build">
		<startBuildActivity autoComplete="true" buildResultUUID="${buildResultUUID}" label="Requesting ${chainedBuildDefinitionId} build" password="${password}" repositoryAddress="${repositoryAddress}" userId="${userId}" />
		<!--request the child build-->
		<requestTeamBuild buildDefinitionId="${chainedBuildDefinitionId}" requestUUIDProperty="buildRequestUUID" resultUUIDProperty="childBuildResultUUID" failOnError="true" password="${password}" repositoryAddress="${repositoryAddress}" userId="${userId}" />
		<startBuildActivity autoComplete="true" buildResultUUID="${buildResultUUID}" label="Waiting for ${chainedBuildDefinitionId} build" password="${password}" repositoryAddress="${repositoryAddress}" userId="${userId}" />
		<!--wait for the child build-->
		<waitForTeamBuild repositoryAddress="${repositoryAddress}" userId="${userId}" password="${password}" requestUUID="${buildRequestUUID}" statesToWaitFor="COMPLETED,CANCELED,INCOMPLETE" buildStatusProperty="buildStatus" verbose="true" interval="5" />
		<!--retrieve the label for the child build-->
		<buildResultRetriever repositoryAddress="${repositoryAddress}" userId="${userId}" password="${password}" buildResultUUID="${childBuildResultUUID}" labelProperty="childBuildLabel" failonerror="false" />
		<!--publish link to child build result in parent-->
		<linkPublisher label="${chainedBuildDefinitionId}:  ${childBuildLabel}" url="${repositoryAddress}/resource/itemOid/com.ibm.team.build.BuildResult/${childBuildResultUUID}" buildResultUUID="${buildResultUUID}" repositoryAddress="${repositoryAddress}" userId="${userId}" password="${password}" failOnError="false" />
		<echo message="${chainedBuildDefinitionId} build status: ${buildStatus}" />
		<fail message="${chainedBuildDefinitionId} failed. Exiting.">
			<condition>
				<equals arg1="${buildStatus}" arg2="ERROR" />
			</condition>
		</fail>
	</target>
	<target description="Trigger the mortgage.dev build" name="build_mortgage">
		<antcall target="trigger_build">
			<param name="chainedBuildDefinitionId" value="mortgage.dev" />
		</antcall>
	</target>
	<target description="Trigger the jke.dev build" name="build_jke">
		<antcall target="trigger_build">
			<param name="chainedBuildDefinitionId" value="jke.dev" />
		</antcall>
	</target>
	<target depends="build_mortgage,build_jke" description="Cross-platform build" name="all" />
</project>

Notice the use of the buildResultRetriever and linkPublisher tasks to include labeled links to your child builds in the parent build result.

The jke.dev build is supported by a Jazz Build Engine called jke.dev.engine, likely running on a distributed platform, while the mortgage.dev build is supported by a Rational Build Agent called jke.rba.engine.dev running on the mainframe. We need to add a second Jazz Build Engine to support the crossplatform.dev build. We can’t reuse jke.dev.engine since we need it available to service the request for jke.dev while crossplatform.dev is still running. Simply create a new Jazz Build Engine and configure it to support crossplatform.dev:

Now, building your COBOL assets and your Java assets in order is simply a matter of requesting your crossplatform.dev build. A more complicated scenario might require the use of an additional tool, such as Rational Build Forge, to orchestrate the build.


New RTC EE 3.0.1 videos available on jazz.net

The RTC EE development team has just completed some fantastic 3.0.1 videos that are now available on jazz.net. They are:

Check them out! These videos are in addition to a round that was published earlier this year, also based on RTC 3.0.1:

And not to slight our IBM i friends, these videos are available as well:

Some new 4.0 videos are currently in the works, so stay tuned!


What’s new (and old) with RTC-RDz integration

I was playing with Rational Developer for System z (RDz) today and thought this might be a good time to give you a brief overview of the integration between RTC and RDz, tell you about a new feature offered with RTC 4.0 and RDz 8.5, and share some gotchas that tripped me up along the way. I will not even begin to try to tell you about all of the features and benefits of using RDz as your individual development environment for mainframe applications, both because there are so many and because I don’t claim to know them all. Check out this overview of the Rational Developer for System z family for an understanding of all that RDz has to offer.

RDz supports both local projects, where your source code is located on your workstation, and remote projects, where your source code is located on the host. Since RTC came about, you have been able to share your RDz local projects in the SCM; they are just Eclipse projects with a specific RDz local project nature. With V2.0.0.1 of RTC, you could also share your remote projects in the SCM. This was done by establishing a local copy of the remote source files under the covers, keeping the two copies in sync, and using the local copies to interact with RTC. This was necessary because RTC does not yet provide first-class support for remote projects, RDz or otherwise. Until this support comes along, the recommended course of action when using RTC and RDz in combination is to use RDz local projects. As such, that is what we are focusing on today. If you’d like to take a look at the RDz remote project support in RTC, you can visit the help here.

As I said above, you don’t need to do anything special to share your RDz local project in the RTC SCM. However, in order to take full advantage of some of the RDz functionality, such as Show Dependencies, Open Copy Member, and Local Syntax Check, you need to use a property group to allow RDz to resolve the location of the loaded dependencies referenced by your source. Starting in an RDz 8.0.3 fixpack, a wizard is provided that allows you to automatically generate this property group. The correct SYSLIB is constructed by analyzing the system definitions utilized by your zComponent projects.  A nice video demonstration of this feature and the integration between these products can be seen here.

But what about those system copybooks that aren’t under SCM control? And what if I don’t want to load all of my dependent copybooks to my workstation? New in RDz 8.5 and RTC 4.0, when you generate your property group, you have the option to specify a remote connection, and a remote SYSLIB will be generated to resolve dependencies on files residing on the selected system. In addition, if you specify a build definition (used previously only to resolve any substitution variables found in the system definitions), the team build data sets (located by the resource prefix in the build definition) will also be included in the remote SYSLIB. This means that any copybooks loaded to the host during a prior build will be available to resolve dependencies, even if they are not loaded to your workstation. Cool!

Now, here are a few questions I had while playing with these features that you may have as well…

Q: I generated a property group for my zComponent project, and now I need to load another zComponent project that contains some of the dependent copybooks. Will my property group by chance magically pick these up?

A: Nope. You need to regenerate the property group or manually update the one you already have to include the new local copybook folder.

Q: How does all this new property group generation relate to that “Use for Syntax Check” box I’ve seen in my Translators (which only appears when RTC and RDz are shell-sharing)?

A: It doesn’t. That option is for our remote RDz project support, to allow us to show dependencies, perform syntax checks, etc.

Q: I generated a property group and I’m happy to see that my remote copybook is opening right up when I do a Show Copybook from my main program. But why are all these warnings about not being able to resolve my copybooks still showing in the editor?

A: You may need to close and re-open your file after generating the property group. I think I also had to do a refresh on the file once, but I’m not going to swear by it…

Q: I generated a property group and specified a remote connection, and now Local Syntax Check and Show Dependencies are grayed out. What did I do wrong??

A: Nothing. RDz is explicitly filtering out those actions when you specify remote libraries in your property group. Recall that you can always use the Enterprise Extensions->Impact Analysis action to see your dependencies instead.

One last word before I go… As I re-read this post, I realize I am using the terms “RDz local project” and “zComponent project” interchangeably. Recall that the zComponent project is a specialized Eclipse project used by the Enterprise Extensions for build, loading files to the host, and so on. It has its own nature and a required folder structure. When you create your zComponent project (via zimport or the wizard in the Eclipse client), the RDz local project nature will be automatically added to it.

That’s all for now! As always, feel free to share your own tips and gotchas, and I’ll update this post as we go.


Custom build result pruner

Out of the box, RTC allows you to specify a basic pruning policy for your build definitions. You can indicate how many of your most recent successful and failed build results you’d like to keep, and periodically the other older build results will be deleted. You may quickly find that you require a more sophisticated approach to build result pruning. That’s where this blog comes in.

I’ve created a sample custom pruner that will hopefully show you how you can use some of the various team build APIs to create your own pruner, and ideally you can use some of my code as a starting point. This is a plain Java application; I swiped the code and setup instructions from Ralph Schoon’s article Automated Build Output Management Using the Plain Java Client Libraries as my own starting point.

My pruner contains three classes:

  • BuildResultQueryer: I’m pretty sure queryer is not a word, but this class queries for all of the build results that we’d like to consider for pruning. You can query based on age, state, and status, and you can choose to exclude personal builds from pruning (but by default they are included). You’ll see in the comments that some nice future enhancements might include querying on more than one state, more than one status, and including or excluding based on tags.
  • BuildResultPruner: This class was written to work specifically with our dependency builds, and is where you would likely make your own modifications. It takes a list of build results and checks to see if there are any successful translator outputs. If there are, the failed listings are removed. If there aren’t, the entire build result is deleted. Take a look at the prune() method to repurpose this tool for your needs (a conceptual sketch of its logic follows this list).
  • BuildResultPrunerTool: This is the main entry point into the tool. It takes a configuration properties file as an argument, and you can also override the repository address, user id, password, and build definition name. This class connects to the repository and then invokes the queryer and pruner in turn.
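If it helps to see the shape of that logic before you download anything, here is a conceptual sketch of the prune() decision described above. The BuildResult class and the delete/trim operations are simplified stand-ins, not the actual team build API calls the sample uses.

import java.util.ArrayList;
import java.util.List;

public class PrunePolicySketch {

    /** Simplified stand-in for a queried build result and the contributions we care about. */
    static class BuildResult {
        String label;
        boolean hasSuccessfulTranslatorOutputs;
        List<String> failedListings = new ArrayList<String>();
    }

    /** Keep results that produced something useful, trimming their failed listings;
        delete results that produced nothing worth keeping. */
    void prune(List<BuildResult> candidates) {
        for (BuildResult result : candidates) {
            if (result.hasSuccessfulTranslatorOutputs) {
                result.failedListings.clear();   // stand-in for removing the failed listing contributions
                System.out.println("Removed failed listings from " + result.label);
            } else {
                delete(result);                  // stand-in for deleting the entire build result
            }
        }
    }

    void delete(BuildResult result) {
        System.out.println("Deleted build result " + result.label);
    }
}

Swapping in your own rules (different statuses, tags, or retention counts) means changing only this decision; the querying and repository plumbing in BuildResultQueryer and BuildResultPrunerTool can stay largely as they are.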

I’m sharing both the jar, in case you want to try this tool as-is, and the source, so you can tweak it to your own specifications. I’m also sharing my configuration properties file as an example. If you’re not pulling the project into Eclipse and running it there, you can invoke it like so from the command line:

java -cp C:/tmp/export/BuildResultPrunerTool.jar;C:/RTC3013Dev/RTC-Client-plainJavaLib-3.0.1.3/* com.ibm.js.team.build.result.pruner.BuildResultPrunerTool -config C:\prune.properties

Notice that I have the RTC plain java client libraries in my classpath (per Ralph’s article mentioned above). Ralph also talks about how you could use an RTC command line build to run your plain java application on a schedule, which would make good sense for your custom build result pruner.

Remember that this code comes with the usual lack of promise or guarantee. Enjoy!


Promotion vs Deployment

More and more lately it seems the topic of promotion versus deployment keeps coming up. What are the differences, and when should you use one versus the other? This is not an entirely straightforward discussion, but we’ll do our best to muddle through it here and clarify the issue.

When you migrate from your existing SCM and build tools, one of the biggest challenges (if not THE biggest challenge) is determining how you’re going to flow your pieces and parts through from development to production. How will you lay out your streams and flow your source changes? At what levels will you build? Where do you need to deploy your outputs? Is the way you’re doing it today an optimal solution you’d like to replicate in RTC, or simply a reflection of your current tooling that you’d like to rework entirely? Understanding how promotion and deployment work and the repercussions of using each will help you make informed decisions and be successful with your migration to RTC.

If you’re quite new to RTC and the enterprise extensions, now might be a good time to jump out and read the Guide for migration to Rational Team Concert for z/OS application development before going any further here. This article will give you some good context and broad understanding of how RTC works for mainframe development.

Aaaaaaand welcome back! First let’s talk about the purpose of promotion, and let’s start the discussion with a picture.

Mainframe development typically occurs in a hierarchy, with changes being made at a development level, advanced through various levels of test and QA (quality assurance), and eventually pushed to production. Promotion is how we flow our artifacts through that hierarchy with RTC. We deliver source changes to a stream in the Jazz repository, and we use dependency build to compile those changes and build our outputs into data sets on the host. We then promote those changes and outputs together up from development to test to QA to production. In doing so, we maintain at each level a collection of outputs that reflects the version of the source at that level, and we avoid unnecessary rebuilds. By promoting, you simply move your tested applications through the hierarchy, rather than rebuilding at each level and running the risk of introducing an error and invalidating the test results achieved at lower levels.

So, now that we understand promotion, where do packaging and deployment fit in? Let’s answer that with another picture.

Deployment takes the outputs you’ve built, packages them up into an archive file, copies the archive to a different machine (or a different location on the same machine), and deploys the outputs to a runtime environment (e.g., test, QA, or production). Before deploying the outputs, we make a backup of the data sets that are already there, making it possible to do an n-1 rollback if necessary. You can deploy the same package over and over again to different runtimes, and you can deploy packages in any sequence.

So, what I think confuses people here is that when we talk about “promoting to production” (for example), the data sets we promote to are not actually your production environment. You can think of them instead as your “golden master” production data sets. They live on the machine where you do your builds. They are not, though, the copies that are actually “in production.” You can’t roll back a promotion, and you can’t re-promote an output. You would instead package up the production-level outputs that you have promoted to production, and use deployment to copy them to your actual production environment.

In summary, generally speaking, you can think of promotion as what you use to keep dependency build properly seeded between levels to avoid unnecessary recompiles. It’s what you use to keep a copy of your outputs at each level that reflects the source at that level. If you’re going to rebuild everything at each level, you don’t need to promote; you can just deliver your source through the levels and run your builds. Generally, you don’t promote your outputs to an environment where they are actually going to be used (test, production, etc.); you use deployment for that purpose. Some people do choose to promote to their test environments, but understand that if you do this, you can’t roll back to a previous level of your modules, and you can only promote to that environment one time. You cannot re-promote an output the way you can re-deploy a package. Understand also that deployment allows you to simplify your hierarchy. You may currently be pushing your changes from development through six different test levels and then up to QA, where really those various test levels just reflect different environments you need to deploy to for test. The ability to re-deploy a package to all of these environments eliminates the need for those levels in your hierarchy.
