Transform Easytrieve to modern languages by using AWS Transform custom - AWS Prescriptive Guidance

Transform Easytrieve to modern languages by using AWS Transform custom

Shubham Roy, Subramanyam Malisetty, and Harshitha Shashidhar, Amazon Web Services

Summary

This pattern provides prescriptive guidance for faster, lower-risk transformation of mainframe Broadcom Easytrieve Report Generator (EZT) workloads by using AWS Transform custom language-to-language transformation. It addresses the challenges of modernizing niche, proprietary mainframe EZT workloads that are commonly used for batch data processing and report generation. The pattern replaces expensive, lengthy, and error-prone migration approaches that rely on proprietary tooling and rare mainframe expertise with an agentic AI automated solution that you create in AWS Transform.

This pattern provides a ready-to-use custom transformation definition for EZT transformation. The definition uses multiple transformation inputs:

  • EZT business rules extracted using AWS Transform for mainframe

  • EZT programming reference documentation

  • EZT source code

  • Mainframe input and output datasets

AWS Transform custom uses these inputs to generate functionally equivalent applications in modern target languages, such as Java or Python.

The transformation process uses intelligent test execution, automated debugging, and iterative fix capabilities to validate functional equivalence against expected outputs. It also supports continual learning, enabling the custom transformation definition to improve accuracy and consistency across successive transformations. Using this pattern, organizations can reduce migration effort and risk, address niche mainframe technical debt, and modernize EZT workloads on AWS to improve agility, reliability, security, and innovation.

Prerequisites and limitations

Prerequisites

  • An active AWS account 

  • A mainframe EZT workload with input and output data 

Limitations

Scope limitations 

  • Language support – Only EZT-to-Java transformation is supported for this specific transformation pattern. 

  • Out of scope – Transformation of other mainframe programming languages requires a new custom transformation definition in AWS Transform custom.

Process limitations 

  • Validation dependency – Without baseline output data, the transformation cannot be validated.

  • Proprietary logic – Highly specific, custom-developed utilities require additional user documentation and reference materials to be interpreted correctly by the AI agent.

Product versions

  • AWS Transform CLI – Latest version

  • Node.js – Version 20 or later

  • Git – Latest version

  • Target environment

    • Java – Version 17 or later

    • Spring Boot – Version 3.x is the primary target for refactored applications

    • Maven – Version 3.6 or later
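As a quick preflight, the toolchain above can be checked from the command line. The following sketch is illustrative only (it is not part of AWS Transform); it reports which required tools are installed:

```shell
#!/usr/bin/env bash
# Illustrative preflight check for the product versions listed above.
# This helper script is an assumption of this pattern, not an AWS tool.
check() {
  local name="$1" cmd="$2"
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "FOUND: $name ($("$cmd" --version 2>&1 | head -n 1))"
  else
    echo "MISSING: $name"
  fi
}
check "Node.js (version 20 or later required)" node
check "Git" git
check "Java (version 17 or later required)" java
check "Maven (version 3.6 or later required)" mvn
```

Compare the reported versions against the minimums in the list above before provisioning the transformation environment.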

Architecture

Source technology stack

  • Operating system – IBM z/OS

  • Programming language – Easytrieve, Job Control Language (JCL)

  • Database – IBM DB2 for z/OS, Virtual Storage Access Method (VSAM), Mainframe flat files

Target technology stack

  • Operating system – Amazon Linux

  • Compute – Amazon Elastic Compute Cloud (Amazon EC2)

  • Programming language – Java

  • Database – Amazon Relational Database Service (Amazon RDS)

Target architecture

Target architecture diagram for using AWS Transform custom to transform EZT to modern code.

Workflow

This solution uses an AWS Transform custom language-to-language migration transformation pattern to modernize mainframe Easytrieve (EZT) applications to Java through a four-step automated workflow.

Step 1 – Provide your legacy code to AWS Transform for mainframe, which:

  • Analyzes the code

  • Extracts the high-level business logic

  • Extracts the detailed business logic

Step 2 –  Create a folder with the required inputs:

  • EZT business rules extracted using AWS Transform for mainframe 

  • EZT programming reference documentation 

  • EZT source code

  • Mainframe input and output datasets

Step 3 – Create and run a custom transformation definition

  1. Use the AWS Transform CLI to describe transformation objectives in natural language. AWS Transform custom analyzes the business rule extraction (BRE) documentation, source code, and EZT programming guides to generate a custom transformation definition for developer review and approval.

  2. Then, invoke the AWS Transform CLI with the project source code. AWS Transform custom creates transformation plans, converts EZT to Java upon approval, generates supporting files, builds the executable JAR, and validates exit criteria.

  3. Use the validation agent to test functional equivalence against the mainframe output. The Self-Debugger Agent autonomously fixes issues. Final deliverables include validated Java code and HTML validation reports.

Automation and scale

  • Agentic AI multi-mode execution architecture – AWS Transform custom uses agentic AI with three execution modes (conversational, interactive, and full automation) to automate complex transformation tasks, including code analysis, refactoring, transformation planning, and testing.

  • Adaptive learning feedback system – The platform implements continuous learning mechanisms through code sample analysis, documentation parsing, and developer feedback integration with versioned transformation definitions.

  • Concurrent application processing architecture – The system enables distributed parallel execution of multiple application transformation operations simultaneously across scalable infrastructure.

Tools

AWS services  

  • AWS Transform custom is an agentic AI capability that is used to transform legacy EZT applications into modern programming languages.

  • AWS Transform uses agentic AI to help you accelerate the modernization of legacy workloads, such as .NET, mainframe, and VMware workloads.

  • AWS Transform for mainframe is used to analyze legacy EZT applications to extract embedded business logic and generate comprehensive business rule documentation, including logic summaries, acronym definitions, and structured knowledge bases. These serve as input data for AWS Transform custom. 

  • Amazon Simple Storage Service (Amazon S3) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. Amazon S3 serves as the primary storage service for AWS Transform custom for storing transformation definitions, code repositories, and processing results.

  • AWS Identity and Access Management (IAM) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. IAM provides the security framework for AWS Transform custom, managing permissions and access control for transformation operations.

Other tools

  • AWS Transform CLI is the command-line interface for AWS Transform custom, enabling developers to define, execute, and manage custom code transformations through natural language conversations and automated execution modes. AWS Transform custom supports both interactive sessions (atx custom def exec) and autonomous transformations for scalable modernization of codebases.

  • Git is a version control system used for branch protection, change tracking, and rollback capabilities during automated fix application.

  • Java is the programming language and development environment used in this pattern. 

Code repository

The code for this pattern is available in Easytrieve to Modern Languages Transformation with AWS Transform Custom on GitHub.

Best practices

  • Establish standardized project structure – Create a four-folder structure (source-code, bre-doc, input-data, output-data), validate completeness, and document contents before transformation.

  • Use baseline files for validation – Use production baseline input files, perform byte-by-byte comparison with baseline output, accept zero tolerance for deviations.

  • Use all available reference documents – To increase the accuracy of the transformation, provide all available reference documents, such as business requirements and coding checklists.

  • Provide input for quality improvement – AWS Transform custom automatically extracts learnings from transformation executions (developer feedback, code issues) and creates knowledge items for them. After each successful transformation, review the knowledge items and approve the ones that you want to be used in future executions. This improves the quality of future transformations.
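The zero-tolerance, byte-by-byte baseline comparison described above can be sketched with the standard cmp tool. The sample files below are created only for illustration; in practice, compare each baseline file in output-data/ against the transformed application's output:

```shell
# Zero-tolerance baseline check, sketched with cmp. The sample files are
# illustrative stand-ins for a mainframe baseline and transformed output.
tmp="$(mktemp -d)"
printf 'RECORD-001\nRECORD-002\n' > "$tmp/baseline.dat"
printf 'RECORD-001\nRECORD-002\n' > "$tmp/candidate.dat"
if cmp -s "$tmp/baseline.dat" "$tmp/candidate.dat"; then
  status="PASS"   # outputs are byte-identical
else
  status="FAIL"   # any single differing byte fails the check
fi
echo "$status: byte-by-byte comparison"
rm -rf "$tmp"
```

Because the acceptance bar is zero deviations, even a trailing-byte or encoding difference should be treated as a failure to investigate.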

Epics

Task | Description | Skills required

Configure AWS Transform for mainframe.

Set up the environment and required AWS Identity and Access Management (IAM) permissions to support mainframe modernization workflows. For more information, see Transformation of mainframe applications in the AWS documentation.

App developer

Generate Business Rule Extract (BRE) documentation.

Extract business logic from source EZT or COBOL code to generate functional documentation. For instructions on how to initiate the extraction process and review the output, see Extract business logic in the AWS Transform documentation.

App developer
Task | Description | Skills required

Provision the infrastructure for AWS Transform custom.

Deploy the production-ready infrastructure required to host a secure transformation environment. This includes a private Amazon EC2 instance configured with the necessary tools, IAM permissions, and network settings for converting Easytrieve code. To provision the environment using infrastructure as code (IaC), follow the deployment instructions in the Easytrieve to Modern Languages Transformation with AWS Transform Custom GitHub repository.

App developer, AWS administrator

Prepare input materials for transformation.

  1. Enter this command to create the folder structure:

    mkdir -p /root/transform-workspace/mainframe-source/{source-code,bre-doc,input-data,output-data}

    This creates these folders:

    • source-code – Storage for the EZT source code

    • bre-doc – Storage for the generated BRE document

    • input-data – Storage for the input data for the mainframe batch execution (Sequential/Text/DB2 files in EBCDIC format)

    • output-data – Storage for the output data from the mainframe after the batch execution (Sequential/Text/DB2 files in EBCDIC format)

  2. Enter these commands to initialize Git repository:

    cd /root/transform-workspace/mainframe-source/source-code
    git init
    git add .
    git commit -m "Initial commit"
App developer
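Because the input-data and output-data folders hold EBCDIC-encoded datasets, a quick local spot-check can confirm that a file decodes as expected. The following sketch uses the glibc iconv charset name EBCDIC-US, which may vary by platform, and fabricates a tiny sample purely for illustration:

```shell
# Illustrative EBCDIC spot-check using iconv. The EBCDIC-US charset name
# is a glibc assumption; the sample file is fabricated for the example.
tmp="$(mktemp -d)"
# Make a tiny EBCDIC sample by converting an ASCII string (illustration only).
printf 'HELLO' | iconv -f ASCII -t EBCDIC-US > "$tmp/sample.ebc"
# Decode a copy to ASCII for inspection; never modify the original dataset.
roundtrip="$(iconv -f EBCDIC-US -t ASCII "$tmp/sample.ebc")"
echo "decoded: $roundtrip"
rm -rf "$tmp"
```

A readable round trip like this suggests the dataset was transferred in binary mode and is safe to hand to the transformation.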
Task | Description | Skills required

Create transformation definition.

Follow these steps to create the custom transformation definition for EZT to Java transformation with functional validation.

  1. Go to the code repository for this pattern and copy the contents of the documents folder. It should include transformation_definition.md, summaries.md, and the document_references folder with the EZT coding guide.

  2. Upload that content to a location of your choice on the machine where you run the AWS Transform CLI, and note the path to use in the next steps.

  3. Invoke AWS Transform from the CLI with the atx command.

  4. Provide this prompt in the CLI: Create a custom transformation using my transformation definition file available at path <path to content from step 2>. AWS Transform creates a new custom transformation definition for EZT to Java transformation.

  5. Review the transformation definition and make changes if needed.

App developer

Publish transformation definition.

After you review and validate the transformation definition, you can publish it to the AWS Transform custom registry with a natural language prompt, providing a definition name such as Easytrieve-to-Java-Migration.

App developer
Task | Description | Skills required

Validate the input and output data.

Before you run the AWS Transform custom transformation, validate that the input-data folder contains the required data files captured before the mainframe batch job ran, and that the output-data folder contains the resulting files captured after the job ran. All files are in Sequential/Text/DB2 format with EBCDIC encoding, based on execution requirements.

  • Place input data in input-data/ folder

  • Place baseline output in output-data/ folder
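As a pre-run sanity check, you can confirm that both data folders exist and are non-empty. This sketch assumes the workspace path created earlier in this pattern; adjust it if you used a different location:

```shell
# Illustrative pre-run check that both data folders exist and hold files.
# The workspace path matches the mkdir step earlier in this pattern.
ws="/root/transform-workspace/mainframe-source"
for d in input-data output-data; do
  if [ -d "$ws/$d" ] && [ -n "$(ls -A "$ws/$d" 2>/dev/null)" ]; then
    echo "OK: $d contains $(ls "$ws/$d" | wc -l) file(s)"
  else
    echo "MISSING: $d is absent or empty"
  fi
done
```

An empty output-data folder is the more serious gap, because without baseline output the transformation cannot be validated.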

App developer

Run the custom transformation job.

Execute the AWS Transform CLI command, choosing the non-interactive or the interactive option:

# Non-interactive execution (fully autonomous):
atx custom def exec \
  --transformation-name "Easytrieve-to-Java-Migration" \
  --code-repository-path /root/transform-workspace/mainframe-source/source-code \
  --build-command "mvn clean install" \
  --non-interactive \
  --trust-all-tools

# Interactive execution (with human oversight):
atx custom def exec \
  -n "Easytrieve-to-Java-Migration" \
  -p /root/transform-workspace/mainframe-source/source-code \
  -c "mvn clean install"

# Resume an interrupted execution:
atx -resume
# OR
atx --conversation-id <conversation-id>

AWS Transform automatically validates through build/test commands during transformation execution.

App developer
Task | Description | Skills required

Review the transformation validation summary.

  1. Wait for AWS Transform to automatically validate the code through build and test commands.

  2. Enter this command to find the latest session:

    LATEST_SESSION=$(ls -t ~/.aws/atx/custom/ | head -1)
  3. Enter this command to view the validation summary:

    cat ~/.aws/atx/custom/$LATEST_SESSION/artifacts/validation_summary.md
  4. Enter this command to check the overall status:

    grep "OVERALL STATUS" ~/.aws/atx/custom/$LATEST_SESSION/artifacts/validation_summary.md
App developer
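The steps above can be wrapped in a small helper that reports the latest session's status in one go. The session layout under ~/.aws/atx/custom matches the paths shown in the steps; the helper itself is an illustrative sketch:

```shell
# Sketch: report the latest AWS Transform session's overall validation
# status. Assumes the ~/.aws/atx/custom session layout shown above.
session_root="$HOME/.aws/atx/custom"
latest="$(ls -t "$session_root" 2>/dev/null | head -n 1)"
summary="$session_root/$latest/artifacts/validation_summary.md"
if [ -n "$latest" ] && [ -f "$summary" ]; then
  grep "OVERALL STATUS" "$summary" || echo "No OVERALL STATUS line found"
else
  echo "No transformation session found under $session_root"
fi
```

Running this after each transformation gives a quick pass/fail signal before you open the full HTML report.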

Access validation reports.

Enter these commands to review the detailed validation artifacts:

# Full validation report
cat ~/.aws/atx/custom/$LATEST_SESSION/artifacts/validation_report.html

# Generated code location
ls ~/.aws/atx/custom/$LATEST_SESSION/generated/

# Execution logs
cat ~/.aws/atx/custom/$LATEST_SESSION/logs/execution.log
App developer

Enable knowledge items for continuous learning.

Improve future transformation accuracy by promoting suggested knowledge items to your persistent configuration. After a transformation, the agent stores identified patterns and mapping rules in your local session directory. To review and apply these learned items, run these commands on your Amazon EC2 instance:

# List all knowledge items for a specific transformation definition
atx custom def list-ki -n <transformation-name>

# Retrieve the details of a specific knowledge item
atx custom def get-ki -n <transformation-name> --id <id>

# Update the status of a knowledge item (ENABLED or DISABLED)
atx custom def update-ki-status -n <transformation-name> --id <id> --status ENABLED

# Update the knowledge item configuration to enable auto-approval
atx custom def update-ki-config -n <transformation-name> --auto-enabled TRUE
App developer

Troubleshooting

Issue | Solution

Input and output path configuration

Input files are not being read, or output files are not being written correctly. 

Specify the complete directory path where input files are stored and clearly indicate the location where output should be written. Ensure proper access permissions are configured for these directories. 

Best practices include using absolute paths rather than relative paths to avoid ambiguity and verifying that all specified paths exist with appropriate read/write permissions. 
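The absolute-path practice above can be sketched with the standard realpath tool, which resolves a path to canonical absolute form so you can confirm access before configuring it. The directory names below are illustrative:

```shell
# Sketch: resolve a path to its canonical absolute form and confirm
# read/write access before pointing AWS Transform at it.
# Directory names here are illustrative.
base="$(mktemp -d)"
mkdir -p "$base/mainframe-source/input-data"
# realpath collapses ".." and "." segments into an unambiguous path.
abs="$(realpath "$base/mainframe-source/../mainframe-source/input-data")"
if [ -r "$abs" ] && [ -w "$abs" ]; then
  echo "OK: $abs is readable and writable"
fi
rm -rf "$base"
```

Passing the resolved absolute path to the transformation avoids ambiguity about the working directory the agent runs from.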

Resuming interrupted executions

Execution was interrupted or needs to be continued from where it stopped.

You can resume execution from where you left off by providing the conversation ID in the CLI command.

Find the conversation ID in the logs of your previous execution attempt.  

Resolving memory constraints

Out of memory error occurs during execution.

You can ask AWS Transform to share the current in-memory JVM size and then increase the memory allocation based on this information. This adjustment helps accommodate larger processing requirements.

Consider breaking large jobs into smaller batches if memory constraints persist after adjustments. 
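When you increase the allocation, the change typically lands in the JVM launch flags of the generated application. The values below are placeholders, not recommendations; size them from the JVM figures that AWS Transform reports:

```shell
# Illustrative only: raise the JVM heap when running the generated JAR.
# -Xms sets the initial heap and -Xmx the maximum; the 512m/4g values and
# the JAR name are placeholder assumptions, not guidance from AWS Transform.
JAVA_OPTS="-Xms512m -Xmx4g"
echo "java $JAVA_OPTS -jar target/transformed-app.jar"
```

If the job still exhausts the larger heap, batching the input data (as suggested above) is usually more effective than raising -Xmx further.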

Addressing output file discrepancies

Output files don't match expectations, and AWS Transform indicates no further changes are possible.

Provide specific feedback and technical reasons explaining why the current output is incorrect. Include additional technical or business documentation to support your requirements. This detailed context helps AWS Transform correct the code to generate the proper output files. Helpful feedback includes:

  • Specific examples comparing expected vs. actual output 

  • References to relevant documentation or standards

  • Clear explanation of business impact of the discrepancy  
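A concrete expected-vs-actual diff is the most direct form of that feedback. This sketch fabricates two tiny report files purely for illustration; in practice, diff the baseline file against the generated output:

```shell
# Sketch: capture a concrete expected-vs-actual diff to include in your
# feedback to AWS Transform. File names and contents are illustrative.
tmp="$(mktemp -d)"
printf 'TOTAL: 100\n' > "$tmp/expected.txt"
printf 'TOTAL: 99\n'  > "$tmp/actual.txt"
# diff exits nonzero when files differ; capture its output as evidence.
delta="$(diff "$tmp/expected.txt" "$tmp/actual.txt" || true)"
echo "$delta"
rm -rf "$tmp"
```

Attaching the captured diff alongside the relevant business rule gives the agent an exact target to fix rather than a general complaint.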

Related resources

Attachments

To access additional content that is associated with this document, unzip the following file: attachment.zip