

AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For similar capabilities, explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

# Replatforming applications with Rocket Software (formerly Micro Focus)

This guide covers the end-to-end process of replatforming mainframe applications with AWS Mainframe Modernization on AWS. It walks through configuring and operating the AWS Mainframe Modernization runtime on Amazon EC2, from initial setup and analysis to building, testing, and deploying your modernized applications. It also covers advanced topics such as working with legacy data structures, using templates and predefined projects, and setting up automation for streaming sessions.

**Topics**
+ [Set up Rocket Software (formerly Micro Focus) (on Amazon EC2)](mf-runtime-setup.md)
+ [Set up Automation for Rocket Enterprise Analyzer (formerly Micro Focus) and Rocket Enterprise Developer Streaming Sessions](set-up-automation-m2.md)
+ [View data sets as tables and columns in Rocket Enterprise Developer (formerly Micro Focus Enterprise Developer)](view-datasets-tables-m2.md)
+ [Edit data sets using Rocket Software (formerly Micro Focus) Data File Tools in Enterprise Developer](edit-datasets-m2.md)
+ [Tutorials for Rocket Software (formerly Micro Focus)](tutorials-mf.md)
+ [Available batch utilities in AWS Mainframe Modernization](utilities-m2.md)

# Set up Rocket Software (formerly Micro Focus) (on Amazon EC2)

AWS Mainframe Modernization provides several Amazon Machine Images (AMIs) that include Rocket Software (formerly Micro Focus) licensed products. These AMIs allow you to quickly provision Amazon Elastic Compute Cloud (Amazon EC2) instances to support Rocket Software environments that you control and manage. This topic provides the steps required to access and launch these AMIs. Using these AMIs is entirely optional; they are not required to complete the tutorials in this user guide.

**Topics**
+ [Prerequisites for setting up Rocket Software (formerly Micro Focus) (on Amazon EC2)](mf-runtime-setup-prereq.md)
+ [Create the Amazon VPC endpoint for Amazon S3](mf-runtime-setup-vpc.md)
+ [Request the allowlist update for the account](mf-runtime-setup-allowlist.md)
+ [Create the AWS Identity and Access Management role](mf-runtime-setup-iam-role.md)
+ [Grant License Manager the required permissions](mf-runtime-setup-lic.md)
+ [Subscribe to the Amazon Machine Images](mf-runtime-setup-ami.md)
+ [Launch an AWS Mainframe Modernization Rocket Software (formerly Micro Focus) instance](mf-runtime-setup-mf-instance.md)
+ [Subnet or VPC with no internet access](mf-runtime-setup-no-access.md)

# Prerequisites for setting up Rocket Software (formerly Micro Focus) (on Amazon EC2)

When you set up Rocket Software (on Amazon EC2), make sure you meet the following prerequisites.
+ Administrator access to the account where the Amazon EC2 instances will be created.
+ Identify the AWS Region where the Amazon EC2 instances will be created, and verify that AWS Mainframe Modernization is available in that Region. See [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/).
+ Identify the Amazon Virtual Private Cloud (Amazon VPC) where the Amazon EC2 instances will be created.

# Create the Amazon VPC endpoint for Amazon S3


In this section, you create an Amazon VPC endpoint for Amazon S3. Setting up this endpoint now will help later if you need to configure a subnet or VPC without internet access.

1. Navigate to Amazon VPC in the AWS Management Console.

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create endpoint**.  
![\[VPC endpoints with Create Endpoint active.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-s3-endpoint_1.jpg)

1. Enter a meaningful name tag, for example: “Micro-Focus-License-S3”.

1. Choose **AWS Services** as the Service Category.  
![\[Endpoint Settings with sample name tag entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-s3-endpoint_2.png)

1. Under **Services** search for the Amazon S3 Gateway service: **com.amazonaws.[region].s3**.

   For `us-west-1` this would be: `com.amazonaws.us-west-1.s3`.

1. Choose the **Gateway** service.  
![\[Services with Amazon S3 Gateway service selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-s3-endpoint_3.png)

1. For VPC choose the VPC you will be using.  
![\[VPC with a VPC entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-s3-endpoint_4.png)

1. Choose all of the route tables for the VPC.  
![\[Route tables with all route tables selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-s3-endpoint_5.png)

1. Under **Policy** choose **Full Access**.  
![\[Policy with Full Access selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-s3-endpoint_6.png)
**Note**  
If you decide to create a custom policy, make sure it has access to the Amazon S3 bucket `s3://aws-supernova-marketplace-<region>-prod`.

1. Choose **Create Endpoint**.

# Request the allowlist update for the account


Work with your AWS representative to have your account allowlisted for the AWS Mainframe Modernization AMIs. Provide the following information:
+ The AWS account ID.
+ The AWS Region where the Amazon VPC endpoint was created.
+ The Amazon VPC Amazon S3 endpoint ID created in [Create the Amazon VPC endpoint for Amazon S3](mf-runtime-setup-vpc.md). This is the `vpce-xxxxxxxxxxxxxxxxx` id for the **com.amazonaws.[region].s3 Gateway** endpoint.
+ The number of licenses required across all Rocket Software Enterprise Suite AMI Amazon EC2 instances.

  One license is required per CPU core (per 2 vCPUs for most Amazon EC2 instances).

  For more information, see [Optimize CPU options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-compute-optimized).

  The requested number can be adjusted in the future by AWS.

**Note**  
Reach out to your AWS representative or AWS Support, who will open the support ticket for the allowlist request on your behalf. You can't request it directly, and the request might take several days to complete.
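For fleet planning, the sizing rule above (one license per CPU core, that is, per 2 vCPUs on most Amazon EC2 instance types) can be tallied with a short helper. A minimal sketch — the fleet names and vCPU counts are purely illustrative:

```python
import math

def licenses_for(vcpus: int) -> int:
    """One Rocket Software license per CPU core (2 vCPUs on most EC2 types)."""
    return math.ceil(vcpus / 2)

def total_licenses(fleet: dict) -> int:
    """Sum licenses over a {instance_name: vCPU_count} fleet plan."""
    return sum(licenses_for(v) for v in fleet.values())

# Illustrative plan: one r6i.xlarge (4 vCPUs) and two r6i.large (2 vCPUs each)
fleet = {"es-server": 4, "ed-dev-1": 2, "ed-dev-2": 2}
```

Request the total across all planned Rocket Software Enterprise Suite instances; AWS can adjust the number later.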

# Create the AWS Identity and Access Management role


Create an AWS Identity and Access Management (IAM) policy and role to be used by the AWS Mainframe Modernization Amazon EC2 instances. Creating the role through the IAM console also creates an associated instance profile with the same name. Assigning this instance profile to the Amazon EC2 instances allows Rocket Software licenses to be assigned. For more information on instance profiles, see [Using an IAM role to grant permissions to applications running on Amazon EC2 instances](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html).

## Create an IAM policy


An IAM policy is created first and then attached to the role.

1. Navigate to AWS Identity and Access Management in the AWS Management Console.

1. Choose **Policies** and then **Create Policy**.  
![\[Policy page with no filters applied.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-policy_1.png)

1. Choose the **JSON** tab.  
![\[JSON tab with no content\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-policy_2.png)

1. Replace `us-west-1` in the following JSON with the AWS Region where the Amazon S3 endpoint was defined, then copy and paste the JSON into the policy editor.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "S3WriteObject",
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": [
                   "arn:aws:s3:::aws-supernova-marketplace-us-west-1-prod/*"
               ]
           },
           {
               "Sid": "OtherRequiredActions",
               "Effect": "Allow",
               "Action": [
                   "sts:GetCallerIdentity",
                   "ec2:DescribeInstances",
                   "license-manager:ListReceivedLicenses"
               ],
               "Resource": [
                   "*"
               ]
           }
       ]
   }
   ```

**Note**  
The Actions under the Sid `OtherRequiredActions` do not support resource-level permissions and must specify `*` in the resource element.  
![\[JSON tab with policy entered and us-west-1 highlighted.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-policy_3.png)

1. Choose **Next: Tags**.  
![\[Tags with no data entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-policy_4.png)

1. Optionally enter any tags, then choose **Next: Review**.

1. Enter a name for the policy, for example “Micro-Focus-Licensing-policy”. Optionally enter a description, for example “A role that includes this policy must be attached to each AWS Mainframe Modernization Amazon EC2 instance.”  
![\[Review policy with name and description entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-policy_5.png)

1. Choose **Create Policy**.
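Before creating the policy, you can optionally sanity-check the edited JSON on your workstation — that it parses, and that the bucket ARN matches the Region you substituted. A minimal sketch (this check is not part of the AWS procedure; the policy text mirrors the JSON above with `us-west-1` as the target Region):

```python
import json

# The permissions policy from the step above, edited for us-west-1.
POLICY = """{
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "S3WriteObject",
         "Effect": "Allow",
         "Action": ["s3:PutObject"],
         "Resource": ["arn:aws:s3:::aws-supernova-marketplace-us-west-1-prod/*"]},
        {"Sid": "OtherRequiredActions",
         "Effect": "Allow",
         "Action": ["sts:GetCallerIdentity",
                    "ec2:DescribeInstances",
                    "license-manager:ListReceivedLicenses"],
         "Resource": ["*"]}
    ]
}"""

def check_policy(doc: str, region: str) -> dict:
    """Parse the policy and confirm the bucket ARN targets the chosen Region."""
    policy = json.loads(doc)
    statements = {s["Sid"]: s for s in policy["Statement"]}
    bucket_arn = statements["S3WriteObject"]["Resource"][0]
    assert f"aws-supernova-marketplace-{region}-prod" in bucket_arn, bucket_arn
    return policy
```

If the assertion fails, re-check the Region substitution before pasting the JSON into the policy editor.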

## Create the IAM role


After creating an IAM policy, you create an IAM role and attach the policy to it.

1. Navigate to IAM in the AWS Management Console.

1. Choose **Roles** and then **Create Role**.  
![\[Roles with no filter applied.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-role_1.png)

1. Leave **Trusted entity type** as **AWS service** and choose the **EC2** common use case.  
![\[Select trusted entity with AWS service and EC2 selected\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-role_2.png)

1. Choose **Next**.

1. Enter “Micro” into the filter and press Enter to apply it.

1. Choose the policy that was just created, for example the “Micro-Focus-Licensing-policy”. 

1. Choose **Next**.  
![\[Add permissions with Micro Focus policy selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-role_3.png)

1. Enter the Role name, for example “Micro-Focus-Licensing-role”. 

1. Replace the description with one of your own, for example “Allows Amazon EC2 instances with this role to obtain Micro Focus Licenses”.   
![\[Role details with name and description entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-role_4.png)

1. Under **Step 1: Select trusted entities** review the JSON and confirm it has the following values:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "sts:AssumeRole"
               ],
               "Principal": {
                   "Service": [
                       "ec2.amazonaws.com"
                   ]
               }
           }
       ]
   }
   ```

**Note**  
The order of the Effect, Action, and Principal elements is not significant.

1. Confirm that **Step 2: Add permissions** shows your Licensing policy.  
![\[Step 2: Add permissions with licensing policy selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-create-iam-role_6.png)

1. Choose **Create role**.

After the allowlist request is complete, continue with the following steps.
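You can also verify locally that a saved copy of the trust policy grants `sts:AssumeRole` to the EC2 service principal, which is what lets the instance profile deliver the role to the instance. A minimal sketch, separate from the console procedure:

```python
import json

# The trust relationship reviewed in the step above.
TRUST_POLICY = """{
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["sts:AssumeRole"],
         "Principal": {"Service": ["ec2.amazonaws.com"]}}
    ]
}"""

def ec2_can_assume(doc: str) -> bool:
    """True if some Allow statement lets ec2.amazonaws.com call sts:AssumeRole."""
    for stmt in json.loads(doc)["Statement"]:
        services = stmt.get("Principal", {}).get("Service", [])
        if (stmt["Effect"] == "Allow"
                and "sts:AssumeRole" in stmt.get("Action", [])
                and "ec2.amazonaws.com" in services):
            return True
    return False
```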

# Grant License Manager the required permissions


You need to grant AWS License Manager the required permissions before you set up the Rocket Software runtime engine (on Amazon EC2).

1. Navigate to AWS License Manager in the AWS Management Console.  
![\[AWS License Manager home page.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-license-manager_1.png)

1. Choose **Start using AWS License Manager**.

1. If you see the following pop-up, review the details, select the check box, and then choose **Grant Permissions**.  
![\[IAM permissions one-time setup\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-license-manager_2.png)

# Subscribe to the Amazon Machine Images


After you are subscribed to an AWS Marketplace product, you can launch an instance from the product's AMI. You can also manage your subscribed AMIs when setting up Rocket Software (formerly Micro Focus) runtime engine (on Amazon EC2).

1. Navigate to AWS Marketplace Subscriptions in the AWS Management Console.

1. Choose **Manage subscriptions**.  
![\[AWS Marketplace home page.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-ami-subscription_1.png)

1. Copy and paste one of the following links into the browser address bar.
**Note**  
Only choose a link for one of the products that you are authorized to use.
To use these links, make sure that your account is allowlisted, as described in [Request the allowlist update for the account](mf-runtime-setup-allowlist.md).
   + Enterprise Server: [https://aws.amazon.com/marketplace/pp/prodview-g5emev63l7blc](https://aws.amazon.com/marketplace/pp/prodview-g5emev63l7blc)
   + Enterprise Server for Windows: [https://aws.amazon.com/marketplace/pp/prodview-lwybsiyikbhc2](https://aws.amazon.com/marketplace/pp/prodview-lwybsiyikbhc2)
   + Enterprise Developer: [https://aws.amazon.com/marketplace/pp/prodview-77qmpr42yzxwk](https://aws.amazon.com/marketplace/pp/prodview-77qmpr42yzxwk)
   + Enterprise Developer with Visual Studio 2022: [https://aws.amazon.com/marketplace/pp/prodview-m4l3lqiszo6cm](https://aws.amazon.com/marketplace/pp/prodview-m4l3lqiszo6cm)
   + Enterprise Analyzer: [https://aws.amazon.com/marketplace/pp/prodview-tttheylcmcihm](https://aws.amazon.com/marketplace/pp/prodview-tttheylcmcihm)
   + Enterprise Build Tools for Windows: [https://aws.amazon.com/marketplace/pp/prodview-2rw35bbt6uozi](https://aws.amazon.com/marketplace/pp/prodview-2rw35bbt6uozi)
   + Enterprise Stored Procedures: [https://aws.amazon.com/marketplace/pp/prodview-zoeyqnsdsj6ha](https://aws.amazon.com/marketplace/pp/prodview-zoeyqnsdsj6ha)
   + Enterprise Stored Procedures with SQL Server 2019: [https://aws.amazon.com/marketplace/pp/prodview-ynfklquwubnz4](https://aws.amazon.com/marketplace/pp/prodview-ynfklquwubnz4)

1. Choose **Continue to Subscribe**.  
![\[Enterprise Server offering in AWS Marketplace.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-ami-subscription_2.png)

1. If the Terms and Conditions are acceptable, choose **Accept Terms**.  
![\[Subscription terms and conditions.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-ami-subscription_3.png)

1. The subscription might take a few minutes to process.  
![\[Subscription pending message.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-ami-subscription_4.png)

1. After the Thank you message shows, copy and paste the next link from step 3 to continue adding subscriptions.  
![\[Subscription thank you message.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-ami-subscription_5.png)

1. Stop when **Manage subscriptions** shows all your subscribed AMIs.
**Note**  
In this screenshot, the panel preferences (gear icon) are set to show the view as a table.  

![\[Manage subscriptions with list of subscribed AMIs.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-ami-subscription_6.png)


# Launch an AWS Mainframe Modernization Rocket Software (formerly Micro Focus) instance

After you create the endpoint, IAM policy, and IAM role, and subscribe to the AMIs, you are ready to launch an AWS Mainframe Modernization Rocket Software (formerly Micro Focus) instance in the AWS Management Console.

1. Navigate to AWS Marketplace Subscriptions in the AWS Management Console.

1. Locate the AMI to be launched and choose **Launch New Instance**.  
![\[Manage subscriptions with Enterprise Server and Enterprise Analyzer ready to launch.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_1.png)

1. In the launch new instance dialog, make sure the allowlisted Region is selected.

1. Choose **Continue to launch through EC2**.
**Note**  
The following example shows a launch of an Enterprise Developer AMI, but the process is the same for all the AWS Mainframe Modernization AMIs.  

![\[Launch new instance.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_2.png)


1. Enter a name for the server.

1. Choose an instance type.

   Choose the instance type based on your project's performance and cost requirements. The following are suggested starting points:
   + For Enterprise Analyzer, an r6i.xlarge
   + For Enterprise Developer, an r6i.large
   + For a standalone instance of Enterprise Server, an r6i.xlarge
   + For Rocket Software Performance Availability Cluster (PAC) with scale-out, an r6i.large
**Note**  
The Application and OS Images section has been collapsed for this screenshot.  
![\[Launch an instance with name and instance type entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_3.png)

1. Choose or create (and save) a key pair (not shown).

   For more information on key pairs for Linux instances, see [Amazon EC2 key pairs and Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).

   For more information on key pairs for Windows instances, see [Amazon EC2 key pairs and Windows instances](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html).

1. Edit the Network settings and **choose the allowlisted VPC** and appropriate Subnet.

1. **Choose or create a Security Group**. If this is an Enterprise Server EC2 instance, it is typical to allow TCP traffic to ports 86 and 10086 to administer the Rocket Software configuration.

1. Optionally configure the storage for the Amazon EC2 instance.

1. **Important**: Expand **Advanced details**, and under **IAM instance profile**, choose the licensing role created earlier, for example “Micro-Focus-Licensing-role”.
**Note**  
If this step is missed, after the instance is created you can modify the IAM role from the Security option of the Action menu for the EC2 instance.  
![\[Advanced Details with IAM instance profile entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_4.png)

1. Review the Summary and choose **Launch Instance**.  
![\[Summary with selected options.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_5.png)

1. The instance launch will fail if an invalid virtual server type is chosen.

   If this happens, choose **Edit instance config** and change the instance type.  
![\[Launching instance progress message.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_6.png)

1. Once the “Success” message is shown, choose **Connect to instance** to get connection details.  
![\[Instance launch success message.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_7.png)

1. Alternatively, navigate to **EC2** in the AWS Management Console.

1. Choose **Instances** to see the status of the new instance.  
![\[List of instances with status.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-launch-instance_8.png)

# Subnet or VPC with no internet access


Make these additional changes if the subnet or VPC does not have outbound internet access.

License Manager requires access to the following AWS services:
+ com.amazonaws.*region*.s3
+ com.amazonaws.*region*.ec2
+ com.amazonaws.*region*.license-manager
+ com.amazonaws.*region*.sts

The earlier steps defined the com.amazonaws.*region*.s3 service as a gateway endpoint. This endpoint needs a route table entry for any subnets without internet access.

The remaining three services are defined as interface endpoints.
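Because the four service names differ only in the service suffix, they can be generated for any Region; a minimal sketch, for illustration only:

```python
def endpoint_service_names(region: str) -> dict:
    """Map each service License Manager needs to its com.amazonaws.<region>.<service>
    name. s3 is a gateway endpoint; ec2, license-manager, and sts are interface
    endpoints."""
    return {svc: f"com.amazonaws.{region}.{svc}"
            for svc in ("s3", "ec2", "license-manager", "sts")}
```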

**Topics**
+ [Add the Route table entry for the Amazon S3 endpoint](#mf-runtime-setup-no-access-route-table)
+ [Define the required security group](#mf-runtime-setup-no-access-security-group)
+ [Create the service endpoints](#mf-runtime-setup-no-access-endpoints)

## Add the Route table entry for the Amazon S3 endpoint


1. Navigate to **VPC** in the AWS Management Console and choose **Subnets**.

1. Choose the subnet where the Amazon EC2 instances will be created and choose the Route Table tab.

1. Note a few trailing digits of the Route table ID, for example, the 6b39 in the following image.  
![\[Route table details.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_1.png)

1. Choose **Endpoints** from the navigation pane.

1. Choose the endpoint created earlier and then **Manage Route tables**, either from the Route Tables tab for the endpoint, or from the Actions drop down.

1. Choose the Route table using the digits identified earlier and choose **Modify route tables**.  
![\[Route table selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_2.png)

## Define the required security group


The Amazon EC2, AWS STS, and License Manager services communicate over HTTPS via port 443. This communication is bi-directional and requires inbound and outbound rules to allow the instance to communicate with the services.

1. Navigate to Amazon VPC in the AWS Management Console.

1. Locate **Security Groups** in the navigation pane and choose **Create security group**.

1. Enter a Security group name and description, for example “Inbound-Outbound HTTPS”.

1. Choose the X in the VPC selection area to **remove the default VPC**, and then choose the VPC that contains the Amazon S3 endpoint.

1. Add an Inbound Rule that **allows TCP traffic on Port 443** from anywhere.
**Note**  
The inbound and outbound rules can be restricted further by limiting the source. For more information, see [Control traffic to your AWS resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) in the *Amazon VPC User Guide*.  

![\[Basic details with inbound rule entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_3.png)


1. Choose **Create security group**.

## Create the service endpoints


Repeat this process three times – once for each service.

1. Navigate to Amazon VPC in the AWS Management Console and choose **Endpoints**.

1. Choose **Create endpoint**.

1. Enter a name, for example “Micro-Focus-License-EC2”, “Micro-Focus-License-STS”, or “Micro-Focus-License-Manager”.

1. Choose the **AWS Services** Service Category.  
![\[Endpoint settings with AWS services selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_4.png)

1. Under **Services**, search for the matching interface service, which is one of the following:
   + “com.amazonaws.*region*.ec2”
   + “com.amazonaws.*region*.sts”
   + “com.amazonaws.*region*.license-manager”

   For example:
   + “com.amazonaws.us-west-1.ec2”
   + “com.amazonaws.us-west-1.sts”
   + “com.amazonaws.us-west-1.license-manager”

1. Choose the matching Interface service.

   **com.amazonaws.*region*.ec2**:  
![\[Services with Amazon EC2 interface service selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_5.png)

   **com.amazonaws.*region*.sts:**  
![\[Services with AWS STS interface service selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_6.png)

   **com.amazonaws.*region*.license-manager:**  
![\[Services with License Manager interface service selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_7.png)

1. For VPC choose the VPC for the instance.  
![\[VPC with the VPC for the instance selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_8.png)

1. Choose the **Availability Zone** and the **Subnets** for the VPC.  
![\[Subnets with availability zone and subnet for the VPC selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_9.png)

1. Choose the Security Group created earlier.  
![\[Security groups with security group selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_10.png)

1. Under Policy choose **Full Access**.  
![\[Policy with Full Access selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/mf-no-internet_11.png)

1. Choose **Create Endpoint**.

1. Repeat this process for the remaining interfaces.

# Set up Automation for Rocket Enterprise Analyzer (formerly Micro Focus) and Rocket Enterprise Developer Streaming Sessions

You can automatically run a script at session start and end to allow automation that is specific to your customer context. For more information on this WorkSpaces Applications feature, see [Use Session Scripts to Manage Your AppStream 2.0 Users' Streaming Experience](https://docs.aws.amazon.com/appstream2/latest/developerguide/use-session-scripts.html) in the *Amazon WorkSpaces Applications Administration Guide*.

This feature requires that you have at least the following versions of the Enterprise Analyzer and Enterprise Developer images:
+ `m2-enterprise-analyzer-v8.0.4.R1`
+ `m2-enterprise-developer-v8.0.4.R1`

**Topics**
+ [Set up automation at session start](#set-up-automation-m2.start)
+ [Set up automation at session end](#set-up-automation-m2.end)

## Set up automation at session start


If you want to run an automation script when users connect to WorkSpaces Applications, create your script and name it `m2-user-setup.cmd`. Store the script in the WorkSpaces Applications Home folder for the user. The WorkSpaces Applications images that AWS Mainframe Modernization provides look for a script with that name in that location, and run it if it exists.

**Note**  
The script duration cannot exceed the limit set by WorkSpaces Applications, which is currently 60 seconds. For more information, see [Run Scripts Before Streaming Sessions Begin](https://docs.aws.amazon.com/appstream2/latest/developerguide/use-session-scripts.html#run-scripts-before-streaming-sessions-begin) in the *Amazon WorkSpaces Applications Administration Guide*.

## Set up automation at session end


If you want to run an automation script when users disconnect from WorkSpaces Applications, create your script and name it `m2-user-teardown.cmd`. Store the script in the WorkSpaces Applications Home folder for the user. The WorkSpaces Applications images that AWS Mainframe Modernization provides look for a script with that name in that location, and run it if it exists.

**Note**  
The script duration cannot exceed the limit set by WorkSpaces Applications, which is currently 60 seconds. For more information, see [Run Scripts After Streaming Sessions End](https://docs.aws.amazon.com/appstream2/latest/developerguide/use-session-scripts.html#run-scripts-after-streaming-sessions-end) in the *Amazon WorkSpaces Applications Administration Guide*.

# View data sets as tables and columns in Rocket Enterprise Developer (formerly Micro Focus Enterprise Developer)

You can access mainframe data sets that are deployed in AWS Mainframe Modernization using the Rocket Software (formerly Micro Focus) runtime. You can view the migrated data sets as tables and columns from a Rocket Enterprise Developer instance. Viewing data sets this way allows you to: 
+ Perform `SQL SELECT` operations on the migrated data files.
+ Expose data outside the migrated mainframe application without changing the application.
+ Easily filter data and save as CSV or other file formats.

**Note**  
Steps 1 and 2 are one-time activities. Repeat steps 3 and 4 for each data set to create the database views.

**Topics**
+ [Prerequisites](#view-datasets-tables-m2.prereq)
+ [Step 1: Set up ODBC Connection to Rocket Software datastore (Amazon RDS database)](#view-datasets-tables-m2.odbc)
+ [Step 2: Create the MFDBFH.cfg file](#view-datasets-tables-m2.config)
+ [Step 3: Create a structure (STR) file for your copybook layout](#view-datasets-tables-m2.str)
+ [Step 4: Create a database view using the structure (STR) file](#view-datasets-tables-m2.dbview)
+ [Step 5: View Rocket Software (formerly Micro Focus) data sets as tables and columns](#view-datasets-tables-m2.cols)

## Prerequisites

+ You must have access to Rocket Enterprise Developer Desktop via WorkSpaces Applications.
+ You must have an application deployed and running under AWS Mainframe Modernization using the Rocket Software runtime engine.
+ Your application data is stored in Aurora PostgreSQL-Compatible Edition.

## Step 1: Set up ODBC Connection to Rocket Software datastore (Amazon RDS database)


In this step, you set up an ODBC connection to the database that contains the data you want to view as tables and columns. This is a one-time step.

1. Log in to Rocket Enterprise Developer Desktop using the WorkSpaces Applications streaming URL.

1. Open **ODBC Data Source Administrator**, choose **User DSN** and then choose **Add**.

1. In **Create New Data Source**, choose **PostgreSQL ANSI** and then choose **Finish**.

1. Create a data source for `PG.POSTGRES` by providing the necessary database information, as follows:

   ```
   Data Source : PG.POSTGRES
   Database    : postgres
   Server      : rds_endpoint.rds.amazonaws.com
   Port        : 5432
   User Name   : user_name
   Password    : user_password
   ```  
![\[Setting up the Postgres ODBC connection in Enterprise Developer.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/view-data-tables-odbc.png)

1. Choose **Test** to make sure the connection works. You should see the message `Connection successful` if the test succeeds.

   If the test doesn't succeed, review the following information.
   + [Troubleshooting for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html)
   + [How do I resolve problems when connecting to my Amazon RDS DB instance?](https://repost.aws/knowledge-center/rds-cannot-connect)

1. Save the data source.

1. Create a data source for `PG.VSAM`, test the connection, and save the data source. Provide the following database information:

   ```
   Data Source : PG.VSAM
   Database    : MicroFocus$SEE$Files$VSAM
   Server      : rds_endpoint.rds.amazonaws.com
   Port        : 5432
   User Name   : user_name
   Password    : user_password
   ```  
![\[Setting up the PG.VSAM ODBC data source in Enterprise Developer.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/view-data-tables-pg-vsam.png)
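If you later need the same connection outside the ODBC Administrator (for example, from a script), the DSN settings above can be expressed as a DSN-less ODBC connection string. A minimal sketch — the server value is the same placeholder used in the steps, the credentials are placeholders, and the driver name assumes the PostgreSQL ANSI driver chosen earlier:

```python
def odbc_conn_str(params: dict) -> str:
    """Join ODBC keyword=value pairs into a connection string."""
    return ";".join(f"{k}={v}" for k, v in params.items())

# Mirrors the PG.POSTGRES data source; endpoint and credentials are placeholders.
pg_postgres = {
    "Driver": "{PostgreSQL ANSI}",
    "Server": "rds_endpoint.rds.amazonaws.com",
    "Port": 5432,
    "Database": "postgres",
    "Uid": "user_name",
    "Pwd": "user_password",
}
```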

## Step 2: Create the MFDBFH.cfg file


In this step, you create a configuration file that describes the Micro Focus data store. This is a one-time configuration step.

1. In your Home Folder, for example, in `D:\PhotonUser\My Files\Home Folder\MFED\cfg\MFDBFH.cfg`, create the MFDBFH.cfg file with the following content.

   ```
   <datastores>
       <server name="ESPACDatabase" type="postgresql" access="odbc">
           <dsn name="PG.POSTGRES" type="database" dbname="postgres"/>
           <dsn name="PG.VSAM" type="datastore" dsname="VSAM"/>
       </server>
   </datastores>
   ```

1. Verify the MFDBFH configuration by running the following commands to query the Micro Focus datastore:

   ```
   ##
   ## Test the connection by running the following commands
   ##

   set MFDBFH_CONFIG="D:\PhotonUser\My Files\Home Folder\MFED\cfg\MFDBFH.cfg"

   dbfhdeploy list sql://ESPACDatabase/VSAM?folder=/DATA
   ```
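Before pointing `MFDBFH_CONFIG` at the file, it can help to sanity-check that the configuration parses as XML and names the expected server and data sources. The following Python sketch does that; the expected names mirror the example configuration in this guide and are assumptions you should adjust to your environment.

```python
# Sanity-check an MFDBFH.cfg file before exporting MFDBFH_CONFIG.
# The structure checked here follows the example configuration in this
# guide; substitute your own server and DSN names as needed.
import xml.etree.ElementTree as ET

def check_mfdbfh_cfg(path):
    tree = ET.parse(path)
    root = tree.getroot()
    assert root.tag == "datastores", "root element must be <datastores>"
    servers = root.findall("server")
    assert servers, "at least one <server> element is required"
    for server in servers:
        # Each server needs a name, a database type, and an access method.
        for attr in ("name", "type", "access"):
            assert server.get(attr), f"<server> is missing '{attr}'"
        dsns = server.findall("dsn")
        assert dsns, f"server '{server.get('name')}' has no <dsn> entries"
    # Return a map of server name -> list of DSN names for inspection.
    return {s.get("name"): [d.get("name") for d in s.findall("dsn")]
            for s in servers}
```

For the example file above, this returns `{"ESPACDatabase": ["PG.POSTGRES", "PG.VSAM"]}`.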

## Step 3: Create a structure (STR) file for your copybook layout


In this step, you create a structure file for your copybook layout so that you can use it later to create database views from the data sets.

1. Compile the program that is associated with your copybook. If no program is using the copybook, create and compile a simple program like the following with a COPY statement for your copybook.

   ```
   IDENTIFICATION DIVISION.
   PROGRAM-ID. TESTPGM1.

   ENVIRONMENT DIVISION.
   CONFIGURATION SECTION.

   DATA DIVISION.
   WORKING-STORAGE SECTION.

   COPY CVTRA05Y.

   PROCEDURE DIVISION.

   GOBACK.
   ```

1. After successful compilation, right-click the program and choose **Create Record Layout File**. This opens the Micro Focus Data File Tools using the .idy file generated during compilation.  
![\[Location of the Create Record Layout File command in Enterprise Developer.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/view-data-tables-idy-file.png)

1. Right-click the record structure and choose **Create Default Layout** (single structure) or **Create Conditional Layout** (multiple structures), depending on the layout.

   For more information, see [Creating Structure Files and Layouts](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-6EDDA4C3-F09E-4CEC-9CF8-281D9D7453C3.html) in the Micro Focus documentation.  
![\[Location of the layout commands in Micro Focus data file tools.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/view-data-tables-mf-data-file-tools.png)

1. After creating the layout, choose **File** from the menu and then choose **Save As**. Browse to your Home Folder and save the file with the same file name as your copybook. You can optionally create a folder called `str` and save all your structure files there.  
![\[Saving the str file in Enterprise Developer.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/view-data-tables-save-str.png)

## Step 4: Create a database view using the structure (STR) file


In this step, you use the previously created structure file to create a database view for a data set.
+ Use the `dbfhview` command to create a database view for a data set that is already in the Micro Focus datastore as shown in the following example.

  ```
  ##
  ## The following command creates a database view for the VSAM file
  ## AWS.M2.CARDDEMO.TRANSACT.VSAM.KSDS using the STR file CVTRA05Y.str
  ##

  dbfhview -create -struct:"D:\PhotonUser\My Files\Home Folder\MFED\str\CVTRA05Y.str" -name:V_AWS.M2.CARDDEMO.TRANSACT.VSAM.KSDS.DAT -file:sql://ESPACDatabase/VSAM/AWS.M2.CARDDEMO.TRANSACT.VSAM.KSDS.DAT?folder=/DATA

  ##
  ## Output:
  ##

  Micro Focus Database File Handler - View Generation Tool Version 8.0.00
  Copyright (C) 1984-2022 Micro Focus. All rights reserved.

  VGN0017I Using structure definition 'TRAN-RECORD-DEFAULT'
  VGN0022I View 'V_AWS.M2.CARDDEMO.TRANSACT.VSAM.KSDS.DAT' installed in datastore 'sql://espacdatabase/VSAM'
  VGN0002I The operation completed successfully
  ```
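If you have many copybook layouts, assembling the `dbfhview` arguments by hand is error-prone. The following sketch builds the datastore URL and argument list for a given data set and structure file; the server, datastore, and folder defaults are assumptions taken from the example above.

```python
# Build dbfhview arguments for creating a database view from an STR file.
# The default server/datastore/folder values follow the examples in this
# guide; treat them as assumptions and substitute your own.
def dbfhview_args(str_path, dataset, server="ESPACDatabase",
                  datastore="VSAM", folder="/DATA"):
    view_name = f"V_{dataset}"
    file_url = f"sql://{server}/{datastore}/{dataset}?folder={folder}"
    return ["dbfhview", "-create",
            f"-struct:{str_path}",
            f"-name:{view_name}",
            f"-file:{file_url}"]
```

Joining the returned list with spaces reproduces the command shown above.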

## Step 5: View Rocket Software (formerly Micro Focus) data sets as tables and columns


In this step, you connect to the database using `pgAdmin` so that you can run queries that show the data sets as tables and columns.
+ Connect to the database `MicroFocus$SEE$Files$VSAM` using pgAdmin and query the database view you created in step 4.

  ```
  SELECT * FROM public."V_AWS.M2.CARDDEMO.TRANSACT.VSAM.KSDS.DAT";
  ```  
![\[Migrated data set showing tables and columns in pgAdmin.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/view-data-tables-new-view-pgadmin.png)
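Because the generated view names contain dots and mixed case, they must be double-quoted as SQL identifiers, as in the query above. The following sketch builds such a query safely; the `public` schema default is taken from the example.

```python
# Build a SELECT statement for a dbfhview-generated view. The view names
# contain dots, so they must be double-quoted identifiers in PostgreSQL.
def select_for_view(view_name, schema="public", limit=None):
    # Escape any embedded double quotes per the SQL standard.
    quoted = '"' + view_name.replace('"', '""') + '"'
    sql = f"SELECT * FROM {schema}.{quoted}"
    if limit is not None:
        sql += f" LIMIT {int(limit)}"
    return sql + ";"
```

For ad hoc exploration, adding a `LIMIT` keeps queries against large migrated data sets fast.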

# Edit data sets using Rocket Software (formerly Micro Focus) Data File Tools in Enterprise Developer
Edit data sets using Data File Tools in Enterprise Developer

You can view and edit any migrated data sets in AWS Mainframe Modernization using the Rocket Software runtime. The steps in this document guide you through accessing those data sets with Data File Tools so that you can view and edit them as needed.

**Topics**
+ [

## Prerequisites
](#edit-datasets-m2.prereq)
+ [

## Launch Rocket Software (formerly Micro Focus) Data File Tools
](#edit-datasets-m2-launch)
+ [

## Edit VSAM data sets stored in the MFDBFH database
](#edit-datasets-m2-vsam)
+ [

## Edit non-VSAM data sets stored in the MFDBFH database
](#edit-datasets-m2-nonvsam)
+ [

## Edit VSAM and non-VSAM data sets stored in the File System (EFS/FSx)
](#edit-datasets-m2-open)

## Prerequisites


Before you start, you must have an application deployed, with its data sets imported, under the AWS Mainframe Modernization service using the Rocket Software engine.

To continue with editing the data sets, complete Step 1, Step 2, and (optionally) Step 3 from the [View data sets as tables and columns in Rocket Enterprise Developer (formerly Micro Focus Enterprise Developer)](view-datasets-tables-m2.md) page to configure the ODBC connection and the Micro Focus datastore (that is, `MFDBFH`).

**Important**  
This guide assumes that you are using Amazon Aurora PostgreSQL as the Micro Focus datastore (`MFDBFH`) to store your application data.

## Launch Rocket Software (formerly Micro Focus) Data File Tools


After completing the prerequisites, you set the `MFDBFH_CONFIG` environment variable and then launch the Micro Focus Data File Tools so that you can access the data sets stored in the database (`MFDBFH`). 

To do this,

1. Log in to the Micro Focus Enterprise Developer desktop, and launch the **Enterprise Developer command prompt (64-bit)** from the **Start Menu**.

1. Set the `MFDBFH_CONFIG` environment variable with the full path to your `MFDBFH.cfg` file.

   ```
   set MFDBFH_CONFIG="C:\MicroFocus\config\MFDBFH.cfg"
   ```

1. Launch Micro Focus Data File Tools from the Enterprise Developer command line using the following command.

   ```
   mfdatatools2
   ```  
![\[Enterprise Developer Command Prompt.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/edit-mfdbfh-launch.png)

This opens the Micro Focus Data File Tools in a separate window.

## Edit VSAM data sets stored in the MFDBFH database


After you launch the Micro Focus Data File Tools, you can open a VSAM data set that is stored in the Micro Focus datastore.

To do this,

1. From the **File menu** in the Micro Focus Data File Tools window, choose **Data Explorer**.

1. In the Data Explorer section, choose **Settings** (gear icon) to configure a new connection. This opens a **Data Source Settings** window.  
![\[Micro Focus Data File Tools window with Data Explorer section.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/edit-mfdbfh-data-source.png)

1. In the Data Source Settings window, choose the **MFDBFH** tab, and enter the following values:
   + Server: `ESPACDatabase`
   + Datastore: `VSAM`

   Choose **Apply** to save the configuration.  
![\[Micro Focus Data File Tools window Data Source settings with MFDBFH tab.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/edit-mfdbfh-data-source-settings.png)

    The Data Explorer now shows all data sets that are stored in `MFDBFH`.  
![\[Micro Focus Data File Tools window with Data Explorer section displaying all data sets.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/edit-mfdbfh-datasets.png)

1. Expand the relative path `DATA` and double-click the VSAM data set that you want to open.

1. In the **Open Data File** window, choose **Open Shared** or **Open Exclusive** to open the data set.  
![\[Micro Focus Data File Tools window with Data Explorer section Open Data File section to open data sets..\]](http://docs.aws.amazon.com/m2/latest/userguide/images/edit-mfdbfh-data-set.png)

You can now view or edit the open data set.

## Edit non-VSAM data sets stored in the MFDBFH database


To edit a non-VSAM data set that is stored in the Micro Focus datastore, you first extract it to your local file system and then open it in the Micro Focus Data File Tools.

To do this,

1. From the Enterprise Developer command prompt (64-bit) run the `dbfhdeploy data extract` command to download the non-VSAM data set to your local file system. 
**Note**  
Before running this command, make sure you have set the `MFDBFH_CONFIG` environment variable with the full path to your `MFDBFH.cfg` file.  

   ```
   dbfhdeploy data extract sql://ESPACDatabase/VSAM/AWS.M2.CARDDEMO.TRANSACT.BKUP.G0001V00.DAT?folder=/DATA C:\MicroFocus\data\AWS.M2.CARDDEMO.TRANSACT.BKUP.G0001V00.DAT
   ```

1. Launch Micro Focus Data File Tools from the **Start Menu**.

1. From the File Menu of Micro Focus Data File Tools, choose **Open**, and then choose **Data File**.

1. In the Open Data File window, browse the downloaded data set in your local file system. Edit the **File Details** as required. Then choose **Open Shared** or **Open Exclusive** to open the data set.  
![\[Open Data Files window with local tab selected for browsing data sets.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/edit-mfdbfh-browse.png)

You can now view or edit the open data set.

You can import the edited or updated data sets back into the Micro Focus datastore by using the steps in [Import data sets for AWS Mainframe Modernization applications](applications-m2-dataset.md) or by using [The dbfhdeploy Command Line Utility](https://www.microfocus.com/documentation/enterprise-developer/ed90/ED-Eclipse/GUID-2A16851F-E475-42C9-B024-37567006B86D.html).
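If you need to round-trip several non-VSAM data sets, you can generate the `dbfhdeploy data extract` command lines programmatically. The following sketch does so; the server, datastore, folder, and destination-path values are assumptions taken from the examples in this section.

```python
# Generate 'dbfhdeploy data extract' command lines for a list of data sets.
# ntpath is used so the Windows-style destination paths join correctly
# even when this helper runs on another platform.
import ntpath

def extract_commands(datasets, dest_dir,
                     server="ESPACDatabase", datastore="VSAM", folder="/DATA"):
    commands = []
    for ds in datasets:
        url = f"sql://{server}/{datastore}/{ds}?folder={folder}"
        dest = ntpath.join(dest_dir, ds)
        commands.append(f"dbfhdeploy data extract {url} {dest}")
    return commands
```

Remember to set `MFDBFH_CONFIG` in the Enterprise Developer command prompt before running the generated commands.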

## Edit VSAM and non-VSAM data sets stored in the File System (EFS/FSx)


You can also open a data set stored in a file system.

To do this,

1. Mount the EFS/FSx file system on the Enterprise Developer EC2 instance.

1. Use the Micro Focus Data File Tools to browse, and open the data sets from the file system.

# Tutorials for Rocket Software (formerly Micro Focus)
Tutorials for Rocket Software

The tutorials in this section help you get started with various tasks in the Rocket Software runtime engine for the AWS Mainframe Modernization service. These tutorials cover setting up a sample application, using templates with Rocket Enterprise Developer, and setting up Enterprise Analyzer.

**Topics**
+ [

# Tutorial: Setting up the Rocket Software (formerly Micro Focus) build for the BankDemo sample application
](tutorial-build-mf.md)
+ [

# Tutorial: Set up WorkSpaces Applications for use with Rocket Enterprise Analyzer and Rocket Enterprise Developer
](set-up-appstream-mf.md)
+ [

# Tutorial: Use templates with Rocket Enterprise Developer (formerly Micro Focus Enterprise Developer)
](tutorial-templates-ed.md)
+ [

# Tutorial: Set up Enterprise Analyzer on WorkSpaces Applications
](set-up-ea.md)
+ [

# Tutorial: Set up Rocket Enterprise Developer on WorkSpaces Applications
](set-up-ed.md)

# Tutorial: Setting up the Rocket Software (formerly Micro Focus) build for the BankDemo sample application
Tutorial: Set up the build for the BankDemo sample application

AWS Mainframe Modernization enables you to set up builds and continuous integration/continuous delivery (CI/CD) pipelines for your migrated applications. These builds and pipelines use AWS CodeBuild, AWS CodeCommit, and AWS CodePipeline. CodeBuild is a fully managed build service that compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeCommit is a version control service that enables you to privately store and manage Git repositories in the AWS Cloud. CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software.

This tutorial demonstrates how to use AWS CodeBuild to compile the BankDemo sample application source code from Amazon S3 and then export the compiled code back to Amazon S3.

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you can use prepackaged build environments, or you can create custom build environments that use your own build tools. This demo scenario uses the second option. It consists of a CodeBuild build environment that uses a pre-packaged Docker image.

**Important**  
Before you start your mainframe modernization project, we recommend that you learn about the [AWS Migration Acceleration Program (MAP) for Mainframe](https://aws.amazon.com/migration-acceleration-program/mainframe/) or contact [AWS mainframe specialists](mailto:mainframe@amazon.com) to learn about the steps required to modernize a mainframe application.

**Topics**
+ [

## Prerequisites
](#tutorial-build-mf-prerequisites)
+ [

## Step 1: Share the build assets with AWS account
](#tutorial-build-mf-assets)
+ [

## Step 2: Create Amazon S3 buckets
](#tutorial-build-mf-s3)
+ [

## Step 3: Create the build spec file
](#tutorial-build-mf-spec)
+ [

## Step 4: Upload the source files
](#tutorial-build-mf-upload)
+ [

## Step 5: Create IAM policies
](#tutorial-build-mf-IAM-policy)
+ [

## Step 6: Create an IAM role
](#tutorial-build-mf-IAM-role)
+ [

## Step 7: Attach the IAM policies to the IAM role
](#tutorial-build-mf-attach)
+ [

## Step 8: Create the CodeBuild project
](#tutorial-build-mf-create-project)
+ [

## Step 9: Start the build
](#tutorial-build-mf-start)
+ [

## Step 10: Download output artifacts
](#tutorial-build-mf-download-output)
+ [

## Clean up resources
](#tutorial-build-mf-clean)

## Prerequisites


Before you start this tutorial, complete the following prerequisites.
+ Download the [BankDemo sample application](https://d3lkpej5ajcpac.cloudfront.net/demo/mf/BANKDEMO-build.zip) and unzip it to a folder. The source folder contains COBOL programs, copybooks, and definitions. It also contains a JCL folder for reference, although you do not need to build the JCL. The folder also contains the meta files required for the build.
+ In the AWS Mainframe Modernization console, choose **Tools**. In **Analysis, development, and build assets**, choose **Share assets with my AWS account**.

## Step 1: Share the build assets with AWS account


In this step, you make sure that you share the build assets with your AWS account in the Region where you plan to use them.

1. Open the AWS Mainframe Modernization console at [https://console.aws.amazon.com/m2/](https://us-west-2.console.aws.amazon.com/m2/home?region=us-west-2#/).

1. In the left navigation, choose **Tools**.

1. In **Analysis, development, and build assets**, choose **Share assets with my AWS account**.

**Important**  
You need to do this step once in every AWS Region where you intend to do builds.

## Step 2: Create Amazon S3 buckets


In this step, you create two Amazon S3 buckets. The first is an input bucket to hold the source code, and the other is an output bucket to hold the build output. For more information, see [Creating, configuring, and working with Amazon S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) in the *Amazon S3 User Guide*.

1. To create the input bucket, log in to the Amazon S3 console and choose **Create bucket**.

1. In **General configuration**, provide a name for the bucket and specify the AWS Region where you want to create the bucket. An example name is `codebuild-regionId-accountId-input-bucket`, where `regionId` is the AWS Region of the bucket, and `accountId` is your AWS account ID.
**Note**  
If you are creating the bucket in a different AWS Region from US East (N. Virginia), specify the `LocationConstraint` parameter. For more information, see [Create Bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html) in the *Amazon Simple Storage Service API Reference*.

1. Retain all other settings and choose **Create bucket**.

1. Repeat steps 1-3 to create the output bucket. An example name is `codebuild-regionId-accountId-output-bucket`, where `regionId` is the AWS Region of the bucket and `accountId` is your AWS account ID.

   Whatever names you choose for these buckets, be sure to use them throughout this tutorial.
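The naming convention used in this tutorial can be captured in a small helper. This is just a sketch of the example convention above, not a requirement; any globally unique, S3-valid names will work.

```python
# Derive the example input/output bucket names used in this tutorial
# from an AWS Region and account ID. Including the account ID helps
# keep the globally unique S3 names from colliding.
def tutorial_bucket_names(region_id, account_id):
    prefix = f"codebuild-{region_id}-{account_id}"
    return f"{prefix}-input-bucket", f"{prefix}-output-bucket"
```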

## Step 3: Create the build spec file


In this step, you create a build spec file. This file provides build commands and related settings, in YAML format, for CodeBuild to run the build. For more information, see [Build specification reference for CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html) in the *AWS CodeBuild User Guide*.

1. Create a file named `buildspec.yml` in the directory that you unzipped as a prerequisite.

1. Add the following content to the file and save. No changes are required for this file.

   ```
   version: 0.2
   env:
     exported-variables:
       - CODEBUILD_BUILD_ID
       - CODEBUILD_BUILD_ARN
   phases:
     install:
       runtime-versions:
         python: 3.7
     pre_build:
       commands:
         - echo Installing source dependencies...
         - ls -lR $CODEBUILD_SRC_DIR/source
     build:
       commands:
         - echo Build started on `date`
         - /start-build.sh -Dbasedir=$CODEBUILD_SRC_DIR/source -Dloaddir=$CODEBUILD_SRC_DIR/target 
     post_build:
       commands:
         - ls -lR $CODEBUILD_SRC_DIR/target
         - echo Build completed on `date`
   artifacts:
     files:
       - $CODEBUILD_SRC_DIR/target/**
   ```

   Here `CODEBUILD_BUILD_ID`, `CODEBUILD_BUILD_ARN`, `$CODEBUILD_SRC_DIR/source`, and `$CODEBUILD_SRC_DIR/target` are environment variables available within CodeBuild. For more information, see [Environment variables in build environments](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html).

   At this point, your directory should look like this.

   ```
   (root directory name)
       |-- build.xml
       |-- buildspec.yml
       |-- LICENSE.txt
       |-- source
            |... etc.
   ```

1. Zip the contents of the folder, not the folder itself, to a file named `BankDemo.zip`. CodeBuild expects `buildspec.yml` at the root of the archive.
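The zipping step above is easy to get wrong: if you zip the folder itself, every file ends up under a top-level directory inside the archive. The following Python sketch zips only the contents of a directory so that `buildspec.yml` lands at the archive root.

```python
# Zip the *contents* of a directory (not the directory itself) so that
# buildspec.yml ends up at the root of the archive.
import os
import zipfile

def zip_contents(src_dir, zip_path):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to src_dir, so no leading
                # top-level folder appears inside the archive.
                zf.write(full, os.path.relpath(full, src_dir))
```

Write the zip to a location outside `src_dir` so the archive does not try to include itself.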

## Step 4: Upload the source files


In this step, you upload the source code for the BankDemo sample application to your Amazon S3 input bucket.

1. Log in to the Amazon S3 console and choose **Buckets** in the left navigation pane. Then choose the input bucket you created previously.

1. Under **Objects**, choose **Upload**.

1. In the **Files and folders** section, choose **Add Files**.

1. Navigate to and choose your `BankDemo.zip` file.

1. Choose **Upload**.

## Step 5: Create IAM policies


In this step, you create two [IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html). One policy grants permissions for AWS Mainframe Modernization to access and use the Docker image that contains the Rocket Software build tools. You don't need to customize this policy. The other policy grants permissions for AWS Mainframe Modernization to interact with the input and output buckets, and with the [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) that CodeBuild generates.

To learn about creating an IAM policy, see [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

**To create a policy for accessing Docker images**

1. In the IAM console, copy the following policy document and paste it into the policy editor.

------
#### [ JSON ]

****  

   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ecr:GetAuthorizationToken"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "ecr:BatchCheckLayerAvailability",
                   "ecr:GetDownloadUrlForLayer",
                   "ecr:BatchGetImage"
               ],
               "Resource": "arn:aws:ecr:*:673918848628:repository/m2-enterprise-build-tools"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws:s3:::aws-m2-repo-*-<region>-prod"
           }
       ]
   }
   ```

------

1. Provide a name for the policy, for example, `m2CodeBuildPolicy`.

**To create a policy that allows AWS Mainframe Modernization to interact with buckets and logs**

1. In the IAM console, copy the following policy document and paste it into the policy editor. Make sure to update `regionId` to your AWS Region and `accountId` to your AWS account ID.

1. Provide a name for the policy, for example, `BankdemoCodeBuildRolePolicy`.
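The bucket-and-logs policy document itself is not reproduced here. As a rough sketch only, a policy granting the bucket and CloudWatch Logs access described above could be generated as follows; the bucket naming follows the examples in this tutorial, and the minimal action lists are assumptions you should tailor to your account before use.

```python
# Sketch of a CodeBuild service-role policy granting access to the
# tutorial's input/output buckets and to CloudWatch Logs. The bucket
# names and action lists are assumptions based on this tutorial's
# examples, not the official policy document.
import json

def bankdemo_codebuild_policy(region_id, account_id):
    prefix = f"codebuild-{region_id}-{account_id}"
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:GetObjectVersion",
                           "s3:PutObject", "s3:GetBucketAcl",
                           "s3:GetBucketLocation"],
                "Resource": [f"arn:aws:s3:::{prefix}-input-bucket/*",
                             f"arn:aws:s3:::{prefix}-output-bucket/*"],
            },
            {
                "Effect": "Allow",
                "Action": ["logs:CreateLogGroup", "logs:CreateLogStream",
                           "logs:PutLogEvents"],
                "Resource": f"arn:aws:logs:{region_id}:{account_id}:log-group:/aws/codebuild/*",
            },
        ],
    }, indent=4)
```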

## Step 6: Create an IAM role


In this step, you create a new [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that allows CodeBuild to interact with AWS resources for you, after you associate the IAM policies that you previously created with this new IAM role.

For information about creating a service role, see [Creating a Role to Delegate Permissions to an AWS Service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*.

1. Log in to the IAM console and choose **Roles** in the left navigation pane.

1. Choose **Create role**.

1. Under **Trusted entity type**, choose **AWS service**.

1. Under **Use cases for other AWS services**, choose **CodeBuild**, and then choose **CodeBuild** again.

1. Choose **Next**.

1. On the **Add permissions** page, choose **Next**. You assign a policy to the role later.

1. Under **Role details**, provide a name for the role, for example, `BankdemoCodeBuildServiceRole`.

1. Under **Select trusted entities**, verify that the policy document looks like the following:

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "codebuild.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

------

1. Choose **Create role**.

## Step 7: Attach the IAM policies to the IAM role


In this step, you attach the two IAM policies you previously created to the `BankdemoCodeBuildServiceRole` IAM role.

1. Log in to the IAM console and choose **Roles** in the left navigation pane.

1. In **Roles**, choose the role you created previously, for example, `BankdemoCodeBuildServiceRole`.

1. In **Permissions policies**, choose **Add permissions**, and then **Attach policies**.

1. In **Other permissions policies**, choose the policies that you created previously, for example, `m2CodeBuildPolicy` and `BankdemoCodeBuildRolePolicy`.

1. Choose **Attach policies**.

## Step 8: Create the CodeBuild project


In this step, you create the CodeBuild project.

1. Log in to the CodeBuild console and choose **Create build project**.

1. In the **Project configuration** section, provide a name for the project, for example, `codebuild-bankdemo-project`.

1. In the **Source** section, for **Source provider**, choose **Amazon S3**, and then choose the input bucket you created previously, for example, `codebuild-regionId-accountId-input-bucket`.

1. In the **S3 object key or S3 folder** field, enter the name of the zip file that you uploaded to the S3 bucket. In this case, the file name is `BankDemo.zip`.

1. In the **Environment** section, choose **Custom image**.

1. In the **Environment type** field, choose **Linux**.

1. Under **Image registry**, choose **Other registry**.

1. In the **External registry URL** field, enter one of the following:
   + For Rocket Software v9: `673918848628.dkr.ecr.us-west-1.amazonaws.com/m2-enterprise-build-tools:9.0.7.R1`. If you're using a different AWS Region with Rocket Software v9, you can also specify `673918848628.dkr.ecr.<m2-region>.amazonaws.com/m2-enterprise-build-tools:9.0.7.R1`, where `<m2-region>` is an AWS Region where the AWS Mainframe Modernization service is available (for example, `eu-west-3`).
   + For Rocket Software v8: `673918848628.dkr.ecr.us-west-2.amazonaws.com/m2-enterprise-build-tools:8.0.9.R1`
   + For Rocket Software v7: `673918848628.dkr.ecr.us-west-2.amazonaws.com/m2-enterprise-build-tools:7.0.R10`

1. Under **Service role**, choose **Existing service role**, and in the **Role ARN** field, choose the service role you created previously; for example, `BankdemoCodeBuildServiceRole`.

1. In the **Buildspec** section, choose **Use a buildspec file**.

1. In the **Artifacts** section, under **Type**, choose **Amazon S3**, and then choose your output bucket, for example, `codebuild-regionId-accountId-output-bucket`.

1. In the **Name** field, enter a name for the build output artifact, for example, `bankdemo-output.zip`. With Zip packaging, this is the name of the zip file that CodeBuild uploads to the bucket.

1. Under **Artifacts packaging**, choose **Zip**.

1. Choose **Create build project**.

## Step 9: Start the build


In this step, you start the build.

1. Log in to the CodeBuild console.

1. In the left navigation pane, choose **Build projects**.

1. Choose the build project that you created previously, for example, `codebuild-bankdemo-project`.

1. Choose **Start build**.

This starts the build, which runs asynchronously. If you start the build from the AWS CLI instead, the output of the `start-build` command is a JSON document that includes an `id` attribute. This attribute is a reference to the CodeBuild build ID of the build that you just started. You can view the status of the build in the CodeBuild console. You can also see detailed logs about the build execution in the console. For more information, see [View detailed build information](https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started-build-log-console.html) in the *AWS CodeBuild User Guide*.

When the current phase is `COMPLETED`, your build finished successfully, and your compiled artifacts are ready on Amazon S3.
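If you start builds from the AWS CLI or an SDK rather than the console, you can pick the build ID out of the response programmatically. The helper below is a sketch; the `{"build": {"id": ...}}` response shape matches the documented `start-build` output, and the sample values in the usage note are invented for illustration.

```python
# Extract the CodeBuild build ID from a start-build response document,
# such as the parsed JSON from 'aws codebuild start-build' or the dict
# returned by boto3's client("codebuild").start_build(...).
def build_id_from_response(response):
    build = response.get("build") or {}
    build_id = build.get("id")
    if not build_id:
        raise ValueError("response does not contain build.id")
    return build_id
```

For example, a response like `{"build": {"id": "codebuild-bankdemo-project:1234..."}}` yields the ID you can pass to `batch-get-builds` to poll the build status.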

## Step 10: Download output artifacts


In this step, you download the output artifacts from Amazon S3. The Rocket Software build tool can create several different executable types. In this tutorial, it generates shared objects.

1. Log in to the Amazon S3 console.

1. In the **Buckets** section, choose the name of your output bucket, for example, `codebuild-regionId-accountId-output-bucket`.

1. Choose **Download**.

1. Unzip the downloaded file. Navigate to the target folder to see the build artifacts. These include the `.so` Linux shared objects.

## Clean up resources


If you no longer need the resources that you created for this tutorial, delete them to avoid additional charges. To do so, complete the following steps:
+ Delete the S3 buckets that you created for this tutorial. For more information, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) in the *Amazon Simple Storage Service User Guide*.
+ Delete the IAM policies that you created for this tutorial. For more information, see [Deleting IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-delete.html) in the *IAM User Guide*.
+ Delete the IAM role that you created for this tutorial. For more information, see [Deleting roles or instance profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_delete.html) in the *IAM User Guide*.
+ Delete the CodeBuild project that you created for this tutorial. For more information, see [Delete a build project in CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/delete-project.html) in the *AWS CodeBuild User Guide*.

# Tutorial: Set up WorkSpaces Applications for use with Rocket Enterprise Analyzer and Rocket Enterprise Developer
Tutorial: Set up WorkSpaces Applications for Enterprise Analyzer and Enterprise Developer

AWS Mainframe Modernization provides several tools through Amazon WorkSpaces Applications. WorkSpaces Applications is a fully managed, secure application streaming service that lets you stream desktop applications to users without rewriting applications. WorkSpaces Applications provides users with instant access to the applications that they need with a responsive, fluid user experience on the device of their choice. Using WorkSpaces Applications to host runtime engine-specific tools gives customer application teams the ability to use the tools directly from their web browsers, interacting with application files stored in either Amazon S3 buckets or CodeCommit repositories. 

For information about browser support in WorkSpaces Applications, see [System Requirements and Feature Support (Web Browser)](https://docs.aws.amazon.com/appstream2/latest/developerguide/requirements-and-features-web-browser-admin.html) in the *Amazon WorkSpaces Applications Administration Guide*. If you have issues when you are using WorkSpaces Applications, see [Troubleshooting AppStream 2.0 User Issues](https://docs.aws.amazon.com/appstream2/latest/developerguide/troubleshooting-user-issues.html) in the *Amazon WorkSpaces Applications Administration Guide*.

This document is intended for members of the customer operations team. It describes how to set up Amazon WorkSpaces Applications fleets and stacks to host the Rocket Enterprise Analyzer and Rocket Enterprise Developer tools used with AWS Mainframe Modernization. Rocket Enterprise Analyzer is usually used during the Assess phase, and Rocket Enterprise Developer is usually used during the Migrate and Modernize phase of the AWS Mainframe Modernization approach. If you plan to use both Enterprise Analyzer and Enterprise Developer, you must create separate fleets and stacks for each tool, because their licensing terms are different.

**Important**  
The steps in this tutorial are based on the downloadable CloudFormation template [cfn-m2-appstream-fleet-ea-ed.yml](https://drm0z31ua8gi7.cloudfront.net/tutorials/mf/appstream/cfn-m2-appstream-fleet-ea-ed.yml). 

**Topics**
+ [

## Prerequisites
](#tutorial-aas-prerequisites)
+ [

## Step 1: Get the WorkSpaces Applications images
](#tutorial-aas-step1)
+ [

## Step 2: Create the stack using the CloudFormation template
](#tutorial-aas-step2)
+ [

## Step 3: Create a user in WorkSpaces Applications
](#tutorial-aas-step3)
+ [

## Step 4: Log in to WorkSpaces Applications
](#tutorial-aas-step4)
+ [

## Step 5: Verify buckets in Amazon S3 (optional)
](#tutorial-aas-step5)
+ [

## Next steps
](#tutorial-aas-next-steps)
+ [

## Clean up resources
](#tutorial-aas-cleanup)

## Prerequisites

+ Download the template: [cfn-m2-appstream-fleet-ea-ed.yml](https://drm0z31ua8gi7.cloudfront.net/tutorials/mf/appstream/cfn-m2-appstream-fleet-ea-ed.yml).
+ Get the ID of your default VPC and security group. For more information on the default VPC, see [Default VPCs](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html) in the *Amazon VPC User Guide*. For more information on the default security group, see [Default and custom security groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/default-custom-security-groups.html) in the *Amazon EC2 User Guide*. 
+ Make sure you have the following permissions:
  + create stacks, fleets, and users in WorkSpaces Applications.
  + create stacks in CloudFormation using a template.
  + create buckets and upload files to buckets in Amazon S3.
  + download credentials (`access_key_id` and `secret_access_key`) from IAM.
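Before you start, you can confirm that your CLI credentials are in place. This is a minimal sketch that only identifies the principal whose permissions will be used; it does not verify the individual permissions listed above:

```shell
# Print the AWS account and principal that the CLI is configured with.
# If this command fails, fix your credentials before continuing.
aws sts get-caller-identity
```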

## Step 1: Get the WorkSpaces Applications images


In this step, you share the WorkSpaces Applications images for Enterprise Analyzer and Enterprise Developer with your AWS account.

1. Open the AWS Mainframe Modernization console at [https://console.aws.amazon.com/m2/](https://us-west-2.console.aws.amazon.com/m2/home?region=us-west-2#/).

1. In the left navigation, choose **Tools**.

1. In **Analysis, development, and build assets**, choose **Share assets with my AWS account**.

## Step 2: Create the stack using the CloudFormation template


In this step, you use the downloaded CloudFormation template to create a WorkSpaces Applications stack and fleet for running Rocket Enterprise Analyzer. You can repeat this step later to create another WorkSpaces Applications stack and fleet for running Rocket Enterprise Developer, because each tool requires its own fleet and stack in WorkSpaces Applications. For more information on CloudFormation stacks, see [Working with stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html) in the *AWS CloudFormation User Guide*.

**Note**  
AWS Mainframe Modernization charges an additional fee on top of the standard WorkSpaces Applications pricing for the use of Enterprise Analyzer and Enterprise Developer. For more information, see [AWS Mainframe Modernization Pricing](https://aws.amazon.com/mainframe-modernization/pricing/).

1. Download the [cfn-m2-appstream-fleet-ea-ed.yml](https://drm0z31ua8gi7.cloudfront.net/tutorials/mf/appstream/cfn-m2-appstream-fleet-ea-ed.yml) template, if necessary.

1. Open the CloudFormation console, choose **Create stack**, and then choose **With new resources (standard)**.

1. In **Prerequisite - Prepare template**, choose **Template is ready**.

1. In **Specify Template**, choose **Upload a template file**.

1. In **Upload a template file**, choose **Choose file** and upload the [cfn-m2-appstream-fleet-ea-ed.yml](https://drm0z31ua8gi7.cloudfront.net/tutorials/mf/appstream/cfn-m2-appstream-fleet-ea-ed.yml) template.

1. Choose **Next**.  
![\[The CloudFormation Create stack page with selected cfn-m2-appstream-fleet-ea-ed.yml template.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/cfn-create-stack.png)

1. On **Specify stack details**, enter the following information:
   + In **Stack name**, enter a name of your choice. For example, **m2-ea**.
   + In **AppStreamApplication**, choose **ea**.
   + In **AppStreamFleetSecurityGroup**, choose your default VPC’s default security group.
   + In **AppStreamFleetVpcSubnet**, choose a subnet within your default VPC.
   + In **AppStreamImageName**, choose the image starting with `m2-enterprise-analyzer`. This image contains the currently supported version of the Rocket Enterprise Analyzer tool.
   + Accept the defaults for the other fields, then choose **Next**.  
![\[The CloudFormation specify stack details page with Enterprise Analyzer options filled in.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/cfn-specify-stack-details.png)

1. Accept all defaults, then choose **Next** again.

1. On **Review**, make sure all the parameters are what you intend.

1. Scroll to the bottom, choose **I acknowledge that AWS CloudFormation might create IAM resources with custom names**, and choose **Create Stack**.

It takes between 20 and 30 minutes for the stack and fleet to be created. You can choose **Refresh** to see the CloudFormation events as they occur. 
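If you prefer to script this step, a rough AWS CLI equivalent follows. This is a sketch, not the documented procedure: the parameter keys are assumed to mirror the console fields above (check the `Parameters` section of the template before running), and the security group and subnet IDs are placeholders:

```shell
# Create the Enterprise Analyzer stack from the downloaded template.
# Parameter keys are assumed to match the template; IDs are placeholders.
aws cloudformation create-stack \
  --stack-name m2-ea \
  --template-body file://cfn-m2-appstream-fleet-ea-ed.yml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters \
    ParameterKey=AppStreamApplication,ParameterValue=ea \
    ParameterKey=AppStreamFleetSecurityGroup,ParameterValue=sg-0123456789abcdef0 \
    ParameterKey=AppStreamFleetVpcSubnet,ParameterValue=subnet-0123456789abcdef0

# Block until stack creation completes (typically 20-30 minutes).
aws cloudformation wait stack-create-complete --stack-name m2-ea
```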

## Step 3: Create a user in WorkSpaces Applications


While you are waiting for CloudFormation to finish creating the stack, you can create one or more users in WorkSpaces Applications. These are the users who will use Enterprise Analyzer in WorkSpaces Applications. You must specify an email address for each user, and ensure that each user has sufficient permissions to create buckets in Amazon S3, upload files to a bucket, and link to a bucket to map its contents.

1. Open the WorkSpaces Applications console.

1. In the left navigation, choose **User pool**.

1. Choose **Create user**.

1. Provide an email address where the user can receive an email invitation to use WorkSpaces Applications, enter a first name and last name, and then choose **Create user**.

1. Repeat if necessary to create more users. The email address for each user must be unique.

For more information on creating WorkSpaces Applications users, see [WorkSpaces Applications User Pools](https://docs.aws.amazon.com/appstream2/latest/developerguide/user-pool.html) in the *Amazon WorkSpaces Applications Administration Guide*.
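If you are scripting user creation, the AWS CLI exposes the same operation (the CLI namespace is still `appstream`). The email address and names below are placeholders:

```shell
# Create a user in the WorkSpaces Applications user pool.
# The email address must be unique within the user pool.
aws appstream create-user \
  --user-name jane.doe@example.com \
  --first-name Jane \
  --last-name Doe \
  --authentication-type USERPOOL
```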

When CloudFormation finishes creating the stack, you can assign the user you created to the stack, as follows:

1. Open the WorkSpaces Applications console.

1. Choose the user name.

1. Choose **Action**, then **Assign stack**.

1. In **Assign stack**, choose the stack that begins with `m2-appstream-stack-ea`.

1. Choose **Assign stack**.  
![\[The WorkSpaces Applications Assign stack page showing a user and the Enterprise Analyzer stack to be assigned.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/aas-assign-stack.png)

Assigning a user to a stack causes WorkSpaces Applications to send an email to the user at the address you provided. This email contains a link to the WorkSpaces Applications login page.
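The stack assignment can also be made from the AWS CLI. A sketch, assuming a user created as above and a stack whose name begins with `m2-appstream-stack-ea` (substitute the full stack name from your account):

```shell
# Associate the user with the Enterprise Analyzer stack.
# This triggers the invitation email described above.
aws appstream batch-associate-user-stack \
  --user-stack-associations \
    UserName=jane.doe@example.com,StackName=m2-appstream-stack-ea,AuthenticationType=USERPOOL
```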

## Step 4: Log in to WorkSpaces Applications


In this step, you log in to WorkSpaces Applications using the link in the email sent by WorkSpaces Applications to the user you created in [Step 3: Create a user in WorkSpaces Applications](#tutorial-aas-step3).

1. Log in to WorkSpaces Applications using the link provided in the email sent by WorkSpaces Applications.

1. Change your password, if prompted. The WorkSpaces Applications screen that you see is similar to the following:  
![\[A sample WorkSpaces Applications login screen showing the desktop icon.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/aas-login-screen.png)

1. Choose **Desktop**.

1. On the task bar, choose **Search** and enter **D:** to navigate to the Home Folder.
**Note**  
If you skip this step, you might get a `Device not ready` error when you try to access the Home Folder.

At any point, if you have trouble signing in to WorkSpaces Applications, you can restart your WorkSpaces Applications fleet and try to sign in again, using the following steps.

1. Open the WorkSpaces Applications console.

1. In the left navigation, choose **Fleets**.

1. Choose the fleet you are trying to use.

1. Choose **Action**, then choose **Stop**.

1. Wait for the fleet to stop.

1. Choose **Action**, then choose **Start**.

This process can take around 10 minutes.
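The same restart can be scripted. A sketch with a placeholder fleet name (substitute the fleet name from your account):

```shell
# Stop the fleet, wait for it to reach STOPPED, then start it again.
aws appstream stop-fleet --name m2-appstream-fleet-ea
aws appstream describe-fleets --names m2-appstream-fleet-ea \
  --query 'Fleets[0].State' --output text   # repeat until STOPPED
aws appstream start-fleet --name m2-appstream-fleet-ea
```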

## Step 5: Verify buckets in Amazon S3 (optional)


The CloudFormation template that you used to create the stack also created two buckets in Amazon S3. These buckets save and restore user data and application settings across work sessions. They are as follows:
+ Name starts with `appstream2-`. This bucket maps data to your Home Folder in WorkSpaces Applications (`D:\PhotonUser\My Files\Home Folder`).
**Note**  
The Home Folder is unique for a given email address and is shared across all fleets and stacks in a given AWS account. The name of the Home Folder is a SHA256 hash of the user’s email address, and is stored on a path based on that hash.
+ Name starts with `appstream-app-settings-`. This bucket contains user session information for WorkSpaces Applications, and includes settings such as browser favorites, IDE and application connection profiles, and UI customizations. For more information, see [How Application Settings Persistence Works](https://docs.aws.amazon.com/appstream2/latest/developerguide/how-it-works-app-settings-persistence.html) in the *Amazon WorkSpaces Applications Administration Guide*.
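Because the Home Folder name is a SHA256 hash of the user's email address, you can compute it locally to locate a user's data under the `appstream2-` bucket. A minimal sketch; whether the service normalizes the address (for example, lowercasing it) before hashing is not stated here, so treat the result as a starting point:

```shell
# Compute the SHA-256 hash that names a user's Home Folder.
email="jane.doe@example.com"
folder=$(printf '%s' "$email" | sha256sum | awk '{print $1}')
echo "$folder"   # 64 hexadecimal characters
```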

To verify that the buckets were created, follow these steps:

1. Open the Amazon S3 console.

1. In the left navigation, choose **Buckets**.

1. In **Find buckets by name**, enter **appstream** to filter the list.

If you see the buckets, no further action is necessary. Just be aware that the buckets exist. If you do not see the buckets, then either the CloudFormation template is not finished running, or an error occurred. Go to the CloudFormation console and review the stack creation messages.
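A quick CLI alternative to the console filter lists any buckets in your account whose names begin with `appstream`:

```shell
# List buckets created for WorkSpaces Applications session data.
aws s3api list-buckets \
  --query "Buckets[?starts_with(Name, 'appstream')].Name" \
  --output text
```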

## Next steps


Now that the WorkSpaces Applications infrastructure is set up, you can set up and start using Enterprise Analyzer. For more information, see [Tutorial: Set up Enterprise Analyzer on WorkSpaces Applications](set-up-ea.md). You can also set up Enterprise Developer. For more information, see [Tutorial: Set up Rocket Enterprise Developer on WorkSpaces Applications](set-up-ed.md).

## Clean up resources


The procedure to clean up the created stack and fleets is described in [Create a WorkSpaces Applications Fleet and Stack](https://docs.aws.amazon.com/appstream2/latest/developerguide/set-up-stacks-fleets.html).

When the WorkSpaces Applications objects have been deleted, the account administrator can also, if appropriate, clean up the Amazon S3 buckets for Application Settings and Home Folders.

**Note**  
The home folder for a given user is unique across all fleets, so you might need to retain it if other WorkSpaces Applications stacks are active in the same account.

Finally, WorkSpaces Applications does not currently allow you to delete users in the console. Instead, you must use the AWS CLI or the service API. For more information, see [User Pool Administration](https://docs.aws.amazon.com/appstream2/latest/developerguide/user-pool-admin.html) in the *Amazon WorkSpaces Applications Administration Guide*.

# Tutorial: Use templates with Rocket Enterprise Developer (formerly Micro Focus Enterprise Developer)
Tutorial: Use templates with Rocket Enterprise Developer

This tutorial describes how to use templates and predefined projects with Rocket Enterprise Developer. It covers three use cases, all of which use the sample code provided in the BankDemo sample. To download the sample, choose [https://d1vi4vxke6c2hu.cloudfront.net/demo/bankdemo.zip](https://d1vi4vxke6c2hu.cloudfront.net/demo/bankdemo.zip).

**Important**  
If you use the version of Enterprise Developer for Windows, the binaries generated by the compiler can run only on the Enterprise Server provided with Enterprise Developer. You cannot run them under the AWS Mainframe Modernization runtime, which is based on Linux.

**Topics**
+ [

## Use Case 1 - Using the COBOL Project Template containing source components
](#tutorial-templates-ed-step1)
+ [

## Use Case 2 - Using the COBOL Project Template without source components
](#tutorial-templates-ed-step2)
+ [

## Use Case 3 - Using the pre-defined COBOL project linking to the source folders
](#tutorial-templates-ed-step3)
+ [

## Using the Region Definition JSON Template
](#tutorial-templates-ed-step4)

## Use Case 1 - Using the COBOL Project Template containing source components


This use case requires you to copy the source components into the template directory structure as part of the demo pre-setup steps. In [https://d1vi4vxke6c2hu.cloudfront.net/demo/bankdemo.zip](https://d1vi4vxke6c2hu.cloudfront.net/demo/bankdemo.zip), this structure has changed from the original `AWSTemplates.zip` delivery to avoid keeping two copies of the source.

1. Start Enterprise Developer and specify the chosen workspace.  
![\[The Eclipse launcher with a workspace selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc1-step1.png)

1. Within the **Application Explorer** view, from the **Enterprise Development Project** tree view item, choose **New Project from Template** from the context menu.  
![\[The enterprise development project tree view context menu.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc1-step2.png)

1. Enter the template parameters as shown.
**Note**  
The template path refers to the location where you extracted the ZIP file.  
![\[The Enter template parameters box with the path and project name filled in.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc1-step3.png)

1. Choose **OK** to create a local development Eclipse project based on the provided template, with a complete source and execution environment structure.  
![\[The local development Eclipse project showing its structure.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc1-step4.png)

   The `System` structure contains a complete resource definition file with the required entries for BANKDEMO, the required catalog with entries added, and the corresponding ASCII data files.

   Because the source template structure contains all the source items, these files are copied to the local project and therefore are automatically built in Enterprise Developer.

## Use Case 2 - Using the COBOL Project Template without source components


Steps 1 to 3 are identical to [Use Case 1 - Using the COBOL Project Template containing source components](#tutorial-templates-ed-step1). 

The `System` structure in this use case also contains a complete resource definition file with the required entries for BankDemo, the required catalog with entries added, and the corresponding ASCII data files.

However, the template source structure does not contain any components. You must import these into the project from whatever source repository you are using.

1. Choose the project name. From the related context menu, choose **Import**.  
![\[The project context menu with import selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc2-step4.png)

1. From the resulting dialog, under the **General** section, choose **File System**, and then choose **Next**.  
![\[The Import box with file system selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc2-step5.png)

1. Populate the **From directory** field by browsing the file system to the repository folder. Choose all the folders that you want to import, such as `sources`. The **Into folder** field is pre-populated. Choose **Finish**.  
![\[The File system box with the BankDemo directory expanded.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc2-step6.png)

   After the source template structure contains all the source items, they are built automatically in Enterprise Developer.

## Use Case 3 - Using the pre-defined COBOL project linking to the source folders


1. Start Enterprise Developer and specify the chosen workspace.  
![\[The Eclipse launcher with a workspace selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc3-step1.png)

1. From the **File** menu, choose **Import**.  
![\[The File menu with Import selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc3-step2.png)

1. From the resulting dialog, under **General**, choose **Projects from Folder or Archive** and choose **Next**.  
![\[The Import box with projects from folder or archive selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc3-step3.png)

1. For **Import source**, choose **Directory** and browse the file system to select the predefined project folder. The project contained within has links to the source folders in the same repository.  
![\[The import projects from file system or archive box with the path to the import source entered.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-uc3-step4.png)

   Choose **Finish**.

   Because the project is populated by the links to the source folder, the code is automatically built.

## Using the Region Definition JSON Template


1. Switch to the Server Explorer view. From the related context menu, choose **Open Administration Page**, which starts the default browser.  
![\[The server explorer context menu with open administration page selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-admin-page.png)

1. From the resulting Enterprise Server Common Web Administration (ESCWA) screen, choose **Import**.  
![\[The Enterprise Server Common Web Administration screen with Import selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-import.png)

1. Choose the **JSON** import type and choose **Next**.  
![\[The choose import type box with JSON selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-import-type.png)

1. Upload the supplied `BANKDEMO.JSON` file.  
![\[The choose file to upload box with the BANKDEMO file selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-upload.png)

   Once selected, choose **Next**.  
![\[The select regions box with clear ports from endpoints not selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-next.png)

   On the **Select Regions** panel, ensure that the **Clear Ports from Endpoints** option is not selected, and then continue to choose **Next** through the panels until the **Perform Import** panel is shown. Then choose **Import** from the left navigation pane.

   Finally, choose **Finish**. The BANKDEMO region is then added to the server list.  
![\[The Region and server list with BankDemo added.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-server-list.png)

1. Navigate to the **General Properties** for the BANKDEMO region.

1. Scroll to the **Configuration** section.

1. Set the ESP environment variable to the `System` folder of the Eclipse project that you created in the previous steps. This should be `workspacefolder/projectname/System`.  
![\[The configuration section with the ESP variable shown.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-ESP.png)

1. Choose **Apply**.

   The region is now fully configured to run in conjunction with the Eclipse COBOL project.

1. Finally, back in Enterprise Developer, associate the imported region with the project.  
![\[The project context menu with Associated with project selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ed-json-associate.png)

   The Enterprise Developer environment is now ready to use, with a complete working version of BankDemo. You can edit, compile, and debug code against the region.
**Important**  
If you use the version of Enterprise Developer for Windows, the binaries generated by the compiler can run only on the Enterprise Server provided with Enterprise Developer. You cannot run them under the AWS Mainframe Modernization runtime, which is based on Linux.

# Tutorial: Set up Enterprise Analyzer on WorkSpaces Applications
Tutorial: Set up Enterprise Analyzer

This tutorial describes how to set up Rocket Enterprise Analyzer (formerly Micro Focus Enterprise Analyzer) to analyze one or more mainframe applications. The Enterprise Analyzer tool provides several reports based on its analysis of the application source code and system definitions.

This setup is designed to foster team collaboration. The installation uses an Amazon S3 bucket to share the source code through virtual disks, using [Rclone](https://rclone.org/) on the Windows machine. With a common Amazon RDS instance running [PostgreSQL](https://www.postgresql.org/), any member of the team can access all requested reports.

Team members can also mount the virtual Amazon S3 backed disk on their personal machines and update the source bucket from their workstations. They can potentially use scripts or any other form of automation on their machines if they are connected to other on-premises internal systems.

The setup is based on the WorkSpaces Applications Windows images that AWS Mainframe Modernization shares with the customer. Setup is also based on the creation of WorkSpaces Applications fleets and stacks as described in [Tutorial: Set up WorkSpaces Applications for use with Rocket Enterprise Analyzer and Rocket Enterprise Developer](set-up-appstream-mf.md).

**Important**  
The steps in this tutorial assume that you set up WorkSpaces Applications with the downloadable CloudFormation template [cfn-m2-appstream-fleet-ea-ed.yml](https://drm0z31ua8gi7.cloudfront.net/tutorials/mf/appstream/cfn-m2-appstream-fleet-ea-ed.yml). For more information, see [Tutorial: Set up WorkSpaces Applications for use with Rocket Enterprise Analyzer and Rocket Enterprise Developer](set-up-appstream-mf.md).  
To perform the steps in this tutorial, you must have set up your Enterprise Analyzer fleet and stack and they must be running.

For a complete description of Enterprise Analyzer features and deliverables, see the [Enterprise Analyzer Documentation](https://www.microfocus.com/documentation/enterprise-analyzer/) on the Rocket Software (formerly Micro Focus) website.

## Image contents


In addition to the Enterprise Analyzer application itself, the image contains the following tools and libraries.

Third-party tools
+ [Python](https://www.python.org/)
+ [Rclone](https://rclone.org/)
+ [pgAdmin](https://www.pgadmin.org/)
+ [git-scm](https://git-scm.com/)
+ [PostgreSQL ODBC driver](https://odbc.postgresql.org/)

Libraries in `C:\Users\Public`
+ BankDemo source code and project definition for Enterprise Developer: `m2-bankdemo-template.zip`.
+ MFA install package for the mainframe: `mfa.zip`. For more information, see [Mainframe Access Overview](https://www.microfocus.com/documentation/enterprise-developer/30pu12/ED-VS2012/BKMMMMINTRS001.html) in the *Micro Focus Enterprise Developer* documentation.
+ Command and config files for Rclone: `m2-rclone.cmd` and `m2-rclone.conf`. Instructions for their use are in the tutorials.

**Topics**
+ [

## Image contents
](#set-up-ea-image-contents)
+ [

## Prerequisites
](#tutorial-ea-prerequisites)
+ [

## Step 1: Setup
](#tutorial-ea-step1)
+ [

## Step 2: Create the Amazon S3 based virtual folder on Windows
](#tutorial-ea-step2)
+ [

## Step 3: Create an ODBC source for the Amazon RDS instance
](#tutorial-ea-step3)
+ [

## Subsequent sessions
](#tutorial-ea-step4)
+ [

## Troubleshooting workspace connection
](#tutorial-ea-step5)
+ [

## Clean up resources
](#tutorial-ea-clean)

## Prerequisites

+ Upload the source code and system definitions for the customer application that you want to analyze to an S3 bucket. The system definitions include CICS CSD, DB2 object definitions, and so on. You can create a folder structure within the bucket that makes sense for how you want to organize the application artifacts. For example, when you unzip the BankDemo sample, it has the following structure:

  ```
  demo
       |--> jcl
       |--> RDEF
       |--> transaction
       |--> xa
  ```
+ Create and start an Amazon RDS instance running PostgreSQL. This instance will store the data and results produced by Enterprise Analyzer. You can share this instance with all members of the application team. In addition, create an empty schema called `m2_ea` (or any other suitable name) in the database. Define credentials for authorized users that allow them to create, insert, update, and delete items in this schema. You can obtain the database name, its server endpoint URL, and TCP port from the Amazon RDS console or from the account administrator.
+ Make sure you have set up programmatic access to your AWS account. For more information, see [Programmatic access](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) in the *Amazon Web Services General Reference.*
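The empty schema from the second prerequisite can be created with the `psql` client, using the endpoint, port, and database name from the Amazon RDS console. All connection values below are placeholders:

```shell
# Create the schema that Enterprise Analyzer will populate.
# psql prompts for the password of the authorized user.
psql "host=mydb.abc123.us-west-2.rds.amazonaws.com port=5432 dbname=eadb user=ea_admin" \
  -c 'CREATE SCHEMA m2_ea;'
```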

## Step 1: Setup


1. Start a session with WorkSpaces Applications with the URL that you received in the welcome email message from WorkSpaces Applications.

1. Use your email as your user ID, and define your permanent password.

1. Select the Enterprise Analyzer stack.

1. On the WorkSpaces Applications menu page, choose **Desktop** to reach the Windows desktop that the fleet is streaming.

## Step 2: Create the Amazon S3 based virtual folder on Windows


**Note**  
If you already used Rclone during the AWS Mainframe Modernization preview, you must update `m2-rclone.cmd` to the newer version located in `C:\Users\Public`.

1. Copy the `m2-rclone.conf` and `m2-rclone.cmd` files provided in `C:\Users\Public` to your home folder `C:\Users\PhotonUser\My Files\Home Folder` using File Explorer.

1. Update the `m2-rclone.conf` config parameters with your AWS access key and corresponding secret, as well as your AWS Region.

   ```
   [m2-s3]
   type = s3
   provider = AWS
   access_key_id = YOUR-ACCESS-KEY
   secret_access_key = YOUR-SECRET-KEY
   region = YOUR-REGION
   acl = private
   server_side_encryption = AES256
   ```

1. In `m2-rclone.cmd`, make the following changes:
   + Change `amzn-s3-demo-bucket` to your Amazon S3 bucket name. For example, `m2-s3-mybucket`.
   + Change `your-s3-folder-key` to your Amazon S3 bucket key. For example, `myProject`.
   + Change `your-local-folder-path` to the path of the directory where you want the application files synced from the Amazon S3 bucket that contains them. For example, `D:\PhotonUser\My Files\Home Folder\m2-new`. This synced directory must be a subdirectory of the Home Folder in order for WorkSpaces Applications to properly back up and restore it on session start and end.

   ```
   :loop
   timeout /T 10
   "C:\Program Files\rclone\rclone.exe" sync m2-s3:amzn-s3-demo-bucket/your-s3-folder-key "D:\PhotonUser\My Files\Home Folder\your-local-folder-path" --config "D:\PhotonUser\My Files\Home Folder\m2-rclone.conf"
   goto :loop
   ```

1. Open a Windows command prompt, change directory to `C:\Users\PhotonUser\My Files\Home Folder` if needed, and run `m2-rclone.cmd`. This command script runs a continuous loop, syncing your Amazon S3 bucket and key to the local folder every 10 seconds. You can adjust the timeout as needed. You should see the source code of the application located in the Amazon S3 bucket in Windows File Explorer.

To add new files to the set that you are working on or to update existing ones, upload the files to the Amazon S3 bucket and they will be synced to your directory at the next iteration defined in `m2-rclone.cmd`. Similarly, if you want to delete some files, delete them from the Amazon S3 bucket. The next sync operation will delete them from your local directory.
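For example, the uploads and deletions that drive the sync can be done with the AWS CLI, using the same example bucket and key as above (file names are placeholders):

```shell
# Add or update a file: upload it to the bucket; the rclone loop
# syncs it to the local folder within the next iteration.
aws s3 cp NEWJOB.jcl s3://m2-s3-mybucket/myProject/jcl/

# Remove a file: delete it from the bucket; the next sync removes
# it from the local directory as well.
aws s3 rm s3://m2-s3-mybucket/myProject/jcl/OLDJOB.jcl
```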

## Step 3: Create an ODBC source for the Amazon RDS instance


1. To start the EA\$1Admin tool, navigate to the application selector menu in the top left corner of the browser window and choose **MF EA\$1Admin**.

1. From the **Administer** menu, choose **ODBC Data Sources**, and choose **Add** from the **User DSN** tab.

1. In the Create New Data Source dialog box, choose the **PostgreSQL Unicode** driver, and then choose **Finish**.

1. In the **PostgreSQL Unicode ODBC Driver (psqlODBC) Setup** dialog box, define and take note of the data source name that you want. Complete the following parameters with the values from the RDS instance that you previously created:  
**Description**  
Optional description to help you identify this database connection quickly.  
**Database**  
The Amazon RDS database you created previously.  
**Server**  
The Amazon RDS endpoint.  
**Port**  
The Amazon RDS port.  
**User Name**  
As defined in the Amazon RDS instance.  
**Password**  
As defined in the Amazon RDS instance.

1. Choose **Test** to validate that the connection to Amazon RDS is successful, and then choose **Save** to save your new User DSN.

1. Wait until you see the message that confirms creation of the proper workspace, and then choose **OK** to finish with ODBC Data Sources and close the EA\$1Admin tool.

1. Navigate again to the application selector menu, and choose Enterprise Analyzer to start the tool. Choose **Create New**. 

1. In the Workspace configuration window, enter your workspace name and define its location. The workspace can be the Amazon S3 based disk if you work under this config, or your home folder if you prefer.

1. Choose **Choose Other Database** to connect to your Amazon RDS instance.

1. Choose the **Postgre** icon from the options, and then choose **OK**.

1. For the Windows settings under **Options – Define Connection Parameters**, enter the name of the data source that you created. Also enter the database name, the schema name, the user name, and password. Choose **OK**.

1. Wait for Enterprise Analyzer to create all the tables, indexes, and so on that it needs to store results. This process might take a couple of minutes. Enterprise Analyzer confirms when the database and workspace are ready for use.

1. Navigate again to the application selector menu and choose Enterprise Analyzer to start the tool.

1. The Enterprise Analyzer startup window appears in the new, selected workspace location. Choose **OK**.

1. Navigate to your repository in the left pane, select the repository name, and choose **Add files / folders to your workspace**. Select the folder where your application code is stored to add it to the workspace. You can use the previous BankDemo example code if you want. When Enterprise Analyzer prompts you to verify those files, choose **Verify** to start the initial Enterprise Analyzer verification report. It might take some minutes to complete, depending on the size of your application.

1. Expand your workspace to see the files and folders that you’ve added to the workspace. The object types and cyclomatic complexity reports are also visible in the top quadrant of the **Chart Viewer** pane.

You can now use Enterprise Analyzer for all needed tasks.

## Subsequent sessions


1. Start a session with WorkSpaces Applications with the URL that you received in the welcome email message from WorkSpaces Applications.

1. Log in with your email and permanent password.

1. Select the Enterprise Analyzer stack.

1. Launch `Rclone` to connect to the Amazon S3 backed disk if you use this option to share the workspace files.

1. Launch Enterprise Analyzer to do your tasks.

## Troubleshooting workspace connection


When you try to reconnect to your Enterprise Analyzer workspace, you might see an error like this:

```
Cannot access the workspace directory D:\PhotonUser\My Files\Home Folder\EA_BankDemo. The workspace has been created on a non-shared disk of the EC2AMAZ-E6LC33H computer. Would you like to correct the workspace directory location?
```

To resolve this issue, choose **OK** to clear the message, and then complete the following steps.

1. In WorkSpaces Applications, choose the **Launch Application** icon on the toolbar, and then choose **EA\$1Admin** to start the Enterprise Analyzer Administration tool.  
![\[The WorkSpaces Applications launch selector menu with the Rocket Enterprise Developer administration tool selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/aas-launch-selector.png)

1. From the **Administer** menu, choose **Refresh Workspace Path...**.  
![\[Administer menu of Rocket Enterprise Analyzer administration tool with Refresh Workspace Path selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ea_admin-administer-refresh.png)

1. Under **Select workspace**, choose the workspace that you want, and then choose **OK**.  
![\[The Select workspace dialog box of Rocket Enterprise Analyzer administration tool with a project selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ea_admin-select-workspace.png)

1. Choose **OK** to confirm the error message.  
![\[The Enterprise Analyzer error message "Cannot access the workspace directory" with OK selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ea_admin-select-workspace-error.png)

1. Under **Workspace directory network path**, enter the correct path to your workspace, for example, `D:\PhotonUser\My Files\Home Folder\EA\MyWorkspace3`.  
![\[The Enterprise Analyzer dialog box Workspace directory network path with an example path.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ea_admin-workspace-directory-network-path.png)

1. Close the Enterprise Analyzer Administration tool.  
![\[The Micro Focus Enterprise Analyzer Administration tool with the Close button selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ea_admin-close.png)

1. In WorkSpaces Applications, choose the **Launch Application** icon on the toolbar, and then choose **EA** to start Enterprise Analyzer.  
![\[The WorkSpaces Applications launch application icon with EA selected.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/aas-launch-ea.png)

1. Repeat steps 3 - 5.

Enterprise Analyzer should now open with the existing workspace.

## Clean up resources


If you no longer need the resources that you created for this tutorial, delete them so that you don't incur further charges. Complete the following steps:
+ Use the **EA_Admin** tool to delete the workspace.
+ Delete the S3 buckets that you created for this tutorial. For more information, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) in the *Amazon S3 User Guide*.
+ Delete the database that you created for this tutorial. For more information, see [Deleting a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html#CHAP_GettingStarted.Deleting.PostgreSQL).

# Tutorial: Set up Rocket Enterprise Developer on WorkSpaces Applications
Tutorial: Set up Enterprise Developer

This tutorial describes how to set up Rocket Enterprise Developer (formerly Micro Focus Enterprise Developer) for one or more mainframe applications in order to maintain, compile, and test them using the Enterprise Developer features. The setup is based on the WorkSpaces Applications Windows images that AWS Mainframe Modernization shares with the customer and on the creation of WorkSpaces Applications fleets and stacks as described in [Tutorial: Set up WorkSpaces Applications for use with Rocket Enterprise Analyzer and Rocket Enterprise Developer](set-up-appstream-mf.md).

**Important**  
The steps in this tutorial assume that you set up WorkSpaces Applications using the downloadable CloudFormation template [cfn-m2-appstream-fleet-ea-ed.yaml](https://d1vi4vxke6c2hu.cloudfront.net/tutorial/cfn-m2-appstream-fleet-ea-ed.yaml). For more information, see [Tutorial: Set up WorkSpaces Applications for use with Rocket Enterprise Analyzer and Rocket Enterprise Developer](set-up-appstream-mf.md).  
You must perform the steps of this setup when the Enterprise Developer fleet and stack are up and running.

For a complete description of Enterprise Developer v7 features and deliverables, see its [up-to-date online documentation (v7.0)](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/GUID-8D6B7358-AC35-4DAF-A445-607D8D97EBB2.html) on the Rocket Software (formerly Micro Focus) site.

## Image contents


In addition to Enterprise Developer itself, the image contains Rumba (a TN3270 emulator). It also contains the following tools and libraries.

Third-party tools
+ [Python](https://www.python.org/)
+ [Rclone](https://rclone.org/)
+ [pgAdmin](https://www.pgadmin.org/)
+ [git-scm](https://git-scm.com/)
+ [PostgreSQL ODBC driver](https://odbc.postgresql.org/)

Libraries in `C:\Users\Public`
+ BankDemo source code and project definition for Enterprise Developer: `m2-bankdemo-template.zip`.
+ MFA install package for the mainframe: `mfa.zip`. For more information, see [Mainframe Access Overview](https://www.microfocus.com/documentation/enterprise-developer/30pu12/ED-VS2012/BKMMMMINTRS001.html) in the *Micro Focus Enterprise Developer* documentation.
+ Command and config files for Rclone (instructions for their use are in the tutorials): `m2-rclone.cmd` and `m2-rclone.conf`.

If you need to access source code that is not yet loaded into a CodeCommit repository but is available in an Amazon S3 bucket, for example to perform the initial load of the source code into git, follow the procedure for creating a virtual Windows disk described in [Tutorial: Set up Enterprise Analyzer on WorkSpaces Applications](set-up-ea.md).

**Topics**
+ [

## Image contents
](#set-up-ed-image-contents)
+ [

## Prerequisites
](#tutorial-ed-prerequisites)
+ [

## Step 1: Setup by individual Enterprise Developer users
](#tutorial-ed-step1)
+ [

## Step 2: Create the Amazon S3-based virtual folder on Windows (optional)
](#tutorial-ed-step2)
+ [

## Step 3: Clone the repository
](#tutorial-ed-step3)
+ [

## Subsequent sessions
](#tutorial-ed-step4)
+ [

## Clean up resources
](#tutorial-ed-clean)

## Prerequisites

+ One or more CodeCommit repositories loaded with the source code of the application to be maintained. The repository setup should match the requirements of the CI/CD pipeline so that both tools can work together.
+ Each user must have credentials for the CodeCommit repositories, defined by the account administrator as described in [Authentication and access control for AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control.html). The complete reference for the IAM permissions that apply to CodeCommit is in the [CodeCommit permissions reference](https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-permissions-reference.html). The administrator can define distinct IAM policies for distinct roles, with credentials specific to each role and repository, limiting each user's authorizations to the tasks that they must accomplish on a given repository. For each maintainer of a CodeCommit repository, the account administrator generates a primary user and grants that user access to the required repositories by selecting the appropriate IAM policies for CodeCommit access.
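The role-scoped IAM policies mentioned above can restrict a user to specific Git actions on a single repository. As an illustration, a policy granting pull and push on one repository might look like the following (the Region, account ID, and repository name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull",
        "codecommit:GitPush"
      ],
      "Resource": "arn:aws:codecommit:us-east-1:111122223333:MyDemoRepo"
    }
  ]
}
```

For the full list of CodeCommit actions that can appear in such policies, see the CodeCommit permissions reference.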

## Step 1: Setup by individual Enterprise Developer users


1. Obtain your IAM credentials:

   1. Connect to the AWS console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. Follow the procedure described in step 3 of [Setup for HTTPS users using Git credentials](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html) in the *AWS CodeCommit User Guide*. 

   1. Copy the CodeCommit-specific sign-in credentials that IAM generated for you, either by showing, copying, and then pasting this information into a secure file on your local computer, or by choosing **Download credentials** to download this information as a .CSV file. You need this information to connect to CodeCommit.

1. Start a WorkSpaces Applications session with the URL that you received in the welcome email. Use your email address as your user name and create your password.

1. Select your Enterprise Developer stack.

1. On the menu page, choose **Desktop** to reach the Windows desktop streamed by the fleet.

## Step 2: Create the Amazon S3-based virtual folder on Windows (optional)


If you need Rclone (see above), create the Amazon S3-based virtual folder on Windows. This step is optional if all application artifacts come exclusively from CodeCommit.

**Note**  
If you already used Rclone during the AWS Mainframe Modernization preview, you must update `m2-rclone.cmd` to the newer version located in `C:\Users\Public`.

1. Copy the `m2-rclone.conf` and `m2-rclone.cmd` files provided in `C:\Users\Public` to your home folder `C:\Users\PhotonUser\My Files\Home Folder` using File Explorer.

1. Update the `m2-rclone.conf` config parameters with your AWS access key and corresponding secret, as well as your AWS Region.

   ```
   [m2-s3]
   type = s3
   provider = AWS
   access_key_id = YOUR-ACCESS-KEY
   secret_access_key = YOUR-SECRET-KEY
   region = YOUR-REGION
   acl = private
   server_side_encryption = AES256
   ```

1. In `m2-rclone.cmd`, make the following changes:
   + Change `amzn-s3-demo-bucket` to your Amazon S3 bucket name. For example, `m2-s3-mybucket`.
   + Change `your-s3-folder-key` to your Amazon S3 bucket key. For example, `myProject`.
   + Change `your-local-folder-path` to the path of the directory where you want the application files synced from the Amazon S3 bucket that contains them. For example, `D:\PhotonUser\My Files\Home Folder\m2-new`. This synced directory must be a subdirectory of the Home Folder in order for WorkSpaces Applications to properly back up and restore it on session start and end.

   ```
   :loop
   timeout /T 10
   "C:\Program Files\rclone\rclone.exe" sync m2-s3:amzn-s3-demo-bucket/your-s3-folder-key "D:\PhotonUser\My Files\Home Folder\your-local-folder-path" --config "D:\PhotonUser\My Files\Home Folder\m2-rclone.conf"
   goto :loop
   ```

1. Open a Windows command prompt, change to `C:\Users\PhotonUser\My Files\Home Folder` if needed, and run `m2-rclone.cmd`. This command script runs in a continuous loop, syncing your Amazon S3 bucket and key to the local folder every 10 seconds. You can adjust the timeout as needed. You should see the source code of the application located in the Amazon S3 bucket in Windows File Explorer.

To add new files to the set that you are working on or to update existing ones, upload the files to the Amazon S3 bucket and they will be synced to your directory at the next iteration defined in `m2-rclone.cmd`. Similarly, if you want to delete some files, delete them from the Amazon S3 bucket. The next sync operation will delete them from your local directory.

## Step 3: Clone the repository


1. Navigate to the application selector menu in the top left corner of the browser window and select Enterprise Developer.

1. Complete the workspace creation required by Enterprise Developer in your Home folder by choosing `C:\Users\PhotonUser\My Files\Home Folder` (also known as `D:\PhotonUser\My Files\Home Folder`) as the location for the workspace.

1. In Enterprise Developer, clone your CodeCommit repository: in the Project Explorer, open the context (right-click) menu and choose **Import**, **Import...**, and then choose **Git**, **Projects from Git**, **Clone URI**. Enter your CodeCommit-specific sign-in credentials and complete the Eclipse dialog boxes to import the code.

The CodeCommit git repository is now cloned in your local workspace.

Your Enterprise Developer workspace is now ready to start the maintenance work on your application. In particular, you can use the local instance of Enterprise Server (ES) integrated with Enterprise Developer to interactively debug and run your application to validate your changes locally.

**Note**  
The local Enterprise Developer environment, including the local Enterprise Server instance, runs under Windows while AWS Mainframe Modernization runs under Linux. We recommend that you run complementary tests in the Linux environment provided by AWS Mainframe Modernization after you commit the new application to CodeCommit and rebuild it for this target and before you roll out the new application to production.

## Subsequent sessions


Because the folder that you chose when cloning your CodeCommit repository is under WorkSpaces Applications management, like the home folder, it is saved and restored transparently across sessions. Complete the following steps the next time that you need to work with the application:

1. Start a WorkSpaces Applications session with the URL that you received in the welcome email.

1. Log in with your email and permanent password.

1. Select the Enterprise Developer stack.

1. Launch `Rclone` to connect to the Amazon S3-backed disk (see above) if you use this option to share the workspace files.

1. Launch Enterprise Developer to do your work.

## Clean up resources


If you no longer need the resources that you created for this tutorial, delete them so that you don't incur further charges. Complete the following steps:
+ Delete the CodeCommit repository you created for this tutorial. For more information, see [Delete a CodeCommit repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-delete-repository.html) in the *AWS CodeCommit User Guide*.
+ Delete the database you created for this tutorial. For more information, see [Deleting a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html#CHAP_GettingStarted.Deleting.PostgreSQL).

# Available batch utilities in AWS Mainframe Modernization
Batch utilities

Mainframe applications often use batch utility programs to perform specific functions such as sorting data, transferring files using FTP, loading data into databases like DB2, unloading data from databases, and so on.

When you migrate your applications to AWS Mainframe Modernization, you need functionally equivalent replacement utilities that can perform the same tasks as the ones you used on the mainframe. Some of these utilities might already be available as part of the AWS Mainframe Modernization runtime engines. In addition, we provide the following replacement utilities:
+ M2SFTP - enables secure file transfer using SFTP protocol.
+ M2WAIT - waits for a specified amount of time before continuing with the next step in a batch job.
+ TXT2PDF - converts text files to PDF format.
+ M2DFUTIL - provides backup, restore, delete, and copy functions for data sets, similar to the support provided by the mainframe ADRDSSU utility.
+ M2RUNCMD - lets you run Rocket Software (formerly Micro Focus) commands, scripts, and system calls directly from JCL.

We developed these batch utilities based on customer feedback and designed them to provide the same functionality as the mainframe utilities. The goal is to make your transition from mainframe to AWS Mainframe Modernization as smooth as possible.

**Topics**
+ [

## Binary Location
](#location-utilities)
+ [

# M2SFTP batch utility
](m2sftp.md)
+ [

# M2WAIT batch utility
](m2wait.md)
+ [

# TXT2PDF batch utility
](txt2pdf.md)
+ [

# M2DFUTIL batch utility
](m2dfutil.md)
+ [

# M2RUNCMD batch utility
](m2runcmd.md)

## Binary Location


These utilities are preinstalled with the Rocket Enterprise Developer (ED) and Rocket Enterprise Server (ES) products. You can find them in the following locations for all variants of ED and ES:
+ Linux: `/opt/aws/m2/microfocus/utilities/64bit`
+ Windows (32 bit): `C:\AWS\M2\MicroFocus\Utilities\32bit`
+ Windows (64 bit): `C:\AWS\M2\MicroFocus\Utilities\64bit`

# M2SFTP batch utility


M2SFTP is a JCL utility program that performs secure file transfers between systems using the Secure File Transfer Protocol (SFTP). The program uses the PuTTY SFTP client, `psftp`, to perform the actual file transfers. It works similarly to a mainframe FTP utility program and uses user and password authentication.

**Note**  
Public key authentication is not supported.

To convert your mainframe FTP JCLs to use SFTP, change `PGM=FTP` to `PGM=M2SFTP`.
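For a large JCL inventory, this one-line change can be scripted; a minimal sketch in Python (the `\b` word boundary keeps the substitution from touching longer program names such as `FTPX`):

```python
import re

def ftp_to_sftp(jcl: str) -> str:
    """Rewrite EXEC statements that call FTP so they call M2SFTP instead."""
    # \b keeps the match from rewriting longer program names such as FTPX.
    return re.sub(r"PGM=FTP\b", "PGM=M2SFTP", jcl)

step = "//STEP01 EXEC PGM=FTP,PARM='127.0.0.1'"
print(ftp_to_sftp(step))
# //STEP01 EXEC PGM=M2SFTP,PARM='127.0.0.1'
```

You could apply this function across every `.jcl` file in a migration staging directory before verifying the jobs under Enterprise Server.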

**Topics**
+ [

## Supported platforms
](#m2sftp-platforms)
+ [

## Installing dependencies
](#m2sftp-dependencies)
+ [

## Configure M2SFTP for AWS Mainframe Modernization Managed
](#m2sftp-configure-managed)
+ [

## Configure M2SFTP for AWS Mainframe Modernization runtime on Amazon EC2 (including WorkSpaces Applications)
](#m2sftp-configure-customer-infra)
+ [

## Sample JCLs
](#m2sftp-jcl)
+ [

## PuTTY SFTP (PSFTP) client command reference
](#m2sftp-cmd-ref)
+ [

## Next steps
](#m2sftp-next)

## Supported platforms


You can use M2SFTP on any of the following platforms:
+ AWS Mainframe Modernization Rocket Software (formerly Micro Focus) Managed
+ Rocket Software Runtime (on Amazon EC2)
+ All variants of Rocket Software Enterprise Developer (ED) and Rocket Software Enterprise Server (ES) products.

## Installing dependencies


**To install the PuTTY SFTP client on Windows**
+ Download the [PuTTY SFTP](https://www.putty.org/) client and install it.

**To install the PuTTY SFTP client on Linux**
+ Run the following command to install the Putty SFTP client:

  ```
  sudo yum -y install putty
  ```

## Configure M2SFTP for AWS Mainframe Modernization Managed


If your migrated applications are running on AWS Mainframe Modernization Managed, configure M2SFTP as follows.
+ Set the appropriate Rocket Enterprise Server environment variables for MFFTP. Here are a few examples:
  + `MFFTP_TEMP_DIR`
  + `MFFTP_SENDEOL`
  + `MFFTP_TIME`
  + `MFFTP_ABEND`

  You can set as few or as many of these variables as you want. You can set them in your JCL using the `ENVVAR DD` statement. For more information about these variables, see [MFFTP Control Variables](https://www.microfocus.com/documentation/enterprise-developer/ed80/ED-Eclipse/GUID-3F94BBC8-CB97-4642-A4A7-4235C0C079E2.html) in the Micro Focus documentation.

To test your configuration, see [Sample JCLs](#m2sftp-jcl).

## Configure M2SFTP for AWS Mainframe Modernization runtime on Amazon EC2 (including WorkSpaces Applications)


If your migrated applications are running on AWS Mainframe Modernization runtime on Amazon EC2, configure M2SFTP as follows.

1. Change the [Micro Focus JES Program Path](https://www.microfocus.com/documentation/enterprise-developer/ed80/ED-Eclipse/GUID-BC8A1796-9EDE-48EB-8363-31C9BDE7F96B.html) to include the binary location for batch utilities. If you need to specify multiple paths, use colons (`:`) to separate paths on Linux and semicolons (`;`) on Windows.
   + Linux: `/opt/aws/m2/microfocus/utilities/64bit`
   + Windows (32bit): `C:\AWS\M2\MicroFocus\Utilities\32bit`
   + Windows (64bit): `C:\AWS\M2\MicroFocus\Utilities\64bit`

1. Set the appropriate Rocket Enterprise Server environment variables for MFFTP. Here are a few examples:
   + `MFFTP_TEMP_DIR`
   + `MFFTP_SENDEOL`
   + `MFFTP_TIME`
   + `MFFTP_ABEND`

   You can set as few or as many of these variables as you want. You can set them in your JCL using the `ENVVAR DD` statement. For more information about these variables, see [MFFTP Control Variables](https://www.microfocus.com/documentation/enterprise-developer/ed80/ED-Eclipse/GUID-3F94BBC8-CB97-4642-A4A7-4235C0C079E2.html) in the Micro Focus documentation.

To test your configuration, see [Sample JCLs](#m2sftp-jcl).
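The path-separator rule in step 1 (colons on Linux, semicolons on Windows) matches Python's `os.pathsep`; a minimal sketch of assembling a multi-path value portably (the second path is hypothetical):

```python
import os

# Paths to include in the JES program path; the second one is hypothetical.
utility_paths = [
    "/opt/aws/m2/microfocus/utilities/64bit",
    "/opt/custom/batch/bin",
]

# os.pathsep is ":" on Linux and ";" on Windows, matching the JES rule above.
jes_program_path = os.pathsep.join(utility_paths)
print(jes_program_path)
```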

## Sample JCLs


To test the installation, you can use either of the following sample JCL files.

**M2SFTP1.jcl**

This JCL shows how to call M2SFTP to send a file to a remote SFTP server. Notice the environment variables that are set in the `ENVVAR DD` statement.

```
//M2SFTP1 JOB 'M2SFTP1',CLASS=A,MSGCLASS=X,TIME=1440
//*
//* Copyright Amazon.com, Inc. or its affiliates.* 
//* All Rights Reserved.* 
//*
//*-------------------------------------------------------------------**
//* Sample SFTP JCL step to send a file to SFTP server* 
//*-------------------------------------------------------------------**
//*
//STEP01 EXEC PGM=M2SFTP,
//            PARM='127.0.0.1 (EXIT=99 TIMEOUT 300'
//*
//SYSFTPD  DD  *
RECFM FB
LRECL 80
SBSENDEOL CRLF
MBSENDEOL CRLF
TRAILINGBLANKS FALSE
/*
//NETRC    DD  *
machine 127.0.0.1 login sftpuser password sftppass
/*
//SYSPRINT DD  SYSOUT=*
//OUTPUT   DD  SYSOUT=*
//STDOUT   DD  SYSOUT=*
//INPUT    DD  *
type a
locsite notrailingblanks
cd files
put 'AWS.M2.TXT2PDF1.PDF' AWS.M2.TXT2PDF1.pdf   
put 'AWS.M2.CARDDEMO.CARDDATA.PS' AWS.M2.CARDDEMO.CARDDATA.PS1.txt
quit
/*
//ENVVAR   DD *
MFFTP_VERBOSE_OUTPUT=ON
MFFTP_KEEP=N
/*
//*
//
```

**M2SFTP2.jcl**

This JCL shows how to call M2SFTP to receive a file from a remote SFTP server. Notice the environment variables set in the `ENVVAR DD` statement.

```
//M2SFTP2 JOB 'M2SFTP2',CLASS=A,MSGCLASS=X,TIME=1440
//*
//* Copyright Amazon.com, Inc. or its affiliates.* 
//* All Rights Reserved.* 
//*
//*-------------------------------------------------------------------**
//* Sample SFTP JCL step to receive a file from SFTP server* 
//*-------------------------------------------------------------------**
//*
//STEP01 EXEC PGM=M2SFTP
//*
//SYSPRINT DD  SYSOUT=*
//OUTPUT   DD  SYSOUT=*
//STDOUT   DD  SYSOUT=*
//INPUT    DD  *
open 127.0.0.1
sftpuser
sftppass
cd files
locsite recfm=fb lrecl=150
get AWS.M2.CARDDEMO.CARDDATA.PS.txt +
'AWS.M2.CARDDEMO.CARDDATA.PS2' (replace
quit
/*
//ENVVAR   DD *
MFFTP_VERBOSE_OUTPUT=ON
MFFTP_KEEP=N
/*
//*
//
```

**Note**  
We strongly recommend storing FTP credentials in a NETRC file and restricting access to only authorized users.
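On Linux, one way to follow this recommendation is to restrict the host file backing the NETRC data set to its owner; a minimal sketch (the file name and location are illustrative):

```python
import os
import stat
import tempfile

# Write a NETRC-style credentials file and lock it down to the owning user.
path = os.path.join(tempfile.mkdtemp(), "netrc")
with open(path, "w") as f:
    f.write("machine 127.0.0.1 login sftpuser password sftppass\n")
os.chmod(path, 0o600)  # owner read/write only

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```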

## PuTTY SFTP (PSFTP) client command reference


The PSFTP client does not support all FTP commands. The following table shows the commands that PSFTP does support.


| Command | Description | 
| --- | --- | 
|  !  |  Run a local command  | 
|  bye  |  Finish your SFTP session  | 
|  cd  |  Change your remote working directory  | 
|  chmod  |  Change file permissions and modes  | 
|  close  |  Finish your SFTP session but do not quit PSFTP  | 
|  del  |  Delete files on the remote server  | 
|  dir  |  List remote files  | 
|  exit  |  Finish your SFTP session  | 
|  get  |  Download a file from the server to your local machine  | 
|  help  |  Give help  | 
|  lcd  |  Change local working directory  | 
|  lpwd  |  Print local working directory  | 
|  ls  |  List remote files  | 
|  mget  |  Download multiple files at once  | 
|  mkdir  |  Create directories on the remote server  | 
|  mput  |  Upload multiple files at once  | 
|  mv  |  Move or rename file(s) on the remote server  | 
|  open  |  Connect to a host  | 
|  put  |  Upload a file from your local machine to the server  | 
|  pwd  |  Print your remote working directory  | 
|  quit  |  Finish your SFTP session  | 
|  reget  |  Continue downloading files  | 
|  ren  |  Move or rename file(s) on the remote server  | 
|  reput  |  Continue uploading files  | 
|  rm  |  Delete files on the remote server  | 
|  rmdir  |  Remove directories on the remote server  | 

## Next steps


To upload files to and download files from Amazon Simple Storage Service (Amazon S3) using SFTP, you can use M2SFTP in conjunction with AWS Transfer Family, as described in the following blog posts.
+ [Using AWS SFTP logical directories to build a simple data distribution service](https://aws.amazon.com/blogs/storage/using-aws-sftp-logical-directories-to-build-a-simple-data-distribution-service/)
+ [Enable password authentication for AWS Transfer for SFTP using AWS Secrets Manager](https://aws.amazon.com/blogs/storage/enable-password-authentication-for-aws-transfer-for-sftp-using-aws-secrets-manager/)

# M2WAIT batch utility


M2WAIT is a mainframe utility program that enables you to introduce a wait period in your JCL scripts by specifying a time duration in seconds, minutes, or hours. You can call M2WAIT directly from JCL by passing the time you want to wait as an input parameter. Internally, the M2WAIT program calls the Rocket Software (formerly Micro Focus) supplied module `C$SLEEP` to wait for a specified time.

**Note**  
You can use Micro Focus aliases to replace the program names that you already have in your JCL scripts. For more information, see [JES Alias](https://www.microfocus.com/documentation/enterprise-developer/ed80/ED-Eclipse/GUID-D4206FF9-32C4-43E7-9413-5E7E96AA8092.html) in the Micro Focus documentation.

**Topics**
+ [

## Supported platforms
](#m2wait-platforms)
+ [

## Configure M2WAIT for AWS Mainframe Modernization Managed
](#m2wait-configure-managed)
+ [

## Configure M2WAIT for AWS Mainframe Modernization runtime on Amazon EC2 (including WorkSpaces Applications)
](#m2wait-configure-customer-infra)
+ [

## Sample JCL
](#m2wait-jcl)

## Supported platforms


You can use M2WAIT on any of the following platforms:
+ AWS Mainframe Modernization Rocket Software (formerly Micro Focus) Managed
+ Rocket Software Runtime (on Amazon EC2)
+ All variants of Rocket Software Enterprise Developer (ED) and Rocket Software Enterprise Server (ES) products.

## Configure M2WAIT for AWS Mainframe Modernization Managed


If your migrated applications are running on AWS Mainframe Modernization Managed, configure M2WAIT as follows.
+ Use the M2WAIT program in your JCL by passing an input parameter as shown in [Sample JCL](#m2wait-jcl).

## Configure M2WAIT for AWS Mainframe Modernization runtime on Amazon EC2 (including WorkSpaces Applications)


If your migrated applications are running on AWS Mainframe Modernization runtime on Amazon EC2, configure M2WAIT as follows.

1. Change the [Micro Focus JES Program Path](https://www.microfocus.com/documentation/enterprise-developer/ed80/ED-Eclipse/GUID-BC8A1796-9EDE-48EB-8363-31C9BDE7F96B.html) to include the binary location for batch utilities. If you need to specify multiple paths, use colons (`:`) to separate paths on Linux and semicolons (`;`) on Windows.
   + Linux: `/opt/aws/m2/microfocus/utilities/64bit`
   + Windows (32bit): `C:\AWS\M2\MicroFocus\Utilities\32bit`
   + Windows (64bit): `C:\AWS\M2\MicroFocus\Utilities\64bit`

1. Use the M2WAIT program in your JCL by passing an input parameter as shown in [Sample JCL](#m2wait-jcl).

## Sample JCL


To test the installation, you can use the `M2WAIT1.jcl` program.

This sample JCL shows how to call M2WAIT and pass it several different durations.

```
//M2WAIT1 JOB 'M2WAIT',CLASS=A,MSGCLASS=X,TIME=1440
//*
//* Copyright Amazon.com, Inc. or its affiliates.* 
//* All Rights Reserved.* 
//*
//*-------------------------------------------------------------------**
//* Wait for 12 Seconds*
//*-------------------------------------------------------------------**
//*
//STEP01 EXEC PGM=M2WAIT,PARM='S012'
//SYSOUT DD SYSOUT=*
//*
//*-------------------------------------------------------------------**
//* Wait for 0 Seconds (defaulted to 10 Seconds)*
//*-------------------------------------------------------------------**
//*
//STEP02 EXEC PGM=M2WAIT,PARM='S000'
//SYSOUT DD SYSOUT=*
//*
//*-------------------------------------------------------------------**
//* Wait for 1 Minute*
//*-------------------------------------------------------------------**
//*
//STEP03 EXEC PGM=M2WAIT,PARM='M001'
//SYSOUT DD SYSOUT=*
//*
//
```
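In the sample above, each PARM encodes a unit letter (`S`, `M`, or `H` for seconds, minutes, or hours) followed by a three-digit duration, and `S000` falls back to a 10-second default. A minimal Python sketch of how such a parameter could be interpreted (an illustration of the sample's documented behavior, not M2WAIT's actual source):

```python
def wait_seconds(parm: str) -> int:
    """Interpret an M2WAIT-style PARM such as 'S012', 'M001', or 'H001'."""
    unit, digits = parm[0], int(parm[1:4])
    seconds = digits * {"S": 1, "M": 60, "H": 3600}[unit]
    # The sample JCL notes that a zero duration defaults to 10 seconds.
    return seconds if seconds > 0 else 10

print(wait_seconds("S012"))  # 12
print(wait_seconds("M001"))  # 60
print(wait_seconds("S000"))  # 10
```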

# TXT2PDF batch utility


TXT2PDF is a mainframe utility program commonly used to convert a text file to a PDF file. This utility is based on the source code of the TXT2PDF z/OS freeware utility, which we modified to run under the AWS Mainframe Modernization Rocket Software (formerly Micro Focus) runtime environment.

**Topics**
+ [

## Supported platforms
](#txt2pdf-platforms)
+ [

## Configure TXT2PDF for AWS Mainframe Modernization Managed
](#txt2pdf-configure-managed)
+ [

## Configure TXT2PDF for AWS Mainframe Modernization runtime on Amazon EC2 (including WorkSpaces Applications)
](#txt2pdf-configure-customer-infra)
+ [

## Sample JCL
](#txt2pdf-jcl)
+ [

## Modifications
](#txt2pdf-mods)
+ [

## References
](#txt2pdf-ref)

## Supported platforms


You can use TXT2PDF on any of the following platforms:
+ AWS Mainframe Modernization Rocket Software Managed
+ Rocket Software Runtime (on Amazon EC2)
+ All variants of Rocket Enterprise Developer (ED) and Rocket Enterprise Server (ES) products.

## Configure TXT2PDF for AWS Mainframe Modernization Managed


If your migrated applications are running on AWS Mainframe Modernization Managed, configure TXT2PDF as follows.
+ Create a REXX EXEC library called `AWS.M2.REXX.EXEC`. Download these [REXX modules](https://drm0z31ua8gi7.cloudfront.net/utilities/mf/TXT2PDF/rexx/TXT2PDF_rexx.zip) and copy them into the library. 
  + `TXT2PDF.rex` - TXT2PDF z/OS freeware (modified)
  + `TXT2PDFD.rex` - TXT2PDF z/OS freeware (unmodified)
  + `TXT2PDFX.rex` - TXT2PDF z/OS freeware (modified)
  + `M2GETOS.rex` - To check the OS type (Windows or Linux)

To test your configuration, see [Sample JCL](#txt2pdf-jcl).

## Configure TXT2PDF for AWS Mainframe Modernization runtime on Amazon EC2 (including WorkSpaces Applications)


If your migrated applications are running on AWS Mainframe Modernization runtime on Amazon EC2, configure TXT2PDF as follows.

1. Set the Rocket Software environment variable `MFREXX_CHARSET` to the appropriate value, such as `A` for ASCII data.
**Important**  
Entering the wrong value could cause data conversion issues (from EBCDIC to ASCII), making the resulting PDF unreadable or inoperable. We recommend setting `MFREXX_CHARSET` to match `MF_CHARSET`.

1. Change the [Micro Focus JES Program Path](https://www.microfocus.com/documentation/enterprise-developer/ed80/ED-Eclipse/GUID-BC8A1796-9EDE-48EB-8363-31C9BDE7F96B.html) to include the binary location for batch utilities. If you need to specify multiple paths, use colons (`:`) to separate paths on Linux and semicolons (`;`) on Windows.
   + Linux: `/opt/aws/m2/microfocus/utilities/64bit`
   + Windows (32bit): `C:\AWS\M2\MicroFocus\Utilities\32bit`
   + Windows (64bit): `C:\AWS\M2\MicroFocus\Utilities\64bit`

1. Create a REXX EXEC library called `AWS.M2.REXX.EXEC`. Download these [REXX modules](https://drm0z31ua8gi7.cloudfront.net/utilities/mf/TXT2PDF/rexx/TXT2PDF_rexx.zip) and copy them into the library. 
   + `TXT2PDF.rex` - TXT2PDF z/OS freeware (modified)
   + `TXT2PDFD.rex` - TXT2PDF z/OS freeware (unmodified)
   + `TXT2PDFX.rex` - TXT2PDF z/OS freeware (modified)
   + `M2GETOS.rex` - To check the OS type (Windows or Linux)

To test your configuration, see [Sample JCL](#txt2pdf-jcl).

## Sample JCL


To test the installation, you can use either of the following sample JCL files.

**TXT2PDF1.jcl**

This sample JCL file uses a DD name for the TXT2PDF conversion.

```
//TXT2PDF1 JOB 'TXT2PDF1',CLASS=A,MSGCLASS=X,TIME=1440
//*
//* Copyright Amazon.com, Inc. or its affiliates.* 
//* All Rights Reserved.* 
//*
//*-------------------------------------------------------------------**
//* PRE DELETE*
//*-------------------------------------------------------------------**
//*
//PREDEL  EXEC PGM=IEFBR14
//* 
//DD01     DD DSN=AWS.M2.TXT2PDF1.PDF.VB,                      
//            DISP=(MOD,DELETE,DELETE)
//*
//DD02     DD DSN=AWS.M2.TXT2PDF1.PDF,                       
//            DISP=(MOD,DELETE,DELETE)
//* 
//*-------------------------------------------------------------------**
//* CALL TXT2PDF TO CONVERT FROM TEXT TO PDF (VB)*
//*-------------------------------------------------------------------**
//*
//STEP01 EXEC PGM=IKJEFT1B
//*
//SYSEXEC  DD DISP=SHR,DSN=AWS.M2.REXX.EXEC
//*
//INDD     DD *
1THIS IS THE FIRST LINE ON THE PAGE 1
0THIS IS THE THIRD LINE ON THE PAGE 1
-THIS IS THE   6TH LINE ON THE PAGE 1
THIS IS THE   7TH LINE ON THE PAGE 1
+____________________________________ - OVERSTRIKE 7TH LINE          
1THIS IS THE FIRST LINE ON THE PAGE 2
0THIS IS THE THIRD LINE ON THE PAGE 2
-THIS IS THE   6TH LINE ON THE PAGE 2 
THIS IS THE   7TH LINE ON THE PAGE 2
+____________________________________ - OVERSTRIKE 7TH LINE                 
/*
//*
//OUTDD    DD DSN=AWS.M2.TXT2PDF1.PDF.VB,
//            DISP=(NEW,CATLG,DELETE),
//            DCB=(LRECL=256,DSORG=PS,RECFM=VB,BLKSIZE=0)
//*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD DDNAME=SYSIN
//*
//SYSIN    DD *
%TXT2PDF BROWSE Y IN DD:INDD +
OUT DD:OUTDD +
CC YES
/*
//*
//*-------------------------------------------------------------------**
//* CONVERT PDF (VB) TO PDF (LSEQ - BYTE STREAM)*
//*-------------------------------------------------------------------**
//* 
//STEP02 EXEC PGM=VB2LSEQ
//*
//INFILE   DD DSN=AWS.M2.TXT2PDF1.PDF.VB,DISP=SHR             
//*
//OUTFILE  DD DSN=AWS.M2.TXT2PDF1.PDF,                      
//            DISP=(NEW,CATLG,DELETE),
//            DCB=(LRECL=256,DSORG=PS,RECFM=LSEQ,BLKSIZE=0)
//*
//SYSOUT   DD SYSOUT=*
//*
//
```

**TXT2PDF2.jcl**

This sample JCL uses a DSN name for the TXT2PDF conversion.

```
//TXT2PDF2 JOB 'TXT2PDF2',CLASS=A,MSGCLASS=X,TIME=1440
//*
//* Copyright Amazon.com, Inc. or its affiliates.* 
//* All Rights Reserved.* 
//*
//*-------------------------------------------------------------------**
//* PRE DELETE*
//*-------------------------------------------------------------------**
//*
//PREDEL  EXEC PGM=IEFBR14
//* 
//DD01     DD DSN=AWS.M2.TXT2PDF2.PDF.VB,                      
//            DISP=(MOD,DELETE,DELETE)
//*
//DD02     DD DSN=AWS.M2.TXT2PDF2.PDF,                       
//            DISP=(MOD,DELETE,DELETE)
//* 
//*-------------------------------------------------------------------**
//* CALL TXT2PDF TO CONVERT FROM TEXT TO PDF (VB)*
//*-------------------------------------------------------------------**
//* 
//STEP01 EXEC PGM=IKJEFT1B
//*
//SYSEXEC  DD DISP=SHR,DSN=AWS.M2.REXX.EXEC
//*
//INDD     DD *
1THIS IS THE FIRST LINE ON THE PAGE 1
0THIS IS THE THIRD LINE ON THE PAGE 1
-THIS IS THE   6TH LINE ON THE PAGE 1
THIS IS THE   7TH LINE ON THE PAGE 1
+____________________________________ - OVERSTRIKE 7TH LINE          
1THIS IS THE FIRST LINE ON THE PAGE 2
0THIS IS THE THIRD LINE ON THE PAGE 2
-THIS IS THE   6TH LINE ON THE PAGE 2 
THIS IS THE   7TH LINE ON THE PAGE 2
+____________________________________ - OVERSTRIKE 7TH LINE                 
/*
//*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD DDNAME=SYSIN
//*
//SYSIN    DD *
%TXT2PDF BROWSE Y IN DD:INDD +
OUT 'AWS.M2.TXT2PDF2.PDF.VB' +
CC YES
/*
//*
//*-------------------------------------------------------------------**
//* CONVERT PDF (VB) TO PDF (LSEQ - BYTE STREAM)*
//*-------------------------------------------------------------------**
//*
//STEP02 EXEC PGM=VB2LSEQ
//*
//INFILE   DD DSN=AWS.M2.TXT2PDF2.PDF.VB,DISP=SHR             
//*
//OUTFILE  DD DSN=AWS.M2.TXT2PDF2.PDF,                      
//            DISP=(NEW,CATLG,DELETE),
//            DCB=(LRECL=256,DSORG=PS,RECFM=LSEQ,BLKSIZE=0)
//*
//SYSOUT   DD SYSOUT=*
//*
//
```

## Modifications


To make the TXT2PDF program run on the AWS Mainframe Modernization Rocket Software runtime environment, we made the following changes:
+ Changes to the source code to ensure compatibility with the Rocket Software REXX runtime
+ Changes to ensure that the program can run on both Windows and Linux operating systems
+ Modifications to support both EBCDIC and ASCII runtime

## References


TXT2PDF references and source code:
+ [Text to PDF converter](https://homerow.net/rexx/txt2pdf/)
+ [z/OS Freeware TCP/IP and Mail Tools](http://www.lbdsoftware.com/tcpip.html)
+ [TXT2PDF User Reference Guide](http://www.lbdsoftware.com/TXT2PDF-User-Guide.pdf)

# M2DFUTIL batch utility


M2DFUTIL is a JCL utility program that provides backup, restore, delete, and copy functions for data sets, similar to the support that the mainframe ADRDSSU utility provides. The program retains many of the ADRDSSU SYSIN parameters, which streamlines migration to this new utility.

**Topics**
+ [

## Supported platforms
](#m2dfutil-platforms)
+ [

## Platform requirements
](#m2dfutil-platform)
+ [

## Planned future support
](#m2udfutil-future-support)
+ [

## Asset locations
](#mdfutil-assets)
+ [

## Configure M2DFUTIL for AWS Mainframe Modernization runtime on Amazon EC2 (including AppStream 2.0)
](#mdfutil-dependencies)
+ [

## General syntax
](#mdfutil-syntax)
+ [

## Sample JCLs
](#mdfutil-sample-jcls)

## Supported platforms


You can use M2DFUTIL on any of the following platforms:
+ Rocket Software (formerly Micro Focus) ES on Windows (64 bit and 32 bit)
+ Rocket Software ES on Linux (64 bit)

## Platform requirements


M2DFUTIL depends on calling a script to perform a regular expression test. On Windows, you must install Windows Subsystem for Linux (WSL) for this script to run.
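Conceptually, such a test resembles translating a DSN mask into a regular expression. The following is a hypothetical shell sketch of that idea, not the shipped `compare.sh`, and the matching rules it assumes (`*` matches within a single qualifier, `.` separates qualifiers) are illustrative:

```
#!/bin/bash
# Hypothetical sketch of a DSN-mask comparison; NOT the shipped compare.sh.
# Assumes '*' matches within a single qualifier and '.' separates qualifiers.
dsn_match() {
  mask=$1; dsn=$2
  # Escape literal dots, then turn '*' into a "no dots" wildcard.
  re="^$(echo "$mask" | sed -e 's/\./\\./g' -e 's/\*/[^.]*/g')$"
  [[ $dsn =~ $re ]]
}

dsn_match 'TEST.FB.FILE*.ABC' 'TEST.FB.FILE1.ABC' && echo "match"
dsn_match 'TEST.FB.FILE*.ABC' 'TEST.VB.FILE1.ABC' || echo "no match"
```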

## Planned future support


The following features are not currently available in M2DFUTIL, but are planned for future support:
+ M2 Managed
+ VSAM
+ COPY support for file name renaming
+ RENAME support for RESTORE 
+ Multiple INCLUDE and EXCLUDE
+ BY clause for subselecting by DSORG, CREDT, EXPDT
+ MWAIT clause to retry enqueue failures
+ S3 storage support for DUMP/RESTORE

## Asset locations


The load module for this utility is called `M2DFUTIL.so` on Linux and `M2DFUTIL.dll` on Windows. This load module can be found in the following locations:
+ Linux: `/opt/aws/m2/microfocus/utilities/64bit`
+ Windows (32 bit): `C:\AWS\M2\MicroFocus\Utilities\32bit`
+ Windows (64 bit): `C:\AWS\M2\MicroFocus\Utilities\64bit`

The script used for regular expression testing is called `compare.sh`. This script can be found in the following locations:
+ Linux: `/opt/aws/m2/microfocus/utilities/scripts`
+ Windows (32 bit): `C:\AWS\M2\MicroFocus\Utilities\scripts`

## Configure M2DFUTIL for AWS Mainframe Modernization runtime on Amazon EC2 (including AppStream 2.0)


Configure your Enterprise Server region with the following:
+ Add the following variables in **[ES-Environment]**
  + `M2DFUTILS_BASE_LOC` - The default location for DUMP output
  + `M2DFUTILS_SCRIPTPATH` - The location of the `compare.sh` script documented in **Asset Locations**
  + `M2DFUTILS_VERBOSE` - [VERBOSE or NORMAL]. This controls the level of detail in the `SYSPRINT` output
+ Verify that the load module path is added to the `JES\Configuration\JES Program Path` setting
+ Verify that the scripts in the utilities directory have run permissions. In a Linux environment, you can add run permission with the `chmod +x <script name>` command
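Taken together, the `[ES-Environment]` additions might look like the following sketch; the base location path is an example value, not a shipped default:

```
[ES-Environment]
M2DFUTILS_BASE_LOC=/opt/aws/m2/backups
M2DFUTILS_SCRIPTPATH=/opt/aws/m2/microfocus/utilities/scripts
M2DFUTILS_VERBOSE=NORMAL
```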

## General syntax


### DUMP


Provides the ability to copy files from their current cataloged location to a backup location. Currently, this backup location must be a file system.

#### Process


DUMP will perform the following:

1. Create the target location directory.

1. Catalog the target location directory as a PDS member.

1. Determine the files to be included by processing the INCLUDE parameter.

1. Deselect included files by processing the EXCLUDE parameter.

1. Determine if the files being dumped are to be DELETED.

1. Enqueue the files to be processed.

1. Copy the files.

1. Export the copied files' cataloged DCB information to a side file in the target location to assist with future RESTORE operations.
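Outside of JES, this flow can be sketched in shell. The sketch below is illustrative only, not the utility's implementation: step 2 (cataloging the directory as a PDS) and step 5 (the DELETE check) are omitted, and the DCB values written to the side file are placeholders.

```
#!/bin/bash
# Illustrative sketch of the DUMP flow; not the actual M2DFUTIL implementation.
SRC=$(mktemp -d)                 # stand-in for the cataloged files' location
BASE_LOC=$(mktemp -d)            # stand-in for M2DFUTILS_BASE_LOC
TARGET="$BASE_LOC/TESTDUMP"

# Sample "cataloged" data sets to dump.
touch "$SRC/TEST.FB.FILE1.ABC" "$SRC/TEST.FB.FILE2.ABC" "$SRC/TEST.FB.FILE9.ABC"

# Step 1: create the target location directory.
mkdir -p "$TARGET"

# Steps 3-4: select files matching INCLUDE, then drop EXCLUDE matches.
EXCLUDE='TEST.FB.FILE9.ABC'
for f in "$SRC"/TEST.FB.FILE*.ABC; do
  name=$(basename "$f")
  if [ "$name" != "$EXCLUDE" ]; then
    # Steps 6-7: copy each selected file (enqueue handling is omitted).
    cp "$f" "$TARGET/$name"
    # Step 8: record DCB-style info in a side file to assist a later RESTORE.
    echo "$name RECFM=FB LRECL=80" >> "$TARGET/CATDUMP.DAT"
  fi
done
```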

#### Syntax


```
DUMP
TARGET ( TARGET LOCATION )
INCLUDE ( DSN )
[ EXCLUDE ( DSN ) ]
[ CANCEL | IGNORE ]
[ DELETE ]
```

#### Required parameters


Following are the required parameters for DUMP:
+ `SYSPRINT DD NAME` - To contain additional logging information
+ `TARGET` - Target location. It can be either:
  + Full path of the dump location
  + Subdirectory name created in the location defined in the `M2DFUTILS_BASE_LOC` variable
+ `INCLUDE` - Either a single named DSNAME or a valid mainframe DSN search string
+ `EXCLUDE` - Either a single named DSNAME or a valid mainframe DSN search string

#### Optional parameters

+ CANCEL - Cancel if any error occurs. Files that were processed will be retained
+ (Default) IGNORE - Ignore any error and process until end
+ DELETE - If no ENQ error occurs, then the file is deleted and is uncataloged

### DELETE


Provides the ability to mass delete and uncatalog files. Files are not backed up.

#### Process


DELETE will perform the following:

1. Determine the files to be included by processing the INCLUDE parameter.

1. Deselect included files by processing the EXCLUDE parameter.

1. Enqueue the files to be processed, setting the disposition to OLD,DELETE,KEEP.

#### Syntax


```
DELETE
INCLUDE ( DSN )
[ EXCLUDE ( DSN ) ]
[ CANCEL | IGNORE ]
[ DELETE ]
```

#### Required parameters


Following are the required parameters for DELETE:
+ `SYSPRINT DD NAME` - To contain additional logging information
+ `INCLUDE` - Either a single named DSNAME or a valid mainframe DSN search string 
+ `EXCLUDE` - Either a single named DSNAME or a valid mainframe DSN search string 

#### Optional parameters

+ CANCEL - Cancel if any error occurs. Files that were processed will be retained
+ (Default) IGNORE - Ignore any error and process until end

### RESTORE


Provides the ability to restore files previously backed up using DUMP. Files are restored to the original cataloged location unless RENAME is used to alter the restored DSNAME.

#### Process


RESTORE will perform the following:

1. Validate the source location directory.

1. Determine the files to be included by processing the catalog export file.

1. Deselect included files by processing the EXCLUDE parameter.

1. Enqueue the files to be processed.

1. Catalog files that aren't cataloged based on their export information.

1. If a file is already cataloged and the export catalog information is the same, RESTORE will replace the cataloged dataset if the REPLACE option is set.
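The catalog check in the last two steps can be sketched as follows. This is an illustrative shell sketch with simplified placeholder catalog records, not the utility's implementation:

```
#!/bin/bash
# Illustrative sketch of the RESTORE catalog/REPLACE logic; not the actual
# M2DFUTIL implementation. Catalog records here are simplified placeholders.
BACKUP=$(mktemp -d)              # stand-in for the DUMP target location
CATALOG=$(mktemp -d)             # stand-in for the cataloged files' location
REPLACE=YES                      # corresponds to the REPLACE option

# A dumped file plus its exported catalog record from a prior DUMP.
echo "restored data" > "$BACKUP/TEST.FB.FILE1.ABC"
echo "TEST.FB.FILE1.ABC RECFM=FB LRECL=80" > "$BACKUP/CATDUMP.DAT"

# The same data set already cataloged, with matching catalog information.
echo "old data" > "$CATALOG/TEST.FB.FILE1.ABC"
CATALOG_REC="TEST.FB.FILE1.ABC RECFM=FB LRECL=80"

while read -r rec; do
  name=${rec%% *}
  if [ ! -f "$CATALOG/$name" ]; then
    # Step 5: catalog (here, copy in) files that aren't cataloged yet.
    cp "$BACKUP/$name" "$CATALOG/$name"
  elif [ "$rec" = "$CATALOG_REC" ] && [ "$REPLACE" = "YES" ]; then
    # Step 6: replace only when the records match and REPLACE is set.
    cp "$BACKUP/$name" "$CATALOG/$name"
  fi
done < "$BACKUP/CATDUMP.DAT"
```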

#### Syntax


```
RESTORE
SOURCE ( TARGET LOCATION ) 
INCLUDE ( DSN )
[ EXCLUDE ( DSN ) ]
[ CANCEL | IGNORE ]
[ REPLACE ]
```

#### Required parameters


Following are the required parameters for RESTORE:
+ `SYSPRINT DD NAME` - To contain additional logging information 
+ `SOURCE` - Source location. It can be either:
  + Full path of the dump location
  + Subdirectory name created in the location defined in the `M2DFUTILS_BASE_LOC` variable
+ `INCLUDE` - Either a single named DSNAME or a valid mainframe DSN search string
+ `EXCLUDE` - Either a single named DSNAME or a valid mainframe DSN search string

#### Optional parameters

+ CANCEL - Cancel if any error occurs. Files that were processed will be retained
+ (Default) IGNORE - Ignore any error and process until end
+ REPLACE - If the file being restored is already cataloged and the catalog records are the same, then replace the cataloged file

## Sample JCLs


 **DUMP job**

This job will create a subdirectory called `TESTDUMP` in the default backup location specified by the `M2DFUTILS_BASE_LOC` variable. It will create a PDS library for this backup called `M2DFUTILS.TESTDUMP`. The exported catalog data is stored in a line sequential file called `CATDUMP.DAT` in the backup directory. All selected files are copied to this backup directory.

```
//M2DFDMP JOB 'M2DFDMP',CLASS=A,MSGCLASS=X
//STEP001  EXEC PGM=M2DFUTIL
//SYSPRINT DD DSN=TESTDUMP.SYSPRINT,
//        DISP=(NEW,CATLG,DELETE),
//        DCB=(RECFM=LSEQ,LRECL=256)
//SYSIN    DD *
DUMP TARGET(TESTDUMP)               -
     INCLUDE(TEST.FB.FILE*.ABC)     -
 CANCEL
/*
//
```

 **DELETE job**

This job will delete all files from the catalog that match the INCLUDE parameter.

```
//M2DFDEL JOB 'M2DFDEL',CLASS=A,MSGCLASS=X
//STEP001  EXEC PGM=M2DFUTIL
//SYSPRINT DD DSN=TESTDEL.SYSPRINT,
//        DISP=(NEW,CATLG,DELETE),
//        DCB=(RECFM=LSEQ,LRECL=256)
//SYSIN    DD *
  DELETE                               -
     INCLUDE(TEST.FB.FILE*.ABC)        -
 CANCEL
/*
//
```

 **RESTORE job**

This job will restore the files that match the INCLUDE parameter from the `TESTDUMP` backup location. Files that are cataloged will be replaced if the cataloged file is the same as the one in the CATDUMP export and the REPLACE option is specified.

```
//M2DFREST JOB 'M2DFREST',CLASS=A,MSGCLASS=X
//STEP001  EXEC PGM=M2DFUTIL
//SYSPRINT DD DSN=TESTREST.SYSPRINT,
//        DISP=(NEW,CATLG,DELETE),
//        DCB=(RECFM=LSEQ,LRECL=256)
//SYSIN    DD *
RESTORE SOURCE(TESTDUMP)               -
     INCLUDE(TEST.FB.FILE*.ABC)        -
 IGNORE                                -
 REPLACE
/*
//
```

# M2RUNCMD batch utility


You can use M2RUNCMD, a batch utility program, to run Rocket Software (formerly Micro Focus) commands, scripts, and system calls directly from JCL instead of running them from a terminal or command prompt. The output from the commands is logged to the batch job's spool log.

**Topics**
+ [

## Supported platforms
](#m2runcmd-platforms)
+ [

## Configure M2RUNCMD for AWS Mainframe Modernization runtime on Amazon EC2 (including AppStream 2.0)
](#m2runcmd-configure)
+ [

## Sample JCLs
](#m2runcmd-sample-jcls)

## Supported platforms


You can use M2RUNCMD on the following platforms:
+ Rocket Software Runtime (on Amazon EC2)
+ All variants of Rocket Software Enterprise Developer (ED) and Rocket Software Enterprise Server (ES) products

## Configure M2RUNCMD for AWS Mainframe Modernization runtime on Amazon EC2 (including AppStream 2.0)


If your migrated applications are running on AWS Mainframe Modernization runtime on Amazon EC2, configure M2RUNCMD as follows.
+ Change the [Micro Focus JES Program Path](https://www.microfocus.com/documentation/enterprise-developer/ed80/ED-Eclipse/index.html?t=GUID-BC8A1796-9EDE-48EB-8363-31C9BDE7F96B.html) to include the binary location for batch utilities. If you must specify multiple paths, use colons (:) to separate paths on Linux and semicolons (;) on Windows.
  + Linux: `/opt/aws/m2/microfocus/utilities/64bit`
  + Windows (32bit): `C:\AWS\M2\MicroFocus\Utilities\32bit`
  + Windows (64bit): `C:\AWS\M2\MicroFocus\Utilities\64bit`
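For example, a JES Program Path that lists the utilities directory plus one additional load library might look like the following; the second path on each line is hypothetical:

```
Linux:   /opt/aws/m2/microfocus/utilities/64bit:/home/mfuser/loadlib
Windows: C:\AWS\M2\MicroFocus\Utilities\64bit;C:\Users\mfuser\loadlib
```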

## Sample JCLs


To test the installation, you can use either of the following sample JCLs.

 **RUNSCRL1.jcl**

This sample JCL creates a script and runs it. The first step creates a script called `/tmp/TEST_SCRIPT.sh` with content from the `SYSUT1` in-stream data. The second step sets the run permission and runs the script created in the first step. You can also run only the second step to invoke existing Rocket Software and system commands.

```
//RUNSCRL1 JOB 'RUN SCRIPT',CLASS=A,MSGCLASS=X,TIME=1440
//*
//*
//*-------------------------------------------------------------------*
//*  CREATE SCRIPT (LINUX)                                           
//*-------------------------------------------------------------------*
//*
//STEP0010 EXEC PGM=IEBGENER
//*
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//*
//SYSUT1   DD *
#!/bin/bash

set -x

## ECHO PATH ENVIRONMENT VARIABLE
echo $PATH

## CLOSE/DISABLE VSAM FILE
casfile -r$ES_SERVER -oc  -ed -dACCTFIL

## OPEN/ENABLE VSAM FILE
casfile -r$ES_SERVER -ooi -ee -dACCTFIL

exit $?
/*
//SYSUT2   DD DSN=&&TEMP,
//            DISP=(NEW,CATLG,DELETE),
//            DCB=(RECFM=LSEQ,LRECL=300,DSORG=PS,BLKSIZE=0)
//*MFE: %PCDSN='/tmp/TEST_SCRIPT.sh'
//*
//*-------------------------------------------------------------------*
//*   RUN SCRIPT (LINUX)                                              *
//*-------------------------------------------------------------------*
//*
//STEP0020 EXEC PGM=RUNCMD
//*
//SYSOUT  DD  SYSOUT=*
//* 
//SYSIN   DD *
*RUN SCRIPT
 sh /tmp/TEST_SCRIPT.sh
/*
//
```

 **SYSOUT**

The output from the command or script that runs is written to the `SYSOUT` log. For each command carried out, the log shows the command, its output, and its return code.

```
************ CMD Start ************   
                                                                                    
CMD_STR: sh /tmp/TEST_SCRIPT.sh                                                                                              
CMD_OUT:                                                                                                                                                                                                                             
+ echo /opt/microfocus/EnterpriseServer/bin:/sbin:/bin:/usr/sbin:/usr/bin    
/opt/microfocus/EnterpriseServer/bin:/sbin:/bin:/usr/sbin:/usr/bin           
+ casfile -rMYDEV -oc -ed -dACCTFIL                                                                       
-Return Code:   0                                                                                                         
Highest return code:    0                                                                                                 
+ casfile -rMYDEV -ooi -ee -dACCTFIL                                                                     
-Return Code:   8                                                                                                         
Highest return code:    8                                                                                                 
+ exit 8                                                                                                                  

CMD_RC=8                                                                                                                  

************  CMD End  ************
```
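If you post-process the spool, the command's return code can be recovered from this layout. The following sketch assumes only the `CMD_RC=` line format shown above:

```
#!/bin/bash
# Pull the final CMD_RC value out of a captured SYSOUT log.
# The log layout follows the sample above; this is a post-processing sketch.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
************ CMD Start ************
CMD_STR: sh /tmp/TEST_SCRIPT.sh
CMD_RC=8
************  CMD End  ************
EOF

rc=$(sed -n 's/^CMD_RC=\([0-9]*\).*/\1/p' "$LOG" | tail -1)
echo "command return code: $rc"
```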

 **RUNCMDL1.jcl**

This sample JCL uses RUNCMD to run multiple commands.

```
//RUNCMDL1 JOB 'RUN CMD',CLASS=A,MSGCLASS=X,TIME=1440
//*
//*
//*-------------------------------------------------------------------*
//*   RUN SYSTEM COMMANDS                                             *
//*-------------------------------------------------------------------*
//*
//STEP0001 EXEC PGM=RUNCMD
//*
//SYSOUT  DD  SYSOUT=*
//* 
//SYSIN   DD *
*LIST DIRECTORY
 ls
*ECHO PATH ENVIRONMENT VARIABLE
 echo $PATH
/*
//
```