Steps to Modernization and Cloud Deployment

The Problem

You're intrigued by the possibility of running your enterprise mission-critical applications in the cloud, but don't know how to get started.  The Getting Started Guide shows how to develop and maintain these apps using the Heirloom Computing Elastic COBOL Interactive Development Environment (IDE), but what about actually deploying and running apps in the Heirloom Computing Enterprise Legacy Platform-as-a-Service?  You might be asking yourself these questions:

  • I've got application source, how do I get a running app to the cloud?
  • My app has a "batch" component -- where does it run and how does it start?
  • My app prints reports that need to be routed to the local user's printer -- how do I connect a printer cable to the cloud?
  • I use a 3270 (mainframe) or 5250 (AS400) or ASCII Terminal (UNIX, Linux) screen interface on the "online" components -- how do I connect an ASCII terminal to the cloud?
We'll answer these basic questions and show a logical, stepwise process for moving an app from your IT datacenter (or simply a back-office server) to a managed cloud environment, creating a Software-as-a-Service (cloud) app where a traditional command-line/terminal app once operated. And, of course, we'll do it while providing security that's superior to running the app on your own computer, scalability to extraordinary transaction rates, and the promise of the cloud business model: "pay for only what you use."

After we define what we mean by a "migrated app," we'll go through the steps to modernize it and deploy it into the cloud:

App Definition

What kind of mission-critical, legacy, enterprise applications (choose your own term -- they're the programs that run your business) do you have?  If you have applications that are "command line" or "background" jobs, that read all their input from databases or files (including standard input), and that update a database or write report files, then you've got batch applications.  Those applications will run the same way in the cloud -- provided the data they need is available at the time they run.  The Heirloom Computing Job Entry Subsystem and Job Control Language interpreter manage resources for batch applications in the cloud.
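The defining batch trait can be sketched in a few lines: no terminal interaction, input from files or standard input, output to files. The example below is purely illustrative of that shape (it is not Elastic COBOL code, and the "business rule" is a hypothetical stand-in):

```python
def run_batch_step(in_path, out_path):
    """Read every input record, apply a rule, write one report line each.

    A batch step touches no terminal: input is a file (or standard input)
    and output is a file, so it can run unattended in the cloud whenever
    its data has been staged."""
    processed = 0
    with open(in_path) as infile, open(out_path, "w") as report:
        for record in infile:
            key = record.rstrip("\n").split(",")[0]
            # Hypothetical business rule: mark each record as processed.
            report.write(f"{key}: OK\n")
            processed += 1
    return processed
```

Because the step depends only on its input files, the same program runs identically on-premise or in the cloud.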

Interactive terminal-oriented applications are online transaction processing (OLTP) applications that may interact with the same data as batch applications.  Legacy interactive applications come in a few varieties:
  • COBOL ACCEPT/DISPLAY -- ASCII terminal screens
  • CICS SEND MAP applications - 3270 terminal screens
  • EXEC HTML applications - Web pages delivered to network-attached browsers
The first of these can operate in the cloud using remote desktop protocol, whereby an application window that would normally appear on an application's local console is redirected over the network and displayed in a window on the user's screen.  In the second case, a BMS screen map is converted to HTML and appears in a Web browser.  The third requires that the application be deployed into a servlet or application server environment.  Of course, the UI of a screen-based application can also be modernized with other techniques, such as XML delivered over HTTP to create a rich Web 2.0 UI (AJAX).
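The screen-map-to-HTML idea is easier to picture with a sketch. The function below is purely illustrative of the concept of turning named screen fields into named form inputs; it is not the actual Elastic COBOL converter, and real BMS maps also carry positions, attributes, and colors that a production converter would translate:

```python
def screen_to_html(fields):
    """Render a list of (name, length) screen fields as an HTML form.

    Illustrative only: each named, fixed-length terminal field becomes
    a named <input> with a matching maxlength."""
    inputs = "\n".join(
        f'  <label>{name}: <input name="{name}" maxlength="{length}"></label>'
        for name, length in fields
    )
    return f'<form method="post">\n{inputs}\n</form>'
```

Submitting the generated form posts the same field names the transaction expects, which is why a converted map can drive an unchanged back-end program.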

Often, OLTP components cannot run at the same time as batch applications because the background apps need exclusive use of database and dataset resources.  In these cases the Heirloom Computing cloud-based Task and Job Scheduler is employed to run background apps during an overnight batch window.   The scheduler also collects the output report files from the batch runs and makes them available through a Web portal.  The reports can then be viewed even if the execution environment that produced them is no longer available.  The Heirloom Computing ELPaaS Dashboard is used to access the scheduler's Task History Portlet to download report files locally or view them within the browser.  The browser's Print facility is used to print to a local or network-attached printer.

Whatever the application type, data requirements, and report outputs, developing and deploying an application to the cloud is not much different from working in any client/server or mainframe/terminal environment.  Heirloom Computing Web portals, dashboards, and downloadable tools are used to manage the application lifecycle.  Let's go through those steps.

Step 1 - Application Development / Maintenance

Application development for enterprise applications is accomplished with the Heirloom Computing Elastic COBOL IDE.  Even if the application consists of Java, C, C++, or languages other than COBOL, the IDE is based on the Eclipse development environment, which supports multiple languages.  HCI offers both a downloadable tool and a cloud-based Application Development-as-a-Service solution.  Both start with registering for a subscription on the > Portal.


To gain full use of the platform (e.g., CICS, JES, database), sign up for a paid subscription to Elastic COBOL Enterprise Developer.  To test your application code with Elastic COBOL, sign up for Elastic COBOL Developer (Free Edition).  Either subscription is valid in the cloud or on-premise.  The advantage of on-premise development is access to local files; cloud development lets you get started without any installation or setup.


See the Elastic COBOL Portal FAQ forum for commonly asked questions about starting and connecting to a cloud instance.  Start a cloud instance with the Power On link (paid subscriptions also have a Download link), then Connect to it to bring up the IDE window.  Remote desktop protocol allows the IDE to run in the cloud while its windows are displayed locally.


When the IDE window appears, all the sample and demo applications are pre-installed (on the cloud edition) or an empty workspace is shown (when Elastic COBOL is downloaded).  See the Getting Started Guide to examine and run the samples.

To get your application source into the cloud there are two techniques:
  • Open new COBOL program files and copy/paste from existing source into the window
  • Copy directories en masse from your local system to the cloud
The first technique calls for copying the source (e.g., from Notepad) and pasting it into new project windows in the IDE itself.


To move multiple files into the cloud (source and include files), use the IDE's File > Import... menu selection and a feature of remote desktop protocol that allows you to browse your desktop disks from the cloud-based IDE.  From the Import dialog box, select My Computer and open the local C drive, Users folder, and any other project folders until you see the folder with your source and copylib information (by convention, Elastic COBOL calls these cobol_source and copylib).


Click OK to complete the import.


The result will be a fully-populated initial project with your source code in the cloud instance.  


Even if you Power Off your cloud instance (it also stops if you are not using it), your projects will be intact the next time you return to the portal and Power On.  You may delete your project disks entirely from the system using the ELPaaS Dashboard's Data Volumes Portlet.

Step 2 - Data Extraction, Translation and Loading

Data Extraction, Translation and Loading refers to the process of moving production (or test) data from your current environment to the development or production environment and putting it in a format acceptable to your Elastic COBOL application.

Moving between like databases (e.g., Oracle on your existing production UNIX server to an ELPaaS Oracle instance) is fairly straightforward: use the same techniques as moving data between servers.  This might include running database-manufacturer-supplied tools to export the data to a text file or a database-specific export format.  ELPaaS contains Oracle APEX and SQL Developer instances to run the corresponding import tools.

If moving between EBCDIC and ASCII or Unicode formats, convert the text files as you move them off the original platform.
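The code-page conversion itself is mechanical; for example, Python's standard codecs include EBCDIC code page 037. A minimal sketch, assuming plain single-byte text (packed-decimal and binary fields must never be run through a character conversion):

```python
def ebcdic_to_ascii(data: bytes, codepage: str = "cp037") -> str:
    """Decode EBCDIC bytes (code page 037 by default) to a string,
    which can then be written out as ASCII or UTF-8.

    Caution: text only -- COMP-3 (packed decimal) and binary fields
    would be corrupted by a character-set translation."""
    return data.decode(codepage)

# EBCDIC bytes for "HELLO" under code page 037:
ebcdic_to_ascii(b"\xC8\xC5\xD3\xD3\xD6")  # -> "HELLO"
```

Files containing mixed text and packed fields need a record-layout-aware tool rather than a whole-file translation.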

Elastic COBOL supports a variety of sequential and indexed-sequential file formats:

  1. Line sequential, variable record length
  2. Fixed record length sequential
  3. Micro Focus IDX2 format COBOL files
  4. AcuCOBOL indexed file format
  5. Elastic COBOL indexed, relative-record and sequential file format
These files can be moved into the development instance for testing purposes in much the same way as source programs.  Use the "import" or "copy/paste" techniques outlined above to place them in the "resources" or "workdir" directories of the project.   For production systems it is necessary to set up the production environment.  The Enterprise Legacy Platform-as-a-Service environment contains components to run your application "in production."  Heirloom Computing also maintains daily, weekly, monthly and yearly backups of cloud instance data.
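Before loading, a fixed-record-length sequential file (format 2 above) can be sanity-checked by splitting it into records; unlike line-sequential files, there are no newline delimiters, so the record length must divide the file size exactly. A minimal sketch:

```python
def read_fixed_records(path, record_length):
    """Split a fixed-record-length sequential file into records.

    Each record occupies exactly record_length bytes with no
    delimiters; a size mismatch usually means the wrong record
    length (or a file already converted to line-sequential)."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) % record_length:
        raise ValueError("file size is not a multiple of the record length")
    return [data[i:i + record_length]
            for i in range(0, len(data), record_length)]
```

A quick check like this catches the most common transfer mistake: an FTP in text mode silently inserting or stripping line endings.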

Subscribe to cloud instances from the > Portal in the same way you subscribed to Elastic COBOL development instances.  Subscribe either to ELPaaS (Free Edition), which runs about 5 MIPS worth of compute power, has no database, 1 GB of disk space, and memory insufficient for a full CICS environment, or to ELPaaS (Linux) or ELPaaS (Windows), which have 50 MIPS of power, 10 GB of data space, regular backups, and the CICS, JES and JCL subsystems.


As with development instances, Power On your ELPaaS instance.  Remember that development instances will automatically Power Off after a certain amount of idle time.  ELPaaS production instances (other than Free Edition) will continue to run until you use the portal to suspend operations.  See the Portal FAQs for answers to common questions about starting instances in the cloud.

After you Power On your instance, it will become connectable a few minutes later.  Connect means open a browser window to the default home page of the Web server on your instance.  The home page will appear similar to:
The domain name of the instance can be changed through the Data Volumes Portlet of the ELPaaS Dashboard.  Have your network administrator set up DNS CNAME "alias" records to map your own domain name when the app goes into production.
The File Explorer window allows you to upload files and folders of data or other collateral to ELPaaS instance.
Datasets that must be accessible to the Job Entry Subsystem should be copied into the /data directory.  See the Job Entry Subsystem and JES Configuration for a description of dataset naming conventions and configuration.

Step 3 - Application User Interface Mapping

If your application is BMS-screen oriented, use the techniques described in the Getting Started Guide under CICS Applications for transforming the user interface to a Web interface.  For further customization, use a custom CSS or home page for each app.  Static pages can provide the introduction and links to the applications running under the Java app server (Geronimo) or servlet manager (Tomcat).

For traditional COBOL ACCEPT/DISPLAY applications or Acu Graphical Extension-oriented applications, use the techniques described in How do Acu GUI apps look in Elastic COBOL.


These applications bring up a Java graphics window that utilizes Microsoft Windows graphics techniques.  An ELPaaS (Windows) instance type offers access to these applications using the same remote desktop protocol as the Eclipse IDE in an ADaaS instance.  Virtual terminal remote desktop technologies are available for ELPaaS (Linux) platforms as well; a different windows client (VNC) is required to access these applications.

CICS application transactions and batch applications can also be accessed through the WSGI protocol in a technique suited for Web 2.0 AJAX applications.  The user interface or batch file input/output is converted to XML and sent over HTTP by use of a servlet or JEE application.  See for more information.

Step 4 - Cloud Deployment

Applications are deployed from the development environment to the deployment environment from either the IDE's Elastic COBOL Deploy Wizard or the Elastic Transaction Deploy Wizard.  Access these from the File > Export... menu command.


Batch applications are deployed to the production environment in pre-packaged JAR files using the first of these wizards.  The JAR (containing your application code and Elastic COBOL runtime libraries) is placed into the configured JOBLIB, STEPLIB, or the /data or D:\Data directory as configured in the Job Entry Subsystem.  JCL programs and scripts can access the JARs from those locations.  For testing on the development ADaaS cloud instance, deploy batch JARs to the D:\Data directory.
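A script in one of those locations can then launch the batch step with an ordinary java -jar invocation. The sketch below only builds the command line; the JAR name and program name are hypothetical examples, not names from the product:

```python
def build_batch_command(jar_path, program, args=()):
    """Build the command line a JCL step or shell script would use to
    launch a deployed batch JAR.  The JAR path and program name here
    are hypothetical; substitute your own deployed artifact."""
    return ["java", "-jar", jar_path, program, *args]

# A wrapper script could then run, e.g.:
#   subprocess.run(build_batch_command("/data/payroll.jar", "PAYROLL01"))
```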


Non-CICS applications that are Web-based (e.g., utilize EXEC HTML) are also deployed to the production environment from the IDE through the Elastic COBOL Deployment Wizard, but are deployed as WAR files.  You can deploy these from the ADaaS cloud environment to the production ELPaaS environment by clicking the Cloud radio button and choosing your started instance.



OLTP (CICS) applications require the JEE server to coordinate transactions and database resources.  See the Getting Started Guide for examples with the Geronimo app server.  These use the Elastic Transaction Platform Deploy Wizard.



Another deployment step is establishing a production, rather than test, Web environment.  The default Web page shown above for a newly started ELPaaS instance can be modified by editing the index.html home page using the File Explorer view.  Keep a protected "administrator view" of your application, using the original home page or something similar, to access the instance for administrative purposes.

Install the SSL certificate for the (ultimate) DNS name in the config directory using the File Explorer.  If you intend to access your application by host name (whether it is available to your employees on your corporate Intranet or visible to customers, partners, and suppliers from the Internet), then create a certificate for it and install it.  Use https: to access the site and (for example, using JavaScript on the home page) redirect non-secure http: requests to the secure site.

Step 5 - Production Operations

Now that the application has been compiled, modernized, and deployed into production ELPaaS instances with the prior steps, what remains are the steps necessary to carry out various production operations.  These might be System Operator duties carried out by IT datacenter personnel in a traditional environment, as well as Administrator functions managed by the IT programming or operations staff.  The ELPaaS Dashboard (on the Web portal) and the Job Entry Subsystem (on the production instance) are used for these activities.

Based on the type of production subscription, your ELPaaS instances are backed up daily, weekly, monthly, yearly, and even hourly if you need it.  Retention is up to 10 years.  This is similar to a System Operator taking backups of key DASD volumes on a regular basis.  Access these snapshots through the Backup Calendar Portlet, and use the portlet to restore a running system to the time the backup was taken.  The restored system is given a new Web address that you access in the same way as your main application.


To retrieve report or intermediate application datasets, you can set up a virtual directory in the Web server definition (by editing the httpd.conf file in the File Explorer).  But for most batch job output reports, you would likely use the Scheduler to set up windows when jobs should run:

  • Define JES job classes with different resource criteria (maximum CPU seconds, memory, etc.)
  • Upload JCL job (or other supported dynamic language script) definitions that invoke applications
  • Start initiators of those classes at particular times, which would start running jobs for that class or increase the concurrency of background jobs
  • Define tasks which start one or more jobs under a particular class through 
  • Indicate when the batch window should close so as not to affect OLTP applications at other times of day
Tasks and jobs may run overnight based on a schedule you set up.  You can view the output report datasets from the scheduler portal at any time.  The Jobs History shows the overall history for all tasks of a job, and the Task Calendar lets you see the results of a particular run.

To retrieve the results of a batch job, and either circulate them or send them to a local printer, use the Task Calendar to view the result (final condition code) of the job.  Any job that generated output on its standard output (SYSOUT), standard error (SYSERR), or any other data definition mapped to a SYSOUT job class (other than DUMMY) will have that output retrieved from the instance and stored in the cloud.




Output datasets are available for up to a month in the cloud, and for as long as you decide is necessary on the ELPaaS cloud instances.  The JES System Operator's Console is available for detailed control over the batch execution process, allowing you to view and purge output datasets that reside there.


All of the functionality of the JES subsystem and the Scheduler is available through a REST interface.  Your organization may set up a separate "print spooler" function to call the JES Web Services or the Scheduler Web Services and send the printouts to a printer.
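A print-spooler function of that kind would fetch each finished job's SYSOUT over HTTP. The sketch below only builds the request; the base URL and endpoint path are hypothetical stand-ins, since the actual JES and Scheduler Web Service URLs are documented separately:

```python
import urllib.request

# Hypothetical base URL -- substitute your instance's actual
# JES Web Service address from the ELPaaS documentation.
JES_BASE = "https://example.elpaas.host/jes/api"

def build_sysout_request(job_id):
    """Build a GET request for a finished job's SYSOUT (hypothetical
    endpoint path).  A spooler would execute it with urlopen() and
    route the returned text to a local printer queue."""
    return urllib.request.Request(
        f"{JES_BASE}/jobs/{job_id}/sysout",
        headers={"Accept": "text/plain"},
    )
```

Because the interface is plain REST, the same approach works from any language or tooling your print infrastructure already uses.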



This has been a review of the steps to modernize, SaaS-enable, and cloud-deploy your enterprise application into the Heirloom Computing PaaS specifically designed for the enterprise environment.  Although only limited types of applications (COBOL) and batch job scenarios (JCL) have been presented, the platforms created by HCI are general purpose.  "Anything legacy" that runs today in a server environment can take advantage of the ADaaS and ELPaaS capabilities to provide a low-risk, high-reward alternative to traditional enterprise application operation.  See other sections of the Web portals to learn about other modernization options.
