Introduction
Elastic Batch Platform (EBP), Elastic Transaction Platform (ETP) and the Elastic COBOL runtime environment are designed for scalability and high availability in a public or private cloud environment. First, we present the layout of a typical COBOL application (fig. 1), which divides the application into the typical three-layer model: (1) presentation logic, (2) business logic and (3) database logic.
Fig. 1. Structure of typical COBOL application.
For online applications the presentation logic may be a screen or GUI interface; for batch applications it is typically the report-writer section of the application. The Elastic COBOL runtime environment provides the compatibility framework for running the application as if it were operating on a mainframe (a brief COBOL sketch follows the list):
- COBOL datatypes such as COMP-1, COMP-3, etc.,
- COBOL file I/O, together with the ability to tie COBOL FDs to DD names of a batch JCL deck,
- COBOL database I/O (e.g., EXEC SQL), interfacing the application to arbitrary SQL-oriented databases.
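As a minimal illustration of what this compatibility layer covers, the hedged COBOL fragment below shows a packed-decimal (COMP-3) item, an FD assigned to the DD name CUSTIN (assumed to be supplied by the batch JCL deck), and an embedded EXEC SQL statement. The program, file and table names are hypothetical; this is a sketch of mainframe-style source, not Heirloom sample code.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. CUSTRPT.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
      *    The COBOL FD is tied to the DD name CUSTIN supplied by
      *    the batch JCL deck at run time.
           SELECT CUST-FILE ASSIGN TO CUSTIN
               ORGANIZATION IS SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  CUST-FILE.
       01  CUST-RECORD.
           05  CUST-ID           PIC 9(8).
           05  CUST-NAME         PIC X(30).
       WORKING-STORAGE SECTION.
      *    Mainframe data types such as packed decimal (COMP-3).
       01  WS-CUST-ID            PIC S9(8)    COMP-3.
       01  WS-BALANCE            PIC S9(7)V99 COMP-3.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT CUST-FILE
           READ CUST-FILE
               AT END GOBACK
           END-READ
           MOVE CUST-ID TO WS-CUST-ID
      *    Embedded SQL is passed to the configured database.
           EXEC SQL
               SELECT BALANCE INTO :WS-BALANCE
                 FROM CUSTOMER
                WHERE ID = :WS-CUST-ID
           END-EXEC
           DISPLAY 'BALANCE FOR ' CUST-NAME ': ' WS-BALANCE
           CLOSE CUST-FILE
           GOBACK.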
The entire diagram of fig. 1 is placed in the Heirloom cloud deployment package, shown in fig. 2 as "Standard COBOL application."
Fig. 2. The Heirloom cloud deployment environment consisting of ETP, EBP and cloud-specific facilities.
The Heirloom cloud deployment provides the glue layers between the facilities the COBOL program expects to have available when running on the mainframe and what is provided by the underlying cloud platform. Heirloom is deployed on Java EE application servers (e.g., IBM WebSphere, Apache Geronimo) for ETP and/or on Web servlet containers (e.g., EMC tcServer, Apache Tomcat) for EBP. Heirloom consists of the following components:
- SQL database mapping from DB2 to other databases, such as PostgreSQL or SQLFire.
- Customer Web Portal, which maps either CICS BMS screen maps to Web 2.0 pages, JavaScript and XML (e.g., RESTful Web services), or COBOL reports from the application to REST Web services that can be issued to the EBP services handler.
- Monitoring and Operations Management (e.g., EMC Hyperic).
The entire diagram of fig. 2 (called the Heirloom Deployment) is placed in the private cloud Infrastructure-as-a-Service (IaaS) environment provided by the underlying hardware, as shown in fig. 3.
Fig. 3. The private cloud deployment Infrastructure-as-a-Service (IaaS) environment.
Each of the boxes in fig. 3 represents a virtual machine running in the IaaS. In addition to the VMs containing the ETP or EBP and customer applications (the Heirloom Deployment), there are also VMs for the clustered database environment (e.g., PostgreSQL or SQLFire).
To achieve scalability in the batch environment, Heirloom VMs containing EBP are started as demand for batch resources increases; the EBP starts when the VM it runs in is started. As each EBP starts, it registers with the centralized Heirloom Elastic Scheduler Platform (ESP), which relays batch job submissions from an external scheduler (e.g., the Control-M Linux Agent). ESP can also define batch jobs and tasks (rules) and run them directly. Fig. 4 shows this interaction.
Fig. 4. The interaction between the ESP and EBP within Heirloom virtual machines and external scheduling agents.
Consider the example in which an external scheduler injects jobs into the system. When a batch job is injected by the external scheduler (e.g., the Control-M Agent), the following steps occur:
- The Control-M Linux Agent uses the Linux utility curl to submit jobs to the ESP via its Web services interface (see the sketch after this list).
- The scheduler initializes the EBP with job classes and starts job class initiators (job parallelism within an EBP).
- The scheduler submits the batch job to the EBP via its Web services interface.
- The job executes and returns its condition code and output datasets to the scheduler, which stores them on NFS-attached drives for later review.
- An indication of job success or failure, along with any output datasets, is returned to the Control-M agent.
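As a concrete sketch of the first and last steps, the curl calls below illustrate what a submission from the Control-M Linux Agent might look like. The host name, port, URL paths, credentials and field names are assumptions made for illustration; they are not the documented ESP interface.

    # Hypothetical submission of a JCL deck to the ESP Web services
    # interface (host, path, credentials and field names are assumed).
    curl -X POST -u batchops:secret \
         -F "jcl=@/mnt/spool/input/DAILYRPT.jcl" \
         -F "class=A" \
         http://esp.example.internal:8080/esp/jobs

    # Later, poll a hypothetical status resource for the condition
    # code and the location of the SYSOUT datasets.
    curl -u batchops:secret \
         http://esp.example.internal:8080/esp/jobs/JOB00123/status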
Scalability and high availability are achieved by instantiating more than one Heirloom EBP virtual machine within an IaaS frame and across multiple frames. See fig. 5.
Fig. 5. Multiple JES virtual machines in a "JESPlex" cluster.
Within each virtual machine, an EBP subsystem environment is part of the tcServer / Apache Tomcat started tasks. As needed (and following scheduler rules), one or more Job Classes are defined within the EBP. Classes contain attributes for the jobs that will run under them: elapsed and CPU time allotments, and storage and network utilization limits. Also following scheduler rules, one or more Class Initiators are opened under each class; this allows a degree of parallelism within a virtual machine. Then, as demand grows, the vCloud management infrastructure (acting under further rules) starts additional virtual machines. These VMs may be on the same IaaS frame or on different frames. Each new EBP registers with the ESP, as described in fig. 4, and begins operating on batch jobs sent to it by the ESP.
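The Job Class and Class Initiator setup described above might be expressed, for example, as calls to an EBP instance's Web services interface, along the lines of the sketch below. The endpoint paths and JSON attribute names are purely illustrative assumptions, not the actual EBP API.

    # Hypothetical definition of Job Class A with elapsed/CPU time and
    # storage/network limits (endpoint and attribute names are assumed).
    curl -X PUT -H "Content-Type: application/json" \
         -d '{"class":"A","maxElapsedMinutes":30,"maxCpuMinutes":5,
              "maxStorageMB":2048,"maxNetworkMB":512}' \
         http://ebp-vm-01.example.internal:8080/ebp/classes/A

    # Hypothetical request to open two Class Initiators under class A,
    # allowing two jobs to run in parallel within this virtual machine.
    curl -X POST \
         "http://ebp-vm-01.example.internal:8080/ebp/classes/A/initiators?count=2"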
All batch shared data files (input and output datasets) are accessible to any VM via NFS. The shared datasets also contain the actual COBOL applications that are executed by the Job Steps within the Batch Jobs running under each Initiator. Dataset locking information communicated among the EBPs prevents batch jobs holding exclusive access to resources from conflicting with other jobs requesting the same resources. Similarly, the Input Spool (JCL), Output Spool (report SYSOUTs) and Temporary Spool (working files) are shared among systems via NFS.
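A minimal sketch of how each Heirloom VM might see this shared storage is shown below; the NFS server name and mount points are illustrative assumptions.

    # Hypothetical NFS mounts on each Heirloom VM; every EBP sees the
    # same datasets, spools, and COBOL application load modules.
    mount -t nfs nfs01.example.internal:/export/datasets /mnt/datasets
    mount -t nfs nfs01.example.internal:/export/spool    /mnt/spool
    mount -t nfs nfs01.example.internal:/export/loadlib  /mnt/loadlib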
Should a VM or its subsystems (e.g., EBP) fail, batch jobs are requeued into the Input Spool and dispatched to other waiting EBPs. In this way recovery is automatic. EMC Storage Frame components will ensure that the data stores themselves are replicated for availability purposes.