Elastic Batch Platform

Introduction to EBP

This is part of the Heirloom Computing Elastic Batch Platform forum on

The Heirloom Computing Elastic Batch Platform (EBP), or Job Entry Subsystem (JES), is a cloud-centric batch job execution and operating environment. EBP is a business process manager in the form of an IBM JES2 subsystem equivalent, complete with MVS Job Control Language (JCL) compatible interfaces and a RESTful Web services and XML control interface. When coupled with the Heirloom Dashboard, a cloud-based scheduler front-ends many cloud-resident EBP systems, allowing jobs to be fanned out to a pool of cloud execution engines.

EBP can also operate on a private cloud, server farm, back-office server or even a desktop environment. EBP operates under Java EE servers such as Oracle WebLogic, IBM WebSphere, Red Hat WildFly (JBoss) and Apache Tomcat on Windows, UNIX or Linux based environments. It can also be started from the command line in "stand-alone mode" thanks to a built-in Eclipse Jetty servlet engine.

EBP supports a subset of MVS JCL for z/OS as defined by Heirloom JCL and the concepts of IBM JES2, including the following:

  • Job Classes which define specific Heirloom resources that are available for batch programs
  • Job Initiators which define a batch window and address concurrency issues
  • Input Job Queue where submitted and scheduled batch jobs await execution
  • Output Spool where status and held output datasets await retrieval and analysis following a job's execution
  • Job Groups that define interrelationships among jobs that drive the scheduling of the job's execution
  • Output Spoolers which provide output classes and programs that can print, hold, or discard job output datasets
  • Network Job Entry in which batch jobs can be submitted for execution via a RESTful Web services interface
  • Remote Job Entry in which batch jobs can be submitted for execution via other jobs (internal reader)
  • JCL Engine executing a batch job consisting of jobs, steps, data definition assignments, inline and cataloged procedures and control logic that execute Elastic COBOL or other executable programs
  • JCS Engine executing a batch job consisting of IBM VSE style jobs, steps, assignment statements, extents (JCS mode JCL), with job class types and initiators for them
  • JEC Job Scheduler supports the latest IBM Job Execution Control introduced in August, 2015 to manage parallel and dependent jobs
  • Script Engine in which batch jobs consisting of Shell scripts or Python scripts are accepted and managed by EBP in the same manner as JCL jobs
  • Program Engine which provides a mechanism for command line programs, such as COBOL or Java, to be managed by EBP so that installations that do not require the complexity of JCL can nonetheless run overnight background command lines invoking Elastic COBOL
  • IMS-DB supported in EBP and ETP
  • Utility Programs such as flat-file, partitioned dataset and record-sequenced key dataset manipulations
  • Cloud Scheduler allowing multiple EBP nodes (an "EBP-Plex") installations to be coordinated from a central point and work dispatched to servers on a timely basis using the Elastic Scheduling Platform (ESP)

Access the Elastic Batch Platform through the scheduling facilities of the Heirloom Dashboard at

IBM JCL Support

Central to the EBP system is the statement-oriented language used to control the execution of batch jobs and the standard dataset utilities that may be invoked (along with user programs) by job specifications conforming to IBM's Job Control Language, JCL. EBP also supports an IBM variant called JCS available on other systems, as well as the definition of Job Groups through the use of Job Execution Control, JEC. The following documents define the syntax accepted by the subsystem:

Dataset Name Mapping

Datasets are central to JCL; they are used for program storage, cataloged procedures and data. In JCL, datasets are referenced as 5-part names of up to 8 characters each, with an optional qualifier for a partitioned dataset member or generation data group (GDG). Here are some examples of JCL datasets:

  • MNH00.MYGDG.DAT(+1)
  • MNH00.MYGDG.DAT(G019V00)
EBP datasets are represented as directories and files within directories. For example, the absolute GDG name will be a file within a directory represented by the dataset name, and relative GDG references will map to the appropriate absolute name. EBP will evaluate a dataset name and find the best match based on the directory structure. The directory structure is rooted on a series of paths that are specified during configuration. There are four configuration paths:
  • datalib - location of data files referenced on DD cards within job steps
  • systemlib - location of executable programs invoked through job step EXEC cards
  • jcllib - location of cataloged procedures referenced in JCLLIB dataset names or INCLUDE statements, both system and user generated
  • classlib - location of class libraries paths (.jar files or directories with .class files) that will be placed on the class path of invoked programs
Each library path is specified as a distinct directory entry in the EBP Configuration Settings. For example, datalib is initially defined in these settings as two separate directories:
  • ebp.datalib.1=/data
  • ebp.datalib.2=d:\data
This means that the first directory searched for datasets named on DD cards will be "/data". If that doesn't exist (e.g. on a Windows system), then "d:\data" is searched next. If the D drive doesn't exist on the Windows system, a JCL error will prevent the job from running with a "DATASET NOT FOUND" ABEND. You can configure any number of datalib.1 through datalib.n entries for EBP to search. New datasets (DISP=NEW) are created in the first datalib that has the permissions to do so (see Tape Handling below for storage class and unit extension).
The DSNs on DD cards are looked up in the datalibs on a directory tree basis where the "." is a directory separator.   DSNs may be translated to lower case as well.  Two configuration settings influence this translation:
  • ebp.newdsndirectory - look to translate DSNs to directory trees first if yes and flat files if no (default no)
  • ebp.newdsnlower - look at lower case directories and files first if yes (default no).
When newdsndirectory is yes, dataset-to-directory mappings are searched in directory tree order first, then in flat file order. A DSN of "AAA.BBB.CCC" would look first to find "AAA/BBB/CCC", then "AAA/BBB.CCC" and finally "AAA.BBB.CCC". A DISP=NEW would create the file "AAA/BBB/CCC". Coupling that with datalib.1 as "/data1" and datalib.2 as "/data2" results in a search order of "/data1/AAA/BBB/CCC", "/data1/AAA/BBB.CCC", "/data1/AAA.BBB.CCC", "/data2/AAA/BBB/CCC", "/data2/AAA/BBB.CCC" and "/data2/AAA.BBB.CCC". Directory read/write permissions come into play as well. The first directory tree that allows creation of files and/or directories will determine the DISP=NEW dataset location. If "/data1" were write-protected, a DISP=NEW DSN would be created as "/data2/AAA/BBB/CCC". Reversing the lookup with newdsndirectory=no while creating directories corresponding to the high-level qualifiers for certain users (e.g., "MNH00") would result in a DSN of "MNH00.ACCT.DAT" being created as "/data2/MNH00/ACCT.DAT".
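The search order described above can be sketched as follows. This is a simplified illustrative model, not EBP's actual code; it models only the directory-tree-first ordering from the example:

```python
def candidate_paths(dsn, datalibs):
    """Candidate file paths for a DSN, in EBP search order (sketch).

    Models the newdsndirectory=yes ordering: directory-tree forms first
    (AAA/BBB/CCC, then AAA/BBB.CCC), the flat file (AAA.BBB.CCC) last,
    trying every form in datalib.1 before moving on to datalib.2, etc.
    """
    parts = dsn.split(".")
    forms = []
    for k in range(len(parts), 1, -1):
        head = "/".join(parts[:k])          # leading qualifiers become directories
        tail = ".".join(parts[k:])          # remaining qualifiers stay dotted
        forms.append(head + ("." + tail if tail else ""))
    forms.append(dsn)                       # flat-file form is tried last
    return [lib.rstrip("/") + "/" + f for lib in datalibs for f in forms]
```

The first existing path wins; a DISP=NEW dataset would be created at the first candidate whose directory permits it.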

Data Control Blocks (DCB)

On a mainframe, dataset attributes indicating the file type and format are held in a system catalog and/or a volume table of contents (VTOC) on the disk containing the dataset. Heirloom does not use a catalog other than the host operating system's file and directory lookup mechanism. Thus, attributes such as record length (LRECL), record format (RECFM) and others, maintained by EBP and other Heirloom tools such as the COBOL Record Editor And Transformation Engine (CREATE), are contained in a data control block (DCB) for each dataset and typically stored in a file named .dcb within the same directory as the physical file. The .dcb file is a Java property sheet whose property names combine the dataset name and the attribute, and whose property values hold the attribute values. In JCL, the DCB attributes for a dataset are given on the DD card with the DCB keyword. They are typically set in the first step that creates a dataset with DISP=NEW and not supplied in later steps or other jobs that reference it with DISP=OLD or DISP=SHR; hence the need to retain these attributes from step to step and job to job.

A typical DD name defining a new dataset and its attributes may look like:
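As an illustration (the dataset name and attribute values here are hypothetical, not taken from a real installation):

```jcl
//NEWFILE  DD DSN=MNH00.ACCT.DAT,DISP=(NEW,CATLG),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
```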


The corresponding .dcb file that is created by EBP from this statement would look like:
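The exact property-name layout of the .dcb file is internal to EBP; as a sketch only (dataset name and values hypothetical), the entries might resemble:

```properties
# Hypothetical .dcb entries (illustrative; actual key names are EBP-internal)
MNH00.ACCT.DAT.recfm=FB
MNH00.ACCT.DAT.lrecl=80
MNH00.ACCT.DAT.blksize=8000
```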


Note that missing attributes from the DCB clause are filled in from defaults. Also note that the file protocol (indicating the external file handler for the file format) will be recorded as -protocol for indexed sequential files created with IDCAMS, based on the PROTOCOL keyword. A VSAM KSDS created with the IDCAMS statement:

 PROTOCOL(VDB)             -
 FREESPACE(40 40)          -
 RECORDSIZE(00080 00080)   -
 CYL(20 6)                 -
 KEYS(0011 0000)           -
 TO(99366)                 -

would create the corresponding .dcb file (several unimportant CLUSTER attributes are ignored):


DCB entries needed by COBOL programs, as they open files of various file protocols and read and write records of various lengths, can be combined from the .dcb files created by EBP and packaged into .jar, .war and .ear files that are referenced by JCL job steps and/or Elastic Transaction Platform regions. The .dcb files can be combined from various sources (the datalib1..9 configured data directories and their subdirectories) into files that are packaged with the COBOL programs and transactions in those Java archives. This helps the COBOL programs as they open and operate on the files, but not EBP when DD cards are being analyzed; hence the need for the physical .dcb files. Alternatively, these .dcb files can also be combined and made known to EBP while jobs are being analyzed by loading them into the EBP configuration using the config Web service.

Program Loading

When JCL batch jobs invoke Job Steps they do so with cards of the form,
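In general form (names are placeholders):

```jcl
//stepname EXEC PGM=program-name,PARM='parameters'
```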


EBP will first examine individual jar files or directories containing class files from the current step's STEPLIB DDs and then the job's JOBLIB DDs. Either of these may be concatenated datasets, i.e. "partitioned dataset" directories or individual jar files. EBP then searches the systemlibn configuration settings and finally the classlibn configurations. Directory search operations look for jars that match the program name following COBOL and PL/I rules of program-to-class mapping. COBOL PROGRAM-ID names may contain "-" and PL/I PROC OPTIONS(MAIN) procedure names often contain "_". The Elastic COBOL compiler will generally create Java class names in all lower case and replace "-" with "_". The Elastic PL/I compiler uses Java camel-case notation for generating Java class names. Although an EBP JCL extension allows for mixed-case program names, it is not necessary to specify the exact matching Java class name as the EXEC PGM option. When searching directories for matching PGM names, EBP will examine a step card such as,
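for instance (step name hypothetical):

```jcl
//STEP01 EXEC PGM=MY_PGM
```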


for Java jar files: MY_PGM.jar, my_pgm.jar, mypgm.jar and MyPgm.jar are all candidates. Directories are also searched for individual Java class files named MY_PGM.class, my_pgm.class, mypgm.class or MyPgm.class. These files may be in a directory structure that was created as part of the Java "package" statement, for which ecobol and epli have compiler options. The systemlib1..9 configuration may also contain package names to prepend to the program class name lookup.
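The class-name mapping rules above can be sketched as follows (a simplified illustrative model, not the compilers' exact algorithm):

```python
def candidate_names(pgm):
    """Candidate Java class/jar base names for an EXEC PGM= name (sketch)."""
    base = pgm.replace("-", "_")        # COBOL PROGRAM-ID dashes become underscores
    cobol = base.lower()                # Elastic COBOL: all lower case
    squashed = cobol.replace("_", "")   # name with separators removed
    # Elastic PL/I: Java camel-case notation
    camel = "".join(w.capitalize() for w in base.split("_") if w)
    seen, out = set(), []
    for n in (pgm, base, cobol, squashed, camel):  # preserve order, drop duplicates
        if n not in seen:
            seen.add(n)
            out.append(n)
    return out
```

For PGM=MY_PGM this yields the candidates MY_PGM, my_pgm, mypgm and MyPgm, each tried with .jar and .class suffixes.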

Jars that match by name undergo an examination to determine whether they were created as "executable jars" (containing a META-INF/MANIFEST.MF file with a "Main-Class:" attribute) or "regular jars" (without a manifest). If a manifest exists, the "Main-Class" is executed in the job step regardless of whether that class matches the PGM= program name. When manifests are absent or the "Main-Class" attribute is missing, a search of the program name alternatives (e.g., JOB_A.class, job_a.class, joba.class, JobA.class) is made within the jar. Any Java package names contained in the systemlib1..9 configurations are prepended to the class file name for that search. If the class name is found, that class will be executed as part of the JCL job step. All programs are initiated with the "java -cp" flavor of the Java command and not the "java -jar" version. This allows JOBLIB, STEPLIB and classlib1..9 directories and jars to be added to the class path as well as the executable jar file.

Normally, executable jars are created by programmers to contain all assets needed to run a program: the program's class files, JDBC drivers, expanded Elastic PL/I or COBOL runtime libraries, etc. They are stand-alone and often executed with "java -jar myexecutable.jar". In EBP, however, the "java -jar" flavor of the Java command is not used. Instead, "java -cp class-path-item1:...:class-path-itemn mainclass" is used to start Java programs. The jar or directory found to contain the program is placed first on the "-cp" option, followed by any and all STEPLIBs, JOBLIBs, and classlib1..9 configurations. Thus, "executable jars" may be constructed without JDBC drivers and runtime libraries as long as these are referenced in other libraries specified by the JOB deck or configuration.

If EBP is unable to find Java artifacts matching the "PGM=" parameter, it will then look for executable programs such as C, C++ programs, operating system built-in commands and executable shell scripts in the systemlib1..9 configuration. If found, EBP will execute those directly without "-jar" or "-cp" operands.

In all cases, the EXEC card's PARM= keyword supplies the arguments at the end of the command line. A configuration parameter (parmcount=1) indicates whether they are placed in a single string (accepted as a single COBOL or PL/I argument to the main program) or passed as individual arguments (accepted by a Java "public static void main(String[] args)" main program).
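A sketch of the effect (simplified; EBP's actual splitting rules may differ):

```python
import shlex

def build_args(parm, parmcount=1):
    """Sketch: map a PARM= string to main-program arguments under parmcount."""
    if parmcount == 1:
        return [parm]             # one COBOL/PL/I-style argument string
    return shlex.split(parm)      # individual Java-style String[] args
```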

The JCL batch job will ABEND if the program cannot be found using one of these methods.  To see what command EBP uses to execute a job step set the JOB parameter MSGLEVEL=(4,4) and look in the MSGCLASS dataset after the job completes.

As an example for program loading, examine the following EBP configuration settings and accompanying JCL job deck:

<classlib n="1">/ebp/lib/ecobol.jar:/ebp/lib/etrans.jar:/ebp/lib/eplirt.jar</classlib>
<classlib n="2">/ebp/lib/u001.classlib/</classlib>
<datalib n="1">/ebp/data</datalib>
<systemlib n="1">/ebp/bin</systemlib>
<systemlib n="2">com.example.inv</systemlib>

The JOBLIB datasets indicate "partitioned datasets" represented as directories that are found in the datalib1..9 configuration settings. A file /ebp/data/sys001.jdbc.jars/db2jcc.jar may represent the DB2 LUW JDBC driver, and a file /ebp/data/sys001.runtime.jars/ecobol.jar may represent the Elastic COBOL runtime modules. STEP0001's STEPLIB card represents an individual Java archive file expected to be found at /ebp/data/u001.regular.jar, whereas STEP0002's STEPLIBs are the directories /ebp/data/u001.someof.mypgms.jars/ and /ebp/data/u001.other.mypgms.classes/, likely containing files ending in .jar and .class, respectively.

In the example, STEP0001's PL/I program referenced with PGM=MY_PGM1 likely maps to the class file "MyPgm.class" packaged in the regular (non-executable) jar u001.regular.jar, or to the package/class name "com.example.inv.MyPgm" packaged as the class file "com/example/inv/MyPgm.class". The resulting java command issued by EBP would appear as,

java -cp '/ebp/data/u001.regular.jar:/ebp/data/sys001.utils.classes/:/ebp/data/db2jcc.jar:\
/ebp/lib/ecobol.jar:/ebp/lib/etrans.jar:/ebp/lib/eplirt.jar' com.example.inv.MyPgm 'INV(13) CUST(45)'

In the example, STEP0002's PL/I program referenced with PGM=MY_PGM2 is irrelevant if the STEPLIB is an executable jar with a META-INF/MANIFEST.MF file containing "Main-Class: MyOtherPgm". The resulting java command issued by EBP would appear as,

java -cp '/ebp/data/\
/ebp/lib/ecobol.jar:/ebp/lib/etrans.jar:/ebp/lib/eplirt.jar' MyOtherPgm 'INV(31) CUST(54)'

Finally, STEP0003's COBOL program MY_PGM3 (in COBOL source as "PROGRAM-ID. MY-PGM3") would be looked up in the STEPLIB and JOBLIB datasets as "my_pgm3.class" and, if not found there, would be found in the directory /ebp/lib/u001.classlib/ indicated in the configuration classlib2. The resulting java command issued by EBP would appear as,

java -cp '/ebp/lib/u001.classlib/:/ebp/data/sys001.utils.classes/:/ebp/data/db2jcc.jar:\
/ebp/lib/ecobol.jar:/ebp/lib/etrans.jar:/ebp/lib/eplirt.jar' my_pgm3 'INV(1) CUST(100)'


Tape Handling and Unit Specification

Tapes are regular files in EBP, but configuration settings may be used to allocate and manage them on potentially slower or cheaper media. Using VOL=SER=TP451,LABEL=(2,SL) would allocate a file named TP451.L0002 in one of the datalib directories. Using UNIT=TAPE (or DATACLAS=xxxx, MGMTCLAS=xxxx or STORCLAS=xxxx) on DISP=NEW datasets (or VOL=SER= serial numbered tapes) will look for datalib configurations that include that unit name or storage class. For example, UNIT=TAPE with datalib.9=/data/tape will restrict the dataset allocation to this directory. Using multiple datalibs, each referencing storage class or unit names, can segment allocation into certain areas. Consider the following datalibs used for allocating tape datasets:

  • datalib.1=/data/sysda
  • datalib.2=/data/nfsmount-3350-1
  • datalib.3=/data/nfsmount-3350-2
  • datalib.7=/data/lwrclass/tape/3490
  • datalib.8=/data/lwrclass/tape/3420
  • datalib.9=/data/uprclass/cart/3880

would allow DD names to be allocated on the various classes

//DD1 DD DISP=NEW,DSN=HCI00.MY.DAT       allocate /data/sysda/HCI00.MY.DAT
//DD7 DD DISP=NEW,UNIT=TAPE,VOL=SER=TPS431,LABEL=(1,SL)  allocate /data/lwrclass/tape/3490/TPS431.L0001
//DD8 DD DISP=NEW,UNIT=3420,VOL=SER=TPS432,LABEL=(2,SL)  allocate /data/lwrclass/tape/3420/TPS432.L0002
//DD9 DD DISP=NEW,STORCLAS=UPRCLASS      allocate /data/uprclass/cart/3880/SYSEBP.TEMP2345.T1234567.DAT

Note:  UNIT=SYSDA does not require that the datalib contain "sysda", but those that do are given preferential ordering ahead of others.

RETPD and EXPDT DD parameters and LABEL subparameters work as expected.  Run the standard utility TMSCLEAN to scratch expired datasets and volumes.  Using VOL=(PRIVATE,,,) or VOL=SCRATCH will allocate scratch serial numbered tapes.  See Examples.

Unit specification also implies certain sizes that are calculated and checked during new dataset allocation.  The underlying datalib device must have enough disk space to hold the given allocation using traditional IBM device type specifications even though the space is not allocated until the COBOL program requires it. If there is not enough space the datalib will be rejected and the next datalib will be examined for the required space.  If no datalib contains enough free space then the job is aborted.  SPACE required is calculated from the traditional UNIT sizes as shown in the following table,

UNIT    BLK (bytes)   TRK (bytes)   CYL (trks)
3350    *             19069         30
3380    *             47476         15
3390    *             56664         15
9345    *             46456         15
SYSDA   *             100000        10

* - calculated from block size specified in the dataset DCB

Examples of calculating SPACE and UNIT together,

//DD1 DD DISP=NEW,DSN=HCI00.MY1.DAT,UNIT=3350,SPACE=(TRK,(10,10))  allocate if 190KB available
//*                                                                on /data/nfsmount-3350-1 or -2
//DD2 DD DISP=NEW,DSN=HCI00.MY2.DAT,UNIT=SYSDA,SPACE=(CYL,(10,10)) allocate if 10MB available on any datalib
//DD3 DD DISP=NEW,DSN=HCI00.MY3.DAT,SPACE=(CYL,(100,7))            allocate if 100MB available on any datalib
//DD4 DD DISP=NEW,DCB=BLKSIZE=1000,SPACE=(BLK,(5,5))               allocate if 5000bytes available
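The space check in the examples above can be sketched as follows. This is a simplified model using the table values; only the primary allocation is estimated:

```python
# Bytes per track and tracks per cylinder, from the unit table above.
TRK_BYTES = {"3350": 19069, "3380": 47476, "3390": 56664, "9345": 46456, "SYSDA": 100000}
CYL_TRKS  = {"3350": 30, "3380": 15, "3390": 15, "9345": 15, "SYSDA": 10}

def space_bytes(unit, space_type, primary, blksize=None):
    """Estimate the primary allocation in bytes for a SPACE= request (sketch)."""
    if space_type == "BLK":
        return primary * blksize                    # BLK size comes from the dataset DCB
    trks = primary if space_type == "TRK" else primary * CYL_TRKS[unit]
    return trks * TRK_BYTES[unit]
```

For DD1 above, UNIT=3350 with SPACE=(TRK,(10,10)) needs 10 * 19069 = 190690 bytes, roughly the 190KB cited; DD2's SYSDA CYL request needs 10 * 10 * 100000 = 10MB.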


JCL, Program and Scripts

EBP is capable of executing not only JCL jobs but also jobs consisting of Python or Shell script code and COBOL or Java programs directly, without the need for them to be contained in JCL. Specify the job type when defining the EBP job class. Jobs of all types can be submitted to EBP through the submit Web service or through the internal reader from other JCL jobs. Submit COBOL or Java jobs by submitting a Java archive (jar) file. Where possible, EBP will look for hints in a "job card" on the first comment line of a script for information regarding classification. For example,

# Python script
import re;


Job Class Resource Limits

Job classes are defined to EBP through the define Web service. A number of resource limits may be specified on the class:

  • Maximum CPU execution time
  • Maximum elapsed (wall-clock) time
  • Maximum memory (region) used for each step
  • Total file sizes created during step execution
  • Total network transmission allowed during the job

The TIME keyword on the JOB card will help EBP determine its class if not specified explicitly on the JOB card.  Elapsed time restrictions are checked regardless, but maximum CPU and other resource limits require checking by a command outside the Java Virtual Machine environment.  Set the ebpstartcommand configuration parameter to a command that can interpret these configuration settings and start an arbitrary command under resource control.  Although not initially configured (i.e., jobs will not ABEND if their steps take too long), ebpstartcommand may be set to any of the available commands in the bin directory within the webapps folder containing EBP.  For example, on Linux 64-bit systems configure ebpstartcommand=bin/ebpstart_linuxx86_64.

Checkpoint / Restart

EBP supports the notion of checkpoint / restart initiated through Web services. The checkpoint Web service will suspend a running job at its currently executing step and place it in a suspended state.  Dataset locks are kept, the temporary spool (working directory of a job), its output spool (held output datasets) and the input spool (input job) are retained.

Suspended jobs have their intermediate MSG and output spool datasets available for listing.

A subsequent restart Web service for a job ID will place a suspended job into the queued state where, when an initiator of that class becomes available, it will resume execution at the beginning of the step that was executing at the time of suspension. The restarted job will rescan the JCL and reprocess any symbolic parameters that may be in effect at the time of resumption. The MSG dataset will reflect the job execution both before and after the restart.

On some systems (e.g., Linux) it is possible not only to restart a job at the beginning of a step but to restart the program that was executing at the point of its last checkpoint. Such programs restart in the middle of their processing. Support relies on the availability of an external checkpoint/restart facility, such as Distributed MultiThreaded CheckPointing (DMTCP). Such process checkpointing requires additional EBP configuration to take advantage of these features, done on a per-job-class basis. Those classes whose jobs are eligible for DMTCP checkpoint/restart have the checkpointcommand-classname and restartcommand-classname configurations set to the command sequences needed to carry out a process-level checkpoint (which produces a ckpt file at regular intervals) and a later restart. For example, if Job Class R is defined for restartable jobs, the two configurations might be:

checkpointcommand-R: rm ckpt_*;/usr/local/bin/dmtcp_launch --ckpt-open-files
restartcommand-R:/usr/local/bin/dmtcp_restart *.dmtcp

For each step in class R jobs, EBP erases all prior checkpoint files created by DMTCP (ckpt_*.dmtcp) and then pre-pends the dmtcp_launch command to the command that invokes the step's program (e.g., a standard utility or COBOL application). If a job step is coming out of a suspended mode, the dmtcp_restart command will be executed instead of the restarted job step command and given the corresponding ckpt_*.dmtcp file.

A DMTCP coordinator must be started and listening for launch commands, and a checkpoint interval must be in place so that checkpoint files are created periodically. Start the coordinator with a command such as:

/usr/local/bin/dmtcp_coordinator -i 20

There are other dmtcp_launch and dmtcp_coordinator options that control other checkpointing parameters. See DMTCP documentation for more info.

Output Spoolers

EBP supports output spooler classes named A through Z and 0 through 9, as well as DEFAULT and CONSOLE, such that SYSOUT DD assignments and MSGCLASS designations can define the corresponding output datasets to be held in the output spool for later retrieval (hold), printed automatically to defined printers (lpr), redirected to Java log4j logging systems (log), sent to arbitrary spool programs (cat), or discarded entirely (none). Define the output classes through the EBP Configuration service with the following values:

Output Classes

Output Class   Default   Description
DEFAULT        hold      Hold output in output spool
CONSOLE        none      EBP console messages
L              log       Send to Java log4j logging system
P              lpr       Send to the CUPS printing system
Y              hold      Hold output in output spool
Z              none      Discard output
All others     hold      Same as DEFAULT

Set additional parameters to lpr, or define new spooling programs, by setting an output class descriptor to a command name with optional parameters, including redirection parameters. For example, the following:

output.P=lpr -P node22/printer44 -C EBP -T Job-Output
output.S=cat >>/tmp/all-&CLASS&WRITER&FORM&DEPT-output.txt
output.4=log EBP-4log

will configure

  1. output class P to send output to a remote printer,
  2. output class S to append all output to a file, where
         //SYSOUT DD SYSOUT=(S,AM1404,F1)
     sends output to /tmp/all-SAM1404F13250-output.txt (where &CLASS corresponds to the SYSOUT output class S, &WRITER to the SYSOUT writer class AM1404, &FORM to the SYSOUT form F1, and &DEPT to the PARMLIB parameter DEPT 3250),
  3. output class 4 to log to a Java Logger (e.g., log4j) named EBP-4log, and the
  4. output class CONSOLE configuration is such that EBP console messages (job start/stop, errors, warnings) are written to files (changed daily with a time stamp) in the output spool directory.


Security

EBP integrates with the supporting J2EE Application Server or Servlet Container to provide three layers of security:

  • Transport Level Security - ensuring data between EBP and the clients is secure
  • Authentication - validating who can use EBP
  • Authorization - indicating who can use what services against which objects

See Operating EBP in a Secure Environment for detailed information and examples for EBP, container and security subsystem configuration.

Once EBP and the container have been set up to provide the three layers of security, you add users to the container's security environment and assign them to roles. EBP classifies operations in terms of two roles: users and administrators. By default these role names are EBP-user and EBP-admin, but they can be changed to group names you may have defined in your mainframe security system, should you wish to continue to use those. Users submit jobs; administrators change configuration. Note that administrators don't ordinarily have the rights for user operations, i.e. the ability to submit jobs. But you can define users that take on multiple roles in most server environments. This allows you to define the same groupings that are common in mainframe resource access control environments such as RACF, Top Secret or ACF2. In many environments you can configure EBP to use the same RACF system for authentication and authorization using its Lightweight Directory Access Protocol (LDAP) facility. Or, you can extract users and groups from the mainframe system into an interchange format (LDIF) to be imported into local directory systems.

When configured appropriately, the EBP Web services (submit, configure, purge, etc.) are protected by the security system and jobs run under authenticated users. Generally, the EBP index.html home page and the sysoper.html System Operators Console and their components can be made accessible to all users. But your installation may set up other logon pages that set the security context before any part of EBP becomes visible.

When security is turned on, EBP will enforce a cascading set of job ownership rules. The authenticated user (in the EBP-user role) becomes the owner of jobs submitted by the submit Web service. Listing the status of jobs will include that user name in the output, visible on the System Operators Console. When JCL jobs are run by EBP, the &SYSUID symbolic parameter is set to the upper-case representation of the user (&sysuid is the lower-case representation). Only that same user, or one with both the EBP-user and EBP-admin roles, may cancel, purge, checkpoint or restart that job. Further, any jobs submitted to the internal reader, either in JCL or through the COBOL API, will take on the same user as the one that submitted the original job.
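For example, a job's DD statement can pick up the submitting user through the &SYSUID symbolic (dataset name illustrative; note the double period, where the first ends the symbolic and the second remains part of the name):

```jcl
//* &SYSUID resolves to the authenticated submitter, e.g. MNH00
//REPORT  DD DSN=&SYSUID..REPORT.DAT,DISP=(NEW,CATLG)
```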

Users authenticated by the EBP container will override the USER keyword that may be present on the JOB card; the USER and PASSWORD job card keywords are not used to authenticate against, for example, OpenLDAP. However, the USER and PASSWORD entries on the job card will still be used to authenticate access to resources such as database connections performed by DSNTIAD and IKJEFT01. Note that EBP cannot learn LDAP passwords in order to pass them to database connections. However, JNDI naming and JDBC datasources can be established to allow the container to validate database connections used by job steps.


IBM IMS DB Support

IMS-DB is supported in EBP with utilities such as DBDGEN and PCBGEN as well as the ability to run IMS DB Batch applications with the DFSRRC00 utility.


IBM JCS Support

The IBM VSE Job Control Subsystem and its flavor of JCL, called JCS, is fully supported. Define a JCS job class and start initiators. Then submit JCS decks with the submit Web service.

The following defines the syntax accepted by the subsystem:


IBM JEC Support

IBM introduced Job Execution Control with z/OS JES2 Version 2, Release 2 (August 2015). JEC allows the definition of interrelationships among jobs. JEC is fully supported within EBP. Define a JEC job class and start initiators in order to submit JEC definitions of JOBGROUPs.

Also introduced is the new SCHEDULE JCL statement.  Jobs can be given instructions on when to start (SCHEDULE HOLDUNTL parameter), indicate that they must begin by a certain time and date (SCHEDULE STARTBY parameter), that they should be initiated at the same time as another job (SCHEDULE WITH parameter) and conform to the more advanced scheduling criteria defined in a JEC JOBGROUP (SCHEDULE JOBGROUP parameter).

The following defines the syntax accepted by the subsystem:


EBP Web Services

The dashboard controls EBP through Web services that each define an HTTP URL with one or more parameters and return an XML document describing what was done. Report or held output datasets from jobs in the output queue are also retrieved through Web services. See EBP Web Services for a complete list, or one of the specific services listed below.

  • define - new job class
  • start - job initiators of a specific class
  • submit - jobs to the input queue to be executed by initiators
  • list - list job classes, job initiators, input jobs, job outputs, output datasets
  • cancel - a running or queued job
  • purge - output of a running or finished job
  • quiesce - suspend job initiation, complete running jobs
  • stop - running job initiators, halting job execution
  • undefine - existing job classes, stopping all initiators of that class
  • checkpoint - a job by saving the current state (at the most recent completed job step)
  • restart - a previously checkpointed (after last successful job step) or held job (at start)
  • config - the Elastic Batch Platform

A typical sequence of Web service invocation to define and start jobs is shown below.

  1. Define    A, JCL
  2. Start     A
  3. Submit    JCL
  4. List
  5. List      jobid
  6. List      outputid
  7. List      outputid, , MSG
  8. List      outputid, step, ddname

You can purge old output of completed jobs, cancel running or queued jobs and adjust configuration with other services.
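The numbered sequence above maps onto URL invocations along the following lines. The host, port and parameter names here are illustrative assumptions, not confirmed API; consult the EBP Web Services document for the exact forms:

```shell
# Hypothetical EBP base URL -- adjust for your deployment.
EBP="http://localhost:8080/ebp"

# The calls are commented out so the sketch reads offline; the parameter
# names are assumptions to be verified against the EBP Web Services list.
# curl "$EBP/define?class=A&type=JCL"        # 1. define job class A for JCL
# curl "$EBP/start?class=A"                  # 2. start an initiator for class A
# curl "$EBP/submit" --data-binary @job.jcl  # 3. submit a JCL deck
# curl "$EBP/list"                           # 4. list jobs and outputs
echo "$EBP"
```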


EBP Error Messages

Error messages generated by the EBP subsystem fall into a number of categories:

  • Errors during startup/shutdown
  • Errors issuing Web service requests
  • Errors scanning input job datasets
  • Errors executing JCL
  • Errors occurring during execution of standard utilities
  • Errors as a result of executing scripts, programs or procedures referenced by input jobs

The list of all error messages may be found here:

JCL Examples

See the following document for a list of sample JCL and script job inputs.

EBP System Operators Console

See the following document for a description of the EBP system operators console.


EBP Installation

EBP is a Java-based application distributed as a zip file. The Announcements forum contains the latest information on new features, functions and fixes, as well as the download location. The zip archive contains two files:

  • ebp##vv.vv.vv.war -- the Web Application Archive used with Java Application Servers and Java Servlet Containers
  • ebp-standalone-vv.vv.vv.jar -- the Java Application Archive used as a stand-alone version of EBP without the need of other software

The Elastic Batch Platform operates under a Java EE server or a simpler servlet container such as Apache Tomcat. Install Tomcat on Windows, UNIX or Linux (Heirloom-managed clouds have EBP pre-installed and configured), then deploy the EBP system (ebp##vv.vv.vv.war) by using the server's administration interface or by copying the deployment file into its webapps directory. See the Announcements forum for the latest EBP news, downloads and updated installation instructions, and see the Wildfly or Weblogic FAQs for special installation instructions for those systems.

The EBP subsystem is a licensed program product of Heirloom Computing, Inc. Ensure that the registration file for your account is installed in either (1) the same webapps/ebp folder where EBP has been deployed, or (2) the home directory of the user under which the application server operates (e.g. /home/tomcat8 or /var/lib/tomcat8 on Linux Tomcat systems operating under that user id). Alternatively, set the environment variable ebppropdir to the directory containing the license prior to starting up EBP.
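The ebppropdir variant can be sketched as follows; the directory path is an example only, and the server start command varies by server:

```shell
# Point EBP at the directory holding your registration (license) file
# before the application server starts. The path is an example only.
export ebppropdir=/opt/heirloom/license

# ...then start your application server as usual, e.g. for Tomcat:
# $CATALINA_HOME/bin/startup.sh
echo "$ebppropdir"
```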

Start your application server or servlet container as prescribed by the manufacturer's documentation. Once EBP is operational, it will begin accepting RESTful Web service requests from any client. Although not the primary interface to EBP, the default index page is acceptable for testing basic operations. If your server is listening on port 8080 you can access the EBP index page with:

Jobs, classes and started initiators persist across restarts of the application server. If the app server stops while a job is running, the job will be terminated and requeued to begin again after the server restarts (depending on your job restart parameters). A redeploy of EBP into the app server will reset the state, as will the quiesce Web service call with the reset indication; all jobs will then be flushed from the queue.

The EBP default configuration is usually fine for running the subsystem but you can change that either by modifying the file deployed with the subsystem under the server's webapps/ebp folder or by issuing the config Web service. Settings persist through application server restarts unless a redeploy overwrites them.

See the following document for EBP configuration settings:

See your Java Application Server (Apache Geronimo, Oracle Weblogic, IBM Websphere, Red Hat JBOSS) documentation for installation and configuration of WAR files.  Most have a Web interface with a "Deploy" operation or simply allow you to drop war files into a "deploy" or "webapps" folder.  EBP begins operating immediately upon start-up of the container.  Depending on the configuration, you should be able to access the RESTful Web Services control point of EBP from a browser with the Web address:


Alternatively, you can run EBP without the need of a container.  Start EBP in stand-alone mode with the command:

java -jar ebp-standalone-vv.vv.vv.jar

Standalone EBP has simple command-line options to control important settings, which can be displayed with the --help option:

java -jar ebp-standalone-vv.vv.vv.jar --help
Heirloom Computing Elastic Batch Platform
Usage: java -jar ebp.jar options...
--port 8080 - defines the Web service port
--home <jar location> - EBP Home directory
--logs logs - log directory
--realm - login properties file
--context / - URL path
--help - this message


Use the --port option to set the HTTP listening port. The --home option sets the home directory, if different than the location of the jar file itself. The --logs option can change the location of the message logs to a directory other than the home directory. The --context option will change the URL path from the default of "/" (the default when running under Application Servers is typically "/ebp").

java -jar ebp-standalone-vv.vv.vv.jar --port 80 --context /ebp --home /usr/share/EBP

sets up the environment such that the following Web address will access the home page:

Stand-alone EBP uses the Eclipse Jetty servlet container. Further configuration is available through Eclipse Jetty system property settings.

Demo Mode

Elastic Batch Platform will enter a demonstration mode if no license is installed, or if the license is invalid or has expired. In demo mode, EBP is pre-configured with a single Job Class (A) and an Initiator for that class. Other job classes and initiators are restricted from being configured, as are EBP-Plex mode (multi-node scaling) and interaction with the Elastic Scheduling Platform (ESP). The define/undefine/start/stop Web services are restricted, and job state (input, output spools) does not persist between EBP restarts. All other EBP features and functions are supported without restriction.

Backup and Recovery

EBP configuration and state information is stored in the EBP installation directory within the servlet engine or Java application server. When starting EBP in standalone mode with Eclipse Jetty, that is the webapps folder under the current directory. In Tomcat, it is stored under the Tomcat installation directory in webapps/ebp (or webapps/ebp##version). Important files and directories that change while EBP is operating:
  • - configuration information (classlib, datalib settings)
  • ebp_object_store1.xml - state information (defined job classes, started initiators, jobs and outputs)
  • ebp_object_store2.xml - prior state information (defined job classes, started initiators, jobs and outputs)
  • job_input_queue/ - jobs submitted but not yet executed
  • job_output_queue/ - jobs finished executing
  • temp_queue/ - jobs executing

To take a quick backup of configuration settings, issue the config Web service and save the XML retrieved:

To restore from a backup, stop EBP, replace the files and directories listed above, and restart EBP. To recover from a failure requiring re-installation, use the config service XML saved above to re-issue the configuration from the home page, or issue individual Web service requests with one or more configuration settings:

  • http://localhost:8080/ebp/config?errorlevel=W&classlib.1=%2Fhome%2Fprod%2FHCI%2Fbatch%2Ftest%2Febp

If you wish to preserve both configuration and state information between upgrades of EBP, use the server's ability to "Stop" the EBP service (or shut down the entire servlet engine or Java application server), zip or tar archive the files and folders mentioned above, uninstall EBP, install the new EBP, unzip the archive into the new EBP's home directory, and restart the EBP service. Classes, initiators, and input and output jobs will then be visible to the new EBP.
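The archive step can be sketched as below, using the state files and folders listed earlier. EBP_HOME is an assumed deployment path, and the placeholder setup lines exist only so the sketch runs standalone; on a real system the files already exist and EBP must be stopped first:

```shell
# Assumed deployment path -- on a real system something like
# /var/lib/tomcat8/webapps/ebp. Stop the EBP service before archiving.
EBP_HOME="${EBP_HOME:-demo_ebp_home}"

# (Placeholder state files for a dry run only; a live EBP home
#  already contains these.)
mkdir -p "$EBP_HOME"/job_input_queue "$EBP_HOME"/job_output_queue "$EBP_HOME"/temp_queue
touch "$EBP_HOME"/ebp_object_store1.xml "$EBP_HOME"/ebp_object_store2.xml

# Archive the state that should survive the upgrade.
tar czf ebp-state-backup.tar.gz -C "$EBP_HOME" \
    ebp_object_store1.xml ebp_object_store2.xml \
    job_input_queue job_output_queue temp_queue

# After installing the new EBP, restore into its home and restart EBP:
# tar xzf ebp-state-backup.tar.gz -C "$NEW_EBP_HOME"
tar tzf ebp-state-backup.tar.gz
```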
