How it runs and why it survives: The IBM mainframe

Mainframe computers are often seen as ancient machines—practically dinosaurs. But mainframes, which are purpose-built to process enormous amounts of data, are still extremely relevant today. If they’re dinosaurs, they’re T-Rexes, and desktops and server computers are puny mammals to be trodden underfoot.

It’s estimated that there are 10,000 mainframes in use today. They’re used almost exclusively by the largest companies in the world, including two-thirds of Fortune 500 companies, 45 of the world’s top 50 banks, eight of the top 10 insurers, seven of the top 10 global retailers, and eight of the top 10 telecommunications companies. And most of those mainframes come from IBM.

In this explainer, we’ll look at the IBM mainframe computer—what it is, how it works, and why it’s still going strong after over 50 years.

Setting the stage

Mainframes descended directly from the technology of the first computers in the 1950s. Instead of being streamlined into low-cost desktop or server use, though, they evolved to handle massive data workloads, like bulk data processing and high-volume financial transactions.

Vacuum tubes, magnetic core memory, magnetic drum storage, tape drives, and punched cards were the foundation of the IBM 701 in 1952, the IBM 704 in 1954, and the IBM 1401 in 1959. Primitive by today’s standards, these machines handled the scientific calculations and data processing that would otherwise have to be done by hand or with mechanical calculators. There was a ready market for these machines, and IBM sold them as fast as it could make them.

In the early years of computing, IBM had many competitors, including Univac, Rand, Sperry, Amdahl, GE, RCA, NEC, Fujitsu, Hitachi, Unisys, Honeywell, Burroughs, and CDC. At the time, all of these other companies combined accounted for about 20 percent of the mainframe market, and IBM claimed the rest. Today, IBM is the only mainframe manufacturer that matters and that does any kind of business at scale. Its de facto competitors are now the cloud and clusters, but as we'll see, it's not always cost-effective to switch to those platforms, and they're not able to provide the reliability of the mainframe.

Built-in redundancy

By any standard, mainframes are enormous. Today’s mainframe can have up to 240 server-grade CPUs, 40TB of error-correcting RAM, and many petabytes of redundant flash-based secondary storage. They’re designed to process large amounts of critical data while maintaining a 99.999 percent uptime—that’s a bit over five minutes' worth of outage per year. A medium-sized bank may use a mainframe to run 50 or more separate financial applications and supporting processes and employ thousands of support personnel to keep things running smoothly.

Most mainframes process high-volume financial transactions, which include things like credit card purchases at a cash register, withdrawals from an ATM, or stock purchases on the Internet.

A bank’s lifeblood isn't money—it’s data. Every transaction a bank makes involves data that must be processed. A debit card transaction, for instance, involves the following data that must be processed:

- Retrieving a user’s debit account info

- Validating the user ID and PIN

- Checking the availability of funds

- Debiting the user’s account for the transaction amount

- Crediting the seller’s account

All this must happen in seconds, and banks have to ensure they can maintain a rapid response even during high-volume events such as shopping holidays. Mainframes are designed from the ground up to provide both redundancy and high throughput for these purposes. High-speed processing is no good if processing stops during business hours, and reliable processing is no good if people have to wait minutes for a transaction to process.
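
To make those steps concrete, here is a minimal COBOL-style sketch of the debit logic. It's purely illustrative: the record layouts, field names, and values are invented for this example, and a real system would read them from data sets or a database rather than hard-coding them.

IDENTIFICATION DIVISION.
PROGRAM-ID. DEBITDEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
*ILLUSTRATIVE FIELDS ONLY - A REAL SYSTEM READS THESE FROM
*ACCOUNT DATA SETS OR A DATABASE
01 ACCOUNT-BALANCE     PIC 9(9)V99 VALUE 500.00.
01 SELLER-BALANCE      PIC 9(9)V99 VALUE 0.
01 TRANSACTION-AMOUNT  PIC 9(7)V99 VALUE 42.50.
01 PIN-ON-FILE         PIC 9(4)    VALUE 1234.
01 PIN-ENTERED         PIC 9(4)    VALUE 1234.
PROCEDURE DIVISION.
*VALIDATE THE PIN BEFORE TOUCHING ANY BALANCES
    IF PIN-ENTERED NOT = PIN-ON-FILE
        DISPLAY 'TRANSACTION DECLINED: BAD PIN'
        STOP RUN
    END-IF.
*CHECK THE AVAILABILITY OF FUNDS
    IF TRANSACTION-AMOUNT > ACCOUNT-BALANCE
        DISPLAY 'TRANSACTION DECLINED: INSUFFICIENT FUNDS'
        STOP RUN
    END-IF.
*DEBIT THE BUYER AND CREDIT THE SELLER
    SUBTRACT TRANSACTION-AMOUNT FROM ACCOUNT-BALANCE.
    ADD TRANSACTION-AMOUNT TO SELLER-BALANCE.
    DISPLAY 'TRANSACTION APPROVED'.
    STOP RUN.

A production system would also log every step for auditing and wrap the debit and credit in a single transaction so that a failure rolls both back.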

When you process a financial transaction, it means you’re making money. If you’re processing a lot of transactions, you need to spend a lot of money on redundancy to keep things running smoothly. When parts inevitably fail, the show must go on. That’s where mainframes’ built-in redundant processing comes in.

RAM, CPUs, and disks are all hot-swappable, so if a component fails, it can be pulled and replaced without requiring the mainframe to be powered down. In fact, mainframes are divided into independent partitions, each with separate RAM, storage, CPUs, and even different operating systems, allowing applications to run continuously while certain partitions receive maintenance in the form of OS patches, hardware fixes, and upgrades.

Intel-based servers can support error-correcting code (ECC) memory, which fixes bad memory bits in RAM; Telum goes further. Its memory subsystem can detect and recover from failed hardware memory channels, failed DIMM memory chips, and failed CPU operations.

IBM's Christian Jacobi explained the process:

"When we detect an error in processing in a CPU core, the core can go through a transparent recovery action. This is a complete core reset—almost like a mini-reboot of the core, but the program state, program location, and all the register contents are recovered. So after recovery, we continue running wherever we were [running] in the program. It's completely transparent to the software layers. This is a “core recovery.” If a core goes through too many restarts, the contents of the core can be moved to a spare core. This is a “brain transplant.”

A mainframe CPU can recover from bad memory bits, memory channels, memory chips, and CPUs, all transparent to the OS and software."

 

Dynamically reconfigurable system partitioning

A mainframe can be divided into logical partitions (LPARs), equivalent separate mainframe units created through the use of I/O controllers, mainframe permissions, and user-configurable CPUs. Each LPAR can run a separate, isolated instance of Z/OS, the mainframe operating system.

These instances can be used for development, testing, or running separate applications. LPARs can also run different operating systems such as Linux, and they have completely separate hardware resources. If one crashes or has to go down for maintenance, the other partitions are unaffected. LPARs are also separated by permissions; one group of users can have access to the test region LPAR but not a production LPAR, for instance.

IBM uses an efficient and low-latency protocol called “parallel sysplex” to keep different mainframes (or different LPARs on the same mainframe) “coupled” with each other for workload distribution, communication, or recovery/failover. Up to 32 systems can be clustered in parallel to achieve near-linear speedup under load, and CPUs can be assigned for either performance or redundancy. Data locking and queueing methods are used to maintain data integrity while processing, and implementing parallel sysplex in the OS requires a holistic approach through the entire OS stack.

The ability to dynamically add and remove capacity means mainframes can be responsive when dealing with workload spikes. Jacobi said that IBM mainframe support teams saw processing demand fluctuate as customers dealt with supply chain scarcity during the pandemic.

And disaster recovery is not an afterthought for companies that run large loads on mainframes. The same kind of OS support that allows CPUs to be dynamically added to a workload allows a mainframe workload to migrate between geographically separated data centers. Companies can thus hedge their processing against storms, terrorist attacks, or basic infrastructure outages, which is especially valuable to companies in regulated industries such as banking and insurance, where federal law may dictate disaster recovery.

 

The CPU: Telum

The latest generation of the Z-series CPU that mainframes are built on is called Telum. A modern chip clocked at 5.2 GHz, it’s built on Samsung’s 7 nm fabrication process, and it’s optimized for single-thread performance.

What differentiates Telum from other performance CPUs from Intel and AMD? In short, it has more of everything: a higher clock speed, more cache, more specialized on-CPU hardware, and more PCI channels for moving data.

Today’s IBM mainframe CPUs implement IBM’s z/Architecture, an instruction set the company has been evolving for decades. The current Telum CPU is powerful. Each core is two-way multithreaded, sports 256KB of L1 cache and 32MB of L2 cache, and supports superscalar instruction execution. Telum has eight cores per chip, two chips per socket, and four sockets per drawer; four drawers make up a system. A fully loaded mainframe can support 250 cores, 190 of which are user-controlled. I/O is handled by two PCI Express 4.0 controllers.

The new Telum cache design is optimized for heterogeneous single-processor performance with a new and unique cache management strategy. Each core has its own 32MB of L2 cache. When a cache line is evicted from an L2 cache, it’s moved to another L2 cache and tagged as an L3 cache line.

This means eight L2 caches can be combined into a 256MB virtual shared L3 cache that’s accessible by any of the eight cores. Compare this to AMD’s Zen 3 chiplet, where each of the eight cores has 512KB of L2 cache and a total of 32MB of L3 cache.

When a cache line is evicted from the virtual L3 cache, the line can find another core’s cache in the system, thus creating a virtual L4 cache. Combining the L2 caches on a system gives a spectacular 8192MB of virtual L4 cache. IBM claims an overall per-socket performance improvement of greater than 40 percent from the previous z15 CPU design.

“Cache management is an example of new design and investment,” Jacobi told me. “Which is overall extremely successful and went smoother than expected. We always knew that when every core does the same thing, it uses the same cache. But the real work doesn’t behave this way. Every CPU uses cache slightly differently. As we ran the real workloads, we were happy to see this fully confirmed in our measurements for L3 and L4. The Z16’s performance has become a new plateau from which to learn and develop from.”

Telum also has an on-chip hardware AI accelerator. Naturally, this part is targeted at large enterprise data tasks such as rapid processing of credit card fraud cases. One hundred and twenty-eight processor tiles are combined into an array supported by floating-point matrix math functions. The AI functions support both inference processing and machine learning (or training).

These on-chip AI functions are tightly coupled to the L2 cache for low-latency data pre-fetches and write-backs. The virtual L3 and L4 cache management system helps the AI processing to scale almost linearly across the multiple CPUs in a mainframe drawer, allowing an enterprise to process tens of thousands of inference transactions per second.

Additional mainframe hardware-accelerated functions include:

- zEDC hardware support for lossless compression with minimal CPU usage. The Java Deflater class provides 15x throughput over the previous Z implementation.

- Integrated Accelerator for Z Sort, which provides hardware-accelerated sorting. DB2 database benchmarks show a 12 percent decrease in elapsed time, with up to a 40 percent reduction in CPU usage.

- Pervasive Encryption, the hardware-accelerated implementation of the widely used AES (Advanced Encryption Standard). It can encrypt enterprise data with only 3 percent additional CPU overhead. It’s called Pervasive Encryption because it can encrypt anything and everything on your mainframe as part of any workflow. Hardware acceleration is also available for SHA, DES, GHASH, and CRC32 algorithms.

Secondary storage on mainframes is next-level huge, which is essential for companies processing mountains of time-critical data. You don’t plug disks into a mainframe; you plug in storage arrays. One such array is the Hitachi G1500 Virtual Storage Platform. It can be fitted with spinning disks, SSDs, or NVMe drives, and it’s rated at 6.7 petabytes of capacity and 48GBps of throughput. For comparison, a single fast NVMe drive maxes out at around 7GBps. These arrays can be configured in various RAID configurations for redundancy, and a typical mainframe will have multiple storage arrays per LPAR, with separate storage arrays for backups.

Multiple dedicated I/O controllers and CPU support for the PCIe 5.0 bus ensure that throughput on a mainframe is fast. Primary storage is on arrays of SSDs or NVMe drives.

 

The OS: Z/OS

To make the most of a purpose-built CPU, you need a purpose-built operating system. Z/OS—pronounced “zee/zed oh ess,” depending on your location—is the mainframe operating system. The 64-bit OS was developed by IBM for the Z/Architecture family of mainframe computers, and it has compatibility with the older MVS mainframe OS. It was released in 2000 and is backward-compatible with older 24-bit and 31-bit applications. Licensed and leased from IBM, Z/OS is proprietary and closed source (the company also offers a personal version of Z/OS for use on PC desktops).

The OS natively supports applications compiled in COBOL, C, C++, Fortran, and PL/1, as well as Java applications in Java application servers. As discussed earlier, Z/OS can be partitioned into separate logical partitions, each with separate hardware, permissions, workloads, and even different operating systems. Red Hat Linux is a supported guest OS.

Z/OS provides a security system implemented through Security Server, Resource Access Control Facility (RACF), and Pervasive Encryption. Security Server controls the access of users and restricts the functions that an authorized user can perform. RACF controls access to all protected Z/OS resources.

This includes access to applications, data files, APIs, and hardware devices. Pervasive Encryption provides the ability to encrypt any data file with strong AES encryption, and encrypted files are transparently decrypted and re-encrypted when accessed. Encryption is done in hardware with minimal load to the CPU. A user with access to the file but no access to the decryption key would see only the random bytes of the encrypted file.

On a system that may have 1,000 supporting personnel, no one person or role is expected to know or run the entire OS. Mainframe support is often divided between many departments, including enterprise storage, permissions and access, cryptography, daily system operations, applications software support, application testing, application deployments, hardware support, and reporting.

Files

Files on a mainframe are called data sets. Each file is referenced through its complete path, which consists of up to eight “qualifiers,” each made of up to eight characters. Qualifiers can contain numbers, letters, and the symbols @, #, and $.

Here’s an example of a full data set path:

TEST1.AREA1.GHCC.AMUST#.T345.INPUT.ACC$.FILEAA

At the byte level, all characters are in Extended Binary Coded Decimal Interchange Code, or EBCDIC, an 8-bit character encoding alternative to ASCII. Sorting EBCDIC puts lowercase letters before uppercase letters and letters before numbers, exactly the opposite of ASCII.
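
The collation difference is easy to demonstrate in COBOL, which compares character data using the native EBCDIC code points when compiled on Z/OS. This is a minimal, hypothetical sketch; compiled for an ASCII platform, both comparisons would be false.

IDENTIFICATION DIVISION.
PROGRAM-ID. COLLDEMO.
PROCEDURE DIVISION.
*IN EBCDIC, 'a' IS X'81', 'A' IS X'C1', AND '1' IS X'F1',
*SO BOTH OF THESE COMPARISONS ARE TRUE ON Z/OS
    IF 'a' < 'A'
        DISPLAY 'LOWERCASE SORTS BEFORE UPPERCASE'
    END-IF.
    IF 'A' < '1'
        DISPLAY 'LETTERS SORT BEFORE NUMBERS'
    END-IF.
    STOP RUN.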

Z/OS also supports its own zFS filesystem, which provides a hierarchical file system for UNIX and Linux programs running under Z/OS.

Although Z/OS includes the X Window System, applications have no concept of bitmapped windows. All display is through character-based screens using 3270 emulated terminals. This is the ancient and proverbial “green screen.” Z/OS applications with screen I/O are generally coded to 24 rows by 80 columns.

Z/OS evolved and matured on a separate evolutionary course from UNIX and Linux.

 

Mainframe software stacks

The traditional bread and butter of a mainframe is COBOL programs, data files, and JCL, or Job Control Language. Most companies run large COBOL programs developed over the decades, and programs are run and data is processed in non-interactive jobs. The concept of batch computer jobs goes back to the '50s and '60s and predates the era of interactive, timeshared minicomputers. A basic job consists of the computer code to be run, the input data to be processed, an understanding of how to process the output data, and job commands that communicate the processing steps to the Z/OS operating system. Large amounts of data can be processed very efficiently in non-interactive batches. Below are some examples of JCL and COBOL code.

Job Control Language

JCL was first introduced in the mid-1960s, and its syntax has remained largely unchanged since then. To the untrained eye, it’s like reading hieroglyphics, but it’s very powerful and remains a preferred method of running applications.

For every job that is submitted to run on Z/OS, the job control language spells out what programs to run, where to find the input data, how to process the input data, and where to put the output data. JCL has no defaults, so everything must be written out in sequential job control statements. JCL was originally entered on punched cards. It retains that 80-column format.

Here's a simple JCL example:

//EXAMPLE JOB 1

//MYEXAMPLE EXEC PGM=SORT

//INFILE DD DISP=SHR,DSN=ZOS.TEST.DATA

//OUTFILE DD SYSOUT=*

//SYSOUT DD SYSOUT=*

//SYSIN DD *

SORT FIELDS=(1,3,CH,A)

Each JCL DD (Data Definition) statement is used to associate a z/OS data set (file) with a ddname, which is recognized by the program as the input or output.

When submitted for execution, EXAMPLE is the jobname the system associates with this workload. MYEXAMPLE is the stepname, which instructs the system to execute the SORT program. On the DD statement, INFILE is the ddname. The INFILE ddname is coded in the SORT program as a program input. The data set name (DSN) on this DD statement is ZOS.TEST.DATA; on the mainframe, this single dotted name serves as both the path and the file name. The data set can be shared (DISP=SHR) with other system processes, and its contents are the SORT program's input. The OUTFILE ddname is the SORT program output.

SYSOUT=* specifies to send system output messages to the Job Entry Subsystem (JES) print output area. It's also possible to send the output to a data set. SYSIN DD * is an input statement that specifies that what follows is data or control statements. In this case, it's the sort instruction telling the SORT program which fields of the input data records are to be sorted.

All jobs require the three main types of JCL statements: JOB, EXEC, and DD. A job defines a specific workload for z/OS to process. Because JCL was originally designed for punched cards, the details of coding JCL statements can be complicated. The general concepts are quite simple, though, and most jobs can be run using a very small subset of these control statements.

COBOL

COBOL is an early computer language that was designed in 1959. Like other computer languages of its generation, it is imperative, procedural, and line-oriented. It was designed to be self-documenting, with an English-like syntax that unfortunately makes it extremely verbose. Its early popularity among business and finance communities resulted in enormous dedicated code bases that remain in heavy, daily use today.

Although COBOL code may appear verbose and unwieldy compared to other popular languages, it’s not a slow language to run. IBM continues to release updates to the IBM Enterprise COBOL for Z/OS compiler. Version 6 of this compiler provides additional hardware optimizations for the Telum architecture and extensions to access JSON and XML and provide Java/COBOL interoperability. With this support, COBOL applications can better interact with modern web-oriented services and applications. Infrastructure also exists to run COBOL programs in cloud applications.
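
As a small illustration of those extensions, here's a hedged sketch of the JSON GENERATE statement, which converts a COBOL record into JSON text. The record layout and field names are invented for the example.

IDENTIFICATION DIVISION.
PROGRAM-ID. JSONDEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
*A HYPOTHETICAL PAYMENT RECORD TO BE TURNED INTO JSON
01 PAYMENT-RECORD.
   05 ACCOUNT-ID    PIC 9(5)    VALUE 12345.
   05 PAY-AMOUNT    PIC 9(5)V99 VALUE 100.25.
   05 CURRENCY-CODE PIC X(3)    VALUE 'USD'.
01 JSON-OUTPUT      PIC X(500).
01 JSON-LENGTH      PIC 9(5) COMP.
PROCEDURE DIVISION.
*CONVERT THE COBOL RECORD INTO JSON TEXT IN JSON-OUTPUT
    JSON GENERATE JSON-OUTPUT FROM PAYMENT-RECORD
        COUNT IN JSON-LENGTH
        ON EXCEPTION DISPLAY 'JSON GENERATE FAILED'
        NOT ON EXCEPTION DISPLAY JSON-OUTPUT(1:JSON-LENGTH)
    END-JSON.
    STOP RUN.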

Here’s an example of a COBOL program that takes a simple data file as input, sorts the file, and outputs the result:

* Comments start with an asterisk (*) in Column 7

* COBOL programs have a Section, Paragraph, Sentence, Statements structure.

IDENTIFICATION DIVISION.

PROGRAM-ID. SORTEXAMPLE.

ENVIRONMENT DIVISION.

INPUT-OUTPUT SECTION.

FILE-CONTROL.

SELECT INPUT-FILE ASSIGN TO INFILE.

SELECT OUTPUT-FILE ASSIGN TO OUTFILE.

SELECT WORK-FILE ASSIGN TO WORKFILE.

DATA DIVISION.

FILE SECTION.

FD INPUT-FILE.

*DEFINE THE INPUT FIELDS

01 INPUT-EMPLOYEE.

*DEFINE THE STORAGE AND TYPE OF EMPLOYEE-ID-INPUT FIELD,

*NUMBER DATA OF 5 DIGITS

05 EMPLOYEE-ID-INPUT PIC 9(5).

*DEFINE THE STORAGE AND TYPE OF EMPLOYEE-LAST-NAME FIELD,

*ALPHANUMERIC DATA WITH 25 CHARACTERS

05 EMPLOYEE-LAST-NAME PIC A(25).

*USE THE SAME INPUT FIELDS FOR OUTPUT FIELDS

FD OUTPUT-FILE.

01 EMPLOYEE-OUT.

05 EMPLOYEE-ID-OUTPUT PIC 9(5).

05 EMPLOYEE-FIRST-NAME PIC A(25).

*USE THE SAME INPUT FIELDS FOR TEMP WORKING DATA

SD WORK-FILE.

01 WORK-EMPLOYEE.

05 EMPLOYEE-ID-WORK PIC 9(5).

05 EMPLOYEE-NAME-WORK PIC A(25).

*USE EMPLOYEE-ID-WORK AS THE SORTING KEY, SORT IN ASCENDING ORDER

PROCEDURE DIVISION.

SORT WORK-FILE ON ASCENDING KEY EMPLOYEE-ID-WORK

USING INPUT-FILE GIVING OUTPUT-FILE.

DISPLAY 'Finished Sorting'.

STOP RUN.

This JCL example will run the sample COBOL program after it is compiled. Note that JCL will often reference system configurations that are customized to an organization, which are not portable.

//SAMPLE JOB (TESTJCL,XXXXXX),CLASS=A,MSGCLASS=C

//STEP1 EXEC PGM=SORTEXAMPLE

//INFILE DD DSN=INPUT-FILE-NAME,DISP=SHR

//OUTFILE DD DSN=OUTPUT-FILE-NAME,DISP=SHR

//WORKFILE DD DSN=&&TEMP,UNIT=SYSDA,SPACE=(CYL,(1,1))

SORTEXAMPLE is a very small example of COBOL data processing. A typical bank application might be for processing active loans. For such a program, you would need input data files for active customers, loan types, monthly loan payments, and interest rates. You would create output data for the remaining principal balance, forbearance, delinquency notices, updates to credit monitoring companies, and more.

The code for an application of this type could easily exceed a million lines. You would need business analysts to understand and maintain the business logic and developers to implement changes to the code. You would need system designers to understand and manage the data files and permissions; they would know how the application interacts with other systems and how the JCL is configured to run the batch jobs. And you would need an application manager or owner to decide how and when changes are made to the application.
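
As a toy illustration of the arithmetic such a loan program performs, here's a hedged COBOL sketch that applies one month of interest and one payment to a principal balance. The field names, rate, and amounts are invented; a real application would read them from the customer, loan type, and payment data sets described above.

IDENTIFICATION DIVISION.
PROGRAM-ID. LOANDEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
*HYPOTHETICAL LOAN FIELDS FOR A SINGLE ACCOUNT
01 PRINCIPAL-BALANCE  PIC 9(9)V99 VALUE 250000.00.
01 ANNUAL-RATE        PIC 9V9999  VALUE 0.0600.
01 MONTHLY-PAYMENT    PIC 9(7)V99 VALUE 1798.65.
01 MONTHLY-INTEREST   PIC 9(7)V99.
PROCEDURE DIVISION.
*ONE MONTH OF INTEREST = BALANCE * (ANNUAL RATE / 12)
    COMPUTE MONTHLY-INTEREST ROUNDED =
        PRINCIPAL-BALANCE * ANNUAL-RATE / 12.
*ADD THE INTEREST, THEN SUBTRACT THE PAYMENT RECEIVED
    COMPUTE PRINCIPAL-BALANCE =
        PRINCIPAL-BALANCE + MONTHLY-INTEREST - MONTHLY-PAYMENT.
    DISPLAY 'REMAINING PRINCIPAL: ' PRINCIPAL-BALANCE.
    STOP RUN.

In the real application, this calculation would run inside a batch job over millions of loan records, with the delinquency and credit reporting steps described above driven by the results.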

 

CICS

Mainframes also run application servers as part of the software stack. Application servers are like all-inclusive resorts for your application code, and they provide many functions for it. For instance, if you want data, your request is made at a high level; it's the equivalent of calling a waiter to take your order. Writing code in the container of an application server helps focus development effort on the application business logic instead of on the system-level details that batch programs must handle themselves.

CICS (Customer Information Control System) is a mixed language application server that can run COBOL, Java, PHP, C, and other third-party vendor languages. It provides high-volume online transaction processing and is widely used by financial services companies such as banks, insurance companies, credit card companies, and airlines. Although CICS is an alternative to batch processing, it is often architected to run in conjunction with batch data files. CICS presents a high-performance API-based container where dedicated code can run in place.

CICS provides the following functions:

- Processing inbound and outbound web communication.

- Managing input data queues when order of transactions is required.

- Processing input data formats such as JSON and XML into native application input data.

- Providing security functions, including authentication, authorization, and encryption.

- Running multiple languages in the same server to increase performance and reduce overhead.

- Providing a common CICS LINK API with shared memory for high-performance communication between applications.

- Providing atomic separation of processing should parts of a transaction fail and need to be rolled back.

CICS can process real-time data coming from web and mobile applications as well as ATMs, and it provides 99.999 percent uptime and is highly scalable for peak transaction periods such as shopping holidays and days after a bank holiday.
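
To give a flavor of what CICS application code looks like, here's a minimal, hypothetical COBOL sketch that uses the CICS LINK API mentioned above to call another program through a shared communication area (COMMAREA). The program name DEBITPGM and the COMMAREA layout are invented for the example.

IDENTIFICATION DIVISION.
PROGRAM-ID. CICSDEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
*A HYPOTHETICAL COMMAREA SHARED WITH THE CALLED PROGRAM
01 WS-COMMAREA.
   05 WS-ACCOUNT-ID  PIC 9(10)   VALUE 1234567890.
   05 WS-AMOUNT      PIC 9(7)V99 VALUE 42.50.
   05 WS-RETURN-CODE PIC 9(2)    VALUE 0.
PROCEDURE DIVISION.
*LINK TO THE (HYPOTHETICAL) DEBIT-POSTING PROGRAM; CICS PASSES
*THE COMMAREA AND RETURNS CONTROL HERE WHEN IT FINISHES
    EXEC CICS LINK
        PROGRAM('DEBITPGM')
        COMMAREA(WS-COMMAREA)
        LENGTH(LENGTH OF WS-COMMAREA)
    END-EXEC.
    IF WS-RETURN-CODE NOT = 0
        EXEC CICS ABEND ABCODE('DBER') END-EXEC
    END-IF.
*END THE TASK AND RETURN CONTROL TO CICS
    EXEC CICS RETURN END-EXEC.

Before the COBOL compile step, a CICS translator converts the EXEC CICS statements into calls to CICS services, which is part of what makes the application server feel like an all-inclusive resort for the code.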

 

Why mainframes survive

Mainframe ownership is not without challenges. Only the largest companies use them, as they're expensive to own and operate—they cost between $250,000 and $4 million, and each one is custom-built. They require a large support staff that could number in the hundreds or thousands. Technical support staff skilled in COBOL, JCL, mainframe storage, hardware maintenance, mainframe operations, and applications are aging out of the workforce. COBOL is no longer widely taught, and few universities have hands-on mainframe topics in their curricula.

Many mainframe applications are now decades old, and although COBOL runs efficiently, software needs to evolve to address the changing needs of customers. Maintaining “dusty deck” applications requires a lot of effort, and older banks and insurance companies are at a competitive disadvantage compared to the nimble, younger fintech companies.

Still, people have been predicting the death of the mainframe for decades, and it continues to survive because it fills the important niche of processing high-volume, business-critical financial transactions. IBM mainframes have the ultimate customer “lock-in.” Fortune 500 companies have been using the same COBOL code bases for decades. The original developers have long since moved on or even died of old age, and rewriting tens of millions of lines of mission-critical code is a very difficult, very expensive proposition.

Many companies have tried and failed at conversion. But converting the code is just part of the overall migration process. You also have to convert your data, develop new processes and procedures around the new applications, train new people on the processes and procedures, stress-test all systems, and then make the cut-over to run in production.

All this has to happen on a defined schedule and budget. It’s the equivalent of getting a heart transplant where your chance of surviving is less than 50 percent. This kind of migration takes companies years to complete and is fraught with challenges. Many companies have spent tens or hundreds of millions of dollars trying to upgrade these kinds of systems and failed. The safer option is to simply continue paying the licensing fees on the mainframe hardware, OS, and applications and to keep the hundreds or thousands of support personnel on staff.

Alternatives to the mainframe do exist, though. FIS Global, a major provider of mainframe banking software, provides a migration path off of the mainframe with its FIS Modern Banking Platform. The company has architected a stack of cloud components to perform the same functions as its mainframe products, and it rewrote its COBOL mainframe applications in Java and migrated the flat file data to relational databases. The Java code runs in the JBOSS application server, which is deployed in a Docker container. The container is managed in cloud servers with the Kubernetes container manager, which runs on Linux or Windows in a cloud server. Real-time events are managed with Apache Kafka.

Communication is through Kafka events or the Java Message Service (JMS), and new server instances can be spun up in seconds in AWS or Azure clouds to provide additional capacity, which is needed for high-volume processing. FIS Global can provide the same functionality as the mainframe applications, but its system is made of commodity cloud components and is very scalable. The investment to convert to this architecture is fairly high, and a smooth conversion is by no means assured, but once a company is running on the Modern Banking Platform, the annual operating cost would presumably be lower.

So why aren’t banks jumping at the opportunity to cast off their mainframes and move to the cloud? Risk and conversion cost. As a rule, banks are risk-averse. They are often trailing adopters of new technology, making the move only when under competitive or regulatory pressure.

Migrating many dozens of mission-critical applications could easily cost hundreds of millions of dollars and take three or more years to complete. During that migration time, it’s difficult to add new features or react to the competition. In addition to risk and cost, there is the challenge of hiring hundreds of highly specialized technical consultants to oversee the migration work. If you can’t find those people, you have a bottleneck in your project plan.

 

The mainframe continues to adapt

IBM's business model allows it to invest in mainframe infrastructure. Telum, the latest mainframe CPU, saw advances in its cache management and the addition of on-die AI processing, both resulting in performance increases. Mainframe COBOL has been extended to support JSON and XML to enable web-based development, and it received considerable optimizations for the Telum CPU architecture.

IBM is also adapting to changes in the industry and is pushing its hybrid cloud strategy onto the mainframe. This includes bringing Red Hat Linux and Red Hat's DevOps toolchains to the platform. Red Hat enables Node.js, Python, Docker, and Kubernetes on the mainframe. Other recent Z/OS features include the ability to pull, manage, and run containerized open source Linux images.

So even as COBOL programmers and mainframe support personnel age out of the workforce, IBM continues to modernize the mainframe infrastructure and software stack. And although the mainframe continues to face challenges from the cloud, it has managed to adapt and survive.

