Transaction Processing Facility (TPF) is an IBM real-time operating system for mainframe computers descended from the IBM System/360 family, including zSeries and System z9.
TPF delivers fast, high-volume, high-throughput transaction processing, handling large, continuous loads of essentially simple transactions across large, geographically dispersed networks. The world's largest TPF-based systems are easily capable of processing tens of thousands of transactions per second. TPF is also designed for highly reliable, continuous (24×7) operation. It is not uncommon for TPF customers to have continuous online availability of a decade or more, even with system and software upgrades. This is due in part to the multi-mainframe operating capability and environment.
While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's specialty is extreme volume, large numbers of concurrent users, and very fast response times -- for example, VISA credit card transaction processing during the peak holiday shopping season.
The TPF passenger reservation application PARS, or its international version IPARS, is used by many airlines.
One of TPF's major optional components is a high performance, specialized database facility called TPF Database Facility (TPFDF).
A close cousin of TPF, the transaction monitor ALCS, was developed by IBM to integrate TPF services into the more common mainframe operating system MVS, now z/OS.
History
TPF evolved from the Airlines Control Program (ACP), a free package developed in the mid-1960s by IBM in association with major North American and European airlines. In 1979, IBM introduced TPF as a replacement for ACP -- and as a priced software product. The new name reflected its greater scope and its evolution into non-airline related entities.
TPF was traditionally an IBM System/370 assembly language environment for performance reasons, and many TPF assembler applications persist. However, more recent versions of TPF encourage the use of C. Another programming language called SabreTalk was born and died on TPF.
IBM announced the delivery of the current release of TPF, dubbed z/TPF V1.1, in September 2005. Most significantly, z/TPF adds 64-bit addressing and mandates use of the 64-bit GNU development tools.
The GCC compiler and the DIGNUS Systems/C++ and Systems/C are the only supported compilers for z/TPF. The Dignus compilers offer reduced source code changes when moving from TPF 4.1 to z/TPF.
Users
Current users include Sabre (reservations), VISA Inc. (authorizations), American Airlines, American Express (authorizations), DXC Technology SHARES (reservations - formerly EDS, HPES), Holiday Inn (central reservations), Amtrak, Marriott International, Travelport (Galileo, Apollo, Worldspan, Axess Japan GDS), Citibank, Air Canada, Trenitalia (reservations), Delta Air Lines (reservations and operations) and Japan Airlines.
Operating environment
Tightly coupled
TPF is capable of running on a multiprocessor, that is, on mainframe systems in which there is more than one CPU. Within the community, the CPUs are referred to as Instruction Streams or simply I-streams. On a mainframe or in a logical partition (LPAR) of a mainframe with more than one I-stream, TPF is said to be running tightly coupled.
This is made possible by the reentrant nature of TPF programs and the control program: no active piece of work ever modifies a program. The default is to run on the main I-stream, which is the lowest-numbered I-stream found when the system boots. However, users and programs can initiate work on other I-streams via internal mechanisms in the API that let the caller dictate which I-stream the work should run on. Under z/TPF, the system itself attempts to load balance by routing any application that does not request a preference or affinity to I-streams with less work than others.
In the TPF architecture, each I-stream shares common core, except for a 4 KB prefix area for each I-stream. Where core data must or should be kept separate, the application designer typically carves reserved storage areas into a number of sections equal to the number of I-streams. A good example of the TPF system doing this can be found in TPF's support of I-stream unique globals. Access to these carved sections of core is made by taking the base address of the area and adding to it the product of the I-stream relative number and the size of each section.
Loosely coupled
TPF is capable of supporting multiple mainframes (of any size themselves, from single to multiple I-streams) connecting to and operating on a common database. Currently, up to 32 IBM mainframes may share the TPF database; such a system in operation would be called 32-way loosely coupled. The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device). In this case, the control program would be equally loaded into core, and each program or record on DASD could potentially be accessed by either mainframe.
In order to serialize accesses between data records on a loosely coupled system, a practice known as record locking must be used. This means that when one mainframe processor obtains a hold on a record, the mechanism must prevent all other processors from obtaining the same hold and must signal the requesting processors that they are waiting. Within any tightly coupled system, this is easy to manage between I-streams via the Record Hold Table. However, when the lock is obtained offboard of the TPF processor in the DASD control unit, an external process must be used. Historically, record locking was accomplished in the DASD control unit via an RPQ known as LLF (Limited Locking Facility) and later ELLF (extended). LLF and ELLF were both replaced by the Multipathing Lock Facility (MPLF). To run clustered (loosely coupled), z/TPF requires either MPLF in all disk control units or an alternative locking device called a Coupling Facility.
Records that absolutely must be managed by a record locking process are those that are processor shared. In TPF, most record accesses are done by record type and ordinal. So if a record type 'FRED' is defined in the TPF system with 100 records, or ordinals, then in a processor-shared scheme, record type 'FRED' ordinal '5' resolves to exactly the same file address on DASD from every processor -- clearly necessitating the use of a record locking mechanism.
Processor unique records
A processor unique record is defined such that each processor expected to be in the loosely coupled complex has its own record type 'FRED' with, say, 100 ordinals. However, if users on any two or more processors examine the file address that record type 'FRED', ordinal '5' resolves to, they will note that a different physical address is used on each.
TPF attributes
What TPF is not
TPF is not a general-purpose operating system (GPOS). TPF's specialized role is to process transaction input messages, then return output messages on a 1:1 basis at extremely high volume with short maximum elapsed time limits. TPF has never offered direct graphical display facilities; character messages are intended to be the mode of communication with human users. These facts require typical GPOS end users and developers alike to reset certain commonly held expectations.
TPF has no built-in graphical user interface (GUI) functionality: to implement it on the host would be considered an unnecessary and potentially harmful diversion of real-time system resources. TPF's user interface is command-line driven with simple text display terminals that scroll upwards.
There are no mice, windows, or icons on a TPF Prime CRAS (Computer room agent set -- which is best thought of as the "operator's console"). All work is accomplished via the use of the command line, similar to UNIX without X Window System. There are several products available which connect to Prime CRAS and provide graphical interface functions to the TPF operator, such as TPF Operations Server. Graphical interfaces for end users, if desired, must be provided by external systems. Such systems perform analysis on character content (see Screen scrape) and convert the message to/from the desired graphical form, depending on its context.
As a special-purpose operating system, TPF does not host a compiler, assembler, or text editor, nor does it implement the concept of a desktop as one might expect to find in a GPOS. TPF application source code is commonly stored in external systems and likewise built "offline". Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must observe the ELF format for s390x-ibm-linux.
Using TPF requires a knowledge of its Command Guide since there is no support for an online command "directory" or "man"/help facility to which users might be accustomed. Commands created and shipped by IBM for the system administration of TPF are called "functional messages" -- commonly referred to as "Z-messages", as they are all prefixed with the letter "Z". Other letters are reserved so that customers may write their own commands.
TPF implements debugging in a distributed client-server mode, which is necessary because of the system's headless, multi-processing nature: pausing the entire system in order to trap a single task would be highly counter-productive. Debugger packages have been developed by third-party vendors who took very different approaches to the "break/continue" operations required at the TPF host, implementing unique communications protocols for traffic between the developer running the debugger client and the server-side debug controller, as well as differing forms and functions of debugger operations at the client side. Two examples of third-party debugger packages are Step by Step Trace from Bedford Associates and CMSTPF, TPF/GI, and zTPFGI from TPF Software, Inc. Neither package is wholly compatible with the other, nor with IBM's own offering. IBM's debugging client offering is packaged in an IDE called IBM TPF Toolkit.
What TPF is
TPF is highly optimized to permit messages from the supported network either to be switched out to another location, routed to an application (a specific set of programs), or to drive extremely efficient accesses to database records.
Data records
Historically, all data on the TPF system had to fit in fixed record (and core block) sizes of 381, 1055 and 4K bytes. This was due in part to the physical record sizes of blocks located on DASD. Much overhead was saved by relieving the operating system of breaking large data entities into smaller ones during file operations, and of reassembling them during read operations. Since IBM hardware does I/O via channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O -- all in the name of speed. Since the early days also placed a premium on the size of storage media -- be it memory or disk -- TPF applications evolved into doing very powerful things while using very little resource.
Today, many of these limitations have been removed. In fact, smaller-than-4K DASD records persist only for legacy support. With the advances made in DASD technology, a read/write of a 4K record is just as efficient as that of a 1055 byte record. The same advances have increased the capacity of each device, so that there is no longer a premium placed on packing data into the smallest space possible.
Programs and residency
TPF also had its programs allocated as 381, 1055 and 4K bytes in size at different points in its history, and each program consisted of a single record (a.k.a. segment). Therefore, a comprehensive application needed many segments. Historically, these segments were never link-edited (see Linker (computing)). Instead, the relocatable object code (the direct output from the assembler) was laid out in memory, had its internally relocatable symbols resolved, then was written to file for later loading into the system. This created an unusual programming environment in which segments related to one another could not directly address each other, with control transfer between them implemented as a system service.
From the earliest days (circa 1966), memory space was limited, which gave rise to a distinction between file-resident and core-resident programs in TPF -- only the most frequently used application programs were written into memory and never removed (core-residency); the rest were stored on file and read in on demand.
The introduction of C (programming language) to TPF at version 3.0 was first implemented conformant with segment conventions, including the absence of linkage editing. This scheme quickly became impractical for anything other than the simplest of C programs. At TPF 4.1, truly and fully linked load modules were introduced to TPF. These were compiled with the z/OS C/C++ compiler using TPF-specific header files and linked with IEWL, resulting in a z/OS-conformant load module, which in no manner could be considered a traditional TPF segment. The TPF loader was extended to read and lay out the file-resident program's sections into memory; meanwhile, assembly language programs remained confined to TPF's segment scheme.
At z/TPF 1.1, all source language types were conceptually unified and fully link-edited to conform to the ELF (see Executable and Linking Format) specification. The historical segment concept became obsolete, which means that any program written in any source language -- including assembler -- may be of any size. Furthermore, external references became possible, and separate source code programs that had once been segments could now be linked directly together into a shared object. This proved valuable, as critical legacy applications can be maintained and benefit from improved efficiency through repackaging. Calls made between members of a shared object module now have a much shorter pathlength at run time than calls through the system enter/back service, and members may now share data regions directly, thanks to copy-on-write functionality also introduced at z/TPF 1.1.
The concepts of file- and core- residency were also made obsolete, due to a z/TPF design point which sought to have all programs resident in memory at all times.
Since the system already had to maintain a call stack for high-level language programs, giving HLL programs the ability to benefit from stack-based memory allocation, it was deemed beneficial to extend the call stack, on an optional basis, to assembly language programs, which can ease memory pressure and facilitate recursive programming.
All z/TPF executable programs are now packaged as ELF shared objects.
Memory usage
Historically, and in step with the above, core blocks (memory) were also 381, 1055 and 4 K bytes in size. Since all memory blocks had to be of these sizes, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF maintained a list of blocks in use and simply handed out the first block on the available list.
Physical memory was divided into sections reserved for each size, so a 1055 byte block always came from one section and was returned there; the only overhead needed was to add its address to the appropriate physical block table's list. No compaction or garbage collection was required.
As applications grew more advanced, demands for memory increased, and once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and memory management routines. To ease the overhead, TPF memory was broken into frames of 4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted.
Bibliography
- Transaction Processing Facility: A Guide for Application Programmers (Yourdon Press Computing Series) by R. Jason Martin (hardcover, April 1990), ISBN 978-0139281105
External links
- z/TPF (IBM)
- TPF User Group (TPF User Group)