
HP StorageWorks XP Disk Array and Mainframe white paper

Table of contents

Introduction

The XP Disk Array family

Get introduced

Interesting XP Disk Array information

How exactly does the XP Disk Array family function in Mainframe environments?

But what if I also need my XP Disk Array as Open Systems storage?

FICON Channels

XP FICON Channel Interfaces

What can HP do for you?

HP FICON Directors

Get introduced

Interesting FICON SAN information

FICON SAN planning

Get to know our FICON B-series offering of today

Get to know our FICON C-series offering of today

What can HP do for you?

What is our plan to support your mainframe environment?

Our experience

Conclusion

Introduction

Over the years XP Storage has become a well-known and appreciated storage family, used and deployed all over the world. Because there is such a large variety of choices in software and environmental elements, this paper will guide you through an often "forgotten" world in which the XP feels at home: the world of the Mainframe.

The very first design of what became the XP Disk Array family dates to the early 1990s. Through several generations, the XP has been improved and adapted to follow the needs of storage environments: back-end, front-end, and disk technology, with corresponding microcode changes and upgrades, to serve Open Systems environments, a mix of both Mainframe and Open Systems environments, or pure Mainframe environments. We will concentrate on the identity of the XP as Mainframe Storage and what this involves, so you will have a better view of the broad variety of choices and possibilities it offers for your IBM Mainframe Storage environment.

HP also offers HP FICON SAN Directors and switches to complete or expand the offering for Mainframe environments. This paper sheds some light on the HP offering, covering both B-series and C-series FICON Directors and Switches, and on the added value HP can bring to your Mainframe environment with a combination of an HP FICON SAN and an XP Disk Array. HP offers a unique mix of technical know-how and services, combined with hardware and software components, that can create matching possibilities for your Mainframe environment.

Let’s start with a deeper dive into the XP Disk Array family.

The XP Disk Array family

Get introduced

Since the introduction of magnetic disk storage in 1956, disk storage technology has become the main resource for storing electronic information. Over the years it has been improved to support continuously evolving storage environments and their needs.

Let's have a look at our XP Disk Array family history. The very first XP model, the XP256, was launched early in 1999. About a year later two more members were added with the launch of the 2nd-generation XPs: the XP512 as the XP256's big brother, and the XP48 as the smaller variant. Nowadays we offer the 5th generation; the XP has a footprint that can go from a single cabinet, to 2 cabinets for the XP20000, and to 5 cabinets for the XP24000. It may seem as though the XP Disk Array models have grown in size, but relative to their maximum capacity they are far more compact than the 1st-generation XP256. Of course this is also due to evolved disk technology: today, the XP24000/XP20000 platform offers a disk drive that holds up to 1 TB of data.


Figure 1: XP generation timeline
Apr 99: XP256
May 00: XP512, XP48
May 02: XP1024, XP128
Sep 04: XP12000, XP10000
May 07: XP24000, XP20000

This timeline shows there has consistently been room for improvement over time, and scope for newer and better technology to best suit continuously changing storage environments. Taking all that into account, one might still wonder how an XP fits into mainframe environments, so let's have a look at the basic principles.

Interesting XP Disk Array information

How exactly does the XP Disk Array family function in Mainframe environments?

To function optimally in Mainframe environments, the XP actually had to adapt to them. At the top layer, this is what XP Disk Array systems in a mainframe environment do: they present themselves as IBM disk arrays. XP storage is configured to emulate IBM standard Control Units and volume types for Mainframe environments. The emulated standard volumes present the same number of cylinders and the same capacity to the mainframe as the native z/OS volume type of the same name.
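As an illustration of what presenting the "same number of cylinders and capacity" means, the sketch below computes raw volume capacity from the standard published 3390 geometry. The cylinder counts and track size are well-known IBM figures, not values taken from this paper:

```python
# Published 3390 DASD geometry: 15 tracks per cylinder, 56,664 bytes per track.
TRACKS_PER_CYL = 15
BYTES_PER_TRACK = 56_664

# Cylinder counts for common native 3390 models.
MODEL_CYLINDERS = {"3390-1": 1113, "3390-3": 3339, "3390-9": 10017}

def capacity_bytes(cylinders: int) -> int:
    """Raw capacity an emulated volume must present for a given cylinder count."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK

for model, cyls in MODEL_CYLINDERS.items():
    print(f"{model}: {cyls} cylinders, {capacity_bytes(cyls) / 1e9:.2f} GB")
```

An emulated 3390-3, for example, must report exactly 3,339 cylinders (about 2.84 GB) so that z/OS treats it like a native volume.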

All disk volumes meant for Mainframe environments emulate an IBM disk volume and follow IBM standard formatting. The IBM file structure standard for disk volumes is CKD (Count Key Data) or ECKD (Extended Count Key Data), which is the original way disk drives were formatted on Mainframes. This standard specifies the record format of volumes: each record consists of a count field and, optionally, a key field and a data field.

It is also interesting to know what exactly Logical Control Units (LCUs) are and why they are needed. The mainframe architecture limits a Control Unit (CU) to 256 devices, while the XP supports a much larger maximum number of devices (for example, 65536 for the XP24000/XP20000). Therefore a physical CU can consist of multiple "logical" CUs, or LCUs, each of which respects the 256-device limit.
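The arithmetic behind LCUs can be sketched in a few lines of Python. This is a toy illustration only; the 65536-device figure is the XP24000/XP20000 maximum quoted above:

```python
MAX_DEVICES_PER_CU = 256  # architectural limit per (logical) control unit

def lcus_needed(total_devices: int) -> int:
    """Ceiling division: how many LCUs are needed to address all devices."""
    return -(-total_devices // MAX_DEVICES_PER_CU)

def device_address(flat_index: int) -> tuple:
    """Split a flat device index into (LCU number, unit address within the LCU)."""
    return divmod(flat_index, MAX_DEVICES_PER_CU)

print(lcus_needed(65536))     # 256 LCUs cover the XP24000/XP20000 maximum
print(device_address(65535))  # the last device lands at (255, 255)
```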


But what if I also need my XP Disk Array as Open Systems storage?

Resources of a single XP Array can be shared by many applications and operating systems: you can partition cache and disk resources to protect key applications from being impacted by other applications. Cache Logical Partitions (CLPRs) permit partitioning of cache and disk array groups so that specific hosts and applications can be protected from other applications, while Storage Logical Partitions (SLPRs) allow you to divide the array into individually managed sub-arrays for an Open Systems environment. Each sub-array includes host ports, cache, and disk array groups. You can do the equivalent for your mainframe environment by providing secure access to disk and port resources with SANtinel for Mainframe (also known as Volume Security or LDEV Security) and SANtinel for Mainframe Port Option (also known as Volume Security Port Option).

Figure 2: XP Disk Array Resource Sharing example (two mainframe hosts and an Open System sharing channel adapters, CHIPs, and partitioned cache on a single array)

FICON Channels

FICON is a naming convention for a Fibre Channel protocol, consistent with the ANSI "FC-SB-x" standards (Single-Byte Command Code Sets Mapping Protocol). Within the FC standard, FICON is defined as a level-4 protocol, or FC-4. This protocol maps both cabling infrastructure and protocol onto standard FC services and infrastructure; the mapping layer specifies the signal, cabling, and transmission speeds. FICON was developed to take advantage of the speed, distance capabilities, capacity, and flexibility of Fibre Channel. A channel command word (CCW) is a device command that can be linked with other CCWs to form a channel program; it is the mainframe equivalent of a Fibre Channel SCSI command.
As you can see in Figure 3, the only difference from the Fibre Channel used in Open Systems or FCP environments lies in the 4th layer of the protocol: for Open Systems this layer carries SCSI commands instead of CCWs. The CCWs, the data, and the status are packaged by the FICON channel into FC-SB-2/FC-SB-3/FC-SB-4 (FC-4 layer) Information Units (IUs). IUs from several different I/O operations, to the same or different control units and devices, are multiplexed or de-multiplexed by the FC-2 layer (framing). These FC-2 frames (with encapsulated FC-SB-x IUs) are encoded or decoded by the FC-1 layer (Encode/Decode) and sent or received to or from the FC-0 fiber optic medium (optics).

Figure 3: Fibre Channel protocol layers

Protocol commands: CCWs (FICON) / SCSI (FCP)
FC-4 Layer: FICON / FCP
FC-3 Layer: Common Services
FC-2 Layer: Framing Protocol
FC-1 Layer: Encode/Decode and Link Control
FC-0 Layer: Physical Link
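The layering in Figure 3 can be captured as a small lookup table. This toy sketch simply encodes the figure's point that FICON and FCP share FC-0 through FC-3 and diverge only in the FC-4 payload:

```python
# Layers common to both protocol stacks (from Figure 3).
COMMON_LAYERS = {
    "FC-0": "Physical Link",
    "FC-1": "Encode/Decode and Link Control",
    "FC-2": "Framing Protocol",
    "FC-3": "Common Services",
}

# The FC-4 mapping layer is where the two stacks differ.
FC4_PAYLOAD = {
    "FICON": "CCWs packaged as FC-SB-x Information Units",
    "FCP": "SCSI commands",
}

def payload_for(stack: str) -> str:
    """Return what the FC-4 layer carries for a given upper-layer protocol."""
    return FC4_PAYLOAD[stack]

print(payload_for("FICON"))
print(payload_for("FCP"))
```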

With the introduction of protocol standard FC-SB-4, following FC-SB-3, High Performance FICON was introduced with a whole new mode of operation: transport mode. In transport mode, communication between the channel and the control unit takes place over a single bi-directional exchange and uses fewer handshakes to close exchanges, transmit device commands, and provide device status.

If you would like more information on High Performance FICON, consult the High Performance FICON white paper. That paper also discusses the "general" FICON protocol and provides environmental details for the z10 Mainframe side and for the HP XP Mainframe Disk Array side. It also explains the details of High Performance FICON and how this protocol change is transparent to devices that do not support or acknowledge High Performance FICON.

Let’s have a look at the physical XP Channel Interfaces in the next section.

XP FICON Channel Interfaces

To provide availability of your data, the internal structure of the XP's front-end interfaces has been designed to enable the concept of alternate paths. It is recommended to respect certain priorities when configuring alternate paths, considering the structure and performance of the specific board. Figure 4 represents an XP24000/XP20000 FICON 4-port Adapter PCB, or Printed Circuit Board. The XP12000/XP10000 has a "double size" 8-port adapter. In reality you will hardly notice the physical difference, as the FICON 4-port Adapters are implemented in pairs in the XP24000/XP20000.


Figure 4: XP24000/XP20000 FICON 4-port Adapter Printed Circuit Board (PCB), showing two MHUBs and four channel processors (CHPs)

To avoid heavy or unbalanced usage on certain channel paths (CHPs), it is a good idea to look at this internal structure before deciding how to configure your host paths, and to spread the reach of your host channels so the workload is distributed among as many resources as possible. Your 1st priority should be to have the channels for a specific mainframe connected to as many PCB FICON ports or cards as available, to promote performance as well as availability; a good 2nd priority is to have a sufficient number of host processors (HTPs) in your channel configuration to support its workload. Depending on the footprint of your design, trade-offs might be required to optimize your specific configuration. Your local HP team can work with you to determine the best configuration for your environment.
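The "spread first across boards, then across ports" priority can be sketched as a simple round-robin assignment. The PCB, port, and CHPID names below are hypothetical, purely for illustration:

```python
from itertools import cycle

# Hypothetical layout: two FICON adapter PCBs with four ports each.
PCB_PORTS = {
    "PCB-1": ["1A", "1B", "1C", "1D"],
    "PCB-2": ["2A", "2B", "2C", "2D"],
}

def spread_channels(channels: list, pcb_ports: dict) -> dict:
    """Assign host channels to ports, alternating between boards first
    so no single PCB carries consecutive paths."""
    # Interleave board-by-board: 1A, 2A, 1B, 2B, ...
    interleaved = [port for pair in zip(*pcb_ports.values()) for port in pair]
    ports = cycle(interleaved)
    return {ch: next(ports) for ch in channels}

print(spread_channels(["CHPID-50", "CHPID-51", "CHPID-52", "CHPID-53"], PCB_PORTS))
```

With this scheme, losing one board still leaves every mainframe with half of its paths intact, which is the availability goal the priority levels describe.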

Let’s have a look at the next section to answer a very important question.

What can HP do for you?

When you decide to go for an XP Disk Array, there are several choices for you to make depending on your environment. You will probably want the best possible option, considering TCO and ROI. It is possible that your sales team has enough background information to pinpoint the areas that require improvement when you design your configuration, but you also have several other options available that HP can help you with.

One of these options is to examine your current storage environment and pinpoint which areas require improvement. This kind of study can be done with information from RMF reports, which can be turned into a complete analysis and overview by the use of RMF Magic. This tool is exclusive to Mainframe environments, and it can handle and analyze the information that matters for your storage needs and more. It is not limited to XP Disk Array environments, but supports a very broad range of disk storage types. HP provides the customer with a temporary license, a specialized packing program for the RMF data, and an FTP site for the packed data. This may be 24 hours, days, weeks, or even a month's worth of data. The customer decides how much data they wish to have analyzed and used for modeling, and what represents their heaviest workloads. In the unlikely event that a customer cannot install the packing program, use of the IBM standard packing utility is acceptable.

A second and invaluable tool then enables us to mirror or translate all of this information onto XP array models and configurations, to predict their behavior and performance in your Mainframe environment. This tool is known as Storage Magic, and it can be used in combination with RMF Magic by loading the RMF Magic output into the Storage Magic tool. In cases where no RMF data is available, Storage Magic can be used as a standalone tool in which you enter certain reference points of your mainframe environment, such as I/O rates, throughput, and response times. The importance of these reference points is that they indicate what you need in order to achieve a certain target environment, certain response times, and predefined goals. This mainframe storage analysis is provided by HP free of charge to both existing and prospective customers.
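To give a feel for how such reference points interact, here is a toy estimate using the textbook M/M/1 queueing approximation. This is not the model Storage Magic actually uses; it only illustrates why I/O rate and service time together determine response time:

```python
def estimated_response_ms(service_ms: float, io_rate_per_s: float) -> float:
    """Textbook M/M/1 estimate: response = service / (1 - utilization),
    where utilization = arrival rate * service time."""
    utilization = io_rate_per_s * (service_ms / 1000.0)
    if utilization >= 1.0:
        raise ValueError("offered load exceeds capacity")
    return service_ms / (1.0 - utilization)

# A 0.5 ms service time at 1000 I/O per second means 50% utilization,
# so the estimated response time doubles relative to the bare service time.
print(estimated_response_ms(0.5, 1000))
```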

In addition to these analysis studies, we can help you with other consulting services such as project planning, installation, large project implementation, and data migration. On top of all of this, we also offer XP Continuous Monitoring, a "phone home" option, to make sure your configuration is supervised and cared for.

If you would like more information, or would like to purchase RMF Magic and Storage Magic, contact your HP representative.

HP FICON Directors

Get introduced

We will discuss the HP FICON Directors that are qualified for XP Mainframe environments and thus support XP storage connectivity. There is often some confusion about what exactly we can offer our mainframe customers with regard to SAN, so we aim to provide some clarification in that area and to describe HP's actual position on FICON SAN support and expertise.

First of all, it is interesting to understand how adding an HP FICON Director to your wish list for your mainframe environment can increase your level of comfort, and what exactly the added value is compared to having a non-HP FICON SAN.

Interesting FICON SAN information

You can configure a custom FICON SAN by choosing components and following the HP FICON SAN design rules. A convenient and fairly complete source to consult is the HP StorageWorks Mainframe connectivity design guide, where you will find information to assist you in the design and implementation of your HP FICON SAN.

HP FICON SANs provide standard topologies and design rules to meet the widest range of requirements for your FICON implementation, supporting multiple mainframe operating systems, FICON channel types, and storage types. Even "the use of both FCP and FICON in the same fabric and director," also known as an intermix environment, is supported. In most cases each FICON fabric must contain directors of the same series, and you can scale incrementally by adding capacity and features over time as required. For geographically dispersed installations, HP provides components to meet local and long-distance connectivity requirements.


FICON SAN planning

Looking at FICON Director or Switch licenses, you will notice a FICON CUP, or Control Unit Port, license. For our FICON B-series, this license needs to be acquired separately. For the FICON C-series, it is included in the "Mainframe package license," so installing FICON on your C-series FICON SAN Director or Switch involves installing the CUP license. But what exactly is a CUP license, and why and when do you need it? Your FICON Director requires the Control Unit Port (CUP) feature to allow host control. This feature enables the FICON Director or switch to present itself to the host or mainframe as a mainframe I/O device. Some IBM mainframe applications that require CUP on FICON directors are System Automation for OS/390 (SA/390), Dynamic Channel Management (DCM), and Resource Measurement Facility (RMF). The advantages of using CUP are the single point of control and monitoring for Channels, Directors, and Control Units; automated tools on the mainframe can leverage the statistics to move channels where they are needed. So although the switch is transparent to the operating system in the path to a FICON control unit or device during the execution of an I/O operation, it is recommended that the FICON Director be defined as an I/O device because of error reporting1 and System Automation (z/OS or OS/390)2 access. You should define at least two paths to the FICON Director I/O device for redundancy reasons.

To define your FICON Director as an I/O device, you need to be sure your host I/O configuration files are updated with the needed information. Powering on, and the subsequent IPL3 (Initial Program Load) and activation, of a System z or zSeries processor requires that you identify the correct I/O Definition File (IODF) that contains the physical definition of your configuration. The IODF is used to build a data set called an I/O Configuration Data Set (IOCDS) prior to IPLing, and up to four separate IOCDSs can be stored in the System z hardware; only one is used during a power-on-reset (POR) and IPL. The IOCDS contains the configuration for a specific processor, while the IODF may contain configuration data for multiple processors. We must update our IOCDS to reflect the FICON environment we are building.

Besides FICON CUP, an HP FICON Director also supports high-integrity fabric features such as fabric binding, persistent domain ID, and "in order delivery" (IOD). All of these features enable your FICON environment to function optimally.

Get to know our FICON B-series offering of today

B-series FICON Connectivity stream

Refer to the HP Single Point of Connectivity Knowledge (SPOCK) website to consult our B-series FICON offering: supported models, a blade overview, a reference for supported FOS versions, and details on the supported SFPs (small form-factor pluggable transceivers).

Get to know our FICON C-series offering of today

C-series FICON Connectivity stream

Refer to the HP Single Point of Connectivity Knowledge (SPOCK) website for our C-series FICON offering, which lists the available models and supported blades, together with an overview of supported SAN-OS and NX-OS versions and details on the supported SFPs (small form-factor pluggable transceivers).

1 Switch-related hardware errors are reported to the operating system against a device number. If the switch is not defined as an I/O device, and that I/O device is not online to the operating system, then switch-related errors cannot surface.

2 System Automation for OS/390 I/O-Ops (for managing FICON Directors) provides operational tools for safe switching, as well as displaying routing information for a device. Safe switching refers to the ability to manipulate ports and adjust path status non-disruptively. In order for S/A I/O-Ops to assure safe switching, it must have access to all switches. That is, all the switches must be online as I/O devices on all the systems where S/A I/O-Ops Manager is running.

3 IPL (Initial Program Load) is a mainframe term for the loading of the operating system into the computer's main memory. A mainframe operating system (such as z/OS) contains many megabytes of code that is customized by each installation, requiring some time to load into memory. On a personal computer, booting or re-booting (restarting) is the equivalent of IPLing (the term is also used as a verb). In earlier operating systems, when you added devices to the hardware system, you had to stop the system, change the configuration file, and then "re-IPL," an activity that meant the system would be unavailable for some period of time. Today's systems provide dynamic reconfiguration so that the system can keep running.


What can HP do for you?

Your storage environment needs elements that are capable of supporting and achieving your goals and requirements, whether these elements belong to the Mainframe Storage or the FICON SAN level. Reaching the best possible balance between what you need and what you get comes from a good translation of your current environment's characteristics. The same tools we use for the mainframe storage analysis also cover the HP FICON SAN: RMF Magic is capable of analyzing your FICON Directors as well. Clearly this is a very important addition, enabling a complete picture and better, more complete service.

HP also provides a combined Mainframe Lab, covering both our XP Disk Array offering and our FICON SAN offering, which opens a lot of doors to valuable possibilities. First of all, we have the ability to perform tests or demos when requested or needed by our customers. Another valuable asset is the hands-on experience with real combinations and setups, and the practice with our tools, that the lab makes possible.

The next section concludes with the information levels we gain for examining and safeguarding your Mainframe environment, for everyday support, by combining HP XP Mainframe Storage with an HP FICON SAN environment.

What is our plan to support your mainframe environment?

First of all, it is very important to look at the levels available to us for retrieving information. In a Mainframe storage environment with XP Mainframe Storage and an HP FICON SAN combined, we have 4 levels available to support your Mainframe Storage environment in the best possible way: the 1st two at the zSeries level, the 3rd at the FICON SAN level, and the 4th at the XP Mainframe Storage level, as you can see in this figure:

Figure 5: XP Mainframe Storage environment, support information levels

From the zSeries level: z/OS commands for channel status; z/OS commands for device status; EREP/purge path extended*; CHPID status from the HMC**; CU and device status from the HMC; node descriptor from the HMC
From the FICON SAN level: FICON port status from the switch console; node descriptor from the switch console
From the XP Mainframe Storage level: channel adapter status from the SVP; node descriptor from the SVP

* The purge path extended function provides enhanced capability for FICON problem determination. The FICON purge path error-recovery function is extended so that it transfers error-related data and statistics, between the channel and its entry switch and between the control unit and its entry switch, to the host operating system.

** Hardware Management Console


We use these 4 levels of information optimally to support your Mainframe Storage environment. The knowledge and expertise of the HP XP Mainframe Storage team and the FICON SAN team enable HP to do more for you.

Our experience

HP has installed more SANs than its next three competitors combined, ships three times more SAN-attached storage than its nearest competitor, and has the largest worldwide SAN installed base in the market. We also have the largest base of SAN-certified engineers in the industry (over 1000 HP engineers with Brocade certifications), who support the largest fabrics in the industry and have completed over 25000 successful SAN implementations worldwide. Our OEM partnership and collaboration now spans well over 10 years.

Conclusion

It is safe to say the combination of XP Mainframe Storage and an HP FICON SAN strengthens and increases the expertise level we can provide for your Mainframe environment. Why is that so? Because we gain the most powerful ingredient to enable this: the ability to customize the infrastructure for your Mainframe data, securing it from the moment it leaves your mainframe to the moment it returns, safely and exactly in the way you want it.

Technology for better business outcomes

© Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

4AA1-3742ENW, February 2010
