HP Data Protector Architecture
• Data Protector Cell Manager
Following are major Data Protector features:
• High-performance backup
Objectives
• Identify new diagnostic tools
• Describe the features of the Navigator tool
• Describe the features of the Support Data Collector tool
• Describe the features of the EVA performance monitoring tool
• List the replaceable components
Review of troubleshooting tools
Objectives
• List the primary features of active-active configurations
• Describe the different performance paths used during active-active presentation
• List the ways in which virtual disk ownership is transferred
• Describe the active-active failover model
Overview of active-passive failover mode
• EVA 3000/5000 series supports active-passive (AP) mode per LUN
• Allows only one controller at a time to actively present a given LUN
• LUNs are statically distributed among controllers
– Preferred to certain controllers
– Done through user interface or server
• Problematic mode of operation
– Uses array-specific mechanisms to move a LUN from one controller to the other within the array
– Not an industry standard
• Failover on EVA 3000/5000 series is controlled by Secure Path or native software (Tru64/OpenVMS)
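To make the active-passive model above concrete, here is a minimal Python sketch of per-LUN ownership, assuming invented names (ControllerA/B, fail_over); it illustrates the concept only and is not how the array firmware or Secure Path is actually implemented.

```python
# Illustrative model of active-passive (AP) presentation: each LUN is statically
# preferred to one controller, and only that controller serves I/O for it.
# Names (ControllerA/B, fail_over) are hypothetical stand-ins.

class ActivePassiveArray:
    def __init__(self):
        # Static distribution: LUN -> preferred (owning) controller
        self.owner = {"LUN1": "ControllerA", "LUN2": "ControllerB"}
        self.failed = set()

    def serve_io(self, lun, controller):
        """Only the owning controller actively presents the LUN."""
        if controller in self.failed:
            raise IOError(f"{controller} is down")
        if controller != self.owner[lun]:
            raise IOError(f"{lun} is not presented on {controller} in AP mode")
        return f"{controller} services I/O for {lun}"

    def fail_over(self, failed_controller, surviving_controller):
        """Move ownership of every LUN on the failed controller to the survivor."""
        self.failed.add(failed_controller)
        for lun, ctrl in self.owner.items():
            if ctrl == failed_controller:
                self.owner[lun] = surviving_controller


array = ActivePassiveArray()
print(array.serve_io("LUN1", "ControllerA"))   # optimal: owning controller
array.fail_over("ControllerA", "ControllerB")  # failover moves LUN1 to ControllerB
print(array.serve_io("LUN1", "ControllerB"))
```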
Background on active-active failover mode
• SCSI standard mechanism for controlling LUN presentation and movement was developed in 2000-2001
– Developed by Sun, HP, and others
– Called Asymmetric Logical Unit Access
– HP implementation is called active-active (AA) failover
– There is a single HP AA specification for EVA and MSA
• AA implementation
– Only operating mode for the EVA 4000/6000/8000 series going forward
– Also supported on other arrays: MSA, VA, XP
– Not supported on MA/EMA
– Not supported on EVA 3000/5000 series at present, but planned for a future release
Active-active failover features
• Allows host access to virtual disks through ports on both controllers
• Both controllers share traffic and load
• If a controller fails, the surviving controller takes over all of the traffic
• Controllers pass commands to the appropriate (owning) controller
• Operation is transparent to the host
• Performance penalty for using the non-optimal path to a LUN affects reads more than writes
• Performance paths can be discovered and transitioned from one controller to the other by implicit internal mechanisms or through explicit SCSI commands
• Controller/LUN failover is automatic
• Enables multipath failover software for load balancing, for example
– PVLinks — HP-UX native failover
– Secure Path — HP-UX V3.0F (contains AutoPath)
– QLogic failover driver — Linux
– MPIO — Windows and NetWare
– MPxIO — Solaris
– Veritas dynamic multipathing (DMP) — HP-UX and Solaris
• Enables customer-preferred HBAs
– For example, native IBM, NetWare, or Sun
– Standard drivers
• Requires all array ports to be active
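The following sketch illustrates, under simplified assumptions, how host multipath software might choose among ALUA path states; the path names and state strings are invented for illustration and do not correspond to any particular driver's output.

```python
# Simplified host-side view of ALUA path selection: prefer ports on the owning
# controller (active/optimized), fall back to the non-owning controller's ports.

OPTIMIZED, NON_OPTIMIZED, DEAD = "active/optimized", "active/non-optimized", "dead"

paths = {
    "ctlrA_port1": OPTIMIZED,       # port on the owning controller
    "ctlrA_port2": OPTIMIZED,
    "ctlrB_port1": NON_OPTIMIZED,   # port on the non-owning controller
    "ctlrB_port2": NON_OPTIMIZED,
}

def pick_path(paths):
    """Prefer an active/optimized path; fall back to non-optimized; else fail."""
    for wanted in (OPTIMIZED, NON_OPTIMIZED):
        candidates = [p for p, state in paths.items() if state == wanted]
        if candidates:
            return candidates[0]
    raise IOError("no usable path to the LUN")

print(pick_path(paths))            # ctlrA_port1 (optimal performance path)
paths["ctlrA_port1"] = DEAD        # simulate losing the owning controller's ports
paths["ctlrA_port2"] = DEAD
print(pick_path(paths))            # ctlrB_port1 (non-optimal, but I/O continues)
```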
AA impacts
• HP Fibre Channel ecosystem impact
– Multipath software, BC, CA, and Data Protector (DP)
– Changes are the most complex NSS has ever attempted
• Secure Path impact
– Secure Path provides assist to BC, CA and other applications
– Support needs to be picked up by firmware or application
• Customer impact
– Co-existence between Secure Path and native multipath software
– Co-existence between arrays
– Upgrade or migration from first generation to second generation EVA
AA performance paths
• Each virtual disk is owned by one of the controllers, providing the most direct path to the data
• Optimal and non-optimal performance paths
– Read I/Os to the owning controller
• Executed and the data is returned directly to the host
• Direct execution reduces overhead, creating an optimal performance path to the data
– Read I/Os to the non-owning controller
• Must be passed to the owning controller for execution
• Data is read and then passed back to the non-owning controller for return to the host
• Additional overhead results in a non-optimal performance path
– Write I/Os
• Always involve both controllers for cache mirroring, therefore
• Write performance is the same regardless of which controller receives the I/O
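A minimal sketch of the optimal and non-optimal performance paths just described, with illustrative unit costs rather than measured EVA latencies: reads arriving at the non-owning controller are proxied to the owner, while writes always involve both controllers for cache mirroring.

```python
# Illustrative cost model for AA performance paths (units are invented).

OWNER = {"vdisk1": "A"}   # each virtual disk is owned by one controller

def read_io(vdisk, receiving_controller):
    if receiving_controller == OWNER[vdisk]:
        return 1            # optimal path: executed and returned directly
    return 1 + 1            # non-optimal: pass to owner, then hand data back

def write_io(vdisk, receiving_controller):
    # Writes are mirrored to the partner controller's cache regardless of
    # which controller receives them, so the cost is the same on either path.
    return 2

print(read_io("vdisk1", "A"))   # 1 unit  (owning controller)
print(read_io("vdisk1", "B"))   # 2 units (non-owning controller, extra hop)
print(write_io("vdisk1", "A"), write_io("vdisk1", "B"))  # 2 2
```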
Virtual disk ownership and transitions
• Virtual disk ownership is established when the virtual disk is created
• Ownership and optimal performance path can be transferred from one controller to the other in these ways
– Explicit virtual disk transfer by commands from a host
– Implicit virtual disk transfer (invoked internally)
• Occurs when a controller fails and the remaining controller automatically assumes ownership of all virtual disks
• Occurs when the array detects a high percentage of read I/Os processed by the non-owning controller, which maximizes use of the optimal performance path
• When virtual disk ownership is transferred, all members of the virtual disk family (including snapshots) or DR group are transferred, as sketched below
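The implicit, read-driven transfer can be sketched as follows; the 70% threshold, the minimum sample size, and the data structures are assumptions made purely for illustration, not the controller firmware's actual algorithm.

```python
# Hypothetical sketch of implicit ownership transfer: when most reads for a
# virtual disk arrive at the non-owning controller, ownership of the whole
# family (snapshots included) moves so that the optimal path is used.

class VdiskFamily:
    def __init__(self, name, members, owner):
        self.name = name
        self.members = members          # parent plus snapshots / DR group peers
        self.owner = owner              # "A" or "B"
        self.reads = {"A": 0, "B": 0}   # reads received per controller

    def record_read(self, controller):
        self.reads[controller] += 1
        self.maybe_transfer()

    def maybe_transfer(self, threshold=0.70, min_samples=5):
        other = "B" if self.owner == "A" else "A"
        total = sum(self.reads.values())
        if total >= min_samples and self.reads[other] / total > threshold:
            # Implicit transfer: the whole family follows the new owner.
            self.owner = other
            self.reads = {"A": 0, "B": 0}

family = VdiskFamily("vdisk1", ["vdisk1", "vdisk1_snap1"], owner="A")
for _ in range(8):
    family.record_read("B")      # most reads land on the non-owning controller
print(family.owner)              # "B" after the implicit transfer
```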
AP failover model
First generation EVA failover, using Secure Path or Tru64/OpenVMS
• Active-passive
• Normally one controller “owns” each LUN
• Data for the owned LUN flows through one controller
• LUN1 is preferred to C1
• LUN2 is preferred to C2
In the diagram, a dotted line means the LUN is visible on that path but not in an active state.
Controller preferences are as follows:
• No preference
• Preferred A/B
• Preferred A/B failover
• Preferred A/B failover/failback
Second generation EVA failover, using other failover software (not SP)
• Active-active
• Each LUN is primary to one controller
• Data for the owned LUN flows through the primary controller
Full-featured definition for MPIO
• The full-featured version is offered free of charge and supports the following:
– Automatic failover/failback — Always on; cannot be changed
– Load balancing (two implementations)
• Load across all paths
• Single preferred path with all other paths as alternate
– Path management — Selecting a preferred path prevents any load balancing
– SAN boot
– Path verification
– Quiesce
• Only able to switch from a single active path to an alternate path
• Changes preferred path setting
– Add/delete without reboot
– LUN persistency
– Notification
– Integrated with performance monitoring
• Note: Implementations vary by OS
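As a rough illustration of the two load-balancing implementations listed above, the sketch below contrasts spreading I/O across all paths with pinning I/O to a single preferred path; the path names are hypothetical and the logic is a simplification of what any given MPIO implementation actually does.

```python
# Two load-balancing behaviors: round-robin across all paths versus a single
# preferred path with the remaining paths held as alternates.

from itertools import cycle

paths = ["ctlrA_p1", "ctlrA_p2", "ctlrB_p1", "ctlrB_p2"]

def round_robin(paths):
    """Load across all paths: each I/O goes to the next path in turn."""
    return cycle(paths)

def preferred_path(paths, preferred="ctlrA_p1"):
    """Single preferred path: always use it; alternates are used only on failure."""
    def choose(available):
        return preferred if preferred in available else available[0]
    return choose

rr = round_robin(paths)
print([next(rr) for _ in range(6)])          # spreads I/O over every path
choose = preferred_path(paths)
print(choose(paths))                         # always the preferred path
print(choose(["ctlrB_p1", "ctlrB_p2"]))      # falls back when preferred is gone
```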
Objectives
• Name the new snapclone features and functions
• Describe the three-phase snapclone process
• Describe the snapclone replica resynch process
Virtual storage terminology overview
Same terminology as for first generation EVA
• storage system (cell) — an initialized pair of HSV controllers with a minimum of eight physical disk drives
• disk group — a group of physical disks, from which you can create virtual disks
• redundant storage set (RSS) — a subgrouping of drives, usually 6 to 11, within a disk group, to allow failure separation
• virtual disk — a logical disk with certain characteristics, residing in a disk group
• host — a collection of host bus adapters that reside in the same (virtual) server
• logical unit number (LUN) — a virtual disk presented to one or multiple hosts
• virtual disk leveling — distribution of all user data within the virtual disks in a disk group proportionally across all disks within the disk group
• distributed sparing — allocated space per disk group to recover from physical disk failure in that disk group
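One way to keep the terminology straight is to view it as a containment hierarchy; the following dataclasses are a teaching aid built from the definitions above, not an HP management API.

```python
# Containment hierarchy implied by the terminology: storage system (cell) ->
# disk group (with RSS subgroupings) -> virtual disk presented as a LUN.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PhysicalDisk:
    name: str

@dataclass
class VirtualDisk:                 # presented to one or more hosts as a LUN
    name: str
    size_gb: int

@dataclass
class DiskGroup:                   # physical disks from which vdisks are built
    disks: List[PhysicalDisk]      # internally split into RSSs of roughly 6-11 drives
    virtual_disks: List[VirtualDisk] = field(default_factory=list)

@dataclass
class StorageSystem:               # "cell": an initialized HSV controller pair
    name: str
    disk_groups: List[DiskGroup] = field(default_factory=list)

cell = StorageSystem("EVA8000",
                     [DiskGroup([PhysicalDisk(f"disk{i}") for i in range(8)])])
cell.disk_groups[0].virtual_disks.append(VirtualDisk("vdisk1", size_gb=500))
print(len(cell.disk_groups[0].disks))   # minimum of eight physical disks
```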
Physical space allocation
• Physical space allocated the same as for the first generation EVA
• Physical space allocated in 2MB segments (PSEGs)
• Chunk size is 128KB (256 blocks of 512 bytes each), fixed
• One PSEG is equal to 16 chunks (16 X 128KB = 2MB), or 4096 (16 X 256) blocks
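The allocation arithmetic above can be verified directly; the snippet below simply recomputes the figures stated in the list.

```python
# Worked arithmetic for the allocation units: block, chunk, and PSEG sizes
# taken from the values stated above.

BLOCK = 512                        # bytes per block
CHUNK = 128 * 1024                 # fixed 128KB chunk size
CHUNKS_PER_PSEG = 16

blocks_per_chunk = CHUNK // BLOCK                       # 256 blocks
pseg_bytes = CHUNKS_PER_PSEG * CHUNK                    # 16 x 128KB = 2MB
blocks_per_pseg = CHUNKS_PER_PSEG * blocks_per_chunk    # 16 x 256 = 4096 blocks

print(blocks_per_chunk, pseg_bytes // (1024 * 1024), blocks_per_pseg)  # 256 2 4096
```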