
This module provides Enterprise Virtual Array (EVA) information on configuring the error reporting systems and best practices for configuring the array. You should be able to:
+ Describe the steps involved to set up notification
+ List additional HP monitoring and notification tools
+ Describe the key considerations for configuring disk groups
+ List the best practices for configuring disk groups with respect to:
   • Availability
   • Performance
   • Cost
+ Describe the factors involved in sizing disk groups

Best practices for optimizing availability
The EVA is designed to be fully redundant with no single point of failure. However, some supported configurations result in multiple drives on a single shelf being part of the same disk group. These configurations may result in data unavailability due to multidisk failures or human error. To minimize these failures and prevent unexpected downtime, use the Robust Availability Configuration.

To optimize for availability, consider the VRAID type, protection level, the number of disk groups, and drive replacement procedures. Best practices for each of these areas are described in the following sections.

Each VRAID type provides a different level of availability:
+ VRAID0 — Provides no availability protection. If one disk fails, there is no parity protection and the data will be lost.
+ VRAID1 — Provides the highest availability protection but incurs a 100% drive overhead.
+ VRAID5 — Provides the most efficient protection from the standpoint of disk utilization because it is 80% efficient (parity occupies 20% of the disk space).

Availability best practice:
Do not use VRAID0.
For best performance and availability, use VRAID1.
For best storage efficiency and cost, use VRAID5.

Availability best practice: Do not mix disk sizes in a single disk group.

Availability best practice: Always configure spare space for every disk group. Do not rely on unassigned capacity to support the recovery of a failed drive.

Availability best practice: For critical database applications, place data and log files in separate disk groups.

Availability best practice: When a disk fails, follow this replacement procedure:
1. Wait for the sparing to be completed. This will be signaled by an entry in the event log.
2. Replace the failed disk.
3. Add the new disk into the original disk group.

Availability best practice: After inserting a disk drive into a shelf, wait 60 seconds before inserting another disk.

Altering the configuration
To provide the highest availability after a shelf failure, always use disk failure protection levels and maintain a Robust Availability Configuration. The following Availability Envelope rules maximize availability but limit the number of disks in the disk group (a simple sketch of these limits follows the list).

If a single disk failure protection level is specified:
- For VRAID1, never put more than one disk per shelf in the disk group.
- For VRAID5, never put more than two disks per shelf in the disk group.
If a double disk failure protection level is specified:
- For VRAID1, never put more than two disks per shelf in the disk group.
- For VRAID5, never put more than four disks per shelf in the disk group.
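
These per-shelf limits can be captured in a small lookup. The Python sketch below is purely illustrative; the function name and its inputs are assumptions, not part of any HP tool.

def max_disks_per_shelf(vraid, protection_level):
    # Availability Envelope limit on disks per shelf in a single disk group.
    # vraid: "VRAID1" or "VRAID5"; protection_level: 1 = single, 2 = double
    limits = {("VRAID1", 1): 1, ("VRAID5", 1): 2,
              ("VRAID1", 2): 2, ("VRAID5", 2): 4}
    return limits[(vraid, protection_level)]

# A disk group stays within the Availability Envelope only if no shelf
# contributes more disks to it than this limit.
print(max_disks_per_shelf("VRAID5", 2))   # 4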

Availability best practice: If additional disks are added to or removed from an existing disk group, the disk group must be inspected to be sure it is still in a Robust Availability Configuration.
Availability best practice: An alternative to adding disks to an existing disk group is to create a new disk group and set it up to follow the Robust Availability Configuration rules.

Best practices for optimizing performance
Optimizing performance depends on several factors:
+ Disk count
+ Number of disk groups
+ Disk RPM
+ Write cache mirroring
+ Mixed disk speeds
+ Mixed disk capacities
+ Read cache
+ LUN balancing

Performance best practice: Under certain, carefully considered circumstances, disabling write cache mirroring can result in significantly increased write performance.
Performance best practice: Always leave read caching enabled on a LUN.
Performance best practice: Always attempt to balance LUNs between the two controllers on an EVA based on the I/O load.

Best practices for optimizing cost
The cost of ownership of the storage is the cost per MB (or GB, TB, or more) of the entire storage subsystem. It is obtained by dividing the total cost of the storage system by the usable capacity available to the customer. Items affecting cost of ownership are the protection level, number of disk groups, disk quantity, and disk types.

Cost of ownership best practice: Do not mix disk sizes in a single disk group.
Cost of ownership best practice: Use a single disk group.
Cost of ownership best practice: Fill the storage system with as many disk drives as possible.
Cost of ownership best practice: Use lower performance, larger capacity disks wherever possible.

The following table summarizes the recommended best practices. This table makes it relatively easy to choose between the various possibilities and highlights the fact that many of the “best practice” recommendations contradict each other. In many cases, there is no correct choice; the decision depends on the goal—cost, availability, or performance.


Before deploying an EVA subsystem, you must determine the number of physical disks required to deliver the desired usable capacity, that is, the capacity of the virtual disk as seen by the hosts to which it is presented.
The raw physical storage capacity available in an EVA subsystem is consumed for a number of purposes. The first and most obvious is the storage of the data written by the operating system and applications. Some of the physical storage, however, is used to store the information that makes the fault tolerance and virtualization features possible.

Disk group sizing factors
The following factors impact the translation of raw capacity into usable capacity:
- Hardware and software storage representations
- System metadata overhead
- VRAID redundancy overhead
- Protection level
- Snapshot working space

Sizing a disk group involves determining how much usable capacity is required. A disk sizing formula is available which accounts for the fixed-system overhead, then factors in the variable overhead associated with the VRAID type, Protection level, and snapshot activity.

System metadata overhead
The subsystem stores its configuration, the tables that map virtual disks to specific physical disk blocks, and other system metadata on the disk drives. At disk group creation, the EVA keeps a minimum of five copies of the metadata. Thereafter, the metadata is duplicated in up to 16 disk groups. Approximately 0.2% of the physical capacity is consumed for this purpose.
The disk sizing formula accounts for this overhead.

VRAID redundancy overhead
The parity used to protect Vdisks from disk failure also consumes physical disk capacity.
- VRAID1 data is stored twice; consumes two blocks of physical capacity for every block of usable capacity.
- VRAID5 data stores one block of parity for every four blocks of data; consumes 1.25 blocks of physical capacity for every block of usable capacity.
- VRAID0 data has no parity protection; consumes only one block of physical capacity for every block of usable capacity.
The disk sizing formula accounts for this overhead.
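
As a sketch only (the multipliers simply restate the overheads listed above), the physical capacity consumed for a given usable capacity can be estimated as follows:

def physical_capacity_gb(usable_gb, vraid):
    # Physical blocks consumed per block of usable capacity, by VRAID type
    overhead = {"VRAID0": 1.0,    # no redundancy
                "VRAID1": 2.0,    # data stored twice
                "VRAID5": 1.25}   # one parity block per four data blocks
    return usable_gb * overhead[vraid]

print(physical_capacity_gb(100, "VRAID5"))   # 125.0 GB physical for 100 GB usable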

Protection level
The protection level reserves spare capacity for disk reconstruction. The amount of physical disk capacity reserved depends upon the protection level selected, the size of the largest disk in the disk group and the number of disk groups.
The disk sizing formula accounts for this overhead.

Snapshot working space
An installation that creates snapshots or snapclones on a regular basis needs free capacity from which these new disks can draw. Snapclones and standard snapshots consume the same physical capacity as the original virtual disk. This additional capacity is consumed during the operation that creates the snapclone or standard snapshot.

Formula for determining disk count
A formula is available to define the approximate relationship between hardware disk capacity, the number of disks, and the capacity required for each of the three VRAID types.
The formula assumes that the disk groups consist of a multiple of eight drives, all with the same capacity.
The formula to determine disk count is:

DiskCount ≅ [(UsableV0 x 538) + (UsableV5 x 673) + (UsableV1 x 1076)] / (DiskCap x 476) + (ProtLevel x 2)

Where each of the variables is defined as:
DiskCount: Integer number of disk drives
UsableV0: Desired usable VRAID0 capacity in software GB
UsableV5: Desired usable VRAID5 capacity in software GB
UsableV1: Desired usable VRAID1 capacity in software GB
DiskCap: Disk drive capacity in hardware GB
ProtLevel: 0 for None, 1 for Single, 2 for Double

Note that the snapshot working space must be estimated and included in the VRAID5 or VRAID1 capacities.

For example, if you need 1TB of usable VRAID0, 1TB of usable VRAID5, and 1TB of usable VRAID1, all using 72GB disks, and you need double protection level, you can calculate the disk count you need as:
DiskCount ≅ [(1000GB x 538) + (1000GB x 673) + (1000GB x 1076)] / (72GB x 476) + (2 x 2) ≅ 66.7 + 4 ≅ 70.7 disks ≅ 71 disks

Note: Remember to convert to the same units, for example, 1TB = 1000GB.
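
The formula and the worked example above translate directly into a short script. The sketch below is illustrative; the function name and the rounding up to a whole disk are assumptions.

import math

def disk_count(usable_v0_gb, usable_v5_gb, usable_v1_gb, disk_cap_gb, prot_level):
    # usable_*_gb: desired usable capacity per VRAID type, in software GB
    # disk_cap_gb: capacity of each disk drive, in hardware GB
    # prot_level: 0 = none, 1 = single, 2 = double disk failure protection
    disks = (usable_v0_gb * 538 +
             usable_v5_gb * 673 +
             usable_v1_gb * 1076) / (disk_cap_gb * 476) + prot_level * 2
    return math.ceil(disks)

# Worked example from the text: 1TB of each VRAID type on 72GB disks,
# double protection level -> 66.7 + 4 = 70.7, rounded up to 71 disks.
print(disk_count(1000, 1000, 1000, 72, 2))   # 71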

Disk group sizing procedure summary:
To determine the number of physical disks required for the disk group:
1. Determine the physical capacity of the drives to be used, and the disk failure protection level required for the disk group.
2. Determine the number of virtual disks that are to reside in the disk group, the usable capacity required for each, and the VRAID type required for each.
3. Determine the amount of additional usable capacity required for snapshot working space for each virtual disk that will have snapshots. The VRAID type of the snapshots will match the original.
4. Take the sum of the total usable capacity required for each VRAID type. This is the sum of the usable capacity for each virtual disk of that type and its corresponding snapshot working space (if any).
5. Solve for the number of disks using the formula.
6. Add margins appropriate to the circumstances. Subsystem management flexibility improves significantly as free capacity grows.

Remember, the occupancy level for a disk device does not take spare space allocation into account until there is a disk group member failure.
