Veritas InfoScale Storage 7.4.2 for Linux: Administration (ISSLA) – Outline

Detailed Course Outline

Storage Foundation Basics

Installing and Licensing InfoScale
  • Introducing the Veritas InfoScale product suite
  • Tools for installing InfoScale products
  • InfoScale Cloud offerings
  • Installing Veritas InfoScale Storage
  • Installing Veritas InfoScale Availability
  • Upgrading Veritas InfoScale Enterprise
Labs: Introduction
  • Exercise A: Viewing the virtual machine configuration
  • Exercise B: Displaying networking information
Labs: Installation of InfoScale Storage
  • Exercise A: Verifying that the system meets installation requirements
  • Exercise B: Installing InfoScale Storage and configuring Storage Foundation
  • Exercise C: Performing post-installation and version checks
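A minimal sketch of the installer flow these exercises follow (run from the unpacked media directory; sys1 and sys2 are the lab hosts, and exact options vary by release):

    ./installer -precheck sys1 sys2       # verify installation requirements
    ./installer                           # interactive install and configuration
    /opt/VRTS/install/showversion         # post-installation version check
    /opt/VRTSvlic/bin/vxkeyless display   # confirm the keyless licensing level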
Virtual Objects
  • Operating system storage devices and virtual data storage
  • Volume Manager (VxVM) storage objects
  • VxVM volume layouts and RAID levels
Labs
  • Exercise A: Text-based VxVM menu interface
  • Exercise B: Accessing CLI commands
  • Exercise C: Adding managed hosts (sys1 and sys2) to the VIOM Management Server (mgt)
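The menu and CLI entry points used in these exercises, in minimal form (datadg is a placeholder disk group; registering managed hosts is typically done from the VIOM console itself):

    vxdiskadm                  # text-based menu interface for disk operations
    vxdisk -o alldgs list      # disks as VxVM sees them, with disk group membership
    vxdg list                  # configured disk groups
    vxprint -htg datadg        # full object hierarchy: dg, dm, v, pl, sd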
Creating a Volume and File System
  • Preparing disks and disk groups for volume creation
  • Creating a volume and adding a file system
  • Displaying disk and disk group information
  • Displaying volume configuration information
  • Removing volumes, disks, and disk groups
Labs
  • Exercise A: Creating disk groups, volumes, and file systems: CLI
  • Exercise B: Removing volumes and disks: CLI
  • Exercise C: Destroying disk data using disk shredding: CLI
  • Exercise D: (Optional) Creating disk groups, volumes, and file systems: VIOM
  • Exercise E: (Optional) Removing volumes, disks, and disk groups: VIOM
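A condensed version of the create-and-remove cycle from Exercises A through C (all object names are placeholders; shred options vary by release, see vxdiskunsetup(1M)):

    vxdisksetup -i disk_0                        # initialize a disk for VxVM
    vxdg init datadg datadg01=disk_0             # create a disk group
    vxassist -g datadg make datavol 1g           # create a 1 GB volume
    mkfs -t vxfs /dev/vx/rdsk/datadg/datavol     # add a VxFS file system
    mkdir -p /data
    mount -t vxfs /dev/vx/dsk/datadg/datavol /data
    # removal, in reverse order
    umount /data
    vxassist -g datadg remove volume datavol
    vxdg destroy datadg
    vxdiskunsetup -o shred=1 disk_0              # overwrite disk contents on release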
Working with Volumes with Different Layouts
  • Volume layouts
  • Creating volumes with various layouts
  • Allocating storage for volumes
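Representative vxassist invocations for the layouts this module compares (a hedged sketch; all names are placeholders, see vxassist(1M) for the full attribute list):

    vxassist -g datadg make stripevol 2g layout=stripe ncol=3 stripeunit=64k
    vxassist -g datadg make mirvol 1g layout=mirror nmirror=2
    vxassist -g datadg make layvol 2g layout=stripe-mirror ncol=2   # layered layout
    vxassist -g datadg make pinvol 1g datadg01 datadg02             # restrict allocation to named disks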
Making Configuration Changes
  • Administering mirrored volumes
  • Resizing a volume and a file system
  • Moving data between systems
  • Renaming VxVM objects
Labs
  • Exercise A: Administering mirrored volumes
  • Exercise B: Resizing a volume and file system
  • Exercise C: Renaming a disk group
  • Exercise D: Moving data between systems
  • Exercise E: (Optional) Resizing a file system only
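A sketch of the commands behind these exercises, with placeholder names:

    vxassist -g datadg mirror datavol           # add a mirror to a volume
    vxassist -g datadg remove mirror datavol    # remove one
    vxresize -g datadg datavol +500m            # grow volume and file system together
    fsadm -t vxfs -b newsize /data              # resize the file system only (see fsadm_vxfs(1M) for size units)
    vxedit -g datadg rename datavol appvol      # rename a volume
    vxdg -n appdg deport datadg                 # rename a disk group while deporting it
    vxdg deport datadg                          # release on the source host...
    vxdg import datadg                          # ...then import on the target host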

Managing Devices

SmartIO
  • SmartIO in InfoScale Storage 7.4.2
  • Support for caching on Solid State Drives (SSDs)
  • Using the SmartAssist Tool
Labs
  • Exercise A: Configuring VxVM caching
  • Exercise B: Configuring VxFS read caching
  • Exercise C: Configuring VxFS writeback caching
  • Exercise D: (Optional) Destroying cache area
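SmartIO is administered with the sfcache utility; only a hedged sketch is given here, since argument details vary by release (device and mount names are placeholders, see sfcache(1M)):

    sfcache create -t VxVM ssd_device    # create a VxVM cache area on an SSD
    sfcache list                         # show cache areas and their state
    sfcache enable /data                 # enable VxFS caching for a mount
    sfcache stat                         # cache hit statistics
    # writeback caching is selected per mount, e.g.:
    mount -t vxfs -o smartiomode=writeback /dev/vx/dsk/datadg/datavol /data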
Dynamic Multi-Pathing
  • Managing components in the VxVM architecture
  • Discovering disk devices
  • Managing multiple paths to disk devices
Labs
  • Exercise A: Administering the Device Discovery Layer
  • Exercise B: Displaying DMP information
  • Exercise C: Displaying DMP statistics
  • Exercise D: Enabling and disabling DMP paths
  • Exercise E: Managing array policies
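A minimal set of the DMP commands these exercises use (enclosure, path, and DMP node names are environment-specific):

    vxdctl enable                                  # rescan and discover devices
    vxddladm listsupport all                       # arrays known to the Device Discovery Layer
    vxdmpadm listenclosure all                     # enclosures and their status
    vxdmpadm getsubpaths dmpnodename=dmp_device    # paths behind one DMP node
    vxdmpadm iostat start                          # begin gathering path statistics
    vxdmpadm iostat show all                       # display them
    vxdmpadm disable path=path_name                # take a path offline, then...
    vxdmpadm enable path=path_name                 # ...bring it back
    vxdmpadm setattr enclosure enc0 iopolicy=minimumq   # change the array I/O policy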
Veritas Dynamic Multi-Pathing for VMware
  • DMP in a VMware ESX/ESXi environment
  • Managing DMP for VMware
  • Administering the SmartPool
  • Performance monitoring and tuning using the DMP console
Resolving Hardware Problems
  • How VxVM interprets hardware failures
  • Recovering disabled disk groups
  • Resolving disk failures
Labs
  • Exercise A: Recovering a temporarily disabled disk group
  • Exercise B: Preparing for disk failure labs
  • Exercise C: Recovering from temporary disk failure
  • Exercise D: Recovering from permanent disk failure
  • Exercise E: (Optional) Recovering from temporary disk failure (layered volume)
  • Exercise F: (Optional) Recovering from permanent disk failure (layered volume)
  • Exercise G: (Optional) Replacing physical drives without hot relocation
  • Exercise H: (Optional) Replacing physical drives with hot relocation
  • Exercise I: (Optional) Recovering from temporary disk failure with vxattachd daemon
  • Exercise J: (Optional) Exploring spare disk behavior
  • Exercise K: (Optional) Using the Veritas Support website
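The basic recovery toolkit for these exercises, in sketch form (disk and group names are placeholders):

    vxdisk list                               # identify failed or failing disks
    vxprint -htg datadg                       # check plex and volume states
    vxreattach                                # reattach disks after a temporary failure
    vxrecover -b -g datadg                    # resynchronize affected volumes in the background
    vxedit -g datadg set spare=on datadg03    # mark a disk as a hot-relocation spare
    # for permanent failures, vxdiskadm's "Replace a failed or removed disk"
    # option walks through the replacement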

Cluster File System

Installing InfoScale Storage for Cluster File System
  • SFCFS overview
  • SFCFS architecture
  • SFCFS communication
  • VCS management of SFCFS infrastructure
Labs
  • Exercise A: Performing a pre-installation check using the installer utility
  • Exercise B: Installing Veritas InfoScale Storage and configuring Cluster File System
  • Exercise C: Configuring the Cluster File System component in an environment with pre-installed InfoScale Storage
  • Exercise D: (Optional) Performing post-installation and version checks
  • Exercise E: Verifying cluster communications
  • Exercise F: Adding managed hosts to the VIOM management server
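Cluster communications can be verified from the command line roughly as follows (output formats vary by release):

    lltstat -nvv               # LLT link status for every node
    gabconfig -a               # GAB port memberships (a=GAB, h=VCS, v,w=CVM/CFS)
    vxclustadm -v nodestate    # this node's CVM state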
Cluster Volume Manager
  • VxVM and CVM overview
  • CVM concepts
  • CVM configuration
  • CVM response to storage disconnectivity
Labs
  • Exercise A: Creating shared disk groups and volumes using CLI
  • Exercise B: Creating a shared disk group and volume using VIOM
  • Exercise C: Converting a disk group from shared to private and vice versa
  • Exercise D: Investigating the impact of the disk group activation modes
  • Exercise E: (Optional) Observing the impact of rebooting the master node in a storage cluster
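A sketch of the shared disk group operations covered here (run creation on the CVM master; names are placeholders):

    vxdctl -c mode                            # show this node's CVM role (master/slave)
    vxdg -s init shareddg shareddg01=disk_1   # create a shared disk group
    vxassist -g shareddg make sharedvol 1g
    vxdg deport shareddg                      # to convert shared/private, deport...
    vxdg -s import shareddg                   # ...and re-import (-s = shared; omit for private)
    vxdg -g shareddg set activation=sw        # e.g. shared-write activation mode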
Cluster File System
  • Cluster File System concepts
  • Data flow in CFS
  • Administering CFS
Labs
  • Exercise A: Creating a shared file system: CLI
  • Exercise B: Changing the primary node role: CLI
  • Exercise C: Placing the shared file system under storage cluster control: CLI
  • Exercise D: Deleting shared file systems and disk groups
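Exercises A through D reduce to roughly the following (placeholder names; cfsmntadm registers the mount with VCS so the storage cluster controls it):

    mkfs -t vxfs /dev/vx/rdsk/shareddg/sharedvol
    cfsmntadm add shareddg sharedvol /cfsdata all=rw   # register the cluster mount
    cfsmount /cfsdata                                  # mount on all nodes
    fsclustadm -v showprimary /cfsdata                 # which node is CFS primary
    fsclustadm setprimary /cfsdata                     # move the primary role here
    cfsumount /cfsdata                                 # unmount before removal
    cfsmntadm delete /cfsdata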
Flexible Storage Sharing
  • Understanding Flexible Storage Sharing
  • FSS storage objects
  • FSS case study
  • Flexible Storage Sharing implementation
  • FSS configuration
Labs
  • Exercise A: Administering flexible storage sharing (FSS)
  • Exercise B: Testing flexible storage sharing
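FSS pools direct-attached storage across cluster nodes; a hedged sketch of the flow (disk and host names follow the lab conventions; see the FSS documentation for export semantics):

    vxdisk export disk_2                # make a local (DAS) disk visible over the network
    vxdg -s init fssdg fssdg01=disk_2   # shared disk group over exported disks
    vxassist -g fssdg make fssvol 1g nmirror=2 mirror=host   # one plex per host for redundancy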

Replication

Disaster Recovery and Replication Overview
  • Disaster recovery concepts
  • Defining replication
  • Replication options and technologies
  • Veritas technologies for disaster recovery
Veritas File Replicator
Labs
  • Exercise A: Setting up and performing replication for a VxFS file system
  • Exercise B: Restoring the source file system using the replication target
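File replication is driven by the vfradmin utility; only a hedged outline is given here, since job arguments vary by release (see vfradmin(1M)):

    vfradmin job list    # configured replication jobs
    # jobs are defined with 'vfradmin job create ...', started with
    # 'vfradmin job start ...', and monitored with 'vfradmin job stats ...'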
Veritas Volume Replicator Components
  • Veritas Volume Replicator overview
  • Comparing volume replication with volume management
  • Volume Replicator components
  • Volume Replicator data flow
Veritas Volume Replicator Operations
  • Replication setup
  • Assessing the status of the replication environment
  • Migration, takeover, and fast failback
Labs
  • Exercise A: Preparing storage for replication
  • Exercise B: Establishing replication
  • Exercise C: Observing data replication
  • Exercise D: Migrating the primary role
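The vradmin commands behind these exercises, with placeholder object names (datarvg is the replicated volume group, srlvol its Storage Replicator Log):

    vradmin -g datadg createpri datarvg datavol srlvol   # create the primary RVG
    vradmin -g datadg addsec datarvg pri_host sec_host   # add the secondary
    vradmin -g datadg -a startrep datarvg sec_host       # start replication with autosync
    vradmin -g datadg repstatus datarvg                  # assess replication status
    vradmin -g datadg migrate datarvg sec_host           # migrate the primary role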
InfoScale Support for Cloud Environments
  • Overview of InfoScale solutions in cloud environments
  • Preparing for InfoScale installations in cloud environments
  • Configurations for cloud environments
  • Troubleshooting issues in cloud environments
Labs
  • Exercise A: Verifying S3 server details (sys3)
  • Exercise B: Creating InfoScale storage support for the S3 connector
  • Exercise C: (Optional) Creating FSS and SmartIO storage and backing up data to the S3 server
Challenge Lab (Linux)
  • Exercise A: Creating a four-node storage cluster (CVM)
  • Exercise B: Creating a local VxFS mount point and backing up data to the S3 server (sys3)
  • Exercise C: Creating an FSS-backed cluster mount point and backing up data to the S3 server (sys3)
Appendix A: Working with Erasure Coding
  • Erasure coding overview
  • Erasure coding architecture
  • Erasure coded volume enhancements in 7.4
  • Erasure coding performance comparison
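Erasure coded volumes are created through vxassist; a minimal hedged example (keyword support depends on the installed release, see vxassist(1M)):

    vxassist -g datadg make ecvol 10g layout=ecoded ncol=4 nparity=2
    # 4 data columns plus 2 parity columns: any two disks can fail without data loss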

Increased Lab Access Time: Course labs remain accessible for six months following the class.