Sunday, November 25, 2012

NetApp NCDA NS0-154 Exam Cram Notes: Part 2 of 3



7: Availability Overview

For correct operation of aggregate and traditional volume SyncMirror, ensure the configuration of each plex is identical with regard to RAID groups and disk sizes.
The SyncMirror license cannot be removed if one or more mirrored volumes exist.
To add SyncMirror to an existing system, required hardware = disk shelves, Fibre Channel adapters, cabling.
Creating a SyncMirror volume requires an even number of disks, equally divided between the two plexes. Disks are selected first by equivalent bytes-per-sector size, then by disk size; if no disk of equivalent size is available, DOT right-sizes a larger-capacity disk.
Before splitting a SyncMirror volume, first ensure both plexes are online and operating normally.
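A minimal command sketch for mirroring and later splitting an aggregate (aggregate and plex names are hypothetical):
aggr mirror aggr1                      (add a second plex to aggr1 from matching spares)
aggr status aggr1 -v                   (verify both plexes are online and normal)
aggr split aggr1/plex1 aggr1split      (split off plex1 into a new unmirrored aggregate)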

8: Snapshot Copies (and snap command)

Even if the Snapshot reserve is 0%, you can still create Snapshot copies. If there is no Snapshot reserve, Snapshot copies take their blocks from the active file system.
An inode is a data structure used to represent file system objects. It is 192 bytes in size and describes a file's attributes, including: type; size; owner and permissions; pointer to the xinode (ACLs); complete file data if the file is <= 64 bytes; pointers to data blocks.

The snap family of commands provides a means to create and manage Snapshot copies in each volume or aggregate.
Syntax examples:
snap restore [ -f ] [ -t vol | file ] [ -s snapshot_name] [ -r restore_as_path ] vol_name | restore_from_path
-f : The -f option suppresses user confirmation
Usage examples:
snap list engineering
Volume engineering
%/used     %/total    date          name
 0% ( 0%)   0% ( 0%)  Nov 14 08:00  hourly.0
50% (50%)   0% ( 0%)  Nov 14 00:00  nightly.0
67% (50%)   0% ( 0%)  Nov 13 20:00  hourly.1
75% (50%)   0% ( 0%)  Nov 13 16:00  hourly.2
80% (50%)   0% ( 0%)  Nov 13 12:00  hourly.3
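
A snap restore usage sketch based on the syntax above (volume, Snapshot copy, and file names are hypothetical):
snap restore -t vol -s nightly.0 engineering
(reverts the engineering volume to the nightly.0 Snapshot copy)
snap restore -t file -s hourly.1 -r /vol/engineering/spec_restored.doc /vol/engineering/spec.doc
(restores a single file from hourly.1 to an alternate path)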

To protect against LUN overwrites when using Snapshot copies on a volume with 0% fractional reserve, configure either snap autodelete or vol autosize.
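For example (volume name, sizes, and thresholds are hypothetical):
snap autodelete vol1 trigger volume      (delete Snapshot copies when the volume is nearly full)
snap autodelete vol1 on                  (enable automatic Snapshot copy deletion)
vol autosize vol1 -m 200g -i 10g on      (grow vol1 in 10g increments up to 200g)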

9: SnapRestore

SnapRestore can be used to revert an aggregate, a LUN, a volume, or a single file (but not a qtree or directory).
Pre-requisites for using volume SnapRestore:
i: SnapRestore must be licensed before use.
ii: Snapshot copies for the volume to be reverted must exist on the storage system.
iii: The volume to be reverted must be online.
iv: All ongoing access to the volume must be terminated (just as is done when a volume is brought offline).
v: The volume to be reverted must not be a SnapMirror destination.
Note 1: A volume-level SnapRestore operation cannot be undone.
Note 2: When reverting the root volume, the filer will be rebooted.

Volume SnapRestore completion:
i: The volume automatically comes back online.
ii: Snapshot copies more recent than the restored Snapshot copy are deleted.

SnapRestore and Backup Operations
> After a reversion, incremental backup and restore operations can no longer rely on the active file system (AFS) timestamps
> Recommendations:
- After the reversion, perform a level-0 (base level) backup
- When restoring from tape, use only backups created after the volume reversion

10: SnapVault (and snapvault command)

SnapVault = a disk-based backup feature of DOT that enables data stored on multiple storage systems to be backed up to a central, secondary storage system quickly and efficiently as read-only Snapshot copies.
- In the event of data loss or corruption, backed-up data can be restored from the SnapVault secondary with minimal downtime.
- Additionally, the SnapVault secondary may be configured with NFS exports or CIFS shares to let users copy files from the Snapshot copy to the correct location.
- Available data protection solutions:
i: SnapVault to volume SnapMirror
ii: SnapVault to volume SnapMirror to volume SnapMirror
iii: volume SnapMirror to SnapVault

SnapVault: Backup to Remote Storage – recover qtrees, directories, or files
The qtree is the basic unit of SnapVault backup and restore
A FAS system with the NearStore license is recommended as the secondary storage system for SnapVault.
The SnapVault secondary system allows a separate schedule of Snapshot copies from the primary.
SnapVault secondary volume can contain up to 251 Snapshot copies for data protection.

Deduplication with SnapVault destinations:
i: the source (primary) system sends undeduplicated data even if the source data is deduplicated
ii: internally synchronizes with the SnapVault schedule on the destination
iii: creates a snapshot, deduplicates, then deletes and recreates the snapshot

Via the SnapDrive for Windows GUI, it is possible to create disks and snapshots.
Note: SnapDrive allows Server Administrators to provision storage, create snapshots, and manage application-consistent backups, all without needing to trouble the Storage Administrator.

NearStore Personality
- Converts the destination storage system to a NearStore system
- Requires the nearstore_option license on the secondary and DOT 7.1 or later
- Installing the nearstore_option license increases the number of concurrent transfers

The snapvault command is used to configure and control SnapVault.
Syntax Examples:
snapvault snap sched [ -f ] [ -x ] [ -o options ] [ volname [ snapname [ schedule ]]]
options : opt_name=opt_value[[,opt_name=opt_value]...]
schedule : cnt[@day_list][@hour_list] or cnt[@hour_list][@day_list]
example: snapvault snap sched -x vault sv_daily 12@23@sun-fri
(When setting or changing a snapshot schedule, the snapvault snap sched -x option tells SnapVault to transfer new data (incremental) from all primary paths before creating the snapshot.)
snapvault snap create [ -w ] [ -o options ] volname snapname
Available on the primary and secondary. Initiates creation of the previously configured snapshot snapname in volume volname just as if its scheduled time for creation had arrived.
snapvault start [ options ] sec_qtree
Available on the secondary only. The qtree specified for sec_qtree must not exist.
snapvault update [ options ] sec_qtree
Available on the secondary only. Immediately starts an update of the specified qtree on the secondary.
snapvault restore [ options ] -S sec_filer:sec_path pri_path
Available on the primary only. If rapid_restore is not licensed, sec_path and pri_path must be qtree paths.
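
Hedged usage sketches for the subcommands above (system, volume, and qtree names are hypothetical):
sec> snapvault start -S pri:/vol/vol1/tree_a /vol/sv_vol/tree_a     (baseline transfer to the secondary)
sec> snapvault update /vol/sv_vol/tree_a                            (incremental update of the secondary qtree)
pri> snapvault restore -S sec:/vol/sv_vol/tree_a /vol/vol1/tree_a   (restore a qtree back to the primary)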

SnapVault Qtree or Volume Qtree Restore:
- Pre-DOT 7.3, can restore to a non-existing qtree.
- DOT 7.3 and later, can restore to an existing qtree.

11: Open Systems SnapVault (OSSV)

Use the OSSV Free Space Estimator utility to determine if there is sufficient disk space available on the primary to perform an OSSV transfer. The secondary system is a NetApp storage appliance. An initial baseline copy must be executed for each Open Systems platform directory to be backed up to the SnapVault secondary storage system. Open TCP ports 10000 (for central management using NDMP) and 10566 (QSMSERVER) before install. The NetApp Host Agent requires HTTP port 4092 and HTTPS port 4093.

Use the OSSV Configurator GUI (installed during OSSV agent installation, and available from the Windows Programs menu or from installdir/bin/svconfigurator.exe) to verify and modify OSSV parameters (such as the QSM access list and NDMP settings) and to perform actions (such as starting or stopping the OSSV service, enabling debugging, and capturing trace files).

To change the NDMP password on an OSSV host from the CLI:
1) Navigate to the install_dir/bin folder, e.g. $> cd /usr/snapvault/bin
2) Run svpassword, e.g. $> ./svpassword
3) Enter the new password when prompted.
4) Restart the OSSV service on the host.

12: High-Availability

Negotiated Failover: DOT can be configured to trigger a takeover when one or more specified network interfaces fail, ensuring continued client access
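
A minimal sketch, assuming 7-mode negotiated failover options (the interface name is hypothetical; add the nfo flag to /etc/rc to persist it):
> options cf.takeover.on_network_interface_failure on
> ifconfig e0a nfo          (flag e0a for negotiated failover monitoring)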

13: MetroCluster

A "site failure" is a complete failure of the primary controller and disk shelves
Stretch MetroCluster provides campus disaster recovery protection and can stretch up to 500m.
Fabric MetroCluster provides Metropolitan disaster recovery protection and can stretch up to 100km with FC switches.

Maximum number of disk shelves per loop in a fabric-attached MetroCluster = 2
Each storage system in a cluster must have network access to the same collection of subnets.

Local node configuration notes:
i) add the following licenses
- cf (Cluster)
- syncmirror_local (Syncmirror_local)
- cf_remote (Cluster_remote)
ii) Use storage show disk -p to verify disks are connected and have dual paths

To make an ifconfig partner address configuration persistent across reboots, edit the /etc/rc file for each system.
The interfaces of a surviving controller running in takeover mode within an active/active configuration reflect the identity of the interfaces as defined in the /etc/rc file.
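A sketch of the relevant /etc/rc line (IP addresses and interface name are hypothetical):
ifconfig e0a 10.1.1.10 netmask 255.255.255.0 partner 10.1.1.20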
Verify cluster configuration with cf-config-check (there is a .cgi and a .exe version), which is downloaded from the NOW site and run from a host machine.
The cf command controls the controller failover monitor, which determines when takeover and giveback operations take place within a HA pair.
- The manual cf forcetakeover -d command causes a storage system controller takeover to occur (to avoid data corruption, the remote node should be powered off and inaccessible). Data needs to be set online to allow operations to continue. This causes mirrored volumes to be implicitly split, and the surviving cluster node takes over the functions of its failed partner (the failed cluster node is not powered off automatically).

To prevent a split brain, restrict access to previous failed site controller until proper site recovery (either use manual fencing or power off the disaster site node)
To re-establish a mirrored volume after a resynchronization failure from a level-0 resync state, re-create the synchronous mirror.

To remove a cluster setup:
1) Type cf disable
2) Unlicense cluster
3) Remove partner entries for network interfaces from the /etc/rc file
4) Halt and make sure the partner-sysid is blank
5) Power down and remove the cluster interconnect card
6) Perform steps 1-5 on the partner node
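
A hedged command sketch of the steps above (the license/service name and firmware prompt are assumptions and vary by platform and DOT release):
filer1> cf disable
filer1> license delete cluster          (license name is an assumption; check the license output)
(edit /etc/rc to remove partner entries for the network interfaces)
filer1> halt
LOADER> unsetenv partner-sysid          (ensure partner-sysid is blank)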

14: SnapMirror (and snapmirror command)

Volume SnapMirror
Requirements and Limitations:
i: SnapMirror must be licensed for each storage system.
ii: Destination's DOT version must be equal to or more recent than the source.
iii: Like-to-like transfers only: flex-to-flex.
iv: Destination volume capacity equal to or greater than source.
v: TCP port range 10565-10569 must be open.
vi: The source volume must be online.
vii: Must create a restricted volume to be used as the SnapMirror destination.
viii: Quota cannot be enabled on destination volume.
ix: If a mirrored volume has a failed disk and no available spare, DOT will warn of this!

SnapMirror Configuration Process
1) Install SnapMirror license on source and destination systems
2) On source, specify host name or IP address of SnapMirror destination systems you wish to authorize to replicate this source system
options snapmirror.access host=dst1,dst2
3) For each source volume or qtree to replicate, perform an initial baseline transfer
4) After the initial transfer completes, set the SnapMirror mode of replication by creating the /etc/snapmirror.conf file in the destination's root volume
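
A command sketch of the process above (system and volume names are hypothetical):
src> options snapmirror.access host=dst
dst> vol restrict dst_vol
dst> snapmirror initialize -S src:src_vol dst:dst_vol     (baseline transfer)
dst> snapmirror status                                    (verify the relationship)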

The snapmirror.conf configuration file resides on the destination.
The syntax for entries in the snapmirror.conf file is as follows:

src:/vol/src_vol/[src_qtree] dst:/vol/dst_vol[/dst_qtree] [arguments] [schedule]
[arguments] field:
- indicates that default values apply for all arguments
visibility_interval: controls the view of the data on the destination and specifies the amount of time before an automatic Snapshot copy is created on the synchronously mirrored source (default value = 3 minutes; smallest supported value = 30 s)
Semi-Sync Mode: pre-DOT 7.3, use the outstanding argument (outstanding={x ops | x ms | x s}). DOT 7.3 and later: use the semi-sync argument (example: src:vol dst:vol - semi-sync)
[schedule] field: (4 space-separated fields: minute hour day_of_month day_of_week)
* for all possible values
- means "never" and prevents this schedule entry from executing

Examples:
src:/vol/vol1 dst:/vol/vol1 - sync
src:/vol/vol1 dst:/vol/vol1 outstanding=3s sync
src:/vol/vol1 dst:/vol/vol1 visibility_interval=1hr,outstanding=3ms,cksum=crc32 sync
src:/vol/vol1/q1 dst:/vol/vol1/q1 - 15 * * *
src:vol2 dst:vol2 kbs=2000 10 8,20 * *

Note i: SnapMirror updates can be scheduled to occur as frequently as every minute
Note ii: To convert an asynchronous SnapMirror relationship to synchronous, edit the snapmirror.conf file with the keyword sync.
Note iii: To cause a currently in-sync SnapMirror relationship to fall out of sync, modify the snapmirror.conf file

The snapmirror command is used to control SnapMirror, a method of mirroring volumes and qtrees.
Some snapmirror subcommands:
resync [ -n ] [ -f ] [ -S source ] [ -k kilobytes ] [ -s src_snap ] [ -c create_dest_snap ] [ -w ] destination
Resynchronizes a broken-off destination to its former source, putting the destination in the snap-mirrored state and making it ready for update transfers. Resynchronization finds the newest common snapshot and removes all newer information. The resync command must be issued on the destination filer.
initialize [ -S source ] [ -k kilobytes ] [ -s src_snap ] [ -c create_dest_snap ] [ -w ] destination
Starts an initial transfer over the network. An initial transfer is required before update transfers can take place. The initialize command must be issued on the destination filer.
resume destination
Resumes transfers to destination. The command restores the state of the destination from quiescing or quiesced to whatever it was prior to the quiesce operation.
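
Hedged usage sketches, issued on the destination filer (system and volume names are hypothetical):
dst> snapmirror initialize -S src:vol1 dst:vol1
dst> snapmirror resync -S src:vol1 dst:vol1
dst> snapmirror resume dst:vol1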

SnapMirror log files /etc/log/snapmirror.[0-5] are saved in the root volume.
SnapMirror will automatically try to restart a transfer after a scheduled incremental update is interrupted.

SnapMirror over Multiple Paths
SnapMirror supports up to two paths for a particular SnapMirror relationship, for both Async and Sync replication modes. Supported path modes are: multiplexing (both paths used at the same time) and failover (first path active, second path used as failover).
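
A hedged snapmirror.conf multipath sketch (the connection name, IP addresses, and volume names are hypothetical):
src-dst = multi (10.1.1.10,10.1.2.10) (10.1.1.11,10.1.2.11)
src-dst:vol1 dst:vol1 - 15 * * *
(use failover in place of multi for failover mode)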

Throttling Network
Per transfer: use the kbs argument in snapmirror.conf
Dynamic throttle (while a transfer is active): use CLI:> snapmirror throttle <kbs> dst_hostname:dst_path
System-wide throttle:- use CLI:
> options replication.throttle.enable on
> options replication.throttle.incoming.max_kbs <kbs>
> options replication.throttle.outgoing.max_kbs <kbs>
Note: For SnapVault, there are also the "snapvault start -k" and "snapvault update -k" flags to set throttling speed.
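
For example (throttle values and paths are hypothetical):
> snapmirror throttle 2000 dst:vol1                 (limit an active transfer to 2000 KB/s)
> options replication.throttle.incoming.max_kbs 5000
sec> snapvault update -k 2000 /vol/sv_vol/tree_a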

15: SnapLock
NetApp SnapLock software delivers high-performance, disk-based data permanence.
- SnapMirror supports SnapLock volumes
- SnapLock is available in two versions: SnapLock Compliance and SnapLock Enterprise
- SnapLock volumes support per-file retention periods
- Snapshot copies for SnapLock volumes are deleted automatically according to the retention count set in the schedule
- A volume SnapMirror relationship is not allowed between two SnapLock Compliance volumes if the destination volume has unexpired WORM files

16: Protection Manager

Download the NetApp Management Console (NMC) software - which contains Protection Manager - from Operations Manager and install the package on either a UNIX or Windows host.
Protection Manager provides a high level of assurance for data protection by proactively identifying unprotected data, checking for errors in configurations, diagnosing the root cause of issues, suggesting corrective actions, and providing detailed status reports.
Operations Manager is the user interface for the Web-based application called DataFabric Manager. DataFabric Manager discovers, monitors, and manages NetApp storage systems.
The NetApp host agent must be installed on target hosts for file-level reporting only. Operations Manager does not require agents to monitor NetApp storage systems.
Protection Manager integrates the NetApp suite of products with other products in the market for management of ITaaS deployments, as well as industry tape backup solutions (it has policies that allow for disk-based protection but not tape-based protection).
Datasets are collections of units of primary storage - storage system volumes, qtrees, and directories - stored on Windows or UNIX hosts. Units of primary storage are grouped together to be protected by the same protection policy and schedule. Valid objects to be backed up via a Protection Manager policy managing an OSSV backup of a UNIX server include: a file, a directory, or the entire client.
