How to map the iSCSI interface of XenServer 5.6/6.0.2 to an EMC VNX 5300

First, define a storage interface on XenServer: choose Configure under "Management Interfaces" and then select "New Interface". Now specify a name, choose the NIC which will be configured for storage traffic, and supply an IP address. This needs to be repeated for every interface that is connected to the storage.

Once this is done, perform discovery of the NICs and log in to the EMC array. This can be done via the CLI as well as via XenCenter.

In my experience, XenCenter is the easier option.

To do this via XenCenter, select New Storage and choose "Software iSCSI". Provide a name, and then under Location supply the following:

Target host: the IP address of the target storage processor (controller). Specify multiple IP addresses separated by commas (for example, 10.10.10.1,10.10.10.2).

[Screenshot: XenServer New Storage Repository wizard]

Target IQN: here you will find the IQNs of the target storage processors. If the array has 4 ports, you will see 4 IQNs. Choose the entry highlighted with (*).

This will log in to all the targets on the EMC VNX box so they can be mapped.

From the command line, the following needs to be run on the host in order to log in.
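The exact commands are not shown here, but XenServer ships the open-iscsi tools, so a minimal sketch would look like this (10.10.10.1 is an assumed SP iSCSI portal address; substitute your own):

iscsiadm -m discovery -t sendtargets -p 10.10.10.1:3260
iscsiadm -m node -L all

The first command discovers the targets presented on that portal; the second logs in to all discovered targets.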

Once all the messages say successfully logged in, open EMC Unisphere and select "Connectivity Status" on the left-hand side under "Hosts". This pops up the Host Initiators window. You will find one entry with just an IQN, as shown below; this is the new initiator that has logged in. Select "Edit" at the bottom and provide the "host name" and "IP address" (the IP address is the XenServer management IP). Make sure you choose the initiator type "CLARiiON Open" and the failover mode "ALUA 4". ALUA is the latest failover mode as per EMC.

PROACTIVE COPY TO HOT SPARE ON EMC VNX ARRAY


Source: extremesanity

SAN errors, oh no!

The process started when the array emailed me a couple of soft media errors, so I glanced at the SPA and SPB event logs in Navisphere and saw this:

[Screenshot: initial soft media errors in the SPA/SPB event logs]

Notice that the majority of the errors showed up as informational, not warning or critical, meaning the array will not alert anyone that this drive is about to fail. Yeesh.

Note also that all the errors occur on the same disk, Bus 0 Enclosure 1 Disk 7, an NL-SAS 2TB drive in my environment.

Event codes included 0x6a0, 0x820, and 0x801, with descriptions of "disk soft media error", "soft scsi bus error", and "soft media error". My suggestion is to filter on the description only and search for "error" to find all the messages.

Reviewing the disks within the Navisphere GUI showed that no disks were faulted, the dashboard showed no errors, and the hot spare was not in use, meaning the array did not yet believe the drive should be failed.


Time to be proactive…

I opened a case with EMC support, sent them screenshots of the SP event logs and SPCollects from both SPA and SPB, and noted that the error had occurred on the same drive over 100 times in one day. The support representative immediately requested that I do a proactive copy to the hot spare disk, and ordered a replacement disk for me.

A proactive copy is preferred because, instead of requiring the array to rebuild the RAID group onto a new disk (and endure the performance degradation inherent in a rebuild), it copies the data from one disk to the other, tells the RAID group to use the hot spare disk, and then disables the failing disk, skipping the rebuild process, and hence its performance degradation, altogether.

I first tried to do the proactive copy from the Navisphere GUI, without success (below).

[Screenshot: Copy to Hot Spare option greyed out in Navisphere]

Note the option to copy is greyed out. Apparently the VNX's new mixed RAID storage pools prevent this option from being used, so I moved on to the CLI.

Passing the command

naviseccli -Address <SP_IP> -User <username> -Password <password> -Scope 0 copytohotspare 0_1_7 -initiate

(where 0_1_7 is the failing disk; supply your own SP address and credentials for the angle-bracketed values) worked correctly, starting the proactive copy from the failing disk to the hot spare of the same type.

Checking progress of the proactive copy

Now to check progress of the copy…

First I tried looking at the disks in the GUI

[Screenshot: disk properties in the GUI]

The disk state is listed as "Copying to Hot Spare (100%)". Hmm, 100% doesn't seem right; I had just started this procedure. (Looking at the RAID LUN within the GUI showed the state as "Transitioning", without any progress indicator.)

Then I tried through the CLI

[Screenshot: CLI output]

Well, that doesn't look right either. I continued looking at the SPCollect logs and SP event logs and searched all over the Internet, including the EMC community forums, without finding any answers. (Running getlun on the RAID LUN from the CLI didn't show any progress indicator either.)
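For reference, the per-disk state the GUI displays can also be queried from the CLI; a quick sketch, assuming <SP_IP> is your SP's address:

naviseccli -h <SP_IP> getdisk 0_1_7 -state

Based on the behavior above, expect the same "Copying to Hot Spare" state string rather than a usable percentage.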

Update: I figured out a way to gauge progress, though it is a bit crude. See the update at the bottom.

Eventually, after 19 hours (NL-SAS 2TB), the process completed, throwing event logs 0x6b0, 0x604, 0x67d, 0x6a8, 0x67c, 0x7a7, and 0x6ab0, along with many others, indicating it had marked the failing drive as failed, as expected.

[Screenshot: event log entries after the proactive copy completed]

Complete list of event logs thrown as part of the proactive copy completion:
0x6b0, 0x712d4601, 0x906, 0x7a7, 0x608, 0x6a1, 0x602, 0x7a5, 0x6a8, 0x712789a0, 0x67b, 0x67c, 0x603, 0x602, 0x712d0508, 0x604, 0x712d0507, 0x2580, 0x906, 0x7a6, 0x799, 0x712d4602, 0x712d4601, 0x67d, 0x7400, 0x740a, 0x2580, and probably some others I missed.

At this point I gave the EMC CE a call and scheduled replacement of the drive. Once the replacement drive is in place, the array should copy the information on the hot spare back to the replacement drive, then mark the hot spare as available again.


Update: It appears a progress indicator of sorts is included in the lustat output in the SPCollect logs. By running SPCollects over and over again, I can gauge the progress of proactive copies, and of equalization when the drive is replaced.

[Screenshot: EQZ progress indicator in the SPCollect logs]

This information is contained within the SPCollect zip file, inside the *_sus zip file, in the SPx_cfg_info.txt file. By looking at the EQZ percentage, I can gauge roughly when it will finish and, more importantly, confirm that the equalization or proactive copy is progressing.
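If you end up checking repeatedly, the lookup can be scripted; a hypothetical one-liner, assuming the nested zip layout described above, standard unzip/grep, and that only one *_sus zip is extracted (the file names here are illustrative):

unzip -o spcollect_data.zip '*_sus.zip'
unzip -p ./*_sus.zip 'SP*_cfg_info.txt' | grep -i eqz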

EMC VNX – SNAPSHOTS

Source: EMC VNX – Snapshots | Storage Freak


In this post I will briefly describe two technologies: VNX Snapshots and VNX SnapView Snapshots.

About Snapshots

A snapshot is a technology that gives you the possibility of creating point-in-time data "copies". It's important to understand that a snapshot itself doesn't copy the data right away, so it cannot be used as a backup! But it (using different approaches: VNX Snapshots, SnapView, NetApp Snapshots, etc.) gives you the possibility to preserve the data at a given point in time, so that after a while you can roll back to the exact state from when the snapshot was taken.

Quick introduction to VNX Snapshots

VNX Snapshots is a feature created to improve on the existing SnapView Snapshot technology by integrating better with pools. VNX Snapshots can only be used with pool LUNs.

LUNs created on physical RAID groups, also called classic LUNs, support only SnapView Snapshots. This restriction will be easy to understand once we describe the difference between the two.

Another restriction is that VNX SnapView Snapshots and VNX Snapshots cannot coexist on the same pool LUN!

VNX Snapshots supports 256 writeable snaps per pool LUN. It supports branching (sometimes called a Snap of a Snap). A Snap-of-a-Snap hierarchy cannot exceed 10 levels. There are no restrictions on the number of branches, as long as the total number of snapshots for a given primary LUN stays within 256.

How Snapshots work

VNX Snapshots use redirect-on-write (ROW) technology. ROW redirects new writes destined for the primary LUN to a new location in the storage pool. Such an implementation is different from the copy-on-first-write (COFW) used with SnapView, where writes to the primary LUN are held until the original data has been copied to the reserved LUN pool to preserve the snapshot.

I have found a nice picture that helps you see the difference between the two:

[Diagram: SnapView Snapshot vs. VNX Snapshot – writes]

VNX Snapshot technology writes the new data to a new area within the pool, without the need to read or write the old data block. This improves overall performance compared to SnapView.

Similarly, during a read from a snapshot, the snapshot's data is not constructed from two different places – look at another picture:

[Diagram: SnapView Snapshot vs. VNX Snapshot – reads]

When a host reads from a snapshot via a VNX Snapshot mount point, all the data is served from the snapped data, while with a SnapView Snapshot part of the data is read from the source LUN (data that has not been overwritten) and the old data is read from the reserved LUN pool.

Snapshot granularity

Every VNX Snapshot has 8 KB block granularity. This means that every write occupies at least 8 KB in the pool. The distribution of the 8 KB blocks within a 256 MB slice is congruent with the normal thin write algorithm.

Snapshots and Thick LUNs

When a VNX Snapshot is created on a Thick LUN, portions of its address space are changed to indirect mode. In other words, when writes come in to the snapped Thick LUN, the LUN starts converting the address mapping from direct to 8 KB blocks for each portion of the Thick LUN being written. The Thick LUN remains in indirect mode while it has VNX Snapshots. When the last snapshot of the Thick LUN is removed, the mode automatically reverts to direct.

Snapshot Mount Point

A Snapshot Mount Point (SMP) is a LUN-like container. It is used to emulate a typical LUN, but provides the ability for the host to write to snapshots and to change snapshots without the need to rescan the SCSI bus on the client. An SMP is created for snapshots of a specific LUN; this means that each SMP can be used only for snapshots of a single primary LUN. To enable access from hosts, SMPs must be provisioned to storage groups just like a typical LUN.

[Diagram: Snapshot Mount Point]
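For reference, the CLI flow is roughly: take a snapshot of the primary LUN, create an SMP for that LUN, then attach the snapshot to the SMP. The sketch below is from memory of the VNX Block CLI, so treat the switch names as assumptions and verify them against the CLI reference for your OE release (25 is an assumed primary LUN ID):

naviseccli -h <SP_IP> snap -create -res 25 -name lun25_snap
naviseccli -h <SP_IP> lun -create -type snap -primaryLun 25 -name lun25_smp
naviseccli -h <SP_IP> snap -attach -id lun25_smp -snapName lun25_snap

After the attach, present lun25_smp to the host's storage group as you would a normal LUN.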

15 EMC NAVISPHERE CLI COMMAND EXAMPLES WITH NAVISECCLI

by Karthikeyan Sadhasivam on August 4, 2014

Navisphere CLI is a command-line interface tool for EMC storage system management.

You can use it for storage provisioning and to manage array configurations from any one of the managed storage systems on the LAN.

It can also be used to automate the management functions through shell scripts and batch files.

CLI commands for many functions are server based and are provided with the host agent.

The remaining CLI commands are web-based and are provided with the software that runs in storage system service processors (SPs).

Configuration and Management of storage-system using Navisphere CLI:

The following steps are involved in configuring and managing the storage system (CX series, AX series) using CLI:

  • Install the Navisphere CLI on the host that is connected to the storage. This host will be used to configure the storage system.
  • Configure the Service Processor (SP) agent on each SP in the storage system.
  • Configure the storage system with the CLI.
  • Configure and manage remote mirrors (the CLI is not the preferred way to manage mirrors).

The following are two types of Navisphere CLI:

  1. Classic CLI is the old version and does not support any new features, but it will still get typical storage array jobs done.
  2. Secure CLI is the most secure and preferred interface. Secure CLI includes all the same commands as Classic CLI plus additional features. It also provides role-based authentication, audit trails of CLI events, and SSL-based data encryption.

Navisphere CLI is available for various OS including Windows, Solaris, Linux, AIX, HP-UX, etc.

Two EMC CLARiiON Navisphere CLI commands:

  1. naviseccli (Secure CLI) command sends storage-system management and configuration requests to a storage system over the LAN.
  2. navicli (Classic CLI) command sends storage-system management and configuration requests to an API (application programming interface) on a local or remote server.

In storage subsystem (CLARiiON, VNX, etc), it is very important to understand the following IDs:

  • LUN ID – The unique number assigned to a LUN when it is bound. When you bind a LUN, you can select the ID number. If you do not specify the LUN ID, the default IDs bound are 0, 1, and so on.
  • Unique ID – Usually refers to storage systems, SPs, HBAs, and switch ports. It is the WWN (World Wide Name) or WWPN (World Wide Port Name).
  • Disk ID – 000 (or 0_0_0) indicates the first bus or loop, first enclosure, and first disk; disk ID 100 (1_0_0) indicates the second bus or loop, first enclosure, and first disk. You can confirm a mapping with the getdisk example shown below.
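A quick way to check what a given disk ID refers to is to query it directly; a hedged example, reusing the H1_SPA address from the examples below:

naviseccli -h H1_SPA getdisk 0_0_0

This prints details such as vendor, capacity, and current state for bus 0, enclosure 0, disk 0.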

1. Create RAID Group

The below command shows how to create a RAID group 0 from disks 0 to 3 in the Disk Processor Enclosure(DPE).

naviseccli -h H1_SPA createrg 0 0_0_0 0_0_1 0_0_2 0_0_3

In this example, -h specifies the IP address or network name of the targeted SP on the desired storage system. The default, if you omit this switch, is localhost.

Since each SP has its own IP address, you must specify the IP address of the SP you are targeting. Also, a new RAID group has no RAID type (RAID 0, 1, 5) until a LUN is bound on it. You can create more RAID groups (1, 2, and so on) using the commands below:

naviseccli -h H1_SPA createrg 1 0_0_4 0_0_5 0_0_6

naviseccli -h H1_SPA createrg 2 0_0_7 0_0_8

This is similar to how you create a RAID group from the Navisphere GUI.

2. Bind LUN on a RAID Group

In the previous example, we created a RAID group, but did not create a LUN with a specific size.

The following examples will show how to bind a LUN to a RAID group:

naviseccli -h H1_SPA bind r5 6 -rg 0 -sq gb -cap 50

In this example, we are binding a LUN with LUN number/LUN ID 6 and RAID type 5 (r5) to RAID group 0, with a size of 50 GB. -sq indicates the size qualifier (mb or gb). You can also use options to enable or disable the read cache (-rc 1 or 0) and the write cache (-wc 1 or 0), as sketched below.
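For instance, a hedged variant of the same bind that enables the read cache and explicitly disables the write cache (flags as described above; verify against your CLI version):

naviseccli -h H1_SPA bind r5 6 -rg 0 -sq gb -cap 50 -rc 1 -wc 0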

3. Create Storage Group

The next several examples show how to create a storage group and connect a host to it.

First, create a storage group:

naviseccli -h H1_SPA storagegroup -create -gname SGroup_1

4. Assign LUN to Storage Group

In the following example, hlu is the host LUN number, which is the number the host will see from its end. alu is the array LUN number, which the storage system sees from its end.

naviseccli -h H1_SPA storagegroup -addhlu -gname SGroup_1 -hlu 12 -alu 5

5. Register the Host

Register the host as shown below by specifying the name of the host. In this example, the host server is elserver1.

naviseccli -h H1_SPA elserver1 register

6. Connect Host to Storage Group

Finally, connect the host to the storage group by using the -connecthost option as shown below. You should also specify the storage group name appropriately.

naviseccli -h H1_SPA storagegroup -connecthost -host elserver1 -gname SGroup_1

7. View Storage Group Details

Execute the following command to verify the details of an existing storage group.

naviseccli -h H1_SPA storagegroup -list -gname SGroup_1

Once you complete the above steps, your hosts should be able to see the newly provisioned storage.
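On a Linux host, one way to pick up a newly presented LUN without rebooting is to rescan the SCSI bus and then list the block devices; a hedged example (host0 is illustrative, so repeat the echo for each hostN HBA entry on your system):

echo "- - -" > /sys/class/scsi_host/host0/scan
lsblk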

8. Expand RAID Group

To expand a RAID group with a new set of disks, use the command shown in the example below.

naviseccli -h H1_SPA chgrg 2 -expand 0_0_9 0_1_0 -lex yes -pri high

This expands RAID group ID 2 with the new disks 0_0_9 and 0_1_0, with LUN expansion set to yes and priority set to high.

9. Destroy RAID Group

To remove or destroy a RAID group, use the below command.

naviseccli -h H1_SPA destroyrg 2 0_0_7 0_0_8 0_0_9 0_1_0 -rm yes -pri high

This is similar to how you destroy a RAID group from the Navisphere GUI.

10. Display RAID Group Status

To display the status of RAID group ID 2, use the below command.

naviseccli -h H1_SPA getrg 2 -lunlist

11. Destroy Storage Group

To destroy a storage group called SGroup_1, you can use the command like below:

naviseccli -h H1_SPA storagegroup -destroy -gname SGroup_1

12. Copy Data to Hotspare Disk

The naviseccli copytohotspare command initiates the copying of data from a failing disk to an existing hot spare while the original disk is still functioning.

Once the copy is made, the failing disk will be faulted and the hotspare will be activated. When the faulted disk is replaced, the replacement will be copied back from the hot spare.

naviseccli -h H1_SPA copytohotspare 0_0_5 -initiate

13. LUN Migration

LUN migration is used to migrate data from a source LUN to a destination LUN, typically one with better performance.

naviseccli migrate -start -source 6 -dest 7 -rate low

Number 6 and 7 in the above example are the LUN IDs.

To display the current migration sessions and its properties:

naviseccli migrate -list

14. Create MetaLUN

A metaLUN is a type of LUN whose maximum capacity is the combined capacity of all the LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN into a larger capacity called a metaLUN. Similar to a LUN, a metaLUN can belong to a storage group and can be used for SnapView, MirrorView, and SAN Copy sessions.

You can expand a LUN or metaLUN in two ways: stripe expansion or concatenate expansion.

A stripe expansion takes the existing data on the LUN or metaLUN, and restripes (redistributes) it across the existing LUNs and the new LUNs you are adding.

The stripe expansion may take a long time to complete. A concatenate expansion creates a new metaLUN component that includes the new LUNs and appends this component to the end of the existing LUN or metaLUN. There is no restriping of data between the original storage and the new LUNs, so the concatenate operation completes immediately.

To create or expand an existing metaLUN, use the below command.

naviseccli -h H1_SPA metalun -expand -base 5 -lun 2 -type c -name newMetaLUN -sq gb -cap 50

This creates a new metaLUN named "newMetaLUN" with the metaLUN ID 5, using LUN ID 2, as a 50 GB concatenated expansion.

15. View MetaLUN Details

To display the information about MetaLUNs, do the following:

naviseccli -h H1_SPA metalun -info

The following command will destroy a specific metaLUN. In this example, it will destroy metaLUN number 5.

naviseccli -h H1_SPA metalun -destroy -metalun 5

EMC DATA MIGRATION TO VNX

Whitepaper: EMC SAN Copy

Source: eGroup EMC Data Migration to VNX

As a result of the huge success of EMC's VNX platform, the need for data migrations has grown, especially recently, to get customers' data safely and easily moved over from their previous storage platform to their shiny new VNX. Here are some ideas on migration strategies that will hopefully help you in the planning or execution of data migrations! Keep in mind that almost every one of our customers is heavily virtualized, the vast majority running VMware vSphere, so it is in that context that the following "strategies" are presented.

The "strategies" listed below are NOT exclusive of one another; in fact, we commonly use a combination of all three to provide a "total solution". There are also other strategies and techniques used to aid in the migration of data that aren't covered here but work just fine.

“Strategy” 1

First and foremost, the BEST and EASIEST data migration of all can be done without any downtime and with zero risk. This is accomplished by using Storage vMotion (svMotion), and means you are migrating over some of your virtual environment to the VNX. It requires that your data exist in a VMDK (meaning it's a virtual machine and that drive is NOT an RDM), and that you've created and presented storage from the VNX to the vSphere environment. This method works like a charm!
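Storage vMotion is normally driven from the vSphere Client, but it can also be scripted; a minimal sketch using the open-source govc CLI (govc is my addition, not something the original mentions, and the VM and datastore names are placeholders):

govc vm.migrate -ds VNX_Datastore_01 my-vm-01

This relocates the running VM's disks onto the new VNX-backed datastore.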

“Strategy” 2

For environments that have a lot of physical servers (which hurts my feelings, by the way: virtualize 'em already!), or that use lots of RDMs, there are some alternative options, the most common of which uses EMC's SAN Copy software. SAN Copy is included on the VNX platform for FREE, and for customers migrating from an EMC CLARiiON, EMC will provide it at no charge (again, read: FREE) for your CLARiiON, to be used for migrating to the VNX.

SAN Copy is installed as a "software enabler" on the desired arrays and allows the copying of data between arrays, in either a "push" or a "pull" method. The method, push or pull, is determined by which array initiates the copy job (if it's the "old" array, it's a push).

This can also take place between an EMC array and a “qualified non-EMC storage array”, but only in a “pull” fashion.

Using a "push" method (where you configure and start the SAN Copy job from the array you're migrating FROM) allows you to do incremental SAN Copy jobs. This lets you do a single full copy and, from then on, only copy over the changed/delta data.

It's important to note that this is "host independent", meaning the host servers are not involved. This DOES require downtime to cut over, unlike Strategy 1 described above.

Note: This doesn't get configured the same way it did on the CLARiiON platform when going "CX to CX". I'll work on getting a video together showing the difference, but for now just know that when you create the SAN Copy session, the "remote LUN" isn't something you select (today); you must enter the unique ID of the LUN instead (see screenshot). Kind of a pain, but not unbearable.

“Strategy” 3

When migrating over your file server(s) or file server data, consider skipping the "copy to a LUN" or svMotion methods and move straight to using the CIFS server capabilities of the VNX. This is a larger conversation, but in brief, the VNX can operate as a Windows file server: you give it some disk space to use, a name, and an IP address, and join it to the domain. It will appear as a regular computer in Active Directory (you can create multiple CIFS servers if you'd like, for security or isolation between departments, companies, domains, etc.), and you can even include it in your DFS environment.

The BENEFITS of this are numerous, and include:

  • FILE DE-DUPLICATION! This is one of the BEST reasons to move your file server data to the VNX. Huge storage savings, which ultimately means "YOU SAVE MONEY"! (See the screenshot below: 43% of all files were deduplicated!)
  • Highly available, highly resilient file server. Since the platform itself maintains five nines of availability, your file server doesn't have a single point of failure, which it would if it were a stand-alone Windows server (think about what happens if the OS blue-screens, for example).
  • High-performance hardware designed to serve files!
  • Thin Provisioning: only commit what's actually being used, again saving space/money.

EMC VNX – Changing Storage Processor IP & NAME

Source: EMC VNX | David Ring

This is a guideline on how to change the VNX Storage Processor IP and name via Navicli. Navicli does not require a reboot of the SP after changing the SP IP address, but it does require a restart of the Management Server. Please note that a change of SP name will require a reboot of the SP.

Note: if you are using the Setup page (http://<SP_IP>/setup/) to change either the SP IP or name, then a reboot of the SP will be required.

Ensure you address the following points before proceeding:
1. If the array to be changed is part of a storage domain, you must remove it from the domain before proceeding. If the array to be changed is the domain master, assign another array to that role before continuing. This can be done via the Unisphere VNX Client.
2. If this is the only array in a storage domain, destroy the domain prior to changing the IP address. (Management Server restart required.)
3. Check that all hosts have dual fibre connectivity and that failover software is working correctly.

First, before making a change, take a look at the existing SP details:
naviseccli -h 192.168.101.40 networkadmin -get -sp a -ipv4
[Screenshot: SP A IPv4 settings]

naviseccli -h 192.168.101.41 networkadmin -get -sp b -ipv4
[Screenshot: SP B IPv4 settings]

Change SP_A IP:
naviseccli -h 192.168.101.40 -user sysadmin -password sysadmin -scope 0 networkadmin -set -ipv4 -address 10.236.66.71 -subnetmask 255.255.255.0 -gateway 10.236.66.1
[Screenshot: SP A IP change output]

Change SP_B IP:
naviseccli -h 192.168.101.41 -user sysadmin -password sysadmin -scope 0 networkadmin -set -ipv4 -address 10.236.66.72 -subnetmask 255.255.255.0 -gateway 10.236.66.1
[Screenshot: SP B IP change output]

Change DNS IP and Domain Entries:
Running this command on one SP is sufficient, as it will sync the changes automatically with its peer SP.
naviseccli -h 10.236.66.71 -user sysadmin -password sysadmin -scope 0 networkadmin -dns -set -domain corp.local -nameserver 10.10.10.20 10.10.10.30
List DNS:
naviseccli -h 10.236.66.71 -user sysadmin -password sysadmin -scope 0 networkadmin -dns -list

Now if we retrieve the IP details for both SPs, we can see the changes made:
SP_A:
naviseccli -h 192.168.101.40 networkadmin -get -sp a -ipv4
[Screenshot: SP A updated IPv4 settings]

SP_B:
naviseccli -h 192.168.101.41 networkadmin -get -sp b -ipv4
[Screenshot: SP B updated IPv4 settings]

Next, change the network name. Again, please note this change will cause the SP to reboot:
Change SP_A Name:
naviseccli -h 10.236.66.71 -user sysadmin -password sysadmin -scope 0 networkadmin -set -name NewNameSPA
[Screenshot: SP A name change output]
Change SP_B Name:
naviseccli -h 10.236.66.72 -user sysadmin -password sysadmin -scope 0 networkadmin -set -name NewNameSPB
[Screenshot: SP B name change output]

Note:
Please refer to the VNX Procedure Generator for a detailed list of specific guidelines for completing this task.

EMC VNX – NEW SHUTDOWN OPTIONS

Source: EMC VNX | David Ring

A new feature with the release of VNX Rockies (Block OE 5.33 & File OE 8.1) was the ability to shut down the entire array using either a single command or the 'Power Off' button in the Unisphere GUI. This feature is also available for first-generation VNX storage systems, from VNX OE code releases 05.32.000.5.209 & 7.1.74.5 onwards.
These options are supported on Unified, Block and File systems.

Power Off via CLI
The new CLI option extends the nas_halt command with a new switch to power down the entire system:
nas_halt -f -sp now
This will power off the Control Stations, Data Movers and Storage Processors.
usage: nas_halt [-f] [-sp] now
Perform a controlled halt of the Control Station(s) and Data Mover(s)
-f Force shutting down without prompt
-sp Shut down Storage Processors on unified platforms

Power Off via Unisphere
From Unisphere GUI navigate to the System List page:
[Screenshot: Unisphere System List page with Power Off button]

Once you hit the 'Power Off' button, a dialog box appears; here you enter the array serial number in order to confirm the shutdown:
[Screenshot: shutdown confirmation dialog]

Final Confirmation:
[Screenshot: final confirmation]

Note:
DAEs are not powered off.