How to set up FCoE on Dell R710 Servers with the X520 card using the Nexus 1000v

FCoE works well enough if you are deploying on a UCS or other officially supported blade server. However, it can get a bit interesting trying to get it to work on a general-purpose server. Here is my writeup on deploying FCoE using Dell R710 servers and the included X520-DA2 card, with the added requirement of using the Nexus 1000V.

Dell Server

First, on the Dell server you really don't have to do anything in the BIOS. There is an FCoE configuration in the BIOS, but that is only for boot from FCoE; you don't have to touch it at all.

VMware

However, you do have to build VMware ESXi using the Dell-specific build disc:

VMvisor-Installer-5.5.0-1331820.x86_64-Dell_Customized_A02.iso

http://www.dell.com/support/drivers/us/en/19/driverdetails?driverid=1P13P

Once VMware is installed you will need to add the software FCoE adapters. Here is the procedure (you only have to do the part on pages 14-17): http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/ethernet-x520-configuring-fcoe-vmware-esxi-5-guide.html
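
If you'd rather script it, the same thing can be done from the ESXi shell with esxcli. A minimal sketch, assuming the two X520 ports show up as vmnic2 and vmnic3 (your vmnic numbers will differ):

# list the NICs that are capable of software FCoE
esxcli fcoe nic list
# activate a software FCoE adapter on each 10G port
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic discover -n vmnic3
# verify the resulting vmhba adapters
esxcli fcoe adapter list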

Nexus 5548

Getting the configuration on the 5k right is really the trick in this build. All sorts of weird problems will show up if you are missing anything, and they are quite difficult to troubleshoot. BTW, I used version 7.0(2)N1(1) on the Nexus 5k.

First, the QoS configuration is essential; configure the 5k like this:

http://keepingitclassless.net/2012/11/qos-part-2-qos-and-jumbo-frames-on-nexus-ucs-and-vmware/

I used the above configuration, with the only exception that I allowed jumbo frames in all traffic classes.
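
For reference, the jumbo-frames piece ends up looking something like this. This is just a sketch of the system-level network-qos policy (the policy name and MTU values are illustrative; follow the linked writeup for the full QoS setup):

policy-map type network-qos jumbo
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo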

Next, you need these features enabled:

feature fcoe
feature npiv
feature lldp
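
You can confirm they are enabled with:

show feature | egrep "fcoe|npiv|lldp"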

The next thing to know is that if you are using a port channel to the 1000v, the port channel cannot be the interface that binds to the VFC, because FCoE cannot ride on a vPC. Instead, you want to create two VFC interfaces, one on each 5k, for two fabrics, A and B; the fabric B side is sketched further down.

So the physical interface on one 5k would look like this:

interface Ethernet101/1/11
description esxi1-eth0
switchport mode trunk
spanning-tree port type edge trunk
channel-group 1011 mode active

And the virtual interface would look like this:

interface vfc1011
bind interface Ethernet101/1/11
no shutdown
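
The second 5k (fabric B) gets a mirrored pair of interfaces bound to the host's other uplink. A sketch, with illustrative interface numbers (fabric B would also get its own VSAN and FCoE VLAN, created the same way as shown below for fabric A):

interface Ethernet102/1/11
description esxi1-eth1
switchport mode trunk
spanning-tree port type edge trunk
channel-group 1011 mode active

interface vfc1021
bind interface Ethernet102/1/11
no shutdown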

The VSAN must be created and the VFC placed in it:

vsan database
vsan 100
vsan 100 interface vfc1011

The VSAN must then be mapped to a dedicated FCoE VLAN:

vlan 600
fcoe vsan 100
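
You can sanity-check the VLAN-to-VSAN mapping, and make sure VLAN 600 is actually allowed on the trunk to the host, with:

show vlan fcoe
show interface ethernet 101/1/11 trunk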

Of course, your server must be zoned in with your storage; I won't go into that in this post.

Before the 1000v is installed, you should be able to see your VFC interface (vfc1011 in my case) in the output of a "show flogi database" command.
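
The output should look roughly like this (the FCID and WWNs here are placeholders, not real values):

INTERFACE        VSAN    FCID           PORT NAME               NODE NAME
vfc1011          100     0x440000       20:00:xx:xx:xx:xx:xx:xx 10:00:xx:xx:xx:xx:xx:xx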

Here are some useful troubleshooting commands on the 5k:

debug lldp errors
debug lldp warnings
debug lldp dcbx_feat
show system internal dcbx info interface e101/1/11

Also, logging in to the ESXi host and tailing /var/log/vmkernel.log while flapping the interface is useful.
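
For example, from the ESXi shell:

tail -f /var/log/vmkernel.log | grep -i fcoe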

Nexus 1000v

Finally, on the 1000v: the current latest version, 4.2.1.SV2.2a, does not support these software FCoE adapters; however, I can confirm that the latest beta of the dao release (5.2.1.SV3.1.0.276) does support them. Follow the normal process for adding the host to vCenter and then to the 1000v.
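
Once the host is added, you can confirm the VEM actually joined from the VSM; the ESXi host should appear as a module:

show module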

 
