FreeNAS 11.2 and ESXi 6.7 iSCSI Tutorial

Looking for updated TrueNAS content? Check out my newer post here: TrueNAS 12 & ESXi Home Lab Storage Design

Note: These steps are still mostly valid as of the TrueNAS 12 and ESXi 7.0 releases.

It's been over two years since my previous guide on setting up iSCSI between FreeNAS and ESXi, and in that time many things have changed. FreeNAS now has a new UI, making things simpler and more straightforward. I think we can all agree the prettier graphs are extremely important too.

In this guide we'll evaluate whether FreeNAS is still the best solution for your storage needs and explain why iSCSI performs best, followed by complete setup instructions for a killer multi-path redundant-link iSCSI config. FreeNAS is great, but as with most things there are pros and cons, so let's get them out of the way as clearly as possible.

Pros

  • Built on FreeBSD using the ZFS file system, an incredibly robust combination
  • ZFS uses lots of RAM (instead of leaving it idle)
  • Copy-on-write file system, very resilient to power failures
  • Bit rot protection through check-summing and scrubs
  • ARC (adaptive replacement cache) gives excellent read performance, and can be expanded to L2ARC with an SSD or Optane drive
  • ZIL is basically a fast persistent write cache and can also be put on an SSD or Optane drive
  • lz4 compression is very fast and with VMs it's not unusual to see a 50% compression ratio (see the quick check after this list)
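
If you want to sanity-check that compression claim on your own system, here's a minimal sketch from the FreeNAS shell; the pool name tank is a placeholder for yours:

# Show the compression algorithm and the achieved ratio (FreeNAS 11.2 defaults new pools to lz4)
zfs get compression,compressratio tank

# Enable lz4 explicitly on a dataset if it isn't already
zfs set compression=lz4 tank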

Cons

  • Requires a beefier system than traditional HW/SW RAID solutions; 8-16GB RAM is the low end
  • You can't just add/remove additional drives like you can with Unraid and MDADM

Is FreeNAS Right For You?

As of writing this in November 2022 I would lean towards Unraid for a pure file/media server. Due to its all-in-one nature, good VM support (running VMs on FreeNAS isn't worth your time in my experience) and ability to add/remove disks, it's well suited for media/home use. However, when performance starts to matter, FreeNAS shines with plenty of options to speed up VMs and/or databases and get great IOPS performance out of your spinning rust.

If you're in the planning phase of your build, sticking to NAS hard drives for any setup is recommended. I recommend WD Red and Seagate IronWolf drives as they're rated for 24/7 use and are known to be highly reliable in NAS servers. Going with cheaper drives may work, however performance and reliability may be impacted. iXsystems (the creators of TrueNAS) have a range of systems available. Of course, these are guaranteed to run TrueNAS well. You can browse their range here (Amazon Affiliate Links):

The Config

The goal is simple. We'll be setting up 4 links between FreeNAS and ESXi (it's very easy to adapt this to 1-2 link setups). This supports failover and multipathing for a solid speed boost. I'll show you some tricks to get better performance out of your setup along the way too!

The network I'll be using for iSCSI is simply 4 direct links between the 2 servers. It's best practice to keep your iSCSI network separate from every other network.

FreeNAS         Cable     ESXi
10.0.0.1/30    <---->   10.0.0.2/30
10.0.0.5/30    <---->   10.0.0.6/30
10.0.0.9/30    <---->   10.0.0.10/30
10.0.0.13/30   <---->   10.0.0.14/30

While I'm doing this with 1Gbit links, I would highly recommend picking up some 10Gbit SFP+ cards for low-latency, high-throughput iSCSI. Here are a couple of affiliate links; if you can get them used for <$50 USD you're getting a good deal.

Amazon – HP 593742-001 NC523SFP 10GB 2-PORT SERVER NETWORK ADAPTER

Amazon – HP 586444-001 NC550SFP dual-port 10GbE server adapter

FreeNAS Network

For the first step, I'm going to configure the IP addresses on the 4x connections I'll be using for iSCSI. I'm using a /30 subnet, meaning 2 hosts, perfect for this setup. You can configure it with whatever addresses you like as long as it's reachable from ESXi and each link is on a different subnet.

Go to Network > Interfaces > Add

Select your NIC, name it (I like to use the NIC as the name), and set your IP and subnet mask. Repeat the Add step for each link you'll be using.

As a note, for additional performance you can add "mtu 9000" in the Options field. This will tell it to use jumbo frames and can result in higher throughput and lower CPU usage. However, it can cause issues on some systems.
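
If you do enable jumbo frames, it's worth confirming the interface actually came up with the larger MTU. A quick check from the FreeNAS shell, where igb0 is a placeholder for your NIC name:

# The first line of output should report mtu 9000
ifconfig igb0 | grep mtu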

FreeNAS Storage

Next step, I'm going to create a storage pool. If you've already done this, skip this step.

Go to Storage > Pools and click the Add button to create a pool.

Here you can name your pool. Then add the disks by selecting them and using the right arrow to move them across. Select the RAID type. In my example I'll use mirror, however with more drives you can do raidz/raidz2 or a stripe of mirrors (RAID 10-like) for performance.

After you create your pool, click on the 3 dots next to it and select "Add Zvol".

It's recommended not to exceed 50% of your storage size with your iSCSI share, so I'm going to make mine 115GiB.
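
For reference, the GUI step above is equivalent to creating the zvol from the shell. A sketch, assuming a pool named tank; the 115G matches my example size:

# Create a 115GiB zvol to back the iSCSI extent
zfs create -V 115G tank/iscsi-vol

# Confirm it exists and check its space usage
zfs list -t volume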

FreeNAS iSCSI

1. Time to set up iSCSI. We'll first need to create a portal. To do this go to Sharing > Block (iSCSI) > Portals and click Add.

Use the "ADD EXTRA PORTAL IP" choice to allow you to add as many interfaces as you'll need and type their IP Addresses in.

2. Next is Initiators > Add. Here I've added a subnet that covers all my iSCSI networks, however you can leave it saying ALL and it'll work A-OK for you.

3. Now on to Targets > Add. We'll fill in a name and set the portal group ID and initiator group ID to 1.

4. Extents > Add. This is where we pick our storage zvol under Device. If you're using HDDs you may like to select the LUN RPM, however it's not going to change anything I'm aware of.

5. Last step before we turn iSCSI on. Go to Associated Targets > Add and select your target and extent from the lists. Set a LUN ID and save.

6. Go to Services and enable iSCSI, also checking the Start Automatically box.
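
If you want to confirm the target is actually up, you can check from the FreeNAS shell. A sketch, assuming FreeNAS 11's kernel iSCSI target (ctld); this isn't required for the tutorial:

# The iSCSI target daemon should be running
service ctld onestatus

# List the LUNs the target is exporting
ctladm devlist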

ESXi Config

It'southward fourth dimension to configure ESXi, nosotros have to setup the networking and attach the iSCSI storage to information technology.

ESXi Networking

To network this up to FreeNAS with the 4x links I'm using, I'll make 4 vSwitches. Go to Networking > Virtual Switches > Add standard virtual switch.

Name your interface and pick the appropriate uplink. Note that if you use the "mtu 9000" option on your FreeNAS interfaces, you'll have to set the MTU to 9000 here too.

We need to make 4 VMkernel NICs now. Click on VMkernel NICs > Add VMkernel NIC.

Pick a name and select the appropriate vSwitch. Under IPv4 settings, configure the static address and subnet to match the FreeNAS system. If you're using MTU 9000, set that in the MTU field as well.

Repeat this for every interface and you're good to go on the networking front.

ESXi iSCSI

iSCSI setup on ESXi is rather simple. Go to Storage > Adapters > Software iSCSI.

Enable iSCSI and fill in the 4 port bindings with the VMkernel NICs we created earlier. Then add dynamic targets with the FreeNAS IP addresses.
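
For those who prefer the shell, the same enable/bind/discover steps can be done with esxcli. A sketch only; vmhba64, vmk1 and 10.0.0.1 are placeholders for your adapter, VMkernel NIC and FreeNAS portal IP:

# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Bind each iSCSI VMkernel NIC to the adapter (repeat for vmk2-vmk4)
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1

# Add a dynamic (send targets) discovery address for each FreeNAS portal IP
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.0.1:3260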

Upon closing and reopening the Software iSCSI configurator, you will see it picks up the FreeNAS iSCSI share in the Static Targets area.

We have to add a datastore to finish this off. Go to Storage > Datastores > New datastore.

On the first page click Next. It may take a minute, then you should see the FreeNAS zvol we created show up. Pick a name and click Next.

Use the full disk with the VMFS6 file system, click Next and Finish.
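
If you'd like to double-check that the datastore mounted correctly, a quick look from the ESXi shell:

# Both should list the new VMFS6 datastore
esxcli storage filesystem list
esxcli storage vmfs extent list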

Guess what? It's ready to test out!

iSCSI Testing and Tuning

In its current state you'll notice it's only getting single-link speeds, be it 10Gbit or 1Gbit like I have.

To get higher speeds we need to set it to round robin and make it change its path every IOP. By default it does this every 1,000 IOPs.

To change this we need to go to the command line and input 3 commands. 3 commands isn't that scary, right?

Go to Manage > Services and click on TSM-SSH. Click Start above it to start the SSH service.

Open an SSH program like PuTTY and input your ESXi server's IP address.

Click Yes on the popup and log in with the ESXi credentials you use on the web interface. Type in the following.

esxcli storage nmp device list

Looking at the output shown, find the FreeNAS iSCSI share and look for the naa ID numbers I've drawn boxes around, taking note of them. Also note the bottom red box; we'll change this setting.

Now paste in the following command, replacing NAA_HERE with the contents of the red box starting with naa.. This will change the iSCSI mode to round robin.

esxcli storage nmp device set --device NAA_HERE --psp VMW_PSP_RR

Following that command, put in this one to set the IOPs to 1.

for i in `esxcfg-scsidevs -c | awk '{print $1}' | grep NAA_HERE`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done

After those 2 commands, type the very first one again to confirm it's working. You should see something similar to the following.
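
You can also query the round robin settings directly to confirm both changes stuck; as before, replace NAA_HERE with your device's naa ID:

# Path Selection Policy should now read VMW_PSP_RR
esxcli storage nmp device list --device NAA_HERE

# The device config should show type=iops with a limit of 1
esxcli storage nmp psp roundrobin deviceconfig get --device NAA_HERE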

After making these changes we can see that the performance is almost double. It's far from 10GbE for me, but with 2x 10Gbit links you could achieve amazing results. Note that I'm using 2x 250GB Seagate HDDs from 2006 that are mirrored for this tutorial.


This Post Has 33 Comments

  1. Manoj

    Thank you for sharing your step-by-step guide on how to configure FreeNAS 11.2 along with ESXi 6.7 for optimum performance.

    I don't have access to dedicated hardware to build a FreeNAS host and was thinking of using FreeNAS on a VM running on top of my lab ESXi 6.7 host, which does have a couple of 2x 10Gbit NICs, so I am thinking since the local network is running at 10Gbit the performance should be reasonably good.

    Will be grateful if you are able to shed any light on the above.
    Thanks!

    1. John Keen

      Hi Manoj,
      As long as you pass through a PCI-E based SATA controller to the VM so it can see the hard drives directly, it should work well.
      For iSCSI I believe running it on an internal network rather than over the NICs would be preferable, as there would be no speed cap. You could create a vSwitch with no NICs and put both the ESXi management (for iSCSI) and FreeNAS iSCSI on this vSwitch. I hope that makes sense and note, I haven't tested it so I'm not 100% on it.
      Good luck and thank you!
      John

  2. Ahmed

    Hello John!

    I'm having trouble reconciling these two statements:

    "running VMs on FreeNAS isn't worth your time in my feel"

    "Withal when performance starts to matter, FreeNAS shines with plenty of options to speed upward VMs and/or databases and get great IOPS performance out of your spinning rust."

    Is FreeNAS OK for VMs or not?

    Thank you! Looking forward to trying your write-up.

    1. John Keen

      Hi Ahmed,

      My apologies there. In the first statement I'm talking about running VMs on FreeNAS directly in bhyve. In the second I'm talking about running VMs in ESXi with FreeNAS as the connected iSCSI storage.

      Thank you!

  3. Benjamin

    1. John Keen

      Hi, I'll need to do some more research on this before I can give an answer. Thank you for your comment.

      1. John

        John, just curious, did you find anything additional regarding this point about space reclamation?

  4. Nick Janssen

    Thanks for this tutorial! Is it hard to adapt this to 10Gbit between ESXi and FreeNAS, and ESXi and the normal network switch?

    And you said for FreeNAS 8-16GB memory is the low end. How much is preferred then? 32GB?

    Thanks in advance!

    1. John Keen

      It should work fine for 10GbE, however some people run into trouble with jumbo frames (9000 MTU).
      Try it for the extra performance, but if you have trouble, MTU is a likely culprit.

      As for RAM size, it depends on your storage size. For 8GB I wouldn't do over a ~12TB pool; for 16GB maybe 30-40TB.
      Some say 1GB RAM per TB but it isn't based on any facts.
      Due to the way ZFS uses RAM, going to 16GB (or 32GB on a big pool) will make a noticeable difference for running many VMs vs less RAM.

      Good luck!

  5. Thomas Paine

    Hi John,
    I have been looking for something like this for my setup.
    I have mine cobbled together, but the performance is abysmal.
    I want to try your setup.
    The difference with mine is I have 3 Supermicro servers. Two are ESXi hosts, and one is the FreeNAS.
    I have 2 10G NICs in each server, plugged into a D-Link DXS-1210 10G switch.
    How would I adapt the networking to make this a better setup? Currently I have 4 NICs in each of the three servers: two 1G NICs, and two 10G NICs.
    I want to use the 10G NICs for iSCSI, and the 1G NICs for everything else. Ideally I'd like to use the 10G for vMotion as well.

  6. Manoj

    Hi John,
    I ditched the previous idea of building a virtualized FreeNAS and got myself a bare-metal FreeNAS box to connect my 2 ESXi home lab hosts.

    In your setup, I see you have four NICs on your FreeNAS box, whereas on my motherboard, I only have 4 factory-fitted NICs and no slots for adding any more. So I am curious to find out how I could maximize network traffic considering I need to be able to manage the FreeNAS box using one of the four NICs I have. I could create separate VLANs but then I will need an L3 switch, which I want to try and avoid. Worst case I will have to live with a single path on one of my ESXi hosts.

    On my ESXi hosts, I have 4 NICs split as below and was going to use vNIC2 and 3 for iSCSI traffic.

    vNIC0 ——–> Management
    vNIC1 ——–> VM Network
    vNIC2 ——–> Free
    vNIC3 ——–> Free

    Any ideas you have on how I should configure the multi-pathing?
    Thanks!

    1. John Keen

      Hi, you don't need a layer 3 switch for VLANs; a smart or managed layer 2 switch will suffice.
      Then you can set up Management and VM Network on one port on different VLANs, leaving 3 ports free for iSCSI.

      vNIC0 ——–> Management vlan 10, VM Network 20 vlan 20, VM Network 30 vlan 30
      vNIC1 ——–> iSCSI vlan 800
      vNIC2 ——–> iSCSI vlan 800
      vNIC3 ——–> iSCSI vlan 800

      A cheap TP-Link smart switch with VLANs isn't too expensive and in my experience they're great for home use.
      Good luck!

  7. Shane

    I have a NUC with 64GB RAM. Installed 11.3 yesterday in a lab and deployed two ESXi VMs. I have the VMkernel and the FreeNAS on the same layer 2 network. For some reason, even though both ESXi servers can see the iSCSI target, only 1 can see the storage device at a time. Any ideas? Thanks!!

    1. John Keen

      Hi, iSCSI isn't file sharing, it's block-level storage sharing. Therefore only 1 ESXi system is able to use the iSCSI share at a time.

      1. Mario

        Hi John,

        Great posts in your blog and very informative.

        Just a note on your reply here, are you sure about this? From other sources, I have read that at least with vCenter-managed ESXi servers, multiple ESXi's can access the same LUN.

        – set up and configure iSCSI on the FreeNAS device.
        – from your host go to the Configure tab, select Storage Adapters; on the right there should be an 'iSCSI software adapter'; if not, there's an option at the top of the right side to create one.
        – select the iSCSI software adapter, click the Targets tab, click Dynamic Discovery, click Add, throw the IP and port of the FreeNAS iSCSI in, click OK, rescan storage adapters.

        I've also read some comments that hosts don't even need to be in a cluster to allow this.

        Thanks,

        1. Y

          If you configure your ESXi hosts in a cluster in vCenter, you can have the entire cluster use an iSCSI-based SAN. You will have to set up iSCSI on each host, and in so doing point each host to your FreeNAS SAN with the correct iSCSI targets. This will make the iSCSI datastore available to each host in your cluster. You can even skip the clustering part if you really want, but clustering enables a lot of other nice features in vCenter.

  8. Mark

    I'm trying to get my brain around virtualizing FreeNAS on an ESXi server then passing the drives back to the server. My current thought is:

    ESXi server with 1 SSD (hosting FreeNAS), then passing the 3 larger traditional hard drives directly to FreeNAS to create the pool. Connecting that pool to the ESXi host via iSCSI with virtual 10GbE NICs and using that pool to store my media and other virtual containers.

    Is this the gist of your tutorial?

    1. John Keen

      In this tutorial I cover using two separate physical systems but most elements can apply to a virtualized FreeNAS one as well.
      You will only need a single virtual NIC between host and VM as there isn't a cap on the speed that I'm aware of. It says 10GbE however it may go above that if the system is fast enough.
      Good luck!

  9. Jithesh

    Hello John,

    Thanks for uploading this article.
    I have configured FreeNAS in my lab network (VM on Hyper-V, installed on an HP workstation in the same network).
    I am not able to add this FreeNAS volume in ESXi (bare metal).
    Configuration of the iSCSI software adapter has been completed.
    I can see the dynamic and static targets, but am unable to find the FreeNAS drive when adding a VMFS datastore.
    Could you please let me know why this issue is happening.

    1. John Keen

      Hi, I would like to give you a quick reply but I'm not sure why it isn't working for you.
      I don't have an ESXi lab setup now, otherwise I would look at options that may be preventing you from seeing the datastore.
      Good luck!

    2. I had a similar problem. My fix was to make sure I had 9000 MTU for the ESXi vSwitch + VMkernel + FreeNAS interfaces, then 9216 MTU on my Netgear switch (9000 for 'data' + 216 for 'TCP overhead'). My fault was the Netgear switch was 9000 for the FreeNAS ports but I missed the ESXi ports and left them at 1500. Make sure your entire path is 9000 MTU!
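
      A handy way to test the whole path end-to-end is a don't-fragment ping from the ESXi shell. A sketch only; vmk1 and 10.0.0.1 are placeholders for your VMkernel NIC and FreeNAS address:

      # 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000; -d sets don't-fragment
      vmkping -I vmk1 -d -s 8972 10.0.0.1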

      1. Hahn

        Thanks for your tip. You saved my day. After setting my physical switch port MTU to 9216, I ended up successful.

  10. FreeNAS installed on a VM will be used to create an iSCSI target in this example. The iSCSI target is then connected as a shared datastore to the ESXi host. FreeNAS is a free distribution based on the FreeBSD operating system that provides a web interface for creating and managing network shares. Download the ISO installation image from the official site and place it to the

  11. JR

    Hi, quick question– in FreeNAS/TrueNAS, performance-wise (transfer speed) which is better? a) Set up an iSCSI block share directly to a disk, or b) set up the disk in a ZFS pool > zvol and present the zvol as an iSCSI block device? Thanks

  12. Sam

    Hey John, really enjoyed this article and it helped me get off the ground with my setup. One question I had though: I remember some time ago reading an article that said 1 large ZVOL is not suggested. Instead, partition into multiple smaller ZVOLs. Is this your experience/suggestion as well? For reference, I have 23 TiB of storage available; I would love to present 23 TiB as one datastore to my ESXi environment, but will not do it if it is not best practice.

    Thanks!

    1. John Keen

      Hi, I think when it comes to iSCSI sharing, one large zvol can cause fragmentation. The reasons I can't remember too well.
      I wish I could help more there, good luck!

  13. Jan Jaap

    Hey John, thanks for the very good tutorial. I was using NFS sharing but I was facing very poor write speeds. Just to let you know, I have FreeNAS running as a virtual machine in ESXi (for over 3 years now with SATA passthrough) on a Supermicro board X11SSH-LN4F. I had to first create 2 network adapters for FreeNAS and connect these adapters to the virtual switches, same as you have done above. I used 2 instead of 4 and I assume 1 was maybe also enough. Then started setting up the adapters and iSCSI in FreeNAS and in the end also in ESXi. There was a small moment where adding the iSCSI in ESXi was not working in the adapter settings. Now it's accepted as static and dynamic, but as software iSCSI.
    In the end everything is working very well: speeds over 600MB/s. I assume the 16GB of RAM in FreeNAS is the buffer here and it reaches the maximum SATA speed. It is only a guess.

  14. Hey John, what are the step-back/recovery steps for the ESXi round robin? If I set it and then decide I don't want the round robin, how do I un-round it?

    1. John Keen

      Hi, I don't have an ESXi box in my house right now to test this with, but I would be guessing it's this:
      esxcli storage nmp device set --device NAA_HERE --psp VMW_PSP_MRU
      Remember to change the NAA_HERE, good luck. You can also set the IOPs back up to 1000 the same way they're set to 1 in this guide.
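
      Spelled out as a full revert sketch, with the IOPS limit restored to its default as well (same caveat as above: untested, and NAA_HERE must be replaced with your device's naa ID):

      # Revert the path selection policy from round robin
      esxcli storage nmp device set --device NAA_HERE --psp VMW_PSP_MRU

      # Restore the round robin IOPS limit to its default of 1000
      for i in `esxcfg-scsidevs -c | awk '{print $1}' | grep NAA_HERE`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1000 --device=$i; done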

  15. Y

    This is a very informative post about how to set up and optimize iSCSI between FreeNAS and VMware.

    A couple of best practices from VMware for 6.7 and 7.0 while doing this (I've been using VMware for about 10 years now):
    1) if you're going to go the multiple-vSwitch method, your initiators and targets should /not/ be on the same subnet. In this case, use different broadcast domains and do not do port binding
    2) if your hosts (initiators) and targets are on the same subnet, you should use a single vSwitch and make sure you configure a 1:1 relationship between each iSCSI VMkernel adapter and physical NIC. This means port binding, and also making sure your teaming and failover policies have only one active NIC per port group.
    3) if you use vCenter, you should use a Distributed Switch with the # of uplinks equal to the physical NICs you have for iSCSI; then create one distributed port group per iSCSI VMkernel/physical NIC combo. Then inside each port group, make your VMkernel adapter, enabling iSCSI, and binding it to a single unique iSCSI physical NIC. You should then still edit the teaming/failover policies in the port group to ensure that only one unique NIC is active per group – it should be the NIC you're dedicating to that particular port group/VMkernel adapter.
    4) do /not/ try and use teaming with port binding, and do /not/ use LACP LAGs with iSCSI under any circumstances

  16. Dejan

    Thanks for the very good manual:

    TrueNAS 12
    3x 10Gb NIC
    Seq Q32T1: 2.1 GB read, 2.5 GB write

  17. Wonderful tutorial! It is still really helpful. Thank you for explaining it so well. Especially the round robin part is important! I just need a few more NICs to improve performance.

    Thank you so much!

  18. James Stanworth

    Thanks for the earlier version of this tutorial. It got my FreeNAS 11 – ESXi 6.0 combination up and running nicely. I want to expand the number of links (2 to 4) so I'm paying another visit.

    I've got a few questions . . .
    – I see the screenshot with software used to measure performance. What are you using here? I've always relied on the FreeNAS GUI to get an idea of throughput speeds, so I'm not really sure if that is giving a fair picture.
    – This is a bit outside the scope of your tutorial, but should I be setting up a separate pool on the FreeNAS side for each VM? At the moment I've got three pools and each stores 2-3 VMs. This makes it hard to (a) understand space allocation and (b) manage features like snapshots.
    For (a) your suggestion is not to use more than 50% of the pool for VMs. I guess that is read on the FreeNAS side (so a 500 GB pool should not have more than 250GB of VMs)?
    (b) under this kind of setup should I even be thinking of snapshots on the FreeNAS side or just manage this from ESXi?