How To Set Up NFS (Network File System)

Introduction

NFS (Network File System) allows you to 'share' a directory located on one networked computer with other computers or devices on that network. The computer 'sharing' the directory is called the server, and the computers or devices connecting to that server are called clients. Clients 'mount' the shared directory, and it becomes part of their own directory structure.

NFS is perfect for a NAS (Network Attached Storage) deployment in a Linux/Unix environment. It is a native Linux/Unix protocol, as opposed to Samba, which uses the SMB protocol developed by Microsoft. The Apple OS has good support for NFS. Windows 7 has some support for NFS.

NFS is perhaps best for more 'permanent' network mounted directories, such as /home directories or regularly accessed shared resources. If you want a network share that guest users can easily connect to, Samba is better suited, because tools to temporarily mount and detach Samba shares are more readily available across operating systems.

Before deploying NFS you should be familiar with:

  • Linux file and directory permissions
  • Mounting and detaching (unmounting) filesystems

NFSv4 quick start

Provided you understand what you are doing, use this brief walk-through to set up an NFSv4 server on Ubuntu (with no authentication security), then mount the share on an Ubuntu client. It has been tested on Ubuntu 10.04 Lucid Lynx.

NFSv4 server

Install the required packages:

    # apt-get install nfs-kernel-server

NFSv4 exports exist in a single pseudo filesystem, where the real directories are mounted with the --bind option.

  • Let's say we want to export our users' home directories in /home/users. First we create the export filesystem:

    # mkdir -p /export/users

    It's important that /export and /export/users have 777 permissions, as we will be accessing the NFS share from the client without LDAP/NIS authentication. This does not apply if you are using authentication (see below). Now mount the real users directory with:

    # mount --bind /home/users /export/users

    To save us from retyping this after every reboot, we add the following line to /etc/fstab:

    /home/users    /export/users   none    bind  0  0
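To confirm the bind mount is active before exporting, you can check the kernel's mount table. A minimal sketch (the /export/users path follows the example above):

```shell
#!/bin/sh
# Check whether something is mounted at a given mount point by
# exact-matching the second field (the mount point) in /proc/mounts.
is_mounted() {
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /export/users; then
    echo "/export/users is mounted"
else
    echo "/export/users is NOT mounted - check the bind mount"
fi
```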

There are three configuration files that relate to an NFSv4 server: /etc/default/nfs-kernel-server, /etc/default/nfs-common and /etc/exports.

  • Those config files in our example would look like this:

    In /etc/default/nfs-kernel-server we set:

    NEED_SVCGSSD=no # no is default
    because we are not activating NFSv4 security this time.

    In /etc/default/nfs-common we set:

    NEED_IDMAPD=yes
    NEED_GSSD=no # no is default

    because we want UIDs/GIDs to be mapped from names.

In order for the ID names to be automatically mapped, both the client and server require the /etc/idmapd.conf file to have the same contents with the correct domain names. Furthermore, this file should have the following lines in the Mapping section:

    [Mapping]
    Nobody-User = nobody
    Nobody-Group = nogroup

    However, the client may have different requirements for the Nobody-User and Nobody-Group. For example, on Red Hat variants it is nfsnobody for both. Running cat /etc/passwd and cat /etc/group should show the "nobody" accounts.

This way, the server and client do not need the users to share the same UIDs/GIDs.
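For reference, a complete /etc/idmapd.conf matching this example might look like the following sketch; the Domain value here is an assumption and must be identical on server and client:

```
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
```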

For those who use LDAP-based authentication, add the following lines to your client's idmapd.conf:

[Translation]
Method = nsswitch

This will cause idmapd to know to look at nsswitch.conf to determine where it should look for credential information (and if you have LDAP authentication already working, nsswitch shouldn't require further explanation).

  • To export our directories to the local network 192.168.1.0/24, we add the following two lines to /etc/exports:

    /export       192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
    /export/users 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)

Now restart the service:

    # /etc/init.d/nfs-kernel-server restart

NFSv4 client

Install the required packages:

    # apt-get install nfs-common

The client needs the same changes to /etc/default/nfs-common to connect to an NFSv4 server.

  • In /etc/default/nfs-common we set:

    NEED_IDMAPD=yes
    NEED_GSSD=no # no is default

    because we want UIDs/GIDs to be mapped from names. This way, the server and client do not need the users to share the same UIDs/GIDs. Remember that mount/fstab defaults to NFSv3, so "mount -t nfs4" is necessary to make this work.

On the client we can mount the complete export tree with one command:

    # mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/ /mnt/

The directory paths must start and end with a forward slash, for example:

    sudo mount -t nfs4 -o proto=tcp,port=2049 fileshare-01:/mnt/local/sda1/ /mnt/nfs/fileshare-01.sda1/

Note that nfs-server:/export is not necessary in NFSv4, as it is in NFSv3. The root export :/ defaults to the export with fsid=0.

The mount can sometimes fail with the message:

mount.nfs4: No such device

In that case, load the nfs kernel module with the command:

# modprobe nfs
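You can check in advance whether the running kernel already supports nfs4 by consulting /proc/filesystems; a small sketch:

```shell
#!/bin/sh
# /proc/filesystems lists the filesystem types the running kernel supports.
fs_supported() {
    grep -qw "$1" /proc/filesystems
}

if fs_supported nfs4; then
    echo "nfs4 support is present"
else
    echo "nfs4 support missing - run: modprobe nfs"
fi
```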

To make sure that the module is loaded at each boot, add nfs on its own line at the end of /etc/modules.

We can also mount an exported subtree with:

    # mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/users /home/users

To save us from retyping this after every reboot we add the following line to /etc/fstab:

    nfs-server:/   /mnt   nfs4    _netdev,auto  0  0

The auto option mounts on startup, and the _netdev option waits until system network devices are loaded. However, this will not work with WiFi, because WiFi is set up at the user level (after login), not at system startup. If you use _netdev with WiFi, the boot process will pause while waiting for the server to become available.

Note that _netdev only works with NFS version 3 and earlier; NFSv4 ignores this option. Depending on how fast the network comes up on boot, the mount entry may fail and the system will just keep booting. It can still be useful if you write your own script that waits for the network to come up and then runs mount -a -O _netdev

Ubuntu Server doesn't come with any init.d/netfs or other scripts to do this for you.
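Such a script could be sketched as follows; the server name "nfs-server" and the 30-attempt limit are placeholders, and the mount line assumes your _netdev entries are already in /etc/fstab:

```shell
#!/bin/sh
# Retry a command once per second until it succeeds or the limit is reached.
wait_for() {  # usage: wait_for <tries> <command> [args...]
    tries=$1; shift
    i=0
    until "$@" >/dev/null 2>&1; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep 1
    done
    return 0
}

# Wait up to 30 s for the NFS server, then mount all _netdev fstab entries:
#   wait_for 30 ping -c 1 -W 1 nfs-server && mount -a -O _netdev
```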

NFS Server

Pre-Installation Setup

None of the following pre-installation steps are strictly necessary.

User Permissions

NFS user permissions are based on user ID (UID). UIDs of any users on the client must match those on the server in order for the users to have access. The typical ways of doing this are:

  • Manual password file synchronization
  • Use of LDAP
  • Use of NIS
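Whichever method you use, you can verify that the IDs actually match by running the same check on the server and on each client and comparing the output; a minimal sketch (the user name is a placeholder):

```shell
#!/bin/sh
# Print a user's numeric UID and GID in a stable one-line format,
# so the output can be diffed between server and clients.
check_user() {
    printf '%s uid=%s gid=%s\n' "$1" "$(id -u "$1")" "$(id -g "$1")"
}

check_user root
```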

It's also important to note that you have to be careful on systems where the main user has root access: that user can change UIDs on the system to allow themselves access to anyone's files. This page assumes that the administrative team is the only group with root access and that they are all trusted. Anything else represents a more advanced configuration, and will not be addressed here.

Group Permissions

With NFS, a user's access to files is determined by his/her membership of groups on the client, not on the server. However, there is an important limitation: at most 16 groups are passed from the client to the server, and, if a user is a member of more than 16 groups on the client, some files or directories might be unexpectedly inaccessible.
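You can check whether a user is near this limit with id; a small sketch:

```shell
#!/bin/sh
# Count the groups a user belongs to and warn above the 16-group NFS limit.
group_count() {
    id -G "$1" | wc -w
}

user=$(id -un)
n=$(group_count "$user")
if [ "$n" -gt 16 ]; then
    echo "Warning: $user is in $n groups; NFSv2/v3 passes at most 16"
else
    echo "$user is in $n groups - OK"
fi
```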

Host Names

optional if using DNS

Add any client names and IP addresses to /etc/hosts. The real (not 127.0.0.1) IP address of the server should already be listed here. This ensures that NFS will still work even if DNS goes down. You could rely on DNS if you wanted; it's up to you.

NIS

optional – perform steps only if using NIS

Note: This only works if you are using NIS. Otherwise, you can't use netgroups, and should specify individual IPs or hostnames in /etc/exports. Read the BUGS section in man netgroup.

Edit /etc/netgroup and add a line to classify your clients. (This step is not necessary, but is for convenience).

myclients (client1,,) (client2,,)

Obviously, more clients can be added. myclients can be anything you like; this is a netgroup name.

Run this command to rebuild the YP database:

sudo make -C /var/yp

Portmap Lockdown

optional

Add the following line to /etc/hosts.deny:

portmap mountd nfsd statd lockd rquotad : ALL

By blocking all clients first, only clients in /etc/hosts.allow below will be allowed to access the server.

Now add the following line to /etc/hosts.allow:

portmap mountd nfsd statd lockd rquotad : list of IP addresses

Replace "list of IP addresses" with a list consisting of the IP addresses of the server and all clients. These have to be IP addresses because of a limitation in portmap (it doesn't like hostnames). Note that if you have NIS set up, just add these to the same line.
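As a concrete illustration (the addresses here are examples only), with a server at 192.168.0.1 and clients at 192.168.0.10 and 192.168.0.11, the two files would contain:

```
# /etc/hosts.deny
portmap mountd nfsd statd lockd rquotad : ALL

# /etc/hosts.allow
portmap mountd nfsd statd lockd rquotad : 192.168.0.1 192.168.0.10 192.168.0.11
```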

Installation and Configuration

Install NFS Server

sudo apt-get install portmap nfs-kernel-server

Shares

Edit /etc/exports and add the shares:

/home       @myclients(rw,sync,no_subtree_check)
/usr/local  @myclients(rw,sync,no_subtree_check)

The above shares /home and /usr/local to all clients in the myclients netgroup.

/home       192.168.0.10(rw,sync,no_subtree_check) 192.168.0.11(rw,sync,no_subtree_check)
/usr/local  192.168.0.10(rw,sync,no_subtree_check) 192.168.0.11(rw,sync,no_subtree_check)

The above shares /home and /usr/local to two clients with fixed IP addresses. This form is best used only with machines that have static IP addresses.

/home       192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
/usr/local  192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)

The above shares /home and /usr/local to all clients in the private network falling within the designated IP address range.

rw makes the share read/write, and sync requires the server to reply to requests only after any changes have been flushed to disk. This is the safest option (async is faster, but dangerous). It is strongly recommended that you read man exports.

After setting up /etc/exports, export the shares:

sudo exportfs -ra

You'll want to run this command whenever /etc/exports is modified.

Restart Services

By default, portmap only binds to the loopback interface. To enable access to portmap from remote machines, you need to change /etc/default/portmap to get rid of either "-l" or "-i 127.0.0.1".

If /etc/default/portmap was changed, portmap will need to be restarted:

sudo /etc/init.d/portmap restart

The NFS kernel server will also require a restart:

sudo /etc/init.d/nfs-kernel-server restart

Security Note

Aside from the UID issues discussed above, it should be noted that an attacker could potentially masquerade as a machine that is allowed to map the share, which would allow them to create arbitrary UIDs to access your files. One potential solution to this is IPSec; see the IPSec Notes section below. You can set up all your domain members to talk only to each other over IPSec, which will effectively authenticate that your client is who it says it is.

IPSec works by encrypting traffic to the server with the server's key, and the server sends back all replies encrypted with the client's key. The traffic is decrypted with the respective keys. If a client doesn't have the keys it is supposed to have, it can't send or receive data.

An alternative to IPSec is physically separate networks. This requires a separate network switch and separate ethernet cards, and physical security of that network.

NFS Client

Installation

sudo apt-get install portmap nfs-common

Portmap Lockdown

optional

Add the following line to /etc/hosts.deny:

portmap : ALL

By blocking all clients first, only clients in /etc/hosts.allow below will be allowed to access the server.

Now add the following line to /etc/hosts.allow:

portmap : NFS server IP address

Where "NFS server IP address" is the IP address of the server. This must be numeric! It's the way portmap works.

Host Names

optional if using DNS

Add the server name to /etc/hosts. This ensures the NFS mounts will still work even if DNS goes down. You could rely on DNS if you wanted, it's up to you.

Mounts

Check to see if everything works

You should try to mount it now. The basic template you will use is:

sudo mount ServerIP:/folder/already/setup/to/be/shared /home/username/folder/in/your/local/computer

so for example:

sudo mount 192.168.1.42:/home/music /home/poningru/music
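To confirm the share mounted, you can list NFS entries from the kernel's mount table; a small sketch (the optional file argument defaults to /proc/mounts):

```shell
#!/bin/sh
# Print every mounted NFS filesystem as "<mount point> (<type>)".
list_nfs_mounts() {
    awk '$3 == "nfs" || $3 == "nfs4" { print $2 " (" $3 ")" }' "${1:-/proc/mounts}"
}

list_nfs_mounts
```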

Mount at startup

NFS mounts can either be mounted automatically on access using autofs, or set up as static mounts using entries in /etc/fstab. Both are explained below.

Automounter

Install autofs:

sudo apt-get install autofs

The following configuration example sets up home directories to automount from an NFS server upon login. Other directories can be set up to automount upon access as well.

Add the following line to the end of /etc/auto.master:

  /home         /etc/auto.home

Now create /etc/auto.home and insert the following:

  *             solarisbox1.company.com.au,solarisbox2.company.com.au:/export/home/&

The above line automatically mounts any directory accessed at /home/[username] on the client machine from either solarisbox1.company.com.au:/export/home/[username] or solarisbox2.company.com.au:/export/home/[username].

Restart autofs to enable the configuration:

sudo /etc/init.d/autofs restart

Static Mounts

Prior to setting up the mounts, make sure the directories that will act as mountpoints are already created.

In /etc/fstab, add lines for shares such as:

servername:dir /mntpoint nfs rw,hard,intr 0 0

The rw option mounts the share read/write. Obviously, if the server is sharing it read-only, the client won't be able to mount it as anything more than that. The hard option mounts the share such that if the server becomes unavailable, programs accessing it will wait until it becomes available again; the alternative is soft. intr allows you to interrupt/kill a process waiting on the share; otherwise, it will ignore you. Documentation for these can be found in the "Mount options for nfs" section of man mount.

The filesystems can now be mounted with mount /mountpoint, or mount -a to mount everything that should be mounted at boot.

Notes

Minimalistic NFS Set Up

The steps above are very comprehensive. The minimum number of steps required to set up NFS are listed here:

http://www.ubuntuforums.org/showthread.php?t=249889

Using Groups with NFS Shares

When using groups on NFS shares (NFSv2 or NFSv3), keep in mind that this might not work if a user is a member of more than 16 groups. This is due to limitations in the NFS protocol. You can find more information on Launchpad ("Permission denied when user belongs to group that owns group writable or setgid directories mounted via nfs") and in this article: "What's the deal on the 16 group id limitation in NFS?"

IPSec Notes

If you're using IPSec, the default shutdown order in Breezy/Dapper causes the client to hang as it's being shut down, because IPSec goes down before NFS does. To fix it, run:

sudo update-rc.d -f setkey remove
sudo update-rc.d setkey start 37 0 6 S .

A bug has been filed here: https://launchpad.net/distros/ubuntu/+source/ipsec-tools/+bug/37536

Troubleshooting

Mounting NFS shares in encrypted home won't work on boot

Mounting an NFS share inside an encrypted home directory will only work after you are successfully logged in and your home is decrypted. This means that using /etc/fstab to mount NFS shares on boot will not work, because your home has not been decrypted at the time of mounting. There is a simple way around this using symbolic links:

  • Create an alternative directory to mount the NFS shares in:

$ sudo mkdir /nfs
$ sudo mkdir /nfs/music
  • Edit /etc/fstab to mount the NFS share into that directory instead:

nfsServer:music /nfs/music nfs4 _netdev,auto 0 0

  • Create a symbolic link inside your home, pointing to the actual mount location (in our case, delete the existing 'Music' directory there first):

$ rmdir /home/user/Music
$ ln -s /nfs/music/ /home/user/Music
