Mount an NFS share from Windows

http://www.linuxhomenetworking.com/forums/showthread.php/18733-Mount-an-NFS-share-from-Windows

by Wayne E Goodrich (Outlaw)
(Transferred from the wiki by Peter)

Introduction

Microsoft makes their Services For UNIX freely available, which is nice, since it's now easier to have a Linux server share out files to both Linux and Windows without setting up Samba on the Linux client. You need only run NFS tools (in Windows) and have a simple NFS mount rule in your fstab (in Linux). This is primarily why I decided to go this route. I have a mix of Windows and Linux systems on my home LAN, and I'd rather keep all the stuff I want access to in one central place. Some examples of stuff one would want central access to would be MP3s, any web graphics you want to manipulate from each platform, screenshots etc. It's a great way to keep your Linux or Win home directory small and uncluttered. Go here to get SFU and read up on the system requirements.
Alright, let's get on with setting this all up.

Set up the Linux Server

If you use Redhat Linux, type
Code:
	rpm -qa|grep nfs
	
You should see nfs-utils-some-version. For Debian, type
Code:
	dpkg --list|grep nfs
	
You should see nfs-common and/or nfs-kernel-server or nfs-user-server. On your server, enable or install the NFS server.
On Red Hat it is most likely already installed. Type
Code:
	chkconfig --list|grep nfs
	nfs             0:off   1:off   2:off   3:off   4:off   5:off   6:off
	nfslock         0:off   1:off   2:off   3:on    4:on    5:on    6:off
	
So in this case, we need to just fire up NFS server:
Code:
	  chkconfig --level 345 nfs on; service nfs start
	
which enables nfs server in runlevels 3, 4, and 5 and starts nfs.
In Debian, if the nfs server is not installed, retrieve nfs-kernel-server.
Code:
	apt-get install nfs-kernel-server
	
We also need portmap running. On Red Hat, check whether it is already running:
Code:
	service portmap status
	
If it's not running,
Code:
	service portmap start; chkconfig --level 2345 portmap on
	
On Debian, portmap is started from inetd and should be enabled by default.
Now let's check to see if everything we need is running:
Code:
	rpcinfo -p
	program vers proto   port
	100000    2   tcp    111  portmapper
	100000    2   udp    111  portmapper
	100024    1   udp   1024  status
	100024    1   tcp   1025  status
	100003    2   udp   2049  nfs
	100003    3   udp   2049  nfs
	100003    2   tcp   2049  nfs
	100003    3   tcp   2049  nfs
	100021    1   udp   1026  nlockmgr
	100021    3   udp   1026  nlockmgr
	100021    4   udp   1026  nlockmgr
	100021    1   tcp   1026  nlockmgr
	100021    3   tcp   1026  nlockmgr
	100021    4   tcp   1026  nlockmgr
	100005    1   udp   1027  mountd
	100005    1   tcp   1027  mountd
	100005    2   udp   1027  mountd
	100005    2   tcp   1027  mountd
	100005    3   udp   1027  mountd
	100005    3   tcp   1027  mountd
	
Now we can define the export, and secure portmap with tcpwrappers.
In my case, I have a 16G directory where I want my share to reside. This is /stuff/pub.
Code:
	vim /etc/exports
	#
	/stuff/pub              192.168.0.4(rw,sync)
	
Where the IP address is that of the client who will be mounting the export.
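For reference, an exports entry can also list several clients with different options, or a whole subnet, instead of a single IP. A sketch (addresses are examples only; run exportfs -ra after editing so changes take effect):

```
# one line, several clients with different options
/stuff/pub    192.168.0.4(rw,sync) 192.168.0.5(ro,sync)
# or allow a whole subnet
# /stuff/pub  192.168.0.0/255.255.255.0(rw,sync)
```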
Now let's lock down the portmapper:
Code:
	  vim /etc/hosts.deny
	
Make sure that in the following two files, the last line is an empty line.
Code:
	# hosts.deny    This file describes the names of the hosts which are
	#               *not* allowed to use the local INET services, as decided
	#               by the '/usr/sbin/tcpd' server.
	#
	# The portmap line is redundant, but it is left to remind you that
	# the new secure portmap uses hosts.deny and hosts.allow.  In particular
	# you should know that NFS uses portmap!
	portmap : ALL : deny
	
Now let's explicitly allow our client by editing /etc/hosts.allow:
Code:
	# hosts.allow   This file describes the names of the hosts which are
	#               allowed to use the local INET services, as decided
	#               by the '/usr/sbin/tcpd' server.
	#
	portmap : 192.168.0.4 : allow
	
Mount The Export in Linux and Windows

Now we can mount the export from our clients. On our Linux client:
Code:
	mkdir /share
	mount -t nfs NFSServer:/stuff/pub /share
	
If the attempt times out, or comes back immediately with "NFS server is down", double check your hosts.allow and hosts.deny files.
To make the mount automatic at boot, add the following line to /etc/fstab:
Code:
	saturn:/stuff/pub  /home/radar/pub nfs  soft,intr,rsize=8192,wsize=8192 0 0
	
Verify system requirements

On our Windows 2000 or Windows XP Pro client, we need Windows Services for UNIX and the drive must be formatted with NTFS. See the SFU system requirements from above link.

Download

Download and extract, then run the installer. We'll only need Client for NFS and Auth Tools for NFS, so select the custom installation. Under NFS, choose Client for NFS and under Auth Tools for NFS, choose User Name Mapping. Deselect everything else, since we only need the NFS client and tools.

Install SFU

During the install, let it default to the machine's own domain, select passwd and group files and select user name mapping as options.

Configure SFU

When the installation is finished, copy your /etc/passwd and /etc/group files to C:\ on the client box. Now launch the SFU administration program from the Programs menu. Select User Name Mapping to associate the Windows user with the UNIX user who has permissions on the export. Under the configuration bar, select passwd and group files and enter C:\passwd and C:\group in the proper fields.

Map users

Select maps bar. You should see \\MACHINE. Click Show User Maps, then click Show Windows users and Show Unix users. You should see all your accounts, so now, click a user from both to associate. Click Add and verify the mapping under Mapped Users.

Map a network drive

Now go to Start >> My Network Places >> select tools from toolbar >> map network drive
Select drive letter >> click browse >> select NFS network
Your NFS server should show up under default LAN, expand the file tree until you see the export. With the export open, click Ok and verify that there is a new mapped drive under My Computer. If your user needs write access to the share, the export needs to be permitted to the unix user on the NFS server with proper file modes.



More Information

NFS How-To
Redhat NFS Guide
Debian NFS help

Grep

-h Suppress the filename prefix in output (use -H to show it)

-n Show line number

-i Ignore case

-c Count

-F String is not a RegEx (faster)

Continuously watch a file for 500 errors

tail -f access.log | grep " 500 "

Find occurrences of 02/Jun/2012:15 or 02/Jun/2012:16

grep " 500 " access.log | egrep "02/Jun/2012:15|02/Jun/2012:16"
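A quick demo of the flags above against a tiny sample log (the file /tmp/access.sample and its contents are made up for illustration):

```shell
# Build a three-line sample access log
printf '%s\n' \
  '10.0.0.1 - - [02/Jun/2012:15:04:01] "GET / HTTP/1.1" 500 312' \
  '10.0.0.2 - - [02/Jun/2012:16:10:22] "GET /a HTTP/1.1" 200 99' \
  '10.0.0.3 - - [02/Jun/2012:16:11:05] "GET /b HTTP/1.1" 500 187' \
  > /tmp/access.sample

# -c counts matching lines; -F treats the pattern as a fixed string (faster)
grep -cF ' 500 ' /tmp/access.sample

# Chain grep and egrep exactly as in the examples above
grep ' 500 ' /tmp/access.sample | egrep '02/Jun/2012:15|02/Jun/2012:16'
```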

How To Set Up NFS (Network File System)

Introduction

NFS (Network File System) allows you to 'share' a directory located on one networked computer with other computers/devices on that network. The computer 'sharing' the directory is called the server, and the computers or devices connecting to that server are called clients. When the clients 'mount' the shared directory, it becomes part of their own directory structure.

NFS is perfect for a NAS (Networked Attached Storage) deployment in a Linux/Unix environment. It is a native Linux/Unix protocol as opposed to Samba which uses the SMB protocol developed by Microsoft. The Apple OS has good support for NFS. Windows 7 has some support for NFS.

NFS is perhaps best for more 'permanent' network mounted directories such as /home directories or regularly accessed shared resources. If you want a network share that guest users can easily connect to, Samba is more suited. This is because tools exist more readily across operating systems to temporarily mount and detach from Samba shares.

Before deploying NFS you should be familiar with:

  • Linux file and directory permissions
  • Mounting and detaching (unmounting) filesystems

NFSv4 quick start

Providing you understand what you are doing, use this brief walk-through to set up an NFSv4 server on Ubuntu (with no authentication security). Then mount the share on an Ubuntu client. It has been tested on Ubuntu 10.04 Lucid Lynx.

NFSv4 server

Install the required packages…

  • 				# apt-get install nfs-kernel-server
    	

NFSv4 exports exist in a single pseudo filesystem, where the real directories are mounted with the --bind option.

  • Let's say we want to export our users' home directories in /home/users. First we create the export filesystem:

    				# mkdir -p /export/users
    	
    It's important that /export and /export/users have 777 permissions as we will be accessing the NFS share from the client without LDAP/NIS authentication. This will not apply if using authentication (see below). Now mount the real users directory with:
    				# mount --bind /home/users /export/users
    	
    To save us from retyping this after every reboot we add the following

    line to /etc/fstab

    				/home/users    /export/users   none    bind  0  0
    	

There are three configuration files that relate to an NFSv4 server: /etc/default/nfs-kernel-server, /etc/default/nfs-common and /etc/exports.

  • Those config files in our example would look like this:

    In /etc/default/nfs-kernel-server we set:

    				NEED_SVCGSSD=no # no is default
    	
    because we are not activating NFSv4 security this time.

    In /etc/default/nfs-common we set:

    				NEED_IDMAPD=yes
    				NEED_GSSD=no # no is default
    	
    because we want UIDs/GIDs to be mapped from names.

In order for the ID names to be automatically mapped, both the client and server require the /etc/idmapd.conf file to have the same contents with the correct domain names. Furthermore, this file should have the following lines in the Mapping section:

  • 				[Mapping]
    				Nobody-User = nobody
    				Nobody-Group = nogroup
    	

    However, the client may have different requirements for the Nobody-User and Nobody-Group. For example on RedHat variants, it's nfsnobody for both. cat /etc/passwd and cat /etc/group should show the "nobody" accounts.
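A quick way to check which of those account names a given box actually has, so the Mapping section can be filled in correctly (a sketch; getent is assumed to be available):

```shell
# Look up the candidate "nobody" user and group names; getent prints
# any that exist and exits nonzero for missing ones, hence the || true
getent passwd nobody nfsnobody || true
getent group nogroup nobody nfsnobody || true
```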

This way, server and client do not need the users to share the same UID/GID.

For those who use LDAP-based authentication, add the following lines to your client's idmapd.conf:

[Translation]
Method = nsswitch

This will cause idmapd to know to look at nsswitch.conf to determine where it should look for credential information (and if you have LDAP authentication already working, nsswitch shouldn't require further explanation).

  • To export our directories to a local network 192.168.1.0/24

    we add the following two lines to /etc/exports

    				/export       192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
    				/export/users 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)
    	

Now restart the service

  • 				# /etc/init.d/nfs-kernel-server restart
    	

NFSv4 client

Install the required packages…

  • 				# apt-get install nfs-common
    	

The client needs the same changes to /etc/default/nfs-common to connect to an NFSv4 server.

  • In /etc/default/nfs-common we set:

    				NEED_IDMAPD=yes
    				NEED_GSSD=no # no is default
    	

    because we want UIDs/GIDs to be mapped from names. This way, server and client do not need the users to share the same UID/GID. Remember that mount/fstab defaults to NFSv3, so "mount -t nfs4" is necessary to make this work.

On the client we can mount the complete export tree with one command:

  • 				# mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/ /mnt/
    	
The directory paths must start and end with a forward slash, for example:

sudo mount -t nfs4 -o proto=tcp,port=2049 fileshare-01:/mnt/local/sda1/ /mnt/nfs/fileshare-01.sda1/

Note that nfs-server:/export is not necessary in NFSv4, as it is in NFSv3. The root export :/ defaults to export with fsid=0.

It can fail sometimes with the message

mount.nfs4: No such device

You have to load the nfs module by giving the command

# modprobe nfs

To make sure that the module is loaded at each boot, simply add nfs on the last line of /etc/modules. We can also mount an exported subtree with:

  • 				# mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/users /home/users
    	

To save us from retyping this after every reboot we add the following line to /etc/fstab:

  • 				nfs-server:/   /mnt   nfs4    _netdev,auto  0  0
    	

    The auto option mounts on startup and the _netdev option waits until system network devices are loaded. However this will not work with WiFi as WiFi is set up at the user level (after login) not at system startup. If you use _netdev with WiFi the boot process will pause waiting for the server to become available.

Note that _netdev only works with NFS version 3 and earlier; NFSv4 ignores this option. Depending on how fast the network comes up at boot, the mount entry may fail and the system will just keep booting. The option can still be useful if you write your own script that waits for the network to come up and then runs mount -a -O _netdev.

Ubuntu Server doesn't come with any init.d/netfs or other scripts to do this for you.

NFS Server

Pre-Installation Setup

None of the following pre-installation steps are strictly necessary.

User Permissions

NFS user permissions are based on user ID (UID). UIDs of any users on the client must match those on the server in order for the users to have access. The typical ways of doing this are:

  • Manual password file synchronization
  • Use of LDAP

  • Use of NIS

It's also important to note that you have to be careful on systems where the main user has root access: that user can change UIDs on the system to allow themselves access to anyone's files. This page assumes that the administrative team is the only group with root access and that they are all trusted. Anything else represents a more advanced configuration, and will not be addressed here.

Group Permissions

With NFS, a user's access to files is determined by his/her membership of groups on the client, not on the server. However, there is an important limitation: a maximum of 16 groups are passed from the client to the server, and if a user is a member of more than 16 groups on the client, some files or directories might be unexpectedly inaccessible.

Host Names

optional if using DNS

Add any client name and IP addresses to /etc/hosts. The real (not 127.0.0.1) IP address of the server should already be here. This ensures that NFS will still work even if DNS goes down. You could rely on DNS if you wanted, it's up to you.

NIS

optional – perform steps only if using NIS

Note: This only works if using NIS. Otherwise, you can't use netgroups, and should specify individual IP's or hostnames in /etc/exports. Read the BUGS section in man netgroup.

Edit /etc/netgroup and add a line to classify your clients. (This step is not necessary, but is for convenience).

myclients (client1,,) (client2,,)

Obviously, more clients can be added. myclients can be anything you like; this is a netgroup name.

Run this command to rebuild the YP database:

sudo make -C /var/yp

Portmap Lockdown

optional

Add the following line to /etc/hosts.deny:

portmap mountd nfsd statd lockd rquotad : ALL

By blocking all clients first, only clients in /etc/hosts.allow below will be allowed to access the server.

Now add the following line to /etc/hosts.allow:

portmap mountd nfsd statd lockd rquotad : list of IP addresses

Where the "list of IP addresses" string is, you need to make a list of IP addresses that consists of the server and all clients. These have to be IP addresses because of a limitation in portmap (it doesn't like hostnames). Note that if you have NIS set up, just add these to the same line.
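As a concrete sketch (all addresses made up), a server at 192.168.0.1 with clients at 192.168.0.10 and 192.168.0.11 would use a hosts.allow line like:

```
portmap mountd nfsd statd lockd rquotad : 192.168.0.1 192.168.0.10 192.168.0.11
```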

Installation and Configuration

Install NFS Server

sudo apt-get install portmap nfs-kernel-server

Shares

Edit /etc/exports and add the shares:

/home       @myclients(rw,sync,no_subtree_check)
/usr/local  @myclients(rw,sync,no_subtree_check)

The above shares /home and /usr/local to all clients in the myclients netgroup.

/home       192.168.0.10(rw,sync,no_subtree_check) 192.168.0.11(rw,sync,no_subtree_check)
/usr/local  192.168.0.10(rw,sync,no_subtree_check) 192.168.0.11(rw,sync,no_subtree_check)

The above shares /home and /usr/local to two clients with fixed ip addresses. Best used only with machines that have static ip addresses.

/home       192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
/usr/local  192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)

The above shares /home and /usr/local to all clients in the private network falling within the designated ip address range.

rw makes the share read/write, and sync requires the server to reply to requests only after any changes have been flushed to disk (async is faster, but dangerous). It is strongly recommended that you read man exports.

After setting up /etc/exports, export the shares:

sudo exportfs -ra

You'll want to run this command whenever /etc/exports is modified.

Restart Services

By default, portmap only binds to the loopback interface. To enable access to portmap from remote machines, you need to change /etc/default/portmap to get rid of either "-l" or "-i 127.0.0.1".

If /etc/default/portmap was changed, portmap will need to be restarted:

sudo /etc/init.d/portmap restart

The NFS kernel server will also require a restart:

sudo /etc/init.d/nfs-kernel-server restart

Security Note

Aside from the UID issues discussed above, it should be noted that an attacker could potentially masquerade as a machine that is allowed to map the share, which allows them to create arbitrary UIDs to access your files. One potential solution to this is IPSec, see also the NFS and IPSec section below. You can set up all your domain members to talk only to each other over IPSec, which will effectively authenticate that your client is who it says it is.

IPSec works by encrypting traffic to the server with the server's key, and the server sends back all replies encrypted with the client's key. The traffic is decrypted with the respective keys. If the client doesn't have the keys it is supposed to have, it can't send or receive data.

An alternative to IPSec is physically separate networks. This requires a separate network switch and separate ethernet cards, and physical security of that network.

NFS Client

Installation

sudo apt-get install portmap nfs-common

Portmap Lockdown

optional

Add the following line to /etc/hosts.deny:

portmap : ALL

By blocking all clients first, only clients in /etc/hosts.allow below will be allowed to access the server.

Now add the following line to /etc/hosts.allow:

portmap : NFS server IP address

Where "NFS server IP address" is the IP address of the server. This must be numeric! It's the way portmap works.

Host Names

optional if using DNS

Add the server name to /etc/hosts. This ensures the NFS mounts will still work even if DNS goes down. You could rely on DNS if you wanted, it's up to you.

Mounts

Check to see if everything works

You should try and mount it now. The basic template you will use is:

sudo mount ServerIP:/folder/already/setup/to/be/shared /home/username/folder/in/your/local/computer

so for example:

sudo mount 192.168.1.42:/home/music /home/poningru/music

Mount at startup

NFS mounts can either be automatically mounted when accessed using autofs or can be setup with static mounts using entries in /etc/fstab. Both are explained below.

Automounter

Install autofs:

sudo apt-get install autofs

The following configuration example sets up home directories to automount from an NFS server upon login. Other directories can be set up to automount upon access as well.

Add the following line to the end of /etc/auto.master:

  /home         /etc/auto.home

Now create /etc/auto.home and insert the following:

  *             solarisbox1.company.com.au,solarisbox2.company.com.au:/export/home/&

The above line automatically mounts any directory accessed at /home/[username] on the client machine from either solarisbox1.company.com.au:/export/home/[username] or solarisbox2.company.com.au:/export/home/[username].

Restart autofs to enable the configuration:

sudo /etc/init.d/autofs restart

Static Mounts

Prior to setting up the mounts, make sure the directories that will act as mountpoints are already created.

In /etc/fstab, add lines for shares such as:

servername:dir /mntpoint nfs rw,hard,intr 0 0

The rw mounts it read/write. Obviously, if the server is sharing it read only, the client won't be able to mount it as anything more than that. The hard mounts the share such that if the server becomes unavailable, the program will wait until it is available. The alternative is soft. intr allows you to interrupt/kill the process. Otherwise, it will ignore you. Documentation for these can be found in the Mount options for nfs section of man mount.

The filesystems can now be mounted with mount /mountpoint, or mount -a to mount everything that should be mounted at boot.

Notes

Minimalistic NFS Set Up

The steps above are very comprehensive. The minimum number of steps required to set up NFS are listed here:

http://www.ubuntuforums.org/showthread.php?t=249889

Using Groups with NFS Shares

When using groups on NFS shares (NFSv2 or NFSv3), keep in mind that this might not work if a user is a member of more than 16 groups. This is due to limitations in the NFS protocol. You can find more information on Launchpad ("Permission denied when user belongs to group that owns group writable or setgid directories mounted via nfs") and in this article: "What's the deal on the 16 group id limitation in NFS?"
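A quick way to check whether a given user is near that limit (a sketch; id and wc are standard tools):

```shell
# Print the number of groups the current user belongs to; more than 16
# can trigger the NFSv2/v3 problem described above
id -G "$(id -un)" | wc -w
```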

IPSec Notes

If you're using IPSec, the default shutdown order in Breezy/Dapper causes the client to hang as it's being shut down because IPSec goes down before NFS does. To fix it, do:

sudo update-rc.d -f setkey remove
sudo update-rc.d setkey start 37 0 6 S .

A bug has been filed here: https://launchpad.net/distros/ubuntu/+source/ipsec-tools/+bug/37536

Troubleshooting

Mounting NFS shares in encrypted home won't work on boot

Mounting an NFS share inside an encrypted home directory will only work after you are successfully logged in and your home is decrypted. This means that using /etc/fstab to mount NFS shares on boot will not work, because your home has not been decrypted at the time of mounting. There is a simple way around this using symbolic links:

  • Create an alternative directory to mount the NFS shares in:

$ sudo mkdir /nfs
$ sudo mkdir /nfs/music
  • Edit /etc/fstab to mount the NFS share into that directory instead:

nfsServer:music /nfs/music nfs4 _netdev,auto 0 0

  • Create a symbolic link inside your home, pointing to the actual mount location (in our case delete the 'Music' directory already existing there first):

$ rmdir /home/user/Music
$ ln -s /nfs/music/ /home/user/Music

Other resources

Using cron to run a script each time the server starts

Often you want to run a script each time your server boots. For example, in the How to run VNC on startup guide we wrote a script to launch VNC. One way to get this script to run on boot is to add it as a cron job. This is very easy to do using Webmin. So, within Webmin click on System and then Scheduled Cron Jobs. Then click the Create a new scheduled cron job option at the top of the screen that opens.

Click the button next to the Execute cron job as and choose the username you created when you installed Ubuntu. Hint: your username appears in a Putty/Terminal session prompt. eg. yourusernameappearshere@MyMediaServer.

Note: If the script you've written needs to be run as root then obviously you'd choose root in the Execute cron job as section instead of your username. If the script needs to be run as any other user then obviously enter that username instead.

Enter the name of your script including the full pathname eg. /home/htkh/MyScripts/StartVNC.sh >/dev/null into the Command box, replacing htkh with your own username, MyScripts with the name of the folder you created to store your scripts and StartVNC.sh with the script name. The >/dev/null parameter will discard any output the script may produce. If your script actually needs to produce any output then it should be piped to a file. See the Monitor server temperatures scripts for an example of piping output to a file.
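If you'd rather skip Webmin, the same job can be added directly with crontab -e; cron's @reboot keyword runs the command once at startup (the script path below is the example from the text above):

```
@reboot /home/htkh/MyScripts/StartVNC.sh >/dev/null
```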

Enter a suitable description in the Description field.

In the When to Execute – Simple schedule drop-down list choose When system boots then click the Create button at the bottom of the screen.

Now's probably a good time to test it. I'd recommend first testing that you've set the job up correctly in Webmin. You can do this by clicking on the job you've just created from the long list of cron jobs. Then click the Run Now button at the bottom of the screen. You should see a message similar to the one you saw when you tested it from a Putty/Terminal session. If you don't then go back and check your settings.

Create a list of users via script

This has not been tested yet.

#!/usr/bin/ksh

NEW_USERS="/path/to/text_data_file"
HOME_BASE="/home/"

# Read one "user password group" triple per line
while read USER PASSWORD GROUP
do
        # Create the account (note: useradd -p expects an already-encrypted
        # password; see man useradd)
        useradd -g "${GROUP}" -p "${PASSWORD}" -m -d "${HOME_BASE}${USER}" "${USER}"

        # Set the same password for Samba; -s makes smbpasswd read the
        # new password (twice) from stdin
        printf '%s\n%s\n' "${PASSWORD}" "${PASSWORD}" | smbpasswd -s -a "${USER}"
done < "${NEW_USERS}"
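The script expects the data file to contain one whitespace-separated "user password group" triple per line, for example (names and passwords below are made up):

```
alice  Passw0rd1  users
bob    Passw0rd2  users
```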

To remove the ^M characters at the end of all lines in vi

In UNIX, you can escape a control character by preceding it with a CONTROL-V. The :%s is a basic search-and-replace command in vi. It tells vi to replace every carriage-return character (^M) with nothing, globally:

:%s/^V^M//g

The ^v is a CONTROL-V character and ^m is a CONTROL-M. When you type this, it will look like this:

:%s/^M//g
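Outside of vi, the same cleanup can be done non-interactively. A sketch using GNU sed (the file name is just an example):

```shell
# Create a file with DOS (CRLF) line endings, then strip the CRs in place
printf 'line one\r\nline two\r\n' > /tmp/dosfile.txt
sed -i 's/\r$//' /tmp/dosfile.txt   # GNU sed: delete trailing carriage returns
cat /tmp/dosfile.txt
```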

