Filer review

From Computer Laboratory System Administration
Markus Kuhn, Martyn Johnson, February 2012 to September 2020


:''This is an evolving report by the ad-hoc Filer Working Group, who started in early 2012 to review the use and configuration of the departmental filer, with a view to remove historically-grown complexities and to make it easier and provide better documentation for users of self-managed machines to access the filer. For more information about the filer, there are also the [http://www.cl.cam.ac.uk/local/sys/filesystems/ departmental filespace user documentation], some [http://www.wiki.cl.cam.ac.uk/clwiki/SysInfo/NetApp notes on the NetApp file server] by and for sys-admin. Many of the suggestions of this review have been implemented by late 2019, but (as of September 2020) there remain some [[#Things_yet_to_do|outstanding issues]] regarding UID/GID cleanup, the client-side filer namespace, and easier deployment of the automounted NFS configuration on self-managed Unix/[https://www.cl.cam.ac.uk/~mgk25/offline-ldap/ Linux]/[https://www.cl.cam.ac.uk/local/sys/filesystems/mac/ macOS]/[https://github.com/microsoft/WSL2-Linux-Kernel/issues/161 WSL2] machines.''


The Computer Laboratory has operated a centrally provided NFS file store for Unix/Linux systems continuously since the mid 1980s. This service hosts the commonly used home directories and working directories of most users, and is widely used by research groups to collaborate, via group directories for shared project files and software. It also forms the interface to the departmental mail and web servers.
 
== Servers ==
 
Until late 2019, the departmental NFS service was provided by a [http://www.netapp.com/ NetApp] [http://www.netapp.com/us/products/storage-systems/fas3200/ FAS3220] storage server "elmer", running under [https://mysupport.netapp.com/documentation/docweb/index.html?productID=62512 Data ONTAP Release 8.2.5 7-Mode]. This server also provided access to the same filespace to Windows clients via the SMB/CIFS protocol, and provided a mechanism for the coexistence of files governed by either POSIX permission bits or Windows access control lists in the same directory. Elmer also hosted disk images for virtual machines, which were accessed predominantly over NFS, with some legacy use of the block-level iSCSI protocol. An additional [http://www.netapp.com/us/products/storage-systems/fas2000/ FAS2040-R5] server "echo" (SN: 200000186549) running under [https://mysupport.netapp.com/documentation/docweb/index.html?productID=62512 Data ONTAP Release 8.2.5 7-Mode] handles off-site backup using [http://www.netapp.com/us/products/protection-software/snapvault.html SnapVault].
 
User authentication is provided by one of two Active Directory domains: AD.CL.CAM.AC.UK (old) and DC.CL.CAM.AC.UK (new). Each is served by three Microsoft Active Directory domain servers running under Windows 2008R2 (?). They provide Kerberos KDC and LDAP services. In addition, there are four separate Linux LDAP servers (ldap{,-serv{1,2,3,4}}.cl.cam.ac.uk) that serve passwd, group and automount tables for Linux (and some self-managed macOS) clients.
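The tables served by these Linux LDAP servers can be inspected with an anonymous query, for example to check what uid/gid numbers a user is served with. This is only an illustrative sketch: the search base DN shown here is an assumption and may differ from the actual directory layout.

   $ ldapsearch -x -H ldap://ldap.cl.cam.ac.uk \
       -b 'dc=cl,dc=cam,dc=ac,dc=uk' '(uid=mgk25)' \
       uidNumber gidNumber homeDirectory loginShell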
 
In spring 2019, the department bought a new NetApp filer “elly” running under Data ONTAP 9.5P6, which runs multiple virtual filers, of which “wilbur” took over as the main departmental NFS/SMB file server from “elmer”. While previously “filer.cl.cam.ac.uk” was an alias (DNS CNAME) for “elmer.cl.cam.ac.uk”, it is now a DFS server, performing a similar layer of namespace indirection for SMB clients as automounting does for NFS clients. However, at the moment, “filer.cl.cam.ac.uk” simply passes through transparently the SMB shares exported by “wilbur.cl.cam.ac.uk” (aka “wilbur.dc.cl.cam.ac.uk”).


==Review==


==Namespace management==
''This section still describes the Data ONTAP 7 volume/qtree setup (elmer), which has since been replaced in late 2019 with a simpler filer-side single NFS-exported namespace on Data ONTAP 9 (wilbur). However, the mentioned issues of the client-side differences between the Linux and Windows namespaces and the use of symbolic links on Linux remain.''


NetApp's [http://www.tech.proact.co.uk/netapp/data_ontap_intro.pdf Data ONTAP 7G] operating system requires filer administrators to structure the storage space at several levels. Familiarity with these will help to understand some of the historic design decisions made.


*An '''aggregate''' is a collection of physical discs, made up of one or more RAID-sets. It is the smallest unit that can be physically unplugged and moved intact to a different filer. Both filers have multiple aggregates for two main reasons: restrictions on the maximum size of an aggregate, and separation of discs of different size and age to facilitate their eventual replacement. Discs can be added to an aggregate on the fly, but never removed, so the only way to retire discs is to empty the aggregate.


*A '''volume''' is a major unit of space allocation within an aggregate. Typically, they have reserved space, though it is possible to over-commit if one really wants to. Many properties are bound to a volume, e.g. language(?). Significantly, a volume is the unit of snapshotting – each volume has its own snapshot schedule and retention policy.


* A '''qtree''' ("quota tree") is a magic directory within the root directory of a volume which has a quota attached to it and all its descendants. (This is merely for quota; there is no space reservation associated with a qtree.)
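On the Data ONTAP 7G console, these three levels correspond roughly to the following commands. This is an illustrative sketch only: the names, disc counts and sizes here are made up and do not describe our actual configuration.

   elmer> aggr create aggr1 24                # aggregate from 24 spare discs
   elmer> vol create vol1 aggr1 2t            # 2 TB flexible volume in that aggregate
   elmer> qtree create /vol/vol1/homes-5      # qtree in the volume's root directory

A tree quota is then attached via a line in the filer's /etc/quotas file, e.g.:

   /vol/vol1/homes-5   tree   500G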


When we first got the filer in 2002, the aggregate layer did not exist in Data ONTAP, and a volume was just a collection of discs. This meant that the number of volumes was small and fixed. We would have liked to give each user filespace its own qtree, but the then existing hard limit of 255 qtrees on a volume made this impossible. (Today, Data ONTAP 7G allows up to 4,995 qtrees per volume.) A single qtree for all home directories would have been uncomfortably large for the tape backup system in use at the time, whose unit of backup was the qtree. As a compromise, Martyn Johnson then created eight qtrees called homes-1 to homes-8, all located in volume 1, along with various qtrees for research group filespaces (and various other functions) spread across several volumes. This can be seen in the elmer volumes, which are mounted on lab-managed Linux machines under /a/elmer-vol*:


   $ ls /a/elmer-vol1
   vol1/homes-5/mgk25/


now includes a qtree identifier (e.g., homes-1) that the user cannot infer from the user identifier, and which we therefore would ideally hide from users. Users should instead see simple pathnames such as /homes/maj1. Therefore, a two-stage mapping system between filer pathnames and user-visible pathnames was implemented for NFSv3:


* '''Server-side mapping:''' Firstly, the filer's /etc/exports file (see /a/elmer-vol0/etc/exports in lab-managed Linux machines) uses the -actual option as in "/vol/userfiles/mgk25 -actual=/vol/vol1/homes-5/mgk25" to export each superhome of a user under a "userfiles" alias pathname that lacks the qtree identifier.
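A corresponding fragment of the filer's /etc/exports might look as follows. The pathnames match the example above, but the option list (sec=krb5, rw) is only illustrative and not a copy of our actual configuration:

   /vol/userfiles/mgk25   -actual=/vol/vol1/homes-5/mgk25,sec=krb5,rw
   /vol/userfiles/maj1    -actual=/vol/vol1/homes-1/maj1,sec=krb5,rw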


* '''Client-side mapping:''' Secondly, autofs is then used to individually mount the unix_home directory of each user under a more customary location in the client-side namespace, using mount entries such as "elmer:/vol/userfiles/mgk25/unix_home on /auto/homes/mgk25" or "elmer:/vol/vol3/grp-rb2/ecad on /auto/groups/ecad". Finally, symbolic links such as "/homes -> /auto/homes", "/usr/groups -> /auto/groups", and "/anfs -> /auto/anfs" are used to give access via customary short pathnames.
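The client-side half of this mapping can be sketched with a static autofs configuration. This is a minimal example only (lab-managed machines obtain these maps from LDAP instead), and the mount options shown are assumptions:

   # /etc/auto.master
   /auto/homes   /etc/auto.homes

   # /etc/auto.homes: wildcard map, the lookup key becomes the user id
   *   -fstype=nfs,sec=krb5   elmer:/vol/userfiles/&/unix_home

   # customary short pathname
   $ sudo ln -s /auto/homes /homes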
This solution is historically grown and was motivated by three considerations:


* When new users are created, their home directory should become instantly available on all client machines, which meant that the mapping needed to eliminate the qtree identifier from the pathname had to be performed on the server, as there was no practical way to push out such changes in real-time to all client machines.


* There was already an existing historic automount configuration infrastructure and customary namespace in place on lab-managed Unix clients when the filer was installed, and the aim has been to maintain both.
This arrangement is far from ideal, and causes a number of problems:


* Quota restrictions (as reported on lab-managed Linux machines by "cl-rquota") are reported in terms of qtree identifiers (e.g., homes-5, scr-1, www-1). These have hardly any useful link with the pathnames that the user is accustomed to, making it difficult to guess which subdirectory needs cleaning up if one runs out of quota.


* Some research groups historically had to fragment their group space across multiple qtrees, which complicates understanding quotas and namespace.


* Some users have quota in other users' home directories (namely those sharing the same one of the eight homes-* qtrees), but others do not, which can lead to confusion when users try to collaborate using shared directories in someone's home directory.


* An elaborate system of symbolic links and autofs configuration is needed to create the customary client-side name-space, which requires substantial setup and tweaking on lab-managed machines that was never documented or supported for implementation on private machines.


* The scheme relies on the -actual option in the filer's [https://library.netapp.com/ecmdocs/ECMP1147528/html/man5/na_exports.5.html /etc/exports]. The current -actual setup of exporting /vol/userfiles/ only works for NFSv3, because "NFSv4 clients will not see an exported path using the actual option unless the export path is only one level deep and is not /vol" [https://library.netapp.com/ecmdocs/ECMP1147528/html/man5/na_exports.5.html]. As a result, we currently cannot access /vol/userfiles with NFSv4, but this could be fixed by simply moving these /vol/userfiles mounts out of /vol. NFSv4 promises substantial performance and ease-of-tunneling advantages, in particular for remote access (only one single TCP connection to filer needed, "delegation" to maintain consistency of a local file cache, etc.).
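A hypothetical variant of such an export line, with the alias moved out of /vol as suggested above, might look like this. Whether this form actually satisfies the man page's "one level deep" condition for NFSv4 would need testing; the option list is again illustrative:

   /userfiles/mgk25   -actual=/vol/vol1/homes-5/mgk25,sec=krb5,rw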


* Having the unix_home and windows_home located in the same qtree has required the use of a "mixed" access control policy, where files can flip between POSIX permissions and NTFS-style access control lists, which has caused confusion and some disasters (see section "[[#Access_control|Access control]]" below), and which seems no longer recommended by NetApp.


There are two reasons why fixing this is non-trivial and was not done long ago:


# The autofs configuration required for client-side mapping is disseminated via LDAP from an LDIF master file that is currently generated by a very complex set of historically grown scripts that are considered unmaintainable and not fully understood by any member of sys-admin. This system is overdue for reimplementation.
# The filer has no more efficient means to move files from one qtree into another than simply copying. Therefore, reorganising the way in which home and research-group directories are distributed across qtrees would take several days to complete, which could be disruptive, but is manageable if done in phases. However, more significantly, this copying of all home-directory files would also cause significant "churn" in the backup system, essentially doubling for a ''long'' time the amount of disc space required on the backup system.
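For reference, such a bulk qtree copy would on 7-Mode typically be done with ndmpcopy on the filer console. The pathnames here are purely illustrative:

   elmer> ndmpcopy /vol/vol1/homes-5 /vol/vol2/homes-5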


An additional problem is that the namespace under Linux and under Windows differs substantially, for example:
   /homes/mgk25                        =  \\filer\userfiles\unix_home\mgk25
   /auto/userfiles/mgk25/windows_home  =  \\filer\mgk25
   /usr/groups/ecad                    =  \\filer\groups\ecad
   /anfs/www                           =  \\filer\www


Outline of a possible solution:


* Since the constraints that led to the historic set of qtrees no longer apply, new users (and selected existing power users) should have their home directories assigned to either a single common qtree, or to individual per-user qtrees (to be decided). A single common qtree has the advantage of allowing easy collaboration in one's home directory and the disadvantage that it remains difficult to find where the files are that count towards one's overfull quota. With a per-user qtree, this advantage and disadvantage are swapped. (The current scheme has both aspects as a disadvantage!)


* For most existing users, the existing qtree scheme should remain in place until they form a minority small enough that the backup churn caused by reassigning them as well becomes manageable.


* The way the client-side namespace mapping is configured needs to be rewritten from scratch and carefully documented, in a way that can easily be applied to private Linux machines as well. This could be done by creating a new /filer namespace and running the old and the new system in parallel for a short time.
This changed a few years ago with the implementation of "Kerberized NFS" (RPCSEC_GSS) on Linux. New lab-managed Linux clients now routinely authenticate their users to the NFSv3 server via Kerberos credentials, using the Lab's Active Directory server as the key distribution centre (KDC). As a result, it has become possible to give users of such clients root access without giving them the ability to impersonate others on the filer. The same Kerberos setup is also used by Windows/CIFS clients.


Several users have already connected their home Linux PC to the departmental Kerberos KDC, for the purpose of easy "ssh -K" remote logins with forwarded ticket to Kerberized lab-managed machines. This merely requires tunneling port 88 via ssh or VPN, along with configuring /etc/krb5.conf.
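A minimal /etc/krb5.conf for such a home machine might look as follows. The realm name is taken from the Servers section above, but the KDC host name shown is an assumption and must be replaced with an actual domain controller (or with a local tunnel endpoint, when forwarding port 88 over ssh):

   [libdefaults]
       default_realm = DC.CL.CAM.AC.UK

   [realms]
       DC.CL.CAM.AC.UK = {
           kdc = dc1.dc.cl.cam.ac.uk:88
       }

   [domain_realm]
       .cl.cam.ac.uk = DC.CL.CAM.AC.UK

After that, "kinit crsid" followed by "ssh -K" to a Kerberized lab-managed machine should work without further password prompts.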
 
However, just connecting to the KDC and acquiring a forwardable ticket may not be sufficient for NFS access. At present, lab-managed machines appear to require not only a Kerberos ticket for the logged-in user, but also a host key (in /etc/krb5.keytab), and an associated host entry on the Active Directory KDC. No equivalent is required for Windows CIFS access to the filer, and it remains unclear what that host key is exactly used for (authenticating the rpcbind or mount protocols?) and whether the need for it can be removed or circumvented. It would be desirable if private machines that want to access the filer via NFS did not first have to be registered in the Active Directory. Giving each user a host key for their personal machine would also be possible, but to avoid continuous additional sys-admin workload, this should ideally be automated as a self-service facility (like the MAC registration).
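For the record, the client-side ingredients that currently appear to be needed for a Kerberized NFS mount are roughly the following. This is a sketch; service names and option spellings vary between distributions, and the mounted path is the example from the namespace section:

   # machine credential, obtained when the host is registered in AD
   $ sudo klist -k /etc/krb5.keytab

   # rpc.gssd must be running to turn Kerberos tickets into GSS contexts
   $ sudo systemctl start rpc-gssd

   # the mount itself
   $ sudo mount -t nfs -o vers=3,sec=krb5 \
       elmer:/vol/userfiles/mgk25/unix_home /mnt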


==Access control==


The NetApp filespace can be configured, at qtree level, to implement either "POSIX/NFS", "NTFS/CIFS", or "mixed" access control mechanics. We currently use the "mixed" mechanics, because both a user's unix_home and windows_home are located in the same qtree. In a "mixed qtree" each file is governed by the semantics associated with the last protocol that modified its metadata. Access by the other protocol to that file will display crudely approximated permission settings that often do not reflect the actually enforced rights. This has been a rather mixed blessing and causes a number of problems:
* Neither NFS nor CIFS users have access to an unambiguous and complete access-control state of a file; it is not even possible to say for sure whether a file is currently governed by POSIX or NTFS semantics.
* There was once a proprietary Windows tool to give a full view, but that seems no longer supported.
* Some users have caused substantial damage to the permission settings of their POSIX files by accidentally using the Windows ACL inheritance settings, which cause the client to recursively change the permission settings of an entire file tree.


It now seems much preferable for each user to have a unix_home in a qtree with POSIX-style access semantics and a windows_home in a qtree with NTFS-style ACLs, for a decision to be made for each group directory about what the primary accessing platform will be, with the access-control semantics fixed accordingly, and for mixed qtrees to be avoided entirely. They seemed a good idea at the time.


==LDAP==


The department operates an LDAP server that serves several administrative functions on lab-managed Linux machines. In particular it also provides
 
*the distributed user and group databases (augmenting /etc/passwd, /etc/group)
*automount maps and entries for client-side NFS namespace management
 
Details of the setup are explained in /usr/groups/admin/ldap-server/README and the full content of the LDAP database can be seen in /usr/groups/admin/ldap-server/ldif/master.ldif.
 
===User and group database===
 
In order to translate numeric user and group IDs into meaningful names, make the "~userid" shell expansion work, etc., the "getent passwd" user database of the local machine needs to be linked to the departmental LDAP server that has information about all users. This can be easily done (configure /etc/nsswitch.conf and /etc/ldap.conf).
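A minimal configuration sketch for these two files follows. The server name is from the Servers section above, but the search base DN is an assumption and must be checked against the actual directory layout:

   # /etc/nsswitch.conf (relevant lines)
   passwd:     files ldap
   group:      files ldap
   automount:  files ldap

   # /etc/ldap.conf
   uri   ldap://ldap.cl.cam.ac.uk/
   base  dc=cl,dc=cam,dc=ac,dc=uk

Afterwards, "getent passwd mgk25" should return the LDAP-served entry.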


However, there is still a minor problem/risk involved with using the departmental LDAP server (which affects not only home PCs but also lab-managed machines!):


The departmental LDAP server currently still serves a lot of numeric user and group IDs below 1000, which are actually reserved for use by the local operating system installation (and there are particularly many collisions with FreeBSD and Ubuntu Linux system users and groups).
 
The only solution is to reassign the remaining user and group IDs to higher numbers, such that the departmental LDAP server no longer serves any user and group IDs below 1000. (Not serving any below ~1010 might be a better idea, because a home PC might already have uids 1000, 1001, 1002 assigned to local users, such as family members.)


In 2011/12, there was a [http://www.cl.cam.ac.uk/news/2011/10/unix-group-cleanup/ campaign to reassign group identifiers below 500], which fixed many of the collisions that had previously occurred with Ubuntu Linux system groups. Unfortunately, these were merely moved above 500, where they now collide with either FreeBSD groups or personal groups associated with CL pseudo users.


While Linux distributions and OS X generally do not use UIDs/GIDs above 499 (and in practice rarely any above ~130), this is not the case with FreeBSD, where the range 0-999 is reserved for that purpose and is densely populated in [http://svnweb.freebsd.org/ports/head/UIDs?view=co /etc/passwd] and [http://svnweb.freebsd.org/ports/head/GIDs?view=co /etc/group].
 
Another problem that needs to be addressed is that the ranges occupied by users and groups must not overlap, because it is desirable to allocate for each user an associated personal group of the same name and number. At present, a number of pseudo-users in the 5xx range collide with groups and therefore the pseudo-user cannot have a personal group associated with it.
 
The following Unix Perl scripts can be used to find LDAP UID/GID collisions:
 
  /homes/mgk25/proj/filer/ldap_uids.pl
  /homes/mgk25/proj/filer/ldap_gids.pl
 
On 2015-07-09, we finally agreed on a new [[UID/GID allocation]] policy, which is now applied for new users and groups, but many existing entries still have to be fixed.
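To illustrate the kind of check involved, the following self-contained shell sketch flags entries whose numeric ID falls into the reserved range. The file name, entries and IDs are invented for illustration; the real checks are the Perl scripts listed above, run against a dump of the LDAP tables:

```shell
# Flag entries whose numeric UID (field 3) lies in the range reserved
# for the local operating system (here: below 1000).
# ldap-passwd.txt stands in for a passwd-format dump of the LDAP table;
# the entries and UIDs below are invented for illustration.
cat > ldap-passwd.txt <<'EOF'
mgk25:x:1597:1597:Markus Kuhn:/homes/mgk25:/bin/bash
oldsvc:x:131:131:legacy pseudo-user:/nonexistent:/bin/false
maj1:x:1289:1289:Martyn Johnson:/homes/maj1:/bin/bash
EOF
awk -F: '$3 < 1000 { print $1, $3 }' ldap-passwd.txt
# prints: oldsvc 131
```

Running the same filter over a group-table dump catches low GIDs in the same way.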


== Summary ==
===Why does Windows/CIFS home access "just work" and Linux/NFS not?===

Users of home PCs running Windows have for a long time been able to access departmental filespace simply by activating a [http://www.cl.cam.ac.uk/local/sys/microsoft/vpn/ VPN connection to the Computer Laboratory's Cisco PPTP server], and typing "\\filer\..." into the "Run" or "Search" box of their desktop. There is no need to be integrated into the domain; Windows will simply show a pop-up box and ask for the departmental Kerberos password before the files become visible. OS X can also use CIFS, but each share has to be mounted manually, which is inconvenient.

Why are things not equally simple under Linux? There are several reasons:

* Before summer 2015 we had no VPN service supported for Linux. The commonly-used alternative, OpenSSH, does not support the forwarding of the NFSv3 protocol, where the port number used by the mount protocol is dynamically negotiated via rpcbind.
** Solution 1: Enable NFSv4, which has no separate mount protocol and is easy to tunnel over ssh by forwarding TCP port 2049.
*** Prerequisite 1: AUTH_SYS must be disabled first for all generally accessible clients, because a NetApp-specific security vulnerability of AUTH_SYS is particularly trivial to exploit via NFSv4.
*** Prerequisite 2: All remaining AUTH_SYS clients (for cron, web server, etc.) must be on a dedicated server VLAN (already done?) and the filer be protected from spoofing of the associated source IP addresses (already done?).
** Solution 2: Use the new UIS-provided CL VPN.
* "Kerberized NFS" currently requires setting up a Kerberos host key for the machine, in addition to the Kerberos password required from the users.
** Solution: Investigate what the host key is actually required for, and whether we can eliminate that need. (Windows seems to require no equivalent.)
* Once NFS works, the user will be faced with only a half-mapped namespace very different from what is customary on lab-managed machines.
** Solution: use autofs, but on home computers [https://www.cl.cam.ac.uk/~mgk25/offline-ldap/ install autofs tables as static files], rather than retrieving them via LDAP.
* To convert numeric user/group IDs to user-friendly names, access to a passwd/group database is needed (e.g., via LDAP). A minor obstacle/risk is that existing historic LDAP entries still collide with numbers used by operating system distributions (e.g., Ubuntu Linux, FreeBSD, OS X).
** Solution: complete the implementation of the new [[UID/GID allocation]] policy in LDAP.
** Solution: on Linux machines outside the CL LAN, [https://www.cl.cam.ac.uk/~mgk25/offline-ldap/ install CL passwd/group files into /var/lib/extrausers/].
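In outline, such a static autofs configuration might look like this (the map file locations and mount options shown here are illustrative assumptions; the linked page has the maintained recipe):

```
# /etc/auto.master -- static replacement for the LDAP-served master map
/auto/homes  /etc/auto.homes  nosuid

# /etc/auto.homes -- wildcard map; "&" is replaced by the looked-up key,
# so accessing /auto/homes/mgk25 mounts filer:/vol/userfiles/mgk25/unix_home
*  -fstype=nfs,sec=krb5  filer.cl.cam.ac.uk:/vol/userfiles/&/unix_home
```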


===Why can't we just use CIFS from all platforms===
It is possible to mount filesystems via CIFS from Linux via [http://linux-cifs.samba.org/ Linux CIFS VFS] or the older (and no longer maintained?) [http://www.samba.org/samba/smbfs/ smbfs].

However, using CIFS from Linux substantially increases the risks associated with the existing "mixed" access-control mechanism: each CIFS write access can destroy POSIX access-control information. (What about using CIFS from Linux in POSIX-style qtrees?)


As a result, CIFS seems hardly more attractive than the existing options of using [http://fuse.sourceforge.net/sshfs.html sshfs] or [http://www.cl.cam.ac.uk/local/sys/microsoft/webdav/#linux webdav].

Samba and OS X implement [https://www.samba.org/samba/CIFS_POSIX_extensions.html POSIX extensions to CIFS], but is that supported by our NetApp filer?
== Implementation progress ==
See also: [https://www.cl.cam.ac.uk/~mgk25/only-cl/project-anoia.pdf Project Anoia]
=== Things already achieved ===
* 2014-07-22 (mgk): understood and documented why the existing NFSv3/LDAP setup does not work under OS X 10.9 [https://www.cl.cam.ac.uk/local/sys/mac/file_access/#nfs][http://www.wiki.cl.cam.ac.uk/clwiki/SysInfo/MacNFS]
* 2014-09-13 (pb): Implemented cl-tgt to help with Kerberized access from unattended processes (cron jobs, compute cluster, etc.)
* 2014-11 (pb): Phased out AUTH_SYS NFS access from all clients to which non-sys-admins have access.
* 2014-11-10 (maj): Switched on NFSv4 again (options nfs.v4.enable on)
* 2014-11-11 (maj): set idmapd domain correctly on elmer to make name translation work for NFSv4
* 2014-11-12 (maj): Understood why /vol/userfiles fails on NFSv4
* 2015-07: new CL VPN service enables NFSv3 access to Linux home users
* 2015-07-09 (mgk): new [[UID/GID allocation]] policy agreed and implemented (for new groups and pseudo-users)
* 2016-06-10 (mgk): published [https://www.cl.cam.ac.uk/~mgk25/offline-ldap/ NFS filer access from laptops and home computers] with instructions for automounting without LDAP
* 2016-07-07 (mgk): published [https://www.cl.cam.ac.uk/~mgk25/osx-kerberos-nfs.html explanation for why under OS X 10.10 and OS X 10.11 Kerberized NFS is incompatible with Active Directory]
* 2016-09-20 (Apple): macOS Sierra 10.12 now supports AES encryption type in Kerberized NFS (but our AD.CL.CAM.AC.UK domain does not yet, as it still runs at domain functional level Windows Server 2003)
* 2017-03-19 (mgk): published [https://github.com/mgkuhn/ugid-scan ugid-scan/find] tool to help migrating UIDs
* 2017-03-21 (mgk): wrote recipe for [[Moving the UID/GID of a user]]
* 2017-06-06 (gt19): AD.CL.CAM.AC.UK domain functional level raised to Windows Server 2008 to enable Kerberos AES encryption types
* 2017-07-12 (maj1): upgraded echo/enid to 8.2.4P6 7-Mode, reports AD.CL.CAM.AC.UK domain functional level as 2008 R2, however [https://community.netapp.com/t5/Network-Storage-Protocols-Discussions/Kerberized-NFS-access-from-macOS-Sierra-to-8-2-4P6-7-Mode/m-p/132794#M8819 echo still does not request AES tickets]
* 2018-01-09 (maj1): upgraded all four NetApp controllers to 8.2.5 7-Mode
* 2019-08 (maj1): redesigned the /usr/groups/admin/autofs-maps/generate_ldif script that generates the automount LDIF tables for the Linux LDAP servers. This fixed, among other things:
** presence of many obsolete/counterproductive NFS options, which should be either dropped or moved into /etc/default/autofs or /etc/nfsmount.conf on clients
** replaced ldap:// URLs in auto.master with just the filename (for OS X and LDAPS compatibility)
* 2019-10 (maj1): filespace migrated to new NetApp filer “wilbur” running DataONTAP 9 (in the new DC.CL.CAM.AC.UK domain). This finally enabled the AES encryption types for Kerberized NFS. Therefore, [https://www.cl.cam.ac.uk/local/sys/filesystems/mac/ Kerberized NFSv3 access from macOS] finally works. It also added support for NFSv4.1 on the filer and the simpler auto-mount tables for userfiles now also work with NFSv4. However, remote access from Linux still appears to be substantially faster with NFSv3. This also ended the practice of splitting home directories across eight client-visible volumes.
=== Things yet to do ===
==== Urgent ====
* (re)move uid/gid entries <1005 from departmental LDAP tables and [[Moving the UID/GID of a user|migrate these on filer]]
* resolve LDAP GID collisions 507, 3600-3610
* Enable authentication for Linux LDAP servers, such that laptops (especially with macOS) can be securely configured to use it. This may be done via Kerberos, and may be as simple as adding an LDAP ServicePrincipalName and keytab (RT#115877). (Alternative: get and install TLS certificate for LDAPS servers.)
==== Important ====
* review historic departmental filer namespace, in particular to
** phase out /usr/groups, which has not been available on macOS since “El Capitan” (move to /groups, /auto/groups, /filer/groups, or something else?)
** review/remove the need for client-side symbolic links entirely (automounting under /auto was originally introduced because earlier automounting directly under / caused problems: Linux clients used to hang at “ls /” when one of the mounted NFS servers hung; current behaviour ought to be retested; symbolic links were introduced to hide from users the move of the automounts from / to /auto)
** align NFS and CIFS pathnames
==== Worth considering ====
* Fix known bugs in LDIF generating code:
** allows gid collisions (don't use "cat" to merge tables!)
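As a sketch of the safer merge step (file names and entries invented for illustration), the duplicate check that a plain "cat" of the tables lacks can be as simple as:

```shell
# Merge two group-table fragments and fail on duplicate numeric GIDs
# (field 3), rather than letting "cat" pass collisions through silently.
# The file names and entries are invented for illustration.
cat > groups-a.txt <<'EOF'
ecad:x:2001:
rainbow:x:2002:
EOF
cat > groups-b.txt <<'EOF'
theory:x:2003:
clash:x:2001:
EOF
sort -t: -k3,3n groups-a.txt groups-b.txt |
awk -F: 'seen[$3]++ { print "duplicate GID " $3; bad=1 }
         END { exit bad }'
# prints: duplicate GID 2001   (and exits with status 1)
```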
== References ==
* [https://mysupport.netapp.com/documentation/docweb/index.html?productID=61539 Data ONTAP 8.1.2 7-Mode Documentation]

Latest revision as of 12:02, 6 October 2020


The Computer Laboratory has operated a centrally provided NFS file store for Unix/Linux systems continuously since the mid 1980s. This service hosts the commonly used home directories and working directories of most users, and is widely used by research groups to collaborate, via group directories for shared project files and software. It also forms the interface to the departmental mail and web servers.

Servers

Until late 2019, the departmental NFS service was provided by a NetApp FAS3220 storage server "elmer", running under Data ONTAP Release 8.2.5 7-Mode. This server also provided access to the same filespace to Windows clients via the SMB/CIFS protocol, and provided a mechanism for the coexistence of files governed by either POSIX permission bits or Windows access control lists in the same directory. Elmer also hosted disk images for virtual machines, which were accessed predominantly over NFS, with some legacy use of the block-level iSCSI protocol. An additional FAS2040-R5 server "echo" (SN: 200000186549) running under Data ONTAP Release 8.2.5 7-Mode handles off-site backup using SnapVault.

User authentication is provided by one of two Active Directory domains: AD.CL.CAM.AC.UK (old) and DC.CL.CAM.AC.UK (new). Each is served by three Microsoft Active Directory domain servers running under Windows 2008R2 (?). They provide Kerberos KDC and LDAP services. In addition, there are four separate Linux LDAP servers (ldap{,-serv{1,2,3,4}}.cl.cam.ac.uk) that serve passwd, group and automount tables for Linux (and some self-managed macOS) clients.

In spring 2019, the department bought a new NetApp filer “elly” running under DataONTAP 9.5P6, which runs multiple virtual filers, of which “wilbur” took over as the main departmental NFS/SMB file server from “elmer”. While previously “filer.cl.cam.ac.uk” was an alias (DNS CNAME) for “elmer.cl.cam.ac.uk”, it is now a DFS server, performing a similar layer of namespace indirection for SMB clients as automounting does for NFS clients. However, at the moment, “filer.cl.cam.ac.uk” simply passes through transparently the SMB shares exported by “wilbur.cl.cam.ac.uk” (aka “wilbur.dc.cl.cam.ac.uk”).

Review

The Computer Laboratory's IT Advisory Panel initiated on 28 October 2011 an ad-hoc working group to review the provision of departmental file space, headed by Markus Kuhn. The initial focus of this review will be the configuration and use of the existing filer "elmer", with a particular view on identifying and eliminating

  • obstacles that currently prevent remote NFS access by private Linux home computers (something that has long been available to Windows users);
  • problems in the way NFS and CIFS access to the same files is handled.

This project should also provide an opportunity to rid the configuration of the filer from historic baggage and to streamline and simplify its use by departmentally administered Linux machines. The working group may later extend its remit and welcomes suggestions to that end.

The initial fact-finding phase of the review reported here is being conducted by Markus Kuhn and Martyn Johnson and focussed so far on namespace management, authentication, and access from outside the department.

Namespace management

This section still describes the Data ONTAP 7 volume/qtree setup (elmer), which was replaced in late 2019 with a simpler filer-side single NFS-exported namespace on Data ONTAP 9 (wilbur). However, the mentioned issues of the client-side differences between the Linux and Windows namespaces and the use of symbolic links on Linux remain.

NetApp's Data ONTAP 7G operating system requires filer administrators to structure the storage space at several levels. Familiarity with these will help to understand some of the historic design decisions made.

  • An aggregate is a collection of physical discs, made up of one or more RAID-sets. It is the smallest unit that can be physically unplugged and moved intact to a different filer. Both filers have multiple aggregates for two main reasons: restrictions on the maximum size of an aggregate, and separation of discs of different size and age to facilitate their eventual replacement. Discs can be added to an aggregate on the fly, but never removed, so the only way to retire discs is to empty the aggregate.
  • A volume is a major unit of space allocation within an aggregate. Typically, they have reserved space, though it is possible to over-commit if one really wants to. Many properties are bound to a volume, e.g. language(?). Significantly, a volume is the unit of snapshotting – each volume has its own snapshot schedule and retention policy.
  • A qtree ("quota tree") is a magic directory within the root directory of a volume which has a quota attached to it and all its descendants. (This is merely for quota; there is no space reservation associated with a qtree.)
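For orientation, on a 7-Mode filer these three layers correspond roughly to the following administrator commands (a sketch from memory, with names and sizes invented; options omitted):

```
aggr create aggr1 24             # new aggregate from 24 spare discs
vol create vol1 aggr1 2t         # 2 TB flexible volume inside it
qtree create /vol/vol1/homes-1   # quota tree in the volume's root
```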

When we first got the filer in 2002, the aggregate layer did not exist in Data ONTAP, and a volume was just a collection of discs. This meant that the number of volumes was small and fixed. We would have liked to give each user filespace its own qtree, but the then existing hard limit of 255 qtrees on a volume made this impossible. (Today, Data ONTAP 7G allows up to 4,995 qtrees per volume.) A single qtree for all home directories would have been uncomfortably large for the tape backup system in use at the time, whose unit of backup was the qtree. As a compromise, Martyn Johnson then created eight qtrees called homes-1 to homes-8, all located in volume 1, along with various qtrees for research group filespaces (and various other functions) spread across several volumes. This can be seen in the elmer volumes, which are mounted on lab-managed Linux machines under /a/elmer-vol*:

 $ ls /a/elmer-vol1
 grp-cb1  grp-op1  grp-se1  grp-th1  homes-2  homes-5  homes-8  sys-rt
 grp-da1  grp-pr1  grp-sr1  grp-th2  homes-3  homes-6  sys-1
 grp-nl1  grp-rb1  grp-sr9  homes-1  homes-4  homes-7  sys-pk1
 $ ls /a/elmer-vol3
 grp-dt1  grp-nl4  grp-rb4   grp-sr3  grp-sr7  sys-lg1  sys-ww1
 grp-dt2  grp-nl9  grp-rb9   grp-sr4  grp-sr8  sys-li1
 grp-nl2  grp-rb2  grp-sr11  grp-sr5  grp-th9  sys-li9
 grp-nl3  grp-rb3  grp-sr2   grp-sr6  sys-acs  sys-pk2
 $ ls /a/elmer-vol4
 misc-clbib  misc-repl  sys-bmc  www-1  www-2
 $ ls /a/elmer-vol5
 WIN32Repository  grp-sr10  grp-te1     scr-1  scr-3  scr-5
 grp-rb5          grp-sr12  misc-arch1  scr-2  scr-4  www-3
 $ ls /a/elmer-vol6
 MSprovision  grp-nl8  grp-rb6  sys-ct  sys-rt2  www-4
 $ ls /a/elmer-vol8
 grp-ai1  grp-dt8  grp-dt9  grp-nl7
 $ ls /a/elmer-vol9
 ah433-nosnap  iscsi-nosnap1  misc-nosnap1

As a result of this compromise, the pathname of a (super)home directory on the filer, such as

 vol1/homes-1/maj1/
 vol1/homes-5/mgk25/

now includes a qtree identifier (e.g., homes-1) that the user cannot infer from the user identifier, and which we therefore would ideally hide from users. Users should instead see simple pathnames such as /homes/maj1. Therefore, a two-stage mapping system between filer pathnames and user-visible pathnames was implemented for NFSv3:

  • Server-side mapping: Firstly, the filer's /etc/exports file (see /a/elmer-vol0/etc/exports in lab-managed Linux machines) uses the -actual option as in "/vol/userfiles/mgk25 -actual=/vol/vol1/homes-5/mgk25" to export each superhome of a user under a "userfiles" alias pathname that lacks the qtree identifier.
  • Client-side mapping: Secondly, autofs is then used to individually mount the unix_home directory of each user under a more customary location in the client-side namespace, using mount entries such as "elmer:/vol/userfiles/mgk25/unix_home on /auto/homes/mgk25" or "elmer:/vol/vol3/grp-rb2/ecad on /auto/groups/ecad". Finally, symbolic links such as "/homes -> /auto/homes", "/usr/groups -> /auto/groups", and "/anfs -> /auto/anfs" give access via customary short pathnames.
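Put together, the mapping for one user looks roughly like this (the export and mount entries are the ones quoted above; the layout of the sketch is illustrative):

```
# Server-side: filer's /etc/exports (the -actual option hides the qtree)
/vol/userfiles/mgk25  -actual=/vol/vol1/homes-5/mgk25

# Client-side: autofs mounts the unix_home under /auto/homes ...
elmer:/vol/userfiles/mgk25/unix_home  ->  /auto/homes/mgk25

# ... and a symbolic link provides the customary short pathname
/homes -> /auto/homes        (so users see /homes/mgk25)
```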

This solution is historically grown and was motivated by three considerations:

  • When new users are created, their home directory should become instantly available on all client machines, which meant that the mapping needed to eliminate the qtree identifier from the pathname had to be performed on the server, as there was no practical way to push out such changes in real-time to all client machines.
  • There was already an existing historic automount configuration infrastructure and customary namespace in place on lab-managed Unix clients when the filer was installed, and the aim has been to maintain both.
  • The filer's combined NFS/CIFS capability encouraged the creation of a single common home directory for each user, but the idiosyncrasies of how both Unix and Windows use a home directory made it necessary to provide each user with a separate home directory for each platform. This, along with the desire to keep all of one user's data inside a single directory led to a "superhome" for each user, each containing a "unix_home" and "windows_home" subdirectory, which is then mounted to the respective customary location on each platform.

This arrangement is far from ideal, and causes a number of problems:

  • Quota restrictions (as reported on lab-managed Linux machines by "cl-rquota") are reported in terms of qtree identifiers (e.g., homes-5, scr-1, www-1). These have hardly any useful link with the pathnames that the user is accustomed to, making it difficult to guess which subdirectory needs cleaning up if one runs out of quota.
  • Some research groups historically had to fragment their group space across multiple qtrees, which complicates understanding quotas and namespace.
  • Some users have quota in other users' home directories (namely those sharing the same one of the eight homes-* qtrees), but others do not, which can lead to confusion when users try to collaborate using shared directories in someone's home directory.
  • An elaborate system of symbolic links and autofs configuration is needed to create the customary client-side name-space, which requires substantial setup and tweaking on lab-managed machines that was never documented or supported for implementation on private machines.
  • The scheme relies on the -actual option in the filer's /etc/exports. The current -actual setup of exporting /vol/userfiles/ only works for NFSv3, because "NFSv4 clients will not see an exported path using the actual option unless the export path is only one level deep and is not /vol" [1]. As a result, we currently cannot access /vol/userfiles with NFSv4, but this could be fixed by simply moving these /vol/userfiles mounts out of /vol. NFSv4 promises substantial performance and ease-of-tunneling advantages, in particular for remote access (only one single TCP connection to filer needed, "delegation" to maintain consistency of a local file cache, etc.).
  • Having the unix_home and windows_home located in the same qtree has required the use of a "mixed" access control policy, where files can flip between POSIX permissions and NTFS-style access control lists, which has caused confusion and some disasters (see section "Access control" below), and which seems no longer recommended by NetApp.

There are two reasons why fixing this is non-trivial and hasn't been done long ago:

  1. The autofs configuration required for client-side mapping is disseminated via LDAP from an LDIF master file that is currently generated by a very complex set of historically grown scripts that are considered unmaintainable and not fully understood by any member of sys-admin. This system is overdue for reimplementation.
  2. The filer has no more efficient means to move files from one qtree into another than simply copying. Therefore, reorganising the way in which home and research-group directories are distributed across qtrees would take several days to complete, which could be disruptive, but is manageable if done in phases. More significantly, however, this copying of all home-directory files would also cause significant "churn" in the backup system, essentially doubling for a long time the amount of disc space required on the backup system.

An additional problem is that the namespace under Linux and under Windows differs substantially, for example

 /homes/mgk25                        =  \\filer\userfiles\unix_home\mgk25
 /auto/userfiles/mgk25/windows_home  =  \\filer\mgk25
 /usr/groups/ecad                    =  \\filer\groups\ecad
 /anfs/www                           =  \\filer\www

This is a significant nuisance and source of confusion that hinders communication between Linux and Windows users. The Linux namespace was designed to be backwards compatible with existing and historically grown pre-NetApp practice. The Windows filespace was constrained by the fact that Windows "shares" allow only for a flat namespace. This has led to different solutions. Arguably, the Windows namespace, because it is flat, newer, and less influenced by historic practice, is far more desirable. The main reason that the Windows namespace has not yet been implemented as well under Linux (/filer/...) is to keep the /filer prefix free for a comprehensive redesign of the Linux namespace that not only addresses the incompatibility with Windows share names, but also addresses many of the other issues raised above.

Outline of a possible solution:

  • Since the constraints that led to the historic set of qtrees no longer apply, new users (and selected existing power users) should have their home directories assigned to either a single common qtree, or to individual per-user qtrees (to be decided). A single common qtree has the advantage of allowing easy collaboration in one's home directory and the disadvantage that it remains difficult to find where files are that count towards one's overfull quota. With a per-user qtree, this advantage and disadvantage are swapped. (The current scheme has both aspects as a disadvantage!)
  • For most existing users, the existing qtree scheme should remain in place until they are a minority small enough that the backup churn caused by reassigning them as well becomes manageable.
  • The way the client-side namespace mapping is configured needs to be rewritten from scratch and carefully documented, in a way that can easily be applied to private Linux machines as well. This could be done by creating a new /filer namespace and running the old and the new system in parallel for a short time.
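As an illustration, such a redesigned /filer namespace could be prototyped on a client with a single NFS mount plus local bind mounts, assuming (hypothetically) that the filer exported one suitably arranged tree; all paths and options here are illustrative:

```
# /etc/fstab excerpt (illustrative): one mount from the filer, then
# bind mounts to recreate the customary short pathnames
filer.cl.cam.ac.uk:/  /filer       nfs4  sec=krb5,nosuid  0  0
/filer/homes          /homes       none  bind             0  0
/filer/groups         /usr/groups  none  bind             0  0
```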

Authentication / Kerberos

For many years, NFS access to the departmental file servers was restricted to lab-managed machines, because of the inherently insecure AUTH_SYS authentication scheme used, whereby the file server simply trusts the numeric user ID communicated by the client, as long as the NFS packet originated from a permitted IP address and from a source port below 1024, which is reserved for the superuser.

This changed a few years ago with the implementation of "Kerberized NFS" (RPCSEC_GSS) on Linux. New lab-managed Linux clients now routinely authenticate their users to the NFSv3 server via Kerberos credentials, using the Lab's Active Directory server as the key distribution centre (KDC). As a result, it has become possible to give users of such clients root access without giving them the ability to impersonate others on the filer. The same Kerberos setup is also used by Windows/CIFS clients.

Several users have already connected their home Linux PC to the departmental Kerberos KDC, for the purpose of easy "ssh -K" remote logins with forwarded ticket to Kerberized lab-managed machines. This merely requires tunneling port 88 via ssh or VPN, along with configuring /etc/krb5.conf.
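For example, a client that reaches the KDC through an ssh-forwarded port might use an /etc/krb5.conf along these lines (the realm name is from the text; pointing the kdc entry at a locally forwarded port, and the port number used, are assumptions):

```
# Illustrative /etc/krb5.conf fragment; assumes port 88 of an AD domain
# controller has been forwarded to local port 8888, e.g. via
#   ssh -L 8888:<kdc-host>:88 <lab-machine>
[libdefaults]
    default_realm = AD.CL.CAM.AC.UK

[realms]
    AD.CL.CAM.AC.UK = {
        kdc = localhost:8888
    }
```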

However, just connecting to the KDC and acquiring a forwardable ticket may not be sufficient for NFS access. At present, lab-managed machines appear to require not only a Kerberos ticket for the logged-in user, but also a host key (in /etc/krb5.keytab), and an associated host entry on the Active Directory KDC. No equivalent is required for Windows CIFS access to the filer, and it remains unclear what that host key is exactly used for (authenticating the rpcbind or mount protocols?) and whether the need for it can be removed or circumvented. It would be desirable if each private machine that wants to access the filer via NFS did not first have to be registered in the Active Directory. Giving each user a host key for their personal machine would also be possible, but to avoid continuous additional sys-admin workload, this should ideally be automated as a self-service facility (like the MAC registration).

Access control

The NetApp filespace can be configured, at qtree level, to implement either "POSIX/NFS", "NTFS/CIFS", or "mixed" access control mechanics. We currently use the "mixed" mechanics, because both a user's unix_home and windows_home are located in the same qtree. In a "mixed" qtree, each file is governed by the semantics associated with the last protocol that modified its metadata. Access by the other protocol to that file will display crudely approximated permission settings that often do not reflect the actually enforced rights. This has been a rather mixed blessing and causes a number of problems:

  • Neither NFS nor CIFS users have access to an unambiguous and complete access-control state of a file; it is not even possible to say for sure whether a file is currently governed by POSIX or NTFS semantics.
  • There was once a proprietary Windows tool to give a full view, but that seems no longer supported.
  • There has been a lot of confusion caused by crudely approximated displayed access permissions.
  • Some users have accidentally caused substantial damage to the permission settings of their POSIX files by using the Windows ACL inheritance settings, which cause the client to recursively change the permission settings of an entire file tree.

It now seems much preferable that each user had a unix_home in a qtree with POSIX-style access semantics and a windows_home in a qtree with NTFS-style ACLs, that for each group directory a decision is made as to what the primary accessing platform will be, with access-control semantics fixed accordingly, and that mixed qtrees are avoided entirely. They seemed a good idea at the time.

LDAP

The department operates an LDAP server that serves several administrative functions on lab-managed Linux machines. In particular it also provides

  • the distributed user and group databases (augmenting /etc/passwd, /etc/group)
  • automount maps and entries for client-side NFS namespace management

Details of the setup are explained in /usr/groups/admin/ldap-server/README and the full content of the LDAP database can be seen in /usr/groups/admin/ldap-server/ldif/master.ldif.

User and group database

In order to translate numeric user and group IDs into meaningful names, make the "~userid" shell expansion work, etc., the "getent passwd" user database of the local machine needs to be linked to the departmental LDAP server, which has information about all users. This can easily be done by configuring /etc/nsswitch.conf and /etc/ldap.conf.
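A minimal client-side sketch of those two files (the server URI and search base shown here are placeholders, not the department's actual values):

```
# /etc/nsswitch.conf -- consult LDAP after local files
passwd: files ldap
group:  files ldap

# /etc/ldap.conf -- hypothetical URI and search base
uri  ldap://ldap-server.example.cl.cam.ac.uk/
base dc=cl,dc=cam,dc=ac,dc=uk
```

With this in place, "getent passwd" merges local accounts with those served by LDAP, which is also why the ID-range collisions described below matter.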

However, there is still a minor problem/risk involved with using the departmental LDAP server (which affects not only home PCs but also lab-managed machines!):

The departmental LDAP server currently still serves many numeric user and group IDs below 1000, which are actually reserved for use by the local operating system installation (collisions are particularly common with FreeBSD and Ubuntu Linux system users and groups).

The only solution is to reassign the remaining user and group IDs to higher numbers, such that the departmental LDAP server no longer serves any user and group IDs below 1000. (Not serving any below ~1010 might be a better idea, because a home PC might already have uids 1000, 1001, 1002 assigned to local users, such as family members.)

In 2011/12, there was a campaign to reassign group identifiers below 500, which fixed many of the collisions that had previously occurred with Ubuntu Linux system groups. Unfortunately, these were merely moved above 500, where they now collide with either FreeBSD groups or personal groups associated with CL pseudo-users.

While Linux distributions and OS X generally do not use UIDs/GIDs above 499 (and in practice rarely any above ~130), this is not the case with FreeBSD, where the entire range 0-999 is reserved for system use and is densely populated in /etc/passwd and /etc/group.

Another problem that needs to be addressed is that the ranges occupied by users and groups must not overlap, because it is desirable to allocate for each user an associated personal group of the same name and number. At present, a number of pseudo-users in the 5xx range collide with groups, and therefore these pseudo-users cannot have a personal group associated with them.

The following Unix Perl scripts can be used to find LDAP UID/GID collisions:

 /homes/mgk25/proj/filer/ldap_uids.pl
 /homes/mgk25/proj/filer/ldap_gids.pl
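For machines without access to those scripts, a rough equivalent check can be sketched with getent and awk; the 1005 cutoff follows the range discussion above, and the helper name low_ids is invented here for illustration:

```shell
#!/bin/sh
# Print the UID and name of passwd-format entries below a cutoff,
# i.e. IDs that risk colliding with local system accounts.
low_ids() {
    cutoff="${1:-1005}"
    awk -F: -v c="$cutoff" '$3 < c { print $3, $1 }'
}

# Usage on a live system:  getent passwd | low_ids 1005
# Self-contained example with two sample entries:
printf 'root:x:0:0:root:/root:/bin/sh\nalice:x:1500:1500::/home/alice:/bin/sh\n' | low_ids 1005
# prints: 0 root
```

The same filter applied to "getent group" (field 3 is also the numeric ID there) finds group-side collisions.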

On 2015-07-09, we finally agreed on a new UID/GID allocation policy, which is now applied for new users and groups, but many existing entries still have to be fixed.

Summary

Why does Windows/CIFS home access "just work" and Linux/NFS not?

Users of home PCs running Windows have for a long time been able to access departmental filespace simply by activating a VPN connection to the Computer Laboratory's Cisco PPTP server and typing "\\filer\..." into the "Run" or "Search" box of their desktop. There is no need to be integrated into the domain; Windows will simply show a pop-up box and ask for the departmental Kerberos password before the files become visible. OS X can also use CIFS, but each share has to be mounted manually, which is inconvenient.

Why are things not equally simple under Linux? There are several reasons:

  • Before summer 2015 we had no supported VPN service for Linux. The commonly-used alternative, OpenSSH, does not support the forwarding of the NFSv3 protocol, where the port number used by the mount protocol is dynamically negotiated via rpcbind.
    • Solution 1: Enable NFSv4, which has no separate mount protocol and is easy to tunnel over ssh by forwarding tcp port 2049.
      • Prerequisite 1: AUTH_SYS must be disabled first for all generally accessible clients, because a NetApp-specific security vulnerability of AUTH_SYS is particularly trivial to exploit via NFSv4.
      • Prerequisite 2: All remaining AUTH_SYS clients (for cron, web server, etc.) must be on a dedicated server VLAN (already done?) and the filer be protected from source-IP spoofing of associated source IP addresses (already done?).
    • Solution 2: Using the new UIS-provided CL VPN
  • Once NFS works, the user will be faced with only a half-mapped namespace very different from what is customary on lab-managed machines.
  • To convert numeric user/group IDs to user-friendly names, access to a passwd/groups database is needed (e.g., via LDAP). A minor obstacle/risk is that existing historic LDAP entries still collide with numbers used by operating system distributions (e.g., Ubuntu Linux, FreeBSD, OSX).
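Solution 1 above (tunnelling NFSv4 over ssh) might look roughly like this; the gateway hostname and user name are placeholders, and mounting the server root relies on NFSv4's single pseudo-filesystem namespace:

```
# Forward local TCP port 2049 to the filer's NFSv4 port via an ssh gateway
ssh -f -N -L 2049:elmer:2049 crsid@ssh-gateway.example.cl.cam.ac.uk

# Mount through the tunnel; NFSv4 needs no separate mount/rpcbind step
sudo mount -t nfs4 -o port=2049 localhost:/ /mnt/filer
```

This only becomes safe once the AUTH_SYS prerequisites listed above are met, since the tunnel itself does nothing to authenticate the user to the filer.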

Why can't we just use CIFS from all platforms?

It is possible to mount filesystems via CIFS from Linux using the Linux CIFS VFS or the older (and no longer maintained?) smbfs.
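With the in-kernel CIFS VFS, such a mount might be sketched as follows (the share name is illustrative and the options depend on the client; cruid tells the kernel which user's Kerberos credential cache to use):

```
# Kerberos-authenticated CIFS mount of a filer share from Linux
sudo mount -t cifs //filer/userfiles /mnt/filer \
    -o sec=krb5,user=crsid,cruid=$(id -u crsid)
```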

However, using CIFS from Linux substantially increases the risks associated with the existing "mixed" access-control mechanism: each CIFS write access can destroy POSIX access-control information. (What about using CIFS from Linux in POSIX-style qtrees?)

As a result, CIFS seems hardly more attractive than the existing options of using sshfs or WebDAV.

Samba and OS X implement POSIX extensions to CIFS, but are these supported by our NetApp filer?

Implementation progress

See also: Project Anoja

Things already achieved

  • 2014-07-22 (mgk): understood and documented why the existing NFSv3/LDAP setup does not work under OS X 10.9 [2][3]
  • 2014-09-13 (pb): Implemented cl-tgt to help with Kerberized access from unattended processes (cron jobs, compute cluster, etc.)
  • 2014-11 (pb): Phased out AUTH_SYS NFS access from all clients to which non-sys-admins have access.
  • 2014-11-10 (maj): Switched on NFSv4 again (options nfs.v4.enable on)
  • 2014-11-11 (maj): set idmapd domain correctly on elmer to make name translation work for NFSv4
  • 2014-11-12 (maj): Understood why /vol/userfiles fails on NFSv4
  • 2015-07: new CL VPN service enables NFSv3 access to Linux home users
  • 2015-07-09 (mgk): new UID/GID allocation policy agreed and implemented (for new groups and pseudo-users)
  • 2016-06-10 (mgk): published NFS filer access from laptops and home computers with instructions for automounting without LDAP
  • 2016-07-07 (mgk): published explanation for why under OS X 10.10 and OS X 10.11 Kerberized NFS is incompatible with Active Directory
  • 2016-09-20 (Apple): macOS Sierra 10.12 now supports AES encryption type in Kerberized NFS (but our AD.CL.CAM.AC.UK domain does not yet, as it still runs at domain functional level Windows Server 2003)
  • 2017-03-19 (mgk): published ugid-scan/find tool to help migrating UIDs
  • 2017-03-21 (mgk): wrote recipe for Moving the UID/GID of a user
  • 2017-06-06 (gt19): AD.CL.CAM.AC.UK domain functional level raised to Windows Server 2008 to enable Kerberos AES encryption types
  • 2017-07-12 (maj1): upgraded echo/enid to 8.2.4P6 7-Mode, reports AD.CL.CAM.AC.UK domain functional level as 2008 R2, however echo still does not request AES tickets
  • 2018-01-09 (maj1): upgraded all four NetApp controllers to 8.2.5 7-Mode
  • 2019-08 (maj1): redesigned the /usr/groups/admin/autofs-maps/generate_ldif script that generates the automount LDIF tables for the Linux LDAP servers. This fixed among other things:
    • presence of many obsolete/counterproductive NFS options, which should be either dropped or moved into /etc/default/autofs or /etc/nfsmount.conf on clients
    • replaced ldap:// URLS in auto.master with just the filename (for OS X and LDAPS compatibility)
  • 2019-10 (maj1): filespace migrated to new NetApp filer “wilbur” running DataONTAP 9 (in the new DC.CL.CAM.AC.UK domain). This finally enabled the AES encryption types for Kerberized NFS. Therefore, Kerberized NFSv3 access from macOS finally works. It also added support for NFSv4.1 on the filer and the simpler auto-mount tables for userfiles now also work with NFSv4. However, remote access from Linux still appears to be substantially faster with NFSv3. This also ended the practice of splitting home directories across eight client-visible volumes.

Things yet to do

Urgent

  • (re)move uid/gid entries <1005 from departmental LDAP tables and migrate these on filer
  • resolve LDAP GID collisions 507,3600-3610
  • Enable authentication for Linux LDAP servers, such that laptops (especially with macOS) can be securely configured to use it. This may be done via Kerberos, and may be as simple as adding an LDAP ServicePrincipalName and keytab (RT#115877). (Alternative: get and install TLS certificate for LDAPS servers.)

Important

  • review historic departmental filer namespace, in particular to
    • phase out /usr/groups, which has not been available on macOS since “El Capitan” (move to /groups, /auto/groups, /filer/groups, or something else?)
    • review/remove the need for client-side symbolic links entirely (automounting under /auto was originally introduced because earlier automounting directly under / caused problems: Linux clients used to hang at “ls /” when one of the mounted NFS servers hung; the current behaviour ought to be retested; symbolic links were introduced to hide from users the move of the automounts from / to /auto)
    • align NFS and CIFS pathnames

Worth considering

  • Fix known bugs in LDIF generating code:
    • allows gid collisions (don't use "cat" to merge tables!)

References