Filer review
Markus Kuhn, Martyn Johnson, February 2012
This is an evolving early-draft report by the ad-hoc Filer Working Group, which started in early 2012 to review the use and configuration of the departmental filer. For more information about the filer, there are also the departmental filespace user documentation, some notes on the NetApp file server by and for sys-admin, and its own man pages.
The Computer Laboratory has operated a centrally provided NFS file store for Unix/Linux systems continuously since the mid 1980s. This service hosts the commonly used home directories and working directories of most users, and is widely used by research groups to collaborate, via group directories for shared project files and software. It also forms the interface to the departmental mail and web servers. At present, this NFS service is provided by a NetApp FAS3140-R5 storage server "elmer" (SN: 210422922), running under "Data ONTAP Release 7.3.3". This server also provides access to the same filespace to other operating systems via the CIFS and WebDAV protocols, and provides a mechanism for the coexistence of files governed by either POSIX permission bits or Windows access control lists in the same directory. Elmer also hosts disk images for virtual machines, which are accessed over the block-level iSCSI protocol. An additional FAS2040-R5 server "echo" (SN: 200000186549) handles off-site backup using SnapVault.
Review
On 28 October 2011, the Computer Laboratory's IT Advisory Panel initiated an ad-hoc working group, headed by Markus Kuhn, to review the provision of departmental file space. The initial focus of this review will be the configuration and use of the existing filer "elmer", with a particular view to identifying and eliminating
- obstacles that currently prevent remote NFS access by private Linux home computers (something that has long been available to Windows users);
- problems in the way NFS and CIFS access to the same files is handled.
This project should also provide an opportunity to rid the filer's configuration of historic baggage and to streamline and simplify its use by departmentally administered Linux machines. The working group may later extend its remit and welcomes suggestions to that end.
The initial fact-finding phase of the review reported here is being conducted by Markus Kuhn and Martyn Johnson and has so far focussed on namespace management, authentication, and access from outside the department.
Namespace management
NetApp's Data ONTAP 7G operating system requires filer administrators to structure the storage space at several levels. Familiarity with these will help to understand some of the historic design decisions made.
- An aggregate is a collection of physical discs, made up of one or more RAID-sets. It is the smallest unit that can be physically unplugged and moved intact to a different filer. Both filers have multiple aggregates for two main reasons: restrictions on the maximum size of an aggregate, and separation of discs of different size and age to facilitate their eventual replacement. Discs can be added to an aggregate on the fly, but never removed, so the only way to retire discs is to empty the aggregate.
- A volume is a major unit of space allocation within an aggregate. Typically, volumes have reserved space, though it is possible to over-commit the aggregate if one really wants to. Many properties are bound to a volume, e.g. language(?). Significantly, a volume is the unit of snapshotting – each volume has its own snapshot schedule and retention policy.
- A q-tree ("quota tree") is a magic directory within the root directory of a volume which has a quota attached to it and all its descendants. (This is merely for quota; there is no space reservation associated with a q-tree.)
When we first got the filer in 2002, the aggregate layer did not exist in Data ONTAP, and a volume was just a collection of discs. This meant that the number of volumes was small and fixed. We would have liked to give each user filespace its own q-tree, but the hard limit of 255 q-trees per volume made this impossible. A single q-tree for all home directories would have been uncomfortably large for the tape backup system in use at the time, whose unit of backup was the q-tree. As a compromise, Martyn Johnson then created eight q-trees called homes-1 to homes-8, all located in volume 1, along with various q-trees for each research group's filespace (and for various other functions) spread across several volumes. This can be seen in the elmer volumes, which are mounted on lab-managed Linux machines under /a/elmer-vol*:
$ ls /a/elmer-vol1
grp-cb1  grp-op1  grp-se1  grp-th1  homes-2  homes-5  homes-8  sys-rt
grp-da1  grp-pr1  grp-sr1  grp-th2  homes-3  homes-6  sys-1
grp-nl1  grp-rb1  grp-sr9  homes-1  homes-4  homes-7  sys-pk1
$ ls /a/elmer-vol3
grp-dt1  grp-nl4  grp-rb4   grp-sr3  grp-sr7  sys-lg1  sys-ww1
grp-dt2  grp-nl9  grp-rb9   grp-sr4  grp-sr8  sys-li1
grp-nl2  grp-rb2  grp-sr11  grp-sr5  grp-th9  sys-li9
grp-nl3  grp-rb3  grp-sr2   grp-sr6  sys-acs  sys-pk2
$ ls /a/elmer-vol4
misc-clbib  misc-repl  sys-bmc  www-1  www-2
$ ls /a/elmer-vol5
WIN32Repository  grp-sr10  grp-te1     scr-1  scr-3  scr-5
grp-rb5          grp-sr12  misc-arch1  scr-2  scr-4  www-3
$ ls /a/elmer-vol6
MSprovision  grp-nl8  grp-rb6  sys-ct  sys-rt2  www-4
$ ls /a/elmer-vol8
grp-ai1  grp-dt8  grp-dt9  grp-nl7
$ ls /a/elmer-vol9
ah433-nosnap  iscsi-nosnap1  misc-nosnap1
As a result of this compromise, the pathname of a (super)home directory on the filer, such as
vol1/homes-1/maj1/
vol1/homes-5/mgk25/
now includes a q-tree identifier (e.g., homes-1) that the user cannot infer from the user identifier, and which we would therefore ideally hide from users. Users should instead see simple pathnames such as /homes/maj1. Therefore, a two-stage mapping system between filer pathnames and user-visible pathnames was implemented for NFSv3:
- Server-side mapping: Firstly, the filer's /etc/exports file (see /a/elmer-vol0/etc/exports on lab-managed Linux machines) uses the -actual option, as in "/vol/userfiles/mgk25 -actual=/vol/vol1/homes-5/mgk25", to export each user's superhome under a "userfiles" alias pathname that lacks the q-tree identifier.
- Client-side mapping: Secondly, autofs is used to individually mount the unix_home directory of each user under a more customary location in the client-side namespace, using mount entries such as "elmer:/vol/userfiles/mgk25/unix_home on /auto/homes/mgk25" or "elmer:/vol/vol3/grp-rb2/ecad on /auto/groups/ecad". Finally, symbolic links such as "/homes -> /auto/homes", "/usr/groups -> /auto/groups", and "/anfs -> /auto/anfs" give access via customary short pathnames.
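Put together, the two stages look roughly like the following sketch; the exports line is the one quoted above, while the autofs map name and entry syntax are illustrative assumptions:

# filer /etc/exports (server side): the alias hides the q-tree identifier
/vol/userfiles/mgk25  -actual=/vol/vol1/homes-5/mgk25

# client-side autofs map entry for /auto/homes
mgk25  elmer:/vol/userfiles/mgk25/unix_home

# one-off symbolic links provide the customary short pathnames
ln -s /auto/homes  /homes
ln -s /auto/groups /usr/groups
ln -s /auto/anfs   /anfs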
This solution grew historically and was motivated by three considerations:
- When new users are created, their home directory should become instantly available on all client machines. This meant that the mapping that eliminates the q-tree identifier from the pathname had to be performed on the server, as there was no practical way to push out such changes in real time to all client machines.
- There was already a historic automount configuration infrastructure and a customary namespace in place on lab-managed Unix clients when the filer was installed, and the aim has been to maintain both.
- The filer's combined NFS/CIFS capability encouraged the creation of a single common home directory for each user, but the idiosyncrasies of how Unix and Windows each use a home directory made it necessary to provide each user with a separate home directory for each platform. This, along with the desire to keep all of one user's data inside a single directory, led to a "superhome" for each user, each containing a "unix_home" and a "windows_home" subdirectory, which is then mounted to the respective customary location on each platform.
This arrangement is far from ideal, and causes a number of problems:
- Quota restrictions (as reported on lab-managed Linux machines by "cl-rquota") are reported in terms of q-tree identifiers (e.g., homes-5, scr-1, www-1). These have hardly any useful link with the pathnames that the user is accustomed to, making it difficult to guess which subdirectory needs cleaning up if one runs out of quota.
- Some research groups historically had to fragment their group space across multiple q-trees, which complicates the understanding of both quotas and the namespace.
- Some users have quota in other users' home directories (namely those sharing the same one of the eight homes-* q-trees), but others do not, which can lead to confusion when users try to collaborate using shared directories in someone's home directory.
- An elaborate system of symbolic links and autofs configuration is needed to create the customary client-side namespace; this requires substantial setup and tweaking on lab-managed machines and was never documented or supported for implementation on private machines.
- The scheme relies on the -actual option in the filer's /etc/exports, which works for NFSv3, but apparently not for NFSv4. As a result, we cannot switch to NFSv4 with the current setup, and we are losing out on the substantial performance and ease-of-tunneling advantages that NFSv4 would offer, in particular for remote access (only a single TCP connection to the filer needed, "delegation" to maintain consistency of a local file cache, etc.).
- Having the unix_home and windows_home located in the same q-tree has required the use of a "mixed" access-control policy, where files can flip between POSIX permissions and NTFS-style access control lists, which has caused confusion and some disasters (see section "Access control" below), and which no longer seems to be recommended by NetApp.
There are two reasons why fixing this is non-trivial and was not done long ago:
- The autofs configuration required for client-side mapping is disseminated via LDAP from an LDIF master file that is currently generated by a very complex set of historically grown scripts that are considered unmaintainable and not fully understood by any member of sys-admin. This system is overdue for reimplementation.
- The filer has no more efficient means of moving files from one q-tree to another than simply copying them. Therefore, reorganising the way in which home and research-group directories are distributed across q-trees would take several days to complete, which could be disruptive, but is manageable if done in phases. More significantly, however, copying all home-directory files would also cause significant "churn" in the backup system, essentially doubling for a long time the amount of disc space required on the backup system.
An additional problem is that the namespaces under Linux and under Windows differ substantially, for example:
/homes/mgk25                        =  \\filer\userfiles\unix_home\mgk25
/auto/userfiles/mgk25/windows_home  =  \\filer\mgk25
/usr/groups/ecad                    =  \\filer\ecad
/anfs/www                           =  \\filer\www
This is a significant nuisance and source of confusion that hinders communication between Linux and Windows users. The Linux namespace was designed to be backwards compatible with existing and historically grown pre-NetApp practice. The Windows filespace was constrained by the fact that Windows "shares" allow only for a flat namespace. This has led to different solutions. Arguably, the Windows namespace, because it is flat, newer, and less influenced by historic practice, is far more desirable. The main reason that the Windows namespace has not yet been replicated under Linux (/filer/...) is to keep the /filer prefix free for a comprehensive redesign of the Linux namespace that not only addresses the incompatibility with Windows share names, but also many of the other issues raised above.
Outline of a possible solution:
- Since the constraints that led to the historic set of q-trees no longer apply, new users (and selected existing power users) should have their home directories assigned to either a single common q-tree, or to individual per-user q-trees (to be decided). A single common q-tree has the advantage of allowing easy collaboration in one's home directory and the disadvantage that it remains difficult to find the files that count towards one's overfull quota. With a per-user q-tree, this advantage and disadvantage are swapped. (The current scheme combines both disadvantages!)
- For most existing users, the existing q-tree scheme should remain in place until they are a small enough minority that the backup churn caused by reassigning them as well becomes manageable.
- The way the client-side namespace mapping is configured needs to be rewritten from scratch and carefully documented, in a way that can easily be applied to private Linux machines as well. This could be done by creating a new /filer namespace and running the old and the new system in parallel for a short time.
Authentication / Kerberos
For many years, NFS access to the departmental file servers was restricted to lab-managed machines, because of the inherently insecure AUTH_SYS authentication scheme used, whereby the file server simply trusts the numeric user ID communicated by the client, as long as the NFS packet originated from a permitted IP address and from a source port below 1024, which is reserved for the superuser.
This changed a few years ago with the implementation of "Kerberized NFS" (RPCSEC_GSS) on Linux. New lab-managed Linux clients now routinely authenticate their users to the NFSv3 server via Kerberos credentials, using the Lab's Active Directory server as the key distribution centre (KDC). As a result, it has become possible to give users of such clients root access without giving them the ability to impersonate others on the filer. The same Kerberos setup is also used by Windows/CIFS clients.
Several users have already connected their home Linux PC to the departmental Kerberos KDC, for the purpose of easy "ssh -K" remote logins with a forwarded ticket to Kerberized lab-managed machines. This merely requires tunneling port 88 via ssh or VPN, along with configuring /etc/krb5.conf.
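A minimal /etc/krb5.conf for this purpose might look like the following sketch; the realm and KDC names are placeholders, not the department's actual values:

# /etc/krb5.conf (illustrative; substitute the real realm and KDC)
[libdefaults]
    default_realm = AD.EXAMPLE.CAM.AC.UK
    forwardable = true

[realms]
    AD.EXAMPLE.CAM.AC.UK = {
        kdc = kdc.example.cam.ac.uk
    }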
However, just connecting to the KDC and acquiring a forwardable ticket may not be sufficient for NFS access. At present, lab-managed machines appear to require not only a Kerberos ticket for the logged-in user, but also a host key (in /etc/krb5.keytab) and an associated host entry on the Active Directory KDC. No equivalent is required for Windows CIFS access to the filer, and it remains unclear what that host key is exactly used for (authenticating the rpcbind or mount protocols?) and whether the need for it can be removed or circumvented. It would be desirable to avoid having to register in the Active Directory each private machine that wants to access the filer via NFS; doing so would be possible, but would cause continuous additional sys-admin workload.
Access control
The NetApp filespace can be configured, at q-tree level, to implement either "POSIX/NFS", "NTFS/CIFS", or "mixed" access-control mechanics. We currently use the "mixed" mechanics, because both a user's unix_home and windows_home are located in the same q-tree. In a "mixed" q-tree, each file is governed by the semantics associated with the last protocol that modified its metadata. Access by the other protocol to that file will display crudely approximated permission settings that often do not reflect the actually enforced rights. This has been a rather mixed blessing and causes a number of problems:
- Neither NFS nor CIFS users have access to an unambiguous and complete view of the access-control state of a file; it is not even possible to say for sure whether a file is currently governed by POSIX or NTFS semantics.
- There was once a proprietary Windows tool to give a full view, but it no longer seems to be supported.
- There has been a lot of confusion caused by crudely approximated displayed access permissions.
- Some users have caused substantial damage to the permission settings of their POSIX files by accidentally using the Windows ACL inheritance settings, which cause the client to recursively change the permission settings of an entire file tree.
It now seems much preferable that each user have a unix_home in a q-tree with POSIX-style access semantics and a windows_home in a q-tree with NTFS-style ACLs, that for each group directory a decision be made as to what the primary accessing platform will be, with the access-control semantics fixed accordingly, and that mixed q-trees be avoided entirely. They seemed a good idea at the time.
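For reference, the security style of a q-tree is set with a single Data ONTAP command per q-tree ("unix" for POSIX semantics, "ntfs" for NTFS ACLs); the q-tree paths below are illustrative:

elmer> qtree security /vol/vol1/homes-1 unix
elmer> qtree security /vol/vol5/winhomes-1 ntfs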
LDAP
In order to translate numeric user and group IDs into meaningful names, make ~login shell expansion work, etc., the "getent passwd" user database of the local machine needs to be linked to the departmental LDAP server, which has information about all users. This can be done easily (configure /etc/nsswitch.conf and /etc/ldap.conf).
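A minimal sketch of that client-side configuration follows; the server URI and search base are placeholders, not the department's actual values:

# /etc/nsswitch.conf: consult LDAP after local files
passwd: files ldap
group:  files ldap

# /etc/ldap.conf (illustrative values)
uri  ldap://ldap.example.cam.ac.uk/
base dc=example,dc=cam,dc=ac,dc=uk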
However, there is still a minor problem/risk involved with using the departmental LDAP server (which affects not only home PCs but also lab-managed machines!):
The departmental LDAP server currently still serves a lot of numeric user IDs below 1000 and numeric group IDs below 500, which are actually reserved for use by the local operating system installation (and there are particularly many collisions with Ubuntu Linux system users and groups).
The only solution is to reassign the remaining user and group IDs to higher numbers, so that the departmental LDAP server no longer serves any user IDs below 1000 or group IDs below 500. (Setting these limits ~10 higher might be a good idea, because a home Linux PC might already have uids 1000, 1001, 1002 assigned to local users, such as family members.)
There is already an ongoing campaign to reassign group identifiers, which is mostly slowed down by the need to chown a large number of files on the filer, along with the inadequate scripting interface of our user database (Microsoft's Active Directory).
Summary
Why does Windows/CIFS home access "just work" and Linux/NFS not?
Users of home PCs running Windows have for a long time been able to access departmental filespace simply by activating a VPN connection to the Computer Laboratory's Cisco PPTP server, and typing "\\filer\..." into the "Run" or "Search" box of their desktop. There is no need to be integrated into the domain; Windows will simply show a pop-up box and ask for the departmental Kerberos password before the files become visible.
Why are things not equally simple under Linux? There are several reasons:
- There is no VPN service supported for Linux. The commonly used alternative, OpenSSH, does not support the forwarding of the NFSv3 protocol, where the port number used by the mount protocol is dynamically negotiated via rpcbind.
- Solution 1: Enable NFSv4, which has no separate mount protocol and is easy to tunnel over ssh by forwarding TCP port 2049 (see the sketch after this list).
- Prerequisite 1: AUTH_SYS must be disabled first for all generally accessible clients, because a NetApp-specific security vulnerability of AUTH_SYS is particularly trivial to exploit via NFSv4.
- Prerequisite 2: All remaining AUTH_SYS clients (for cron, web server, etc.) must be on a dedicated server VLAN (already done?) and the filer be protected from source-IP spoofing of associated source IP addresses (already done?).
- Solution 2: Offer and document a VPN gateway that is easy to set up under Linux. (Replacing the VPN service may also benefit Windows users, as there are security and configuration concerns with the existing very old setup.)
- "Kerberized NFS" currently requires setting up a Kerberos host key for the machine, in addition to the Kerberos password required from the users.
- Solution: Investigate what the host key is actually required for, and whether we can eliminate that need. (Windows seems to require no equivalent.)
- Once NFS works, the user will be faced with only a half-mapped namespace very different from what is customary on lab-managed machines.
- Solution: Design a simple and stable configuration for client-side mapping of the namespace. This may continue to involve an LDAP-disseminated autofs configuration, but might as well be just a list of additional entries in /etc/fstab. In the interest of efficiency, it may be desirable to have just one single NFS mount from the filer, along with local type=bind mounts to shape the desired namespace, in the absence of a comparable mechanism on the filer. Ideally, there should be just a single mount to /filer, with all remaining mapping being done on the filer (see the sketch after this list).
- To convert numeric user/group IDs to user-friendly names, an LDAP connection is needed. A minor obstacle/risk is that existing historic LDAP entries still collide with numbers used by Linux distributions (e.g., Ubuntu).
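To make Solution 1 and the client-side mapping concrete, a private Linux machine might end up with something like the following; all host names, paths, and mount options are illustrative assumptions, not a tested recipe:

# forward the NFSv4 port (2049) to the filer through an ssh gateway;
# NFSv4 needs no separate rpcbind/mount protocol
ssh -f -N -L 2049:elmer.cl.cam.ac.uk:2049 crsid@gateway.cl.cam.ac.uk

# single NFSv4 mount of the whole filespace through the tunnel
# (Kerberos mount options omitted for brevity)
sudo mount -t nfs4 -o port=2049 localhost:/ /filer

# local bind mounts recreate the customary namespace
sudo mount --bind /filer/userfiles/crsid/unix_home /homes/crsid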
Why can't we just use CIFS from all platforms?
It is possible to mount filesystems via CIFS from Linux, using either the Linux CIFS VFS or the older (and no longer maintained?) smbfs.
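For completeness, such a mount looks roughly like this; the share and user names are illustrative, and sec=krb5 assumes a valid Kerberos ticket:

# mount a user's Windows home share via the Linux CIFS VFS
sudo mount -t cifs //filer/crsid /mnt/windows_home -o sec=krb5,user=crsid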
However, using CIFS from Linux substantially increases the risks associated with the existing "mixed" access-control mechanism: each CIFS write access can destroy POSIX access-control information. (What about using CIFS from Linux in POSIX-style q-trees?)
As a result, CIFS seems hardly more attractive than the existing options of using sshfs or WebDAV.