MapR Direct Access NFS

The Network File System (NFS) protocol provides remote access to shared disks across networks. An NFS-enabled server can share directories and files with clients, allowing users and programs to access files on remote systems as if they were stored locally. NFS has become a well-established industry standard and a widely used interface that provides numerous benefits, including avoiding the need to duplicate data for multiple users and applications, as well as better administration and security of data.

NFS Benefits with the MapR Converged Data Platform

The MapR Converged Data Platform is the only big data platform that leverages the full power of NFS. The POSIX-compliant platform can be exported via NFS, allowing fully random read/write operations on files stored in the MapR Platform.

MapR Direct Access NFS™ makes big data radically easier and less expensive to use. MapR allows files to be modified and overwritten, and enables multiple concurrent reads and writes on any file. Here are some examples of how MapR customers have leveraged NFS in their production environments:

Easy data ingestion

A popular online gaming company changed its data ingestion from a complex Flume cluster to a 17-line Python script.

Database bulk import/export with standard vendor tools

A Fortune 100 company saved millions on data warehouse costs by using MapR to pre-process data prior to loading it into data warehouses, performing the bulk imports via NFS.

Ability to use existing applications/tools

A large credit card company uses MapR volumes as user home directories on the MapR NFS gateway servers, allowing its users to continue to leverage standard Linux commands and utilities to access and process data.

MapR NFS Implementation

Combining HDFS APIs with NFS

Each node in the MapR cluster has a FileServer service, whose role is similar in many ways to the DataNode in HDFS. In addition, there can be one or more NFS Gateway services running in the cluster. In many deployments the NFS Gateway service runs on every node in the cluster, alongside the FileServer service.

A MapR cluster can be accessed either through the HDFS API or through NFS:


To access a MapR cluster via the Hadoop Distributed File System (HDFS) API, the MapR client must be installed on the client node. MapR provides easy-to-install clients for Linux, Mac, and Windows. The HDFS API is built on Java, so in most cases client applications are developed in Java and linked to the Hadoop-core-*.jar library.
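The sketch below is a minimal, hypothetical illustration of this first access path: a Java client writing and reading a file through the Hadoop FileSystem API. The class name and path are placeholders; a real client would pick up its cluster settings from the installed MapR client and the Hadoop configuration files.

    // Minimal sketch: write and read a file through the Hadoop FileSystem API.
    // The class name and path are placeholders for illustration only.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsApiSketch {
        public static void main(String[] args) throws Exception {
            // Picks up the cluster settings from the client's configuration files.
            FileSystem fs = FileSystem.get(new Configuration());
            Path path = new Path("/user/alice/events/sample.txt");   // hypothetical path

            // Create (or overwrite) the file and write a line.
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello from the HDFS API\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read the line back.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }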


To access a MapR cluster over NFS, the client mounts any of the NFS Gateway servers. There is no need to install any software on the client, because every common operating system includes an NFS client. In Windows, the MapR cluster becomes a drive letter (e.g., M:, Z:, etc.), whereas in Linux and Mac the cluster is accessible as a directory in the local file system (e.g., /mapr). Note that some lower-end Windows versions do not include the NFS client.

The HDFS API is designed for MapReduce (with functions such as getFileBlockLocations), so MapReduce jobs normally read and write data through that API. However, the NFS interface is often more suitable for applications that are not specific to Hadoop. For example, an application server can use the NFS interface to write its log files directly into the cluster and also to perform random read/write operations.
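As a rough sketch of this second access path, the following hypothetical Java program appends a log line to a file in the cluster through a Linux NFS mount. No Hadoop libraries are involved; the mount point, cluster name, and path (/mapr/my.cluster.com/...) are illustrative only.

    // Minimal sketch: a non-Hadoop application appends a log line to a file in
    // the cluster through a Linux NFS mount. The mount point and cluster name
    // below (/mapr/my.cluster.com) are illustrative.
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class NfsLogAppender {
        public static void main(String[] args) throws IOException {
            // To the application, the cluster is just another directory.
            Path logFile = Paths.get("/mapr/my.cluster.com/apps/web/access.log");
            Files.createDirectories(logFile.getParent());

            String line = "GET /index.html 200 " + System.currentTimeMillis() + "\n";
            Files.write(logFile, line.getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }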

NFS High Availability

MapR provides high availability for NFS in the MapR Converged Enterprise Edition. The administrator uses a simple MapR interface to allocate a pool of Virtual IP addresses (VIPs), which the cluster then automatically assigns to the NFS Gateway servers. A VIP automatically migrates from one NFS Gateway service to another in the event of a failure, so that all clients who mounted the cluster through that VIP can continue reading and writing data with virtually no impact. In a typical deployment, a simple load balancing scheme such as DNS round-robin is used to uniformly distribute clients among the different NFS Gateway servers (i.e., VIPs).

Random Read/Write

The MapR Platform includes an underlying storage system that supports random reads and writes, with support for multiple simultaneous readers and writers. This provides a significant advantage over HDFS-based data platforms, which only provide a write-once storage system (similar to FTP).

Support for random reads and writes is necessary to provide true NFS access, and more generally, any kind of access for non-Hadoop applications. NFS is a simple protocol in which the client sends the server requests to write or read n bytes at offset m in a given file. In a MapR cluster, the NFS Gateway service receives these requests from the client and translates them into the corresponding RPCs to the FileServer services. The server side of the NFS protocol is mostly stateless: there is no concept of opening or closing files.
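To illustrate what such a request looks like from the client side, here is a hypothetical Java snippet that overwrites n bytes at offset m in a file reached through the NFS mount; the path, offset, and data are placeholders.

    // Minimal sketch: overwrite n bytes at offset m in a file reached through
    // the NFS mount. The path and values are placeholders.
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;

    public class RandomWriteSketch {
        public static void main(String[] args) throws IOException {
            byte[] patch = "REDACTED".getBytes(StandardCharsets.UTF_8);  // the n bytes to write
            long offset = 4096;                                          // the offset m

            try (RandomAccessFile file = new RandomAccessFile(
                    "/mapr/my.cluster.com/data/records.dat", "rw")) {
                file.seek(offset);   // position at offset m
                file.write(patch);   // overwrite n bytes in place
            }
        }
    }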

HDFS-based Data Platforms and NFS

HDFS file operations typically involve a file open(), followed by sequential writes, and end with a file close() operation. The file must be closed explicitly for HDFS to pick up the changes that were made. No new writes are permitted until the file is reopened.
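The following hypothetical sketch shows this write-once pattern through the Hadoop FileSystem API: the file is created, written sequentially, and closed, with no way to seek back and overwrite earlier bytes in place (the path is a placeholder).

    // Minimal sketch of the HDFS write model: create, write sequentially, close.
    // The path is a placeholder.
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteOnceSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path path = new Path("/user/alice/write-once.txt");   // hypothetical path

            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("line 1\n".getBytes(StandardCharsets.UTF_8));   // sequential writes only
                out.write("line 2\n".getBytes(StandardCharsets.UTF_8));
            } // close() is what makes the changes visible

            // There is no call to seek back and overwrite earlier bytes; at most,
            // HDFS versions with append enabled allow adding to the end via fs.append(path).
        }
    }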

The NFS protocol, on the other hand, follows a different model for working with files, creating a technology mismatch with HDFS.

First, the NFS protocol on the server side is stateless and does not include a file open/close primitive that can be used to indicate to HDFS that a write operation is complete. Therefore, in order to make the data permanent on HDFS, the NFS Gateway on HDFS has to be tweaked to guess and artificially close the file after a specified timeout. After the file is closed, however, any write arriving from the NFS client is not written to HDFS, making the system susceptible to data loss.

Second, even if the end application on the NFS client side writes in sequential order, the local operating system and NFS client typically reorder the writes that get passed on to the NFS server. Therefore, the packets that the NFS server receives from the client are almost always out of sequence, which does not fit well with HDFS's expectation that writes be sequential. To re-sequence the incoming data, the NFS gateway has to be tweaked again to temporarily save all the data to its local disk (/tmp/.hdfs-nfs) prior to writing it to HDFS. Such a setup can quickly become impractical, as one needs to make sure the NFS gateway's local directory has enough space at all times. For example, if the application uploads 10 files of 100 MB each, it is recommended that this directory have 1 GB of space in case a worst-case write reorder happens to every file.

Because of this bottleneck,

  • HDFS NFS cannot truly support multiple users, because the gateway may run out of local disk space very quickly.
  • The system becomes unusably slow because all NFS traffic is staged on the gateway's local disks. In fact, the HDFS NFS documentation recommends using the HDFS API and WebHDFS "when performance matters."

The drastic limitations mentioned above, coupled with the fact that existing applications cannot perform random reads and writes on HDFS, make NFS support on HDFS poor and, in practice, unusable.


The MapR Converged Data Platform uniquely provides a robust, enterprise-class storage service that supports random reads and writes, and exposes the standard NFS interface so that clients can mount the cluster and read and write data directly. This capability makes big data (including Hadoop and Spark) much easier to use, and enables new classes of applications.