How to Create Your Own NAS With GlusterFS


GlusterFS is a network storage system that can be made fault-tolerant, redundant, and scalable. It’s a great option for applications that need access to large files, such as scientific-grade storage solutions. The file system aggregates storage resources from multiple servers into a single global namespace, creating one pool of storage that is accessible through multiple file-level protocols.

The great thing about GlusterFS is that it is very easy to use and maintain. Here’s how you can set up your own NAS with GlusterFS.

What You Need: two or more Linux computers or VMs on the same network (ideally Gigabit Ethernet), each with some spare storage.

1. Set Up Your Network

In production, GlusterFS works best with Gigabit Ethernet and a large array of servers and storage devices. If you don’t have these on hand, two computers or VMs are usually sufficient, particularly if you are just getting the hang of it.

2. Install Your Server

GlusterFS is included in the repositories of many Linux distros. Before installing, compare the version in your distro’s repository with the latest release on the GlusterFS website; keep in mind you may have to update clients manually. If your distro carries a reasonably recent version, you can install the server by typing (on a Debian-based distro):

sudo apt-get install glusterfs-server
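After the package installs, it’s worth confirming the management daemon is running before you go further. The systemd service name `glusterd` is an assumption that holds on current Debian/Ubuntu packages:

```shell
# Start the GlusterFS management daemon now and on every boot
# (service name "glusterd" assumed, as shipped by Debian/Ubuntu packages)
sudo systemctl enable --now glusterd
# Print the installed version so you can compare it with the website
glusterfs --version
```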

3. Switch to a Static IP and Add/Remove Volumes

Open up the file “/etc/network/interfaces”:

sudo nano /etc/network/interfaces

and remove the line (if present) iface eth0 inet dynamic, then add the lines:

auto eth0
iface eth0 inet static
address 192.168.0.100
netmask 255.255.255.0
gateway 192.168.0.1
broadcast 192.168.0.255
network 192.168.0.0
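Instead of a full reboot, you can bounce the interface and confirm the new address took effect. This sketch assumes the Debian-era ifupdown tools that read the configuration file above:

```shell
# Re-read /etc/network/interfaces for eth0 (ifupdown assumed, as configured above)
sudo ifdown eth0 && sudo ifup eth0
# Verify the static address 192.168.0.100 is now assigned
ip addr show eth0
```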

Restart your machine and make sure the network is working. If it is, type in the following:

gluster volume create testvol 192.168.0.100:/data

Typing this will create a volume named “testvol”, stored on the server. Your files will then be located in the “/data” directory, which is what GlusterFS considers a brick. (Newer GlusterFS versions will refuse to create a brick on the root filesystem unless you append “force” to the command.)

Then start the volume:

gluster volume start testvol

You can remove the volume later on by running both of the following, in order:

gluster volume stop testvol

and

gluster volume delete testvol
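The single-brick volume above has no redundancy. Once a second server has joined the pool (see the peer probe step in the conclusion), a replicated volume keeps a full copy of every file on each brick. This is a sketch assuming a second server at 192.168.0.101 with its own /data directory:

```shell
# Two-way replicated volume: every file is stored on both bricks
# (assumes 192.168.0.101 has already been probed into the pool; newer
# versions may require appending "force" for bricks on the root filesystem)
gluster volume create repvol replica 2 \
    192.168.0.100:/data 192.168.0.101:/data
gluster volume start repvol
```

With replica 2, either server can go down and clients keep working off the surviving copy.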

4. Mounting the Volume Locally

You can do this by first creating a mount point:

sudo mkdir /mnt/gluster

Then, use the command below to mount it:

sudo mount -t glusterfs 192.168.0.100:/testvol /mnt/gluster

Write a test file to the volume:

echo "It works" > /mnt/gluster/test.txt

Make sure it works before proceeding.
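To remount the volume automatically at boot, you can also add it to /etc/fstab. As a cautious sketch, stage the new line in a scratch copy and review it before copying it back over the real file as root (the mount point and volume name come from the example above):

```shell
# Stage the GlusterFS fstab entry in a scratch copy before touching /etc/fstab
FSTAB_LINE="192.168.0.100:/testvol /mnt/gluster glusterfs defaults,_netdev 0 0"
cp /etc/fstab /tmp/fstab.staged
echo "$FSTAB_LINE" >> /tmp/fstab.staged
# Review the result; if it looks right, copy it back into place as root
grep -F "testvol" /tmp/fstab.staged
```

The `_netdev` option tells the boot process to wait for the network before attempting the mount.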

5. Sharing It Over NFS

More recent versions automatically export volumes over NFS, although you still need the portmap package installed on the server to make it work. To mount the share from a client, all you need to do is add a mount point:

sudo mkdir /mnt/nfstest

and type:

sudo mount -t nfs 192.168.0.100:/testvol /mnt/nfstest -o tcp,vers=3

To make a client mount the share on boot, add the details of the GlusterFS NFS share to /etc/fstab in the normal way. For our example, add the line:

192.168.0.100:/testvol /mnt/nfstest nfs defaults,_netdev,tcp,vers=3 0 0

That’s it!

Conclusion

Once you’re set up, you can add a new server by following the above steps. Make sure you give the new server a different IP address. To add it to the trusted storage pool and check the result, type on the first server:

gluster peer probe 192.168.0.101
gluster peer status

If you’d like to work with names rather than IP addresses for your servers, you need to add them to the hosts file on your admin machine. All you have to do is edit /etc/hosts with your text editor and add a line for each server, mapping its IP address (e.g. 192.168.0.101) to a name.
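For example, with the hypothetical names gluster1 and gluster2 (any names you like will work), /etc/hosts would gain:

```shell
# /etc/hosts additions — the names gluster1/gluster2 are examples, not required
192.168.0.100   gluster1
192.168.0.101   gluster2
```

After that, commands such as `gluster peer probe gluster2` work by name.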



Sarah Li Cain

Sarah is a professional blogger and writer who specializes in all things tech, education and entrepreneurship. When she isn’t writing awesome things for her clients or teaching cute kids how to write, you can find her meditating, doing yoga, and making illustrations for her children’s books.
