Home-hosting

2021-02-12 00:00

Okay, I don’t think the term “home-hosting” actually exists, so I’m just going to use it here to refer to the act of hosting a server on a machine in your home that publicly faces the internet.

Anyway! I’m super happy right now, mostly happy at the fact that I’ve done something this technical, well, technical for me! I’ve managed to move my server hosting from DigitalOcean to my home. This means what you are reading right now is actually coming from an old 2008 Macbook Pro with a broken screen from the basement of my house!

I have nothing against DigitalOcean, they are great, super friendly, and the support was awesome when I had questions. I would definitely recommend them to anyone who is starting out with their first virtual private server. Especially if you don’t want to deal with port-forwarding!

I haven’t removed the virtual private server (VPS) from DigitalOcean yet, because I want to let my home-hosted server run for about a week or two to test the stability of it. After that, it should be safe to remove the VPS, and if anything becomes unstable, then I’ll probably move back to DigitalOcean for hosting again.

Choosing a Linux distribution

Back when I was on DigitalOcean, I chose Ubuntu 20.04 because there were so many guides on Ubuntu for doing various tasks. Since then, I’ve learned a lot about getting a server up and running. I probably couldn’t pull off some enterprise network infrastructure, but I don’t need that kind of setup. I just wanted to start simple by getting a web server going, and serving it from my basement to the internet.

I decided to switch it up and go out of my comfort zone a bit, well, kind of… For servers, I always went with Ubuntu, again, because of all of the guides. I figured I would try out Debian, since that’s what Ubuntu is based on, and, more importantly, because I use Debian as my daily driver on my personal laptop.

I figured Debian would be fine for a server because its focus is on stability, and when I’m not home, I don’t want to worry about my server crapping out, so Debian it was!

Dealing with the broken backlight

Usually, when I install Linux on a machine, I have the luxury of seeing what I’m configuring and what options I’m choosing during the installation process. With a broken backlight, this is a little harder. At first, I tried connecting the laptop to a TV through an HDMI cable, but nothing was coming up. As a test, I booted an Ubuntu live USB to see if a desktop environment would show up on the TV, and it did. I wasn’t sure why Debian’s graphical or command-line installer wasn’t showing up on the TV, so I decided to pull out my phone’s flashlight!

Before any actual “flashlighting” happened, I plugged my ethernet cable and a Debian installation USB stick into the server. Since the server is running on a 2008 Macbook Pro, I had to hold the “Option” key during the bootup to access the device selection screen. This is where my phone’s flashlight came in.

I flashed the screen with the flashlight, which was only bright enough to show about 5cm of the screen, so it was really hard to see, and even the lit-up part was still very dim. I’m thankful I have good eyesight. I noticed two icons show up on what looked like a light background. One was a grey hard drive icon with the name of my Macbook Pro’s hard drive, and the other was a yellow or orange icon that resembled some kind of external hard drive or USB, with the name “EFI … something”. I honestly forget what it said after “EFI”, so we’ll call it “EFI something” from now on haha. I’m also not sure if this is what Apple considers a BIOS, but whatever, it allowed me to choose bootup devices using the arrow keys and the “Enter” key.

Installing Debian

After selecting a boot device, I noticed the familiar Debian setup, so I knew all was good. Everything was normal, except for setting up an encrypted LVM. My personal laptop is encrypted in case it’s stolen, but I couldn’t do this for the server, because I wanted to be able to restart it remotely. LVM encryption would require typing in a decryption password at boot, which wouldn’t work, because I would need to decrypt the server before anything could connect to it. Because of this, I’ve decided not to keep anything overly personal or sensitive on it, just in case of a burglary, and so I could restart the server after security updates.

I then got to the final screen, which asks which desktop environment you want, whether you want a print server, and whether you want an SSH server set up after the installation. I think, under the hood, this is the Linux program tasksel, but I might be wrong.

I ended up not installing a desktop environment, unchecking the print server option, and checking the SSH server option.

Connecting to the server from my home network

Before I forwarded my ports, I wanted to see if I could just SSH into the server to set it up. I opened my router’s configuration page and logged into my router. I looked for any new devices on my network, and found the local IP address of the server.

On my personal laptop, I tried to SSH into the server using ssh m455@xxx.xxx.x.x on a whim, where m455 was my under-privileged, regular user account on the server, and xxx.xxx.x.x was the server’s local IP address. To my delight, I was prompted for the m455 user’s password! I put in the password, and saw that my Bash prompt’s host changed! I was now SSHed into the server, and able to set it up remotely without draining my poor phone’s battery from all of the flashlight usage haha!

Dealing with server hibernation and disconnects

The first thing that happened while I was reading Debian’s documentation was getting disconnected from my server. I was kind of disappointed, but realized it wasn’t connection instability; it was the laptop going to sleep! I totally thought it would only go to sleep if I had some kind of graphical power manager, but it turns out systemd systems have sleep, suspend, hibernate, and a few other targets that can put your system to sleep when it’s inactive.

After a bit of internet-searching, I found out I can turn off these with systemctl mask x, where x is the unit to disable. Masking symlinks the unit file to /dev/null, which makes it impossible to start, and disabling things is all I needed!

The following command stopped the server from sleeping, hibernated, suspending, or whatever it was doing:

systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

After doing this, I stepped away from the SSH connection and made some coffee. I came back 10 minutes later, and the connection was still there, so I knew the command worked.

Dealing with the server’s loud fan and high temperature

One thing I noticed, though, was that the server’s fan was really loud, and the machine was pretty hot. I ran htop to see if anything was spiking CPU-wise, but nothing was peaking, and the server was only using ~150MB of RAM, so that didn’t make sense.

After some more internet-searching, I found out that loud fans and high temperatures are a common issue with Macbook Pros running Linux. Conveniently, someone had written software called mbpfan to fix this issue, and mbpfan was available in Debian’s repositories, so I installed it with apt.

I wasn’t sure how to use it, so I took a look at mbpfan’s GitHub repository page, and soon found out that all I had to do was add the mbpfan.service file from that repository to my systemctl startup services, and add the following kernel modules to /etc/modules:
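I won’t swear to the exact list from memory, but if the mbpfan README is the same as I remember, the modules in question were these two (double-check against the repo before copying this):

```
# /etc/modules — kernel modules to load at boot, for mbpfan's sensors
coretemp
applesmc
```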

I then ran systemctl enable mbpfan.service, systemctl daemon-reload, systemctl start mbpfan.service, and then systemctl reboot, just in case kernel modules require system reboots.

To my luck, the laptop/server was super quiet, and wasn’t running hot like before! Thanks, mbpfan developers!

I SSHed back into the server using my username and password, and continued the setup.

Making system administration convenient

Now that the hardware and the connection were behaving the way I wanted, I installed a few packages I knew I would need for security, convenience, and text-editing when doing administrative tasks with the following commands as root:

apt update
apt upgrade
apt install sendmail mutt vim curl htop rsync nginx ufw fail2ban

Because I am running this server from home, I wanted some extra security, so ufw and fail2ban seemed like good candidates. ufw would act as a firewall, only allowing specific ports to be open, and fail2ban would prevent repetitive password-guessing and brute-force attacks on my server by denying connections for a period of time after x failed attempts.

To make things more convenient, while doing system administration as root, I added the following line to my root user’s ~/.bashrc:

export PATH="$PATH:/usr/local/bin:/usr/local/sbin:/usr/sbin:/usr/bin"

Allowing ports with ufw

In case of a disconnect, I wanted to make sure ufw didn’t lock me out of the server remotely, so I ran the following command:

ufw allow 22

After this, I figured I’m going to be running this as a web server, so I should add HTTP and HTTPS ports as well:

ufw allow 80
ufw allow 443

Then to enable these settings, and have them applied on system reboots, I ran the following commands:

systemctl enable ufw
systemctl start ufw
ufw enable

I am honestly not sure if the ufw enable is necessary here with the systemctl enable ufw command already being run, but I did it just in case.

Disabling password logins over SSH

Next, I didn’t want to have to put in my password every time, so I began to set up SSH key authentication by creating an authorized_keys file in my ~/.ssh directory, where my public SSH keys would be stored on the server:

mkdir ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

On my local machine, I then generated an SSH key with the following command:

ssh-keygen -b 4096 -t rsa -f ~/.ssh/home-server

And sent the public key to my server using:

scp ~/.ssh/home-server.pub m455@local-ip:~/

After doing this, I logged back into the server, and moved the contents of the home-server.pub file to the ~/.ssh/authorized_keys file I made earlier using the following commands on the server:

cat home-server.pub >> ~/.ssh/authorized_keys
rm home-server.pub

The reason I did this is because SSH checks the keys in ~/.ssh/authorized_keys when you try to log into a server using key authentication.

Before disabling password logins over SSH, I needed to test the key authentication, so I didn’t lock myself out of my server. I did this by specifying the private key to use when logging into the server using the following command:

ssh -i ~/.ssh/home-server m455@xxx.xxx.x.x

If everything went well, it shouldn’t prompt me for a password, and it didn’t! Perfect! Now I knew I could go and secure SSH further.

Hardening SSH

After reading several articles on SSH hardening, I found some settings in /etc/ssh/sshd_config that I needed to change, and looked up what each of them does. GSSAPIAuthentication and KerberosAuthentication are the only two settings I’m still fuzzy about haha:

PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
KerberosAuthentication no
GSSAPIAuthentication no
X11Forwarding no
PermitUserEnvironment no
LoginGraceTime 20
PermitRootLogin no
MaxAuthTries 3
AllowUsers m455

I also commented out any AcceptEnv ... lines, because I set PermitUserEnvironment to no.

Then, to bring these changes into action, I ran systemctl restart ssh.

Next step was fail2ban. fail2ban ships with some sensible defaults for protecting SSH, but I added another rule by adding the following to /etc/fail2ban/jail.d/jail-debian.local:

[sshd]
port = 22
maxretry = 3

To enable this, I restarted fail2ban using the following command:

systemctl restart fail2ban

This little addition to fail2ban blocks attackers’ IP addresses from connecting to my server after three failed login attempts over SSH.
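For reference, bantime and findtime can also be set per-jail. Here’s a sketch of what a slightly fuller version of that file could look like (the times here are just example values, not what I’m actually running):

```
[sshd]
enabled  = true
port     = 22
maxretry = 3
# ban an IP for an hour after 3 failures within a 10-minute window
findtime = 10m
bantime  = 1h
```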

I found some information on the internet about being able to randomize ban time lengths, and it mentioned that you could just uncomment a few things to enable those kinds of bans. Unfortunately, I don’t see those settings on Debian, so I didn’t add them in case I’m using an older version of fail2ban. I’ll probably have to look those up later.

Setup a web server

The next step is actually getting something visible from the internet up: a web server! Well… it won’t be visible yet, because I haven’t gotten to port forwarding. For my setup, I am using nginx.

First I create the required directories as root for my m455 user:

mkdir -p /var/www/m455.casa/html
chown -R m455:m455 /var/www/m455.casa/html
chmod -R 755 /var/www/m455.casa
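Quick aside on that 755: it means the owner (m455) can read, write, and enter the directories, while everyone else, including nginx’s www-data worker, can read and enter them. A throwaway sanity check of what that mode looks like (using /tmp/perm-demo as a scratch path):

```shell
# create a scratch directory and give it the same mode as above
mkdir -p /tmp/perm-demo
chmod 755 /tmp/perm-demo

# stat prints the mode in octal; 755 = rwxr-xr-x
stat -c '%a' /tmp/perm-demo   # prints: 755
```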

Then I add a test website at /var/www/m455.casa/html/index.html as the m455 user.

I then make an nginx server block (I think that’s what these things are called?) for the website in /etc/nginx/sites-available/m455.casa with the following contents:

server {
  root /var/www/m455.casa/html;
  index index.html index.htm index.nginx-debian.html;

  server_name m455.casa;

  location / {
    try_files $uri $uri/ =404;
  }
}

I guess the index.htm and index.nginx-debian.html are optional, but I throw them in there anyway.
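One thing worth noting: the block above has no listen directive. If I understand nginx’s defaults right, it then listens on port 80 anyway (when nginx runs as root), but a more explicit version would add these two lines at the top of the server block:

```
listen 80;
listen [::]:80;
```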

Next I enable the server block with the following command:

ln -s /etc/nginx/sites-available/m455.casa /etc/nginx/sites-enabled/m455.casa

After this, I uncomment the following line in /etc/nginx/nginx.conf, which I don’t fully understand, but apparently it fixes several caching problems? haha:

server_names_hash_bucket_size 64;

I also stop nginx from showing its version number on error pages as an extra security measure by uncommenting the following in /etc/nginx/nginx.conf:

server_tokens off;

Basically, if there are any zero-day vulnerabilities, this hides which version of nginx I’m running, in case it’s a vulnerable one.

I then run nginx -t to test my nginx configuration, and if there aren’t any issues, I restart nginx with systemctl restart nginx.

Next, to see if my website is actually working, I find the local IP address of the server on my network, and type it into my browser. The test page I made earlier should be visible.

Configuring the router

Before I could view the test page from my external IP address, I had to forward ports. This took me a while to figure out, because I didn’t realize I was behind a double NAT.

This took several tries and a bit of reading to understand what was going on, but basically I had to forward port connections from my most-external router, to an internal router, and finally to the server.

Specifically, I had to:

- forward connections coming in on ports 3222, 443, and 80 on my most-external router to ports 22, 1443, and 1080 on my internal router, and
- forward ports 22, 1443, and 1080 on my internal router to ports 22, 443, and 80 on the server

The reason I had to change to ports 1443 and 1080 mid-way through the port-forwarding for connections on ports 80 and 443 is because ports 80 and 443 on my internal router point to my router’s login page, which causes things to break. To avoid confusing configurations on my server, and to allow myself to use regular ports like 22, 443, and 80, I just had the internal router’s port mapping change the port numbers back to normal when they point back to my server.

If I were to make an awful diagram, it would look like this:

            ___
        ___(   )_
    ___(         )_
   (   interwebz   )
    ---------------
           |
 connects through ports
      3222 443 80
           |
           V
   +--------------+
   | first router |
   +--------------+
           |
first router forwards ports
     3222  443   80
           |
           as
           |
        22 1443 1080
           |
           V
   +---------------+
   | second router |
   +---------------+
           |
second router forwards ports
        22 1443 1080
           |
           as
           |
       22  443   80
           |
           V
   +--------------+
   |   my server  |
   +--------------+

In case you couldn’t tell, the interwebz shape is a cloud.

After doing this, I was able to go to a website like icanhazip.com to look up my external IP address. I then took the external IP address and tried to visit it in my browser. To my happy surprise, it worked! I could see the test HTML page I made at /var/www/m455.casa/html/index.html earlier!

Pointing my domain name to my server’s IP address

Next was to make my m455.casa domain point to my IP address!

… but wait! I have a dynamic IP address that changes all the time! How the hell am I going to do that?

Well, let me tell you! I’m going to make my server contact my domain name registrar, Namecheap, every 5 minutes to tell it my IP address!

Luckily, Namecheap had a handy article on how to update your IP address from a browser, which means all I need to do is make an HTTP request. The command-line tool curl can do that; I download webpages and websites with it all the time!

First I had to go to the Namecheap dashboard, and click the “Domain List” button from the sidebar. Then I clicked “MANAGE” beside my m455.casa domain name.

From there, I had to go down to the “NAMESERVERS” section and change it to “Namecheap BasicDNS”, instead of the “Custom DNS” option I had selected that had allowed me to point to DigitalOcean’s name servers.

I then clicked the “Advanced DNS” button at the top of the webpage where I could add DNS records.

Here, I added two “A + Dynamic DNS Record” entries: one with a “Host” of @, and one with a “Host” of * (for subdomains), both with a “Value” of 127.0.0.1.

The reason I chose 127.0.0.1 as the “Value” for the records is because this value will be overwritten by my server’s external IP address later.

After adding the entries, Namecheap shows you a “Dynamic DNS Password” in the “DYNAMIC DNS” section. I found this section by scrolling down below the DNS record-setting section. Oh yeah, I also had to flip the toggle beside “Status” to turn on Dynamic DNS so I could set the records above.

Writing a script to update my DNS records

Okay, so, to make a script to update my DNS records with my current IP address, I SSHed into my server using its local IP address for now.

In my m455 user’s home directory, I made a script called ip-update.sh with the following contents:

#!/bin/sh

# Tell Namecheap's dynamic DNS endpoint my current IP address.
# Leaving the "ip" parameter off makes Namecheap use the IP the
# request came from, which is exactly what I want.

domain=m455.casa
password=that-weird-long-dns-password-namecheap-gave-me

# update the root domain (@) and the wildcard (*) records
curl -s "https://dynamicdns.park-your-domain.com/update?host=@&domain=$domain&password=$password"
curl -s "https://dynamicdns.park-your-domain.com/update?host=*&domain=$domain&password=$password"
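Those two curl calls are just GET requests with query parameters. Here’s a tiny sketch of how the URL gets assembled (build_update_url is a made-up helper for illustration, not something the script needs):

```shell
# build_update_url HOST DOMAIN PASSWORD
# assembles the same update URL the script above requests
build_update_url() {
  printf 'https://dynamicdns.park-your-domain.com/update?host=%s&domain=%s&password=%s' \
    "$1" "$2" "$3"
}

build_update_url '@' m455.casa hunter2
# prints: https://dynamicdns.park-your-domain.com/update?host=@&domain=m455.casa&password=hunter2
```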

Next, I had to make this run every 5 minutes, because I honestly don’t know how often my IP address changes, and 5 minutes of downtime doesn’t seem too bad if it’s ever out of sync. Plus, I don’t want to spam Namecheap with excessive curl requests haha.

To do this, I ran crontab -e as my m455 user to open its cronjobs. Cronjobs allow you to run things at specific intervals, and in my case, I wanted to run the ip-update.sh script every 5 minutes. To do this, I added the following to my m455 user’s crontab:

# Update Namecheap with my current IP every 5 minutes
*/5 * * * * /home/m455/ip-update.sh >/dev/null 2>&1
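For anyone unfamiliar with cron’s five scheduling fields, here’s the same entry with each field labelled:

```
# ┌──────── minute (*/5 = every 5th minute)
# │ ┌────── hour
# │ │ ┌──── day of month
# │ │ │ ┌── month
# │ │ │ │ ┌ day of week
# │ │ │ │ │
*/5 * * * * /home/m455/ip-update.sh >/dev/null 2>&1
```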

The reason I have >/dev/null 2>&1 there is so my system doesn’t send local mail to my m455 user about the output of the command every 5 minutes.

I then saved the crontab file and exited. For fun, I ran the script with ./ip-update.sh, and went to check my DNS records on Namecheap to see if the 127.0.0.1s were replaced with a new IP address, and they were! I was so happy, but also super sad quickly after, because when I went to m455.casa in my browser, it said the page didn’t exist.

After a bit of researching and asking friends, I learned that it takes time for DNS records to propagate across internet service providers. I also found a little excerpt on Namecheap.com mentioning it can take up to one or two days.

I ended up checking my page every 10 minutes to see if anything on m455.casa would show up, but nothing did. I think all I added was a <h1>hello there</h1> in the index.html, so it wouldn’t be hard to notice!

I grew impatient, but finally, around the 45-minute mark, I saw my silly <h1>hello there</h1>! I was actually doing it! Serving a website from a computer in my home!

Getting a valid TLS/SSL certificate for HTTPS access

Next, I eagerly added m455.casa to my ~/.ssh/config file so I could just run ssh m455.casa, instead of ssh -i ~/.ssh/home-server -p 3222 m455@m455.casa.
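The entry looks something like this (the port is 3222 because that’s what my external router forwards to the server’s port 22):

```
Host m455.casa
    HostName m455.casa
    User m455
    Port 3222
    IdentityFile ~/.ssh/home-server
```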

I then tried running ssh m455.casa, and was greeted by Debian’s MOTD on my server! I was so happy I managed to do all of this.

All I had to do now was set up Let’s Encrypt’s certbot by running the following command as root on my Debian server:

apt install python3-acme python3-certbot python3-mock python3-openssl python3-pkg-resources python3-pyparsing python3-zope.interface python3-certbot-nginx

Then I set up certificates for my domain with the following command as root:

certbot --nginx -d m455.casa

I actually got the information on how to do this from a DigitalOcean guide. Thank you again, DigitalOcean!

I then tried visiting https://m455.casa in a private browser tab, so it didn’t have the cache from when I visited http://m455.casa earlier, and the good ol’ lock in my browser’s address bar was green! The certificate worked!

That’s my story of how I learned how to “home-host”. I hope you enjoyed that little adventure. I had fun writing it! A little too much fun… I should go back and edit this post later to check for mistakes haha. For now, I’ll post it for your enjoyment. Mistakes and all!