Creating a test Nextcloud virtual machine
Taking this task one step at a time, I started by creating a Docker Nextcloud server on a virtual machine following this tutorial.
Upgrading
First of all: SWITCH THE ROUTER DNS SERVER.
Linux
This might not be necessary anymore. I think I fixed it by linking to the GitHub repository instead of the AUR.
- Remove zfs-linux: `yay -Rns zfs-linux-lts`
- Upgrade all packages: `yay`
- Clone the zfs-linux repository: `git clone https://aur.archlinux.org/zfs-linux-lts.git`
- Update the dependencies inside `PKGBUILD` to match the Linux version
- Install zfs-linux: `cd zfs-linux-lts && makepkg -si`

You might need to make sure you have the correct `zfs-utils` installed.
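A quick way to sanity check that the versions line up (my own habit, not part of the original steps):

% pacman -Q zfs-utils zfs-linux-lts linux-lts  # the zfs versions should match each other, and the kernel version in PKGBUILD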
Docker Containers
Nextcloud is the annoying one here. Unless you want to take the risk, you need to upgrade through each major version incrementally.
You can do this by changing the base image in nextcloud/Dockerfile to nextcloud:{version}-fpm-alpine.
You should upgrade to the latest minor version of the current major version first. Once you have done that you can upgrade all the images:
sudo docker-compose pull # to upgrade the images
sudo docker pull nextcloud:{version}-fpm-alpine # pulled directly, because docker-compose can't tell that this base image is used by the nextcloud service
sudo docker-compose down # to destroy all the existing containers
sudo docker-compose up --force-recreate --build -d # to rebuild all the containers with the newest images
sudo docker-compose exec -u 1000 nextcloud php /var/www/html/occ upgrade -vvv # to run the upgrade script
sudo docker-compose exec -u 1000 nextcloud php /var/www/html/occ maintenance:mode --off # to turn the site back on
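Before moving on it's worth confirming the upgrade actually landed; occ status reports the installed version:

sudo docker-compose exec -u 1000 nextcloud php /var/www/html/occ status # check versionstring matches the target release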
sudo certbot certonly -d scarif.space,www.scarif.space,tower.scarif.space,labs.scarif.space,comms.scarif.space,office.scarif.space,rec.scarif.space,radio.scarif.space,intel.scarif.space --force-renewal
sudo cp /etc/letsencrypt/live/scarif.space/privkey.pem /opt/ssl/scarif.space.key
sudo cp /etc/letsencrypt/live/scarif.space/fullchain.pem /opt/ssl/scarif.space.crt
sudo docker-compose restart nginx
Provided that all went smoothly, do it all again if there are more major Nextcloud versions to go.
Afterwards you can point the router's DNS server back to 192.168.2.157.
Building the VM
Creating the virtual machine was easy. I decided to use Debian as the OS (one of the suggested ones in the tutorial), using the net install ISO.
Make sure to give the VM ample storage, as the Nextcloud installation takes up 4 GB!
It was very annoying having to retype all of the commands in the VirtualBox window, so I decided to SSH in. This can be done by opening the network settings of the VM and, in Adapter 1 under advanced settings and port forwarding, adding the following rules (the HTTP and HTTPS rules will be used later for accessing Nextcloud):
| Name | Protocol | Host IP | Host Port | Guest IP | Guest Port |
|---|---|---|---|---|---|
| SSH | TCP | 127.0.0.1 | 2222 | | 22 |
| HTTP | TCP | 127.0.0.1 | 8080 | | 80 |
| HTTPS | TCP | 127.0.0.1 | 44300 | | 443 |
Once that is set up you can SSH into the VM:
% ssh {username}@127.0.0.1 -p 2222
The VM didn't recognise my terminal as an ANSI terminal, so backspace would insert a space, which was very annoying. To prevent this, type:
% export TERM=ansi
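To make that stick across logins (optional, standard shell stuff):

% echo 'export TERM=ansi' >> ~/.bashrc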
Configuration
As it is a local installation, Let's Encrypt would have failed, so the docker-compose.yml file I created looked like this:
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=jk
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.scarif.local
    restart: unless-stopped
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
Running Nextcloud
After finishing the installation I needed to update the /etc/hosts file on the host machine adding the following line:
127.0.0.1 nextcloud.scarif.local
After that I was able to access Nextcloud!
Creating a test Monica virtual machine
The first two steps are the same as the Nextcloud process when it comes to setting up the VM and installing docker.
I used the example self-signed certificate Docker configuration to test it out.
I needed to download the example .env file:
curl -sS https://raw.githubusercontent.com/monicahq/monica/master/.env.example -o .env
The first change I needed to make to the configuration was to create an APP_KEY using pwgen.
I don't think this was strictly necessary, as it looks like one would have been generated automatically if I hadn't, but then it wouldn't have persisted across container restarts.
% sudo pacman -S pwgen
% pwgen -s 32 1 # Copy and paste result into .env file
The other change was setting the APP_URL to https://monica.local:44300.
I also needed to update the hosts file as before (note that /etc/hosts entries cannot include a port; the port only goes in the URL):
127.0.0.1 monica.local
After running docker-compose up -d it all worked.
The difference between apache and fpm proxying
From what I can tell, both options use nginx.
The apache option sets up an Apache server inside the container and serves the site locally; you then need a separate nginx server to proxy to that Apache server.
I guess the benefit of that is that it's easy to set up, and Apache and PHP have been buddies forever so they work well together. However, there's less control over the configuration, and I would imagine it's slower and uses more system resources.
The second option, fpm, requires configuring the nginx site yourself and using the nginx container to link to it. This is more cumbersome to set up, but allows better configuration and is more efficient.
Nope, I was wrong: it looks like the fpm option serves the site locally through nginx and then uses the nginx-proxy image to serve the site, so in that sense it works the same way as the apache option.
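For illustration, the fpm route means nginx speaks FastCGI to the PHP-FPM process directly, something like this (a sketch: the app:9000 upstream and the root are assumptions, not taken from either image):

server {
    listen 80;
    server_name nextcloud.scarif.local;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        # hand PHP requests straight to the FPM container over FastCGI
        fastcgi_pass app:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}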
Benchmarking different approaches
According to this article there are significant performance differences between different approaches. The approaches they used were:
- Official php apache
This uses the official PHP apache image and exposes the container directly on port 80.
- Official php fpm
This used the official PHP fpm image alongside an nginx container that served that container through a proxy.
- Custom fpm
This was a custom built docker image that included PHP fpm and nginx in one container.
I added a fourth approach that put an nginx reverse proxy container in front of the custom fpm one, as that would be needed if I wanted multiple containers serving on the same machine (Monica/Nextcloud/Gitea/etc.).
The results I got were similar to the original article:
| Solution | Rate (req/s) | Longest (s) | Shortest (s) | Size (MB) |
|---|---|---|---|---|
| Official fpm | 143.17 | 0.92 | 0.12 | |
| Official apache | 503.52 | 0.53 | 0.02 | 415 |
| Custom fpm | 2197.80 | 0.12 | 0.03 | 336 |
| Custom fpm proxy | 1992.03 | 0.16 | 0.02 | 392 |
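To reproduce numbers like these, something along the lines of ApacheBench works (my guess at the tooling, not necessarily what the article used):

% ab -n 1000 -c 50 http://127.0.0.1:8080/  # rate is requests per second; longest/shortest are per-request times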
Creating a Nextcloud virtual machine with Rancher
To create the virtual machine I needed to install virtualbox and docker-machine, then I ran the following command:
% docker-machine create -d virtualbox \
--virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso \
--virtualbox-memory 2048 \
ScarifDummy # The name of the virtual machine
After that I only need to run
% docker-machine start ScarifDummy
to boot up the server.
To ssh into the server run
% docker-machine ssh ScarifDummy
Switching consoles is as easy as:
% sudo ros console switch debian
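And to see which consoles are available in the first place (assuming the standard ros subcommand):
% sudo ros console list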
Creating a Nextcloud installation
% docker network create nextcloud_network
Save this file to /var/lib/rancher/conf/docker-compose.yml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    #labels:
    #  - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=toor
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      # - letsencrypt # no letsencrypt service is defined here (the companion labels above are commented out), so compose would fail on this
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.scarif.local
      - LETSENCRYPT_HOST=nextcloud.scarif.local
      - LETSENCRYPT_EMAIL=stofflees@gmail.com
    restart: unless-stopped
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
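Then bring it up from that path with the -f flag (see the tips and tricks below):

% docker-compose -f /var/lib/rancher/conf/docker-compose.yml up -d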
Accessing Rancher
I am trying to connect to Rancher from the CLI. To do that I need an API key, which in turn requires the Rancher UI.
This seems to be available as soon as you install Rancher, but I can't seem to access it.
I have tried opening up the ports of the virtual machine by going to Settings > Network > Adapter 2 and creating a bridged adapter, which should have opened up the VM to the host.
I checked the private IP address (hostname -I).
I enabled and started the rancher-server service:
% sudo ros service enable rancher-server
% sudo ros service up rancher-server
I am currently still unable to access the rancher UI.
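If I come back to this, the first thing to check is whether anything is listening on port 8080, which I believe is the default UI port for Rancher 1.x:

% sudo netstat -tlnp | grep 8080    # inside the VM
% curl -I http://{bridged-ip}:8080  # from the host, using the IP reported by hostname -I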
Building the cloud-config.yml file
The cloud-config.yml file creates an initial environment for the server to automatically build up the containers to spec.
rancher:
  console: debian # The main console should be debian as that is the most useful
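To apply changes to a running system (assuming the standard ros config subcommand):

% sudo ros config merge -i cloud-config.yml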
Final plan
After lots of hassle with RancherOS I decided to forgo that option in favour of a minimal Linux distribution (Arch) running Docker inside it.
I am more comfortable with that setup, and with making it secure.
Set up
As I am using Arch Linux, the easiest way to get a working mockup was Vagrant.
There is an easy-to-use Vagrant Arch image which made life very easy. Now I can wipe the install clean every time I want to try a new configuration.
Once the server is live I will have to be more careful. I intend to create a snapshot on VirtualBox that reflects the server configuration so I can test any new changes I want to make beforehand.
There will be a couple of differences between the production and development environments, mainly to do with passwords and SSL certificates.
Nginx
Most of the Docker images come with their own server options, and I could have used the nginx-proxy Docker image to serve most of them. However, given the benchmarks I made earlier, it seems that a manually configured nginx container would be better, using the application socket (like PHP-FPM or Unicorn) where possible.
This hopefully will make it more responsive and easier to configure.
I took inspiration from the nginx-proxy container for building the configuration when the container had its own web server.
I added an extra bit of configuration to ensure all requests are redirected to HTTPS:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
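The HTTPS side then looks something like this (a sketch: I'm assuming the /opt/ssl copies from the Certificates section below are mounted into the container at the same path, and the upstream name is illustrative):

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name nextcloud.scarif.space;

    # certificates copied into place by the certbot deploy hook
    ssl_certificate     /opt/ssl/scarif.space.crt;
    ssl_certificate_key /opt/ssl/scarif.space.key;

    location / {
        proxy_pass http://nextcloud-app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}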
SSL
Locally I need to use self-signed certificates, and the container for that is omgwtfssl. Hopefully it will be a drop-in replacement when I add Let's Encrypt.
Dashboard
I'm not 100% happy with this container. It's pretty limited in what it can do, so I might swap it out for a custom HTML and CSS file, as that's all I really need. Need to get my design skills up.
The nginx configuration for this one was pretty straightforward: just a proxy_pass to port 5000 on the dashboard container.
There was one annoying issue: if you went to the dashboard without being logged in, it would take you to an unauthorized error page instead of the login page.
I used my nginx powers to redirect any traffic to the unauthorized page back to the login page where it should go.
location /unauthorized {
return 301 https://$host/login;
}
The plan is to change this to just a plain HTML file that I control and can do with as I please.
Nextcloud
Nextcloud was a bit of a pain, as I wanted several extra parts to it, like Redis caching and in-browser document editing.
I used this docker configuration to get Nextcloud with cron configured and other features.
The nginx configuration was pretty straightforward, just copying what was provided by the documentation.
There is one option, "HSTS", which adds a header telling browsers to only access the site over HTTPS from then on. I already added a redirect to HTTPS, and I'm pretty confident that I will not be accessing the server over HTTP, so I don't think this is necessary. I'm also not sure how it would affect the other subdomains.
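If I do enable it later, it's a single header in the server block, and includeSubDomains is the part that would pull the other subdomains in:

add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;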
External drive
DigitalOcean is great; the constant uptime and the speed make it well worth the extra cost.
The biggest downside is the limited storage.
To solve this I have created a very simple local SFTP server on a Raspberry Pi.
This lets me keep all of my large files that don't need rapid access on a terabyte hard drive at home (which can also be accessed on the local network), and connect to it through Nextcloud so I have access to it anywhere.
The steps to build are as follows:
- Install Armbian on the Pi.
- In the setup select bash as it will boot up faster when SSHing.
- There is a relatively slow and unnecessary dynamic banner when connecting. To disable it, go to `/etc/default/armbian-motd` and change the first line to:
  `MOTD_DISABLE="config header tips updates sysinfo"`
- Automatically mount the SSD:
  - Find out the ID of the SSD partition with `lsblk` and `blkid`
  - Update the `/etc/fstab` file:
    `UUID=<ID> /mnt/drive vfat noatime,x-systemd.automount 0 2`
- Add ufw firewall rules (see the sketch below)
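For the firewall, this is the sort of thing I mean (a sketch; the subnet is an assumption about the home network):

# deny everything inbound except SSH/SFTP from the local network
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.0.0/16 to any port 22 proto tcp
sudo ufw enable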
Future tasks with External drive
- Send a request to automatically update IP address on VPS
- Automatic backups of local and Nextcloud files
Collabora
Now this one was a nightmare, but its absence would have been a dealbreaker.
I was following all the advice and I just couldn't get it to work with my setup.
The main problem was SSL. In some cases it was checking for it, in other cases it wasn't. The Collabora server wouldn't realise that its domain was served over SSL when the container wasn't generating the SSL certificate itself.
Eventually I found the configuration that worked:
collabora:
  image: collabora/code
  restart: always
  cap_add:
    - MKNOD
  volumes:
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
  environment:
    - DONT_GEN_SSL_CERT="True"
    - domain=tower.${DOMAIN}
    - cert_domain=office.${DOMAIN}
    - server_name=office.${DOMAIN}
    - username=${COLLABORA_USER}
    - password=${COLLABORA_PASSWORD}
    - "extra_params=--o:ssl.enable=false --o:ssl.termination=true"
    - "dictionaries=de_DE en_GB en_US es_ES fr_FR it nl pt_BR pt_PT ru ro"
  networks:
    - nginx
  extra_hosts:
    - "tower.scarif.local:${LOCAL_IP}"
    - "office.scarif.local:${LOCAL_IP}"
The confusing bit was setting cert_domain and server_name even though it wasn't creating a certificate. I'm not 100% sure the extra parameters are necessary, but I'm too scared to remove them now.
The dictionaries are just there because I wanted to add Romanian.
No extra configuration is needed; it's just a case of rebuilding the container when I want to make changes. It doesn't store any data locally.
It is a huge memory guzzler though.
The extra_hosts parameter is important for local testing and also needed by the Nextcloud config. It probably isn't necessary in production, but it should make things more responsive.
Note that the domain parameter is set to the URL of Nextcloud, not Collabora.
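A useful smoke test is the WOPI discovery endpoint, which is what Nextcloud's Collabora app queries:

% curl https://office.scarif.space/hosting/discovery # should return discovery XML, not an error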
Monica
Monica was a bit of a pain.
It is a Laravel app, so I was more comfortable with it, but I had trouble with the storage linking and the public directory.
You see, I was using a separate nginx container, but I wanted that container to serve multiple sites, which meant it needed access to the public assets that both Nextcloud and Monica put in www/data.
I wanted separate folders in the nginx container for Monica and Nextcloud, but the problem was that Laravel symlinks the storage directory to a directory in the public directory using absolute paths, which would not match the directory structure of the nginx container.
The solution was to copy the full Monica Docker build, unlink the storage directory, recreate the link with relative paths, and then match those relative paths in the nginx container.
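Roughly, the relink looked like this (a sketch; the exact paths depend on where the build puts things):

# inside the copied Monica build
cd public
rm storage                           # drop the absolute symlink Laravel created
ln -s ../storage/app/public storage  # relative link, so it also resolves inside the nginx container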
Gitea
Gitea was fun.
The initial setup was straightforward, yada yada yada.
Where it got interesting was wanting to use SSH to clone repositories without interrupting the main SSH of the server.
The documentation for how to do this actually changed whilst I was setting it up, and became easier to follow.
The way it works is that when you add an SSH key through the Gitea interface, it adds that key to the authorized_keys file with a configuration option that calls the /app/gitea/gitea script that exists inside the container.
So on the host machine I needed to create a user with the same ID as the git user inside the container, and create an executable at /app/gitea/gitea on the host that simply proxies the SSH request to the container via a port exposed by Docker.
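The host-side executable ends up being a tiny shim that forwards the original git command to the container's SSH port (this follows the shape of the documented approach; I'm assuming the container publishes SSH on port 2222):

#!/bin/sh
# /app/gitea/gitea on the host
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"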
Pinry
Easy enough, just another proxy setup.
The annoying thing is that this is the one container that requires active intervention after it has been built, in order to prevent others from creating an account.
I need to create a super user:
docker exec -it scarif_pinry_1 python manage.py createsuperuser --settings=pinry.settings.docker
cAdvisor
Very easy to set up, and it provides a detailed overview of the system and the containers.
It's not pretty but it does the job.
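For reference, the container just needs read access to the host's filesystem and Docker state (a compose sketch based on cAdvisor's documented mounts; the service name is mine):

cadvisor:
  image: google/cadvisor
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
  restart: unless-stopped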
Certificates
I was dreading this a lot, as I wasn't sure quite how to do it with Docker.
In the end I decided not to do it with Docker at all.
I just installed certbot through pacman, along with the DigitalOcean DNS plugin.
Then it was just a case of saving my DigitalOcean API token to a file and running this command:
certbot certonly \
  --dns-digitalocean \
  --dns-digitalocean-credentials {path/to/file} \
  -d '*.scarif.space' \
  -d scarif.space
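(The wildcard is quoted so the shell doesn't try to expand it.) The credentials file itself is one line, and should be chmod 600; the key name comes from the plugin's documentation and the token value here is obviously a placeholder:

dns_digitalocean_token = 0000111122223333444455556666777788889999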
That generated a certificate and private key which I could point nginx at.
Finally, I needed to make sure the certificate gets renewed, which I did using systemd.
I think I have successfully set it up to copy the certificates to the right place when they are renewed, but I guess I will find out in three months.
The service configuration looks like this:
[Unit]
Description=Let's Encrypt renewal
[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --agree-tos --deploy-hook "cp /etc/letsencrypt/live/scarif.space/fullchain.pem /opt/ssl/scarif.space.crt && cp /etc/letsencrypt/live/scarif.space/privkey.pem /opt/ssl/scarif.space.key && docker exec scarif_nginx_1 nginx -s reload"
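The service is triggered by a matching timer, enabled with systemctl enable --now on the timer unit (a sketch: a daily check is fine, since certbot only renews certificates that are close to expiry):

[Unit]
Description=Let's Encrypt renewal timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target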
Troubleshooting
- Sometimes the `bootstrap.sh` script would fail. This was because the system needed to be restarted after upgrading packages. Instead I separated out the `bootstrap.sh` file.
- Docker Compose doesn't work? Make sure Docker is running.
Tips and tricks
- To build all the containers from a compose file: `docker-compose up -d`
- To build the containers from a specific file: `docker-compose -f {file_name.yml} up -d`
- You can include environment variables by having a `.env` file in the same directory as your `docker-compose.yml` file (or you can specify one with the `--env-file` option)
- To rebuild containers from scratch: `docker-compose up -d --build --force-recreate` (this will not remove any data!)
- To list all containers, including stopped ones: `docker ps -a`
- To remove all unused containers, networks and images: `docker system prune -a`
- To remove all unused volumes: `docker volume prune`
- To upgrade all images, first run `docker-compose pull` followed by `docker-compose up -d --build`. Be careful, this usually breaks something.
- To connect to the database you can run: `docker run -it --network scarif_db --rm mariadb mysql -h scarif_db_1 -p`
- To get an interactive shell for a container, run `docker exec -it {container} /bin/sh`
TODO
- Set up docker
- Set up monica
- Set up gitea
- Set up nextcloud
- Set up gitea SSH
- Set up collabora for nextcloud
- Set up dashboard
- ~~Set up docker dashboard~~
- Set up bookmarking (pinry)
- ~~Set up monitoring~~
- Set up email server
- Set up letsencrypt
- Set up accounting
- Set up external storage on pi
- Set up blog
- Set up healthcheck
- Set up firewall
- Set up lychee photo viewer
- Replace dashboard with raw HTML