scarif/README.md
2021-01-01 15:42:26 +00:00

[toc]
# Creating a test Nextcloud virtual machine
Taking this task one step at a time, I started by creating a Dockerised Nextcloud server on a virtual machine, following [this](https://blog.ssdnodes.com/blog/installing-nextcloud-docker/) tutorial.
## Building the VM
Creating the virtual machine was easy. I decided to use Debian as the OS (one of the ones suggested in the tutorial), using the [net iso](https://www.debian.org/distrib/netinst#verysmall).
> Make sure to give the VM ample storage, as the Nextcloud installation takes up 4GB!
It was very annoying to have to retype all of the commands in the VirtualBox window, so I decided to SSH in instead. This can be done by updating the network settings of the VM: in Adapter 1, under advanced settings and port forwarding, add the following rules (the HTTP and HTTPS rules will be used later for accessing Nextcloud):
Name|Protocol|Host IP|Host Port|Guest IP|Guest Port
---|---|---|---|---|---
SSH|TCP|127.0.0.1|2222||22
HTTP|TCP|127.0.0.1|8080||80
HTTPS|TCP|127.0.0.1|44300||443
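If you prefer the command line, the same forwarding rules can be set with `VBoxManage`; a sketch, assuming the VM is called `DebianVM` (the name is a placeholder):

```sh
# Rule format: name,protocol,host IP,host port,guest IP,guest port
VBoxManage modifyvm "DebianVM" --natpf1 "SSH,tcp,127.0.0.1,2222,,22"
VBoxManage modifyvm "DebianVM" --natpf1 "HTTP,tcp,127.0.0.1,8080,,80"
VBoxManage modifyvm "DebianVM" --natpf1 "HTTPS,tcp,127.0.0.1,44300,,443"
```

This is equivalent to filling in the table above through the GUI.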
Once that is set up, you can SSH into the VM:
```sh
% ssh {username}@127.0.0.1 -p 2222
```
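To avoid typing the address and port every time, the forward can also be captured in an `~/.ssh/config` entry on the host (the alias `scarif-vm` is my own invention):

```
Host scarif-vm
    HostName 127.0.0.1
    Port 2222
    User {username}
```

After which `ssh scarif-vm` does the same thing.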
> The VM didn't recognise my terminal as an ANSI terminal, so `backspace` would insert a space, which was very annoying.
> To prevent this, run `export TERM=ansi`.
## Configuration
As it is a local installation, Let's Encrypt would have failed, so the `docker-compose.yml` file I created looked like this:
```yml
version: '3'

services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped

  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=jk
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped

  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.scarif.local
    restart: unless-stopped

volumes:
  nextcloud:
  db:

networks:
  nextcloud_network:
```
## Running Nextcloud
After finishing the installation I needed to update the `/etc/hosts` file on the host machine adding the following line:
```
127.0.0.1 nextcloud.scarif.local
```
After that I was able to access Nextcloud!
# Creating a test Monica virtual machine
The first two steps, setting up the VM and installing Docker, are the same as in the Nextcloud process.
I used the [example self signed certificate docker configuration](https://github.com/monicahq/docker/tree/master/.examples/nginx-proxy-self-signed-ssl) to test it out.
I needed to download the example `.env` file:
```sh
curl -sS https://raw.githubusercontent.com/monicahq/monica/master/.env.example -o .env
```
One change I needed to make to the configuration was to create an `APP_KEY` using `pwgen`.
> I don't think this was strictly necessary, as it looks like the key would have been generated automatically if I hadn't done it, but then it wouldn't have persisted when restarting the container.
```sh
% sudo pacman -S pwgen
% pwgen -s 32 1 # Copy and paste result into .env file
```
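If `pwgen` isn't to hand, any 32-character random string will do. A sketch that generates one and substitutes it into the `.env` file (the `printf` line only stands in for the real `.env` downloaded above, and the `sed` pattern assumes an existing `APP_KEY=` line, as in the example file):

```sh
# Stand-in for the real .env fetched with curl above
printf 'APP_KEY=\nAPP_URL=http://localhost\n' > .env

# Generate a 32-character alphanumeric key and write it into .env
APP_KEY=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
sed -i "s|^APP_KEY=.*|APP_KEY=${APP_KEY}|" .env
grep '^APP_KEY=' .env
```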
The other change was setting the `APP_URL` to `https://monica.local:44300`.
I also needed to update the hosts file as before:
```
127.0.0.1 monica.local
```
After running `docker-compose up -d` it all worked.
# The difference between apache and fpm proxying
From what I can tell, they both use nginx.
The apache option sets up an Apache server inside the container and serves the site locally; you then need a separate nginx server to proxy to that Apache server.
I guess the benefit of that is that it's easy to set up, and Apache and PHP have worked together forever so they play nicely; however, there's less control over the configuration, and I would imagine it is slower and uses more system resources.
The second option, fpm, requires configuring the nginx site yourself and using the nginx container to link to it. This is more cumbersome to set up, but allows better configuration and is more efficient.
Actually, I was wrong: it looks like the fpm option serves the site locally through nginx and then uses the nginx-proxy image to serve the site, so in that sense it works the same way as the apache option.
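Either way, the fpm wiring boils down to an nginx server block handing `.php` requests to the fpm process over FastCGI. A minimal sketch (the `app` upstream name, domain, and paths are assumptions for illustration):

```nginx
server {
    listen 80;
    server_name example.local;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        # Hand PHP requests to the fpm container ("app" is an assumed service name)
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```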
# Benchmarking different approaches
According to [this article](https://medium.com/@wemakewaves/migrating-our-php-applications-to-docker-without-sacrificing-performance-1a69d81dcafb) there are significant performance differences between different approaches.
The approaches they used were:
* Official php apache
> This uses the official PHP apache image and exposes the container directly on port 80.
* Official php fpm
> This used the official PHP fpm image alongside an nginx container that served it through a proxy.
* Custom fpm
> This was a custom built docker image that included PHP fpm and nginx in one container.
I added a fourth approach that put an nginx reverse proxy container in front of the custom fpm image, as that would be needed if I wanted multiple containers serving from the same machine (Monica/Nextcloud/Gitea/etc.).
The results I got were similar to the original article:
Solution|Rate (requests/s)|Longest (s)|Shortest (s)|Size (MB)
---|---|---|---|---
Official fpm|143.17|0.92|0.12|
Official apache|503.52|0.53|0.02|415
Custom fpm|2197.80|0.12|0.03|336
Custom fpm proxy|1992.03|0.16|0.02|392
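For anyone wanting to reproduce this, a hypothetical mini-benchmark in the same spirit: fire N sequential requests at a throwaway local server and report how long it took. (The real comparison would use a dedicated tool like `ab` or `wrk` against each stack; the port and request count here are arbitrary.)

```sh
# Throwaway server to benchmark against (in reality, each Docker stack)
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
server_pid=$!
sleep 1

n=20
start=$(date +%s%N)
i=0
while [ "$i" -lt "$n" ]; do
  curl -s -o /dev/null http://127.0.0.1:8099/
  i=$((i + 1))
done
end=$(date +%s%N)
kill "$server_pid"

elapsed_ms=$(( (end - start) / 1000000 ))
echo "completed $n requests in ${elapsed_ms}ms"
```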
# Creating a Nextcloud virtual machine with Rancher
To create the virtual machine I needed to install VirtualBox and docker-machine, then I ran the following command:
```sh
% docker-machine create -d virtualbox \
    --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso \
    --virtualbox-memory 2048 \
    ScarifDummy # The name of the virtual machine
```
After that I only needed to run
```sh
% docker-machine start ScarifDummy
```
to boot up the server.
To SSH into the server, run
```sh
% docker-machine ssh ScarifDummy
```
Switching consoles is as easy as:
```sh
% sudo ros console switch debian
```
## Creating a Nextcloud installation
First, create the docker network:
```sh
% docker network create nextcloud_network
```
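Note that compose normally creates its own project-scoped network; to make it use the manually created one instead, the network can be declared external in the compose file. A sketch:

```yml
networks:
  nextcloud_network:
    external: true
```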
Save this file to `/var/lib/rancher/conf/docker-compose.yml`:
```yml
version: '3'

services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    # labels:
    #   - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped

  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=toor
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped

  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      # - letsencrypt  # no letsencrypt service is defined in this file
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.scarif.local
      - LETSENCRYPT_HOST=nextcloud.scarif.local
      - LETSENCRYPT_EMAIL=stofflees@gmail.com
    restart: unless-stopped

volumes:
  nextcloud:
  db:

networks:
  nextcloud_network:
```
# Accessing Rancher
I am trying to connect to Rancher from the CLI. To do that I need an API key, which means I first need access to the Rancher UI.
The UI is supposedly available as soon as Rancher is installed, but I can't seem to access it.
I have tried opening up the ports of the virtual machine by going to Settings > Network > Adapter 2 and creating a bridged adapter, which should have opened up the VM to the host.
I checked the private IP address (`hostname -I`).
I enabled and started the `rancher-server` service:
```sh
% sudo ros service enable rancher-server
% sudo ros service up rancher-server
```
I am currently still unable to access the rancher UI.
## Building the `cloud-config.yml` file
The `cloud-config.yml` file defines an initial environment so the server can automatically build the containers to spec.
```yml
rancher:
  console: debian # The main console should be debian as that is most useful
```
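RancherOS reads SSH keys from the same file, so a slightly fuller sketch might look like this (the key is a placeholder, not a real one):

```yml
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@host # placeholder key
rancher:
  console: debian
```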
# Final plan
After lots of hassle with RancherOS, I decided to forgo that option in favour of a minimal Linux distribution (Arch) with Docker running inside it.
I am more comfortable with that setup and would be more confident making it secure.
## Set up
As I am using Arch Linux, the easiest way to get a working mockup was Vagrant.
There is an easy-to-use Vagrant Arch box which made life very easy. Now I can wipe the install every time I want to try a new configuration.
Once the server is live I will have to be more careful. I intend to create a snapshot on VirtualBox that reflects the server configuration so I can test any new changes I want to make beforehand.
There will be a couple of differences between the production and development environments, mainly to do with passwords and SSL certificates.
## Nginx
Most of the docker images have their own server options, and I could have used the [nginx-proxy](https://github.com/nginx-proxy/nginx-proxy) docker image to serve most of them. However, given the benchmarks I made earlier, it seems better to have a manually configured nginx container and, where possible, to use the application socket (like PHP-fpm or unicorn).
This should hopefully make it more responsive and easier to configure.
I took inspiration from the nginx-proxy container for building the configuration when the container had its own web server.
I added an extra bit of configuration to ensure all requests are redirected to HTTPS:
```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    return 301 https://$host$request_uri;
}
```
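The HTTPS side then terminates SSL per virtual host. A sketch of the skeleton, with the site-specific part omitted (the certificate paths are assumptions based on the certs volume used earlier):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name nextcloud.scarif.local;

    # Paths assumed from the ./proxy/certs:/etc/nginx/certs volume
    ssl_certificate /etc/nginx/certs/scarif.crt;
    ssl_certificate_key /etc/nginx/certs/scarif.key;

    location / {
        # site-specific proxy_pass or fastcgi configuration goes here
    }
}
```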
## SSL
Locally I need to use self-signed certificates, and the container for that is `omgwtfssl`. Hopefully adding Let's Encrypt later will be a drop-in replacement.
## [Dashboard](https://github.com/rmountjoy92/DashMachine)
I'm not 100% happy with this container; it's pretty limited in what it can do. I might swap it out for a custom HTML and CSS file, as that's all I really need. I need to get my design skills up.
The nginx configuration for this one was pretty straightforward, as it was just a `proxy_pass` to port 5000 on the dashboard container.
There was one annoying issue: if you went to the dashboard without being logged in, it would take you to an unauthorized error page instead of the login page.
I used my nginx powers to redirect any traffic for the unauthorized page back to the login page, where it should go.
```nginx
location /unauthorized {
    return 301 https://$host/login;
}
```
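The rest of that site's configuration is essentially just the proxy; roughly (the container name `dashmachine` is an assumption):

```nginx
location / {
    # Forward everything else to the dashboard container on port 5000
    proxy_pass http://dashmachine:5000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```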
The plan is to change this to just a plain HTML file that I control and can do with as I please.
## [Nextcloud](https://nextcloud.com/)
Nextcloud was a bit of a pain, as I wanted several extras, like Redis caching and in-browser document editing.
I used [this](https://github.com/nextcloud/docker/tree/master/.examples/dockerfiles/full/fpm-alpine) docker configuration to get Nextcloud with cron configured and other features.
The nginx configuration was pretty straightforward, just copying what was provided by the documentation.
There is one option, "HSTS" (HTTP Strict Transport Security), that adds the domain of the site to a list that modern browsers check in order to force the site to use HTTPS. I already added a redirect to HTTPS, and I'm pretty confident that I will not be accessing the server over plain HTTP, so I don't think this is necessary. I'm also not sure how it would affect the other subdomains.
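For the record, enabling it would just be one header in the server block, and `includeSubDomains` is exactly the part that would affect the other subdomains:

```nginx
# Browsers remember this for max-age seconds and refuse plain HTTP;
# includeSubDomains extends that to every subdomain of the host.
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
```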
## [Collabora](https://www.collaboraoffice.com/)
Now this one was a nightmare, but going without it would have been a dealbreaker.
I was following all the advice and I just couldn't get it to work with my setup.
The main problem was SSL: in some cases it was checking for it, in other cases it wasn't. The Collabora server wouldn't accept that it had a domain name with SSL unless the Collabora container had created the SSL certificate itself.
Eventually I found the configuration that worked:
```yml
collabora:
  image: collabora/code
  restart: always
  cap_add:
    - MKNOD
  volumes:
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
  environment:
    - DONT_GEN_SSL_CERT="True"
    - domain=tower.${DOMAIN}
    - cert_domain=office.${DOMAIN}
    - server_name=office.${DOMAIN}
    - username=${COLLABORA_USER}
    - password=${COLLABORA_PASSWORD}
    - "extra_params=--o:ssl.enable=false --o:ssl.termination=true"
    - "dictionaries=de_DE en_GB en_US es_ES fr_FR it nl pt_BR pt_PT ru ro"
  networks:
    - nginx
  extra_hosts:
    - "tower.scarif.local:${LOCAL_IP}"
    - "office.scarif.local:${LOCAL_IP}"
```
The confusing bit was having to set `cert_domain` and `server_name` even though it wasn't creating a certificate. I'm not 100% sure if the extra parameters are necessary, but I'm too scared to remove them now.
The `dictionaries` are just there because I wanted to add Romanian.
No extra configuration is needed; it's just a case of rebuilding the container if I want to make changes. It doesn't store any data locally.
It is a huge memory guzzler though.
The `extra_hosts` parameter is important for local testing and also necessary for the Nextcloud config. It probably isn't necessary in production, though it may make it more responsive.
Note that the `domain` parameter is set to the URL of Nextcloud, _not_ Collabora.
## [Monica](https://www.monicahq.com/)
Monica was a bit of a pain.
It is a Laravel app, so I was more comfortable with it, but I had trouble with the storage linking and the public directory.
You see, I was using a separate nginx container, but I wanted that container to serve multiple sites, which meant it needed access to the public assets that both Nextcloud and Monica put in `www/data`.
I wanted separate folders in the nginx container for Monica and Nextcloud, but the problem was that Laravel symlinks the `storage` directory to a directory inside `public` using absolute paths, which would not match the directory structure of the nginx container.
The solution was to copy the full Monica docker build, unlink the storage directory, recreate the link with relative paths, and then match those relative paths in the nginx container.
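The relinking step can be sketched like this (the directory layout is illustrative; in the real build it happens inside the Monica image):

```sh
# Illustrative stand-in for the Laravel directory layout
mkdir -p monica/public monica/storage/app/public

# Drop the absolute-path link Laravel made, then recreate it relative,
# so it still resolves when public/ is mounted into the nginx container
rm -f monica/public/storage
ln -s ../storage/app/public monica/public/storage
```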
## [Gitea](https://gitea.io/en-us/)
Gitea was fun.
The initial set up was straight forward, yada yada yada.
Where it got interesting was wanting to use SSH to clone repositories without interrupting the main SSH of the server.
The documentation for how to do this actually changed whilst I was setting it up, and the new version was easier to follow.
The way it works is that when you add an SSH key through the Gitea interface, it adds that key to the `authorized_keys` file with a configuration option that calls the `/app/gitea/gitea` script that exists inside the container.
So on the host machine I needed to create a user with the same ID as the `git` user inside the container, and create an executable at `/app/gitea/gitea` on the host that simply forwards the SSH request to the container via a port exposed by docker.
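The host-side shim can be sketched like this (written to a local `app/gitea/gitea` path purely for illustration; on the host it lives at `/app/gitea/gitea`, and port 2222 stands in for whatever SSH port docker exposes for the container):

```sh
# Create the forwarding shim that authorized_keys entries will invoke
mkdir -p app/gitea
cat > app/gitea/gitea <<'EOF'
#!/bin/sh
# Forward the original git-over-SSH command into the Gitea container
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
EOF
chmod +x app/gitea/gitea
```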
## [Pinry](https://github.com/pinry/pinry/)
Easy enough, just another proxy setup.
The annoying thing is that this is the one container that requires active intervention after it has been built, in order to stop others from creating an account.
I need to create a superuser:
```sh
docker exec -it scarif_pinry_1 python manage.py createsuperuser --settings=pinry.settings.docker
```
## [cAdvisor](https://github.com/google/cadvisor)
Very easy to set up; this provides a detailed overview of the system and its containers.
It's not pretty but it does the job.
# Certificates
I was dreading this a lot, not being sure quite how to do it with docker.
In the end I decided not to.
I just installed certbot through pacman, along with the DigitalOcean DNS plugin.
Then it was just a case of saving my digital ocean API key to a file and running this command:
```sh
certbot certonly \
    --dns-digitalocean \
    --dns-digitalocean-credentials {path/to/file} \
    -d '*.scarif.space' \
    -d scarif.space
```
That generated a certificate and private key which I could link to in nginx.
Finally, I needed to ensure the certificate gets renewed, which I did using [systemd](https://wiki.archlinux.org/index.php/Certbot#systemd).
I think I have successfully set it up to copy the certificates to the right place when they are generated but I guess I will find out in 3 months.
The service configuration looks like this:
```ini
[Unit]
Description=Let's Encrypt renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --agree-tos --deploy-hook "cp /etc/letsencrypt/live/scarif.space/fullchain.pem /opt/ssl/scarif.space.crt && cp /etc/letsencrypt/live/scarif.space/privkey.pem /opt/ssl/scarif.space.key && docker exec scarif_nginx_1 nginx -s reload"
```
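The matching timer unit, roughly as the Arch wiki suggests, would look something like this:

```ini
[Unit]
Description=Twice daily renewal of Let's Encrypt certificates

[Timer]
OnCalendar=0/12:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```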
## Troubleshooting
- Sometimes the `bootstrap.sh` script would fail. This was because the system needed to be restarted after upgrading packages, so I split the `bootstrap.sh` script up instead.
- Docker compose doesn't work? Make sure docker is running.
## Tips and tricks
- To build all the containers from a compose file: `docker-compose up -d`
- To build the containers from a specific file: `docker-compose -f{file_name.yml} up -d`
- You can include environment variables by having a `.env` file in the same directory as your `docker-compose.yml` file (or you can specify a different file with the `--env-file` option)
- To rebuild containers from scratch: `docker-compose up -d --build --force-recreate`
- To list all containers, including stopped ones: `docker ps -a`
- To remove all stopped containers, unused networks, and all unused images: `docker system prune -a`
- To remove all unused volumes: `docker volume prune`