To become GDPR-Compliant, host Nextcloud Servers in Germany

A small weekend automation project with Terraform & Ansible

Martin Jahr
6 min read · Oct 4, 2020
Start screen of our Learn to Learn Git Circle (in German)

Motivation

Nextcloud as a communication solution has been waiting in my backlog for a long time; again and again something else came up. Now I have a real use case: our Git for Kids Learning Circle has just started for the first time, and apart from Zoom it doesn't really have a communication platform yet. What could be more obvious than to finally test Nextcloud?

In view of the latest data protection case law, it has also become clear once again that we, the consumers of the large platforms, have to move. As long as there is a profitable market, court rulings alone will not bring about change. For this reason, and because community platforms always involve a lot of personal data, I decided not to look towards AWS this time, but to use a German cloud provider.

The Day of German Unity is coming just in time for a small one-day project.

Project

I chose Hetzner Cloud because it has cheap offers (just under 6 EUR compared to about 30 EUR for a comparable setup at AWS) and, above all, because there is a Terraform provider that lets me build the platform with the tools I am used to. The data centers are located in Germany and Finland, all selectable via the API.

To make things fast, I will not start with the tar distribution this time, but use Nextcloud's Docker services. There you can find ready-made examples with database, Let's Encrypt certificates and a Redis cache. I will use Ansible for the configuration.

Implementation

The target architecture should look like the one shown in the image. However, there is still a lot to be done based on Nextcloud's GitHub example.

Cloud account

First comes the setup of the virtual machine in the Hetzner Cloud. You need an account there; it is free of charge and quickly set up. In the first step, we create an API token for Terraform:

Copy the generated value immediately; it is shown only once.

Then, on the Linux console, create an SSH key without a passphrase using ssh-keygen. The public key is pasted into the SSH key box in the Hetzner console. We store the private key in a suitable place (for example in the file ~/.ssh/id_rsa_hetzner).
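The key generation boils down to one command; the file name below is simply the one used later in the article, and the key type and size are my own choices:

```shell
# Generate a dedicated, passphrase-less key pair for the Hetzner VM
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_hetzner -N "" -q

# This is the public key to paste into the Hetzner console:
cat ~/.ssh/id_rsa_hetzner.pub
```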

That’s all we need.

Virtual Machine

The Terraform Script is quickly written:

variable "access_token" {}
variable "env_name" {}
variable "env_stage" {}
variable "system_function" {}
variable "instance_image" {}
variable "instance_type" {}
variable "location" {}
variable "keyname" {}

provider "hcloud" {
  token = var.access_token
}

resource "hcloud_server" "server" {
  name        = format("%s-%s-%s", var.env_stage, var.env_name, var.system_function)
  image       = var.instance_image
  server_type = var.instance_type
  location    = var.location
  labels = {
    "Name"     = var.env_name
    "Stage"    = var.env_stage
    "Function" = var.system_function
  }
  ssh_keys = [var.keyname]
}

output "server-ip" {
  value = hcloud_server.server.ipv4_address
}

Since the Hetzner provider is not one of HashiCorp's built-in providers, a versions.tf file pointing to it is needed in the same directory.

terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
  required_version = ">= 0.13"
}

Depending on whether you run Terraform on the command line or in Terraform Cloud (the latter is recommended because it connects nicely to the Git repository), you will need a variables file or the variable configuration in the cloud. I use a Terraform script that creates the Terraform Cloud workspace for me, so this is done quickly.

access_token takes the value from the Hetzner console, and keyname is the name we gave the SSH key in the Hetzner console, in this case tfh-server. As location I take Nuremberg (nbg1), as image ubuntu-18.04, and for the machine I choose a cx21 from the Hetzner list: 2 vCPU, 4 GB RAM and 40 GB local disk should be enough for our small project.
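Put together, a variables file matching these choices could look like this sketch (the env_name, env_stage and system_function values are illustrative assumptions; the token is of course not committed to the repository):

```hcl
# terraform.tfvars — illustrative values for the variables declared above
access_token    = "PASTE-HETZNER-API-TOKEN-HERE"
env_name        = "nextcloud"   # assumed project name
env_stage       = "prod"        # assumed stage
system_function = "app"         # assumed function label
instance_image  = "ubuntu-18.04"
instance_type   = "cx21"
location        = "nbg1"
keyname         = "tfh-server"
```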

Run Terraform and lo and behold: we get an IP address that we can reach with the generated private key ( ssh -i ~/.ssh/id_rsa_hetzner root@123.45.67.89 ). With a few project configuration tweaks and some scripts in my development environment I save myself the many parameters and soon only call ssh 123.45.67.89, which I have to do a few more times.
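One simple way to drop the extra parameters, as a sketch: a host entry in the SSH client configuration (the IP is the placeholder from the example above):

```
# ~/.ssh/config — lets "ssh 123.45.67.89" pick user and key automatically
Host 123.45.67.89
    User root
    IdentityFile ~/.ssh/id_rsa_hetzner
```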

Ansible

The sample code from Nextcloud first needs to be transferred to Ansible. I essentially stick to the structure of the example. There are just a few points where it still fails and I have to make improvements.

I like to work with the default directories specified by Ansible, so my playbook is spread over several directories. I won't explain here how to call a playbook. I have built a script for it, which is not publishable :) But the structure and the following info should be sufficient for experienced Ansible users. I'll spare myself the output of the complete code here and refer to a GitHub copy of my project, originally created in GitLab.

Main script

The main script nextcloud_server.yml only starts the role in which the complete initialization takes place. The python3 interpreter setting is important here, because the Docker modules are quite sensitive about it.

- hosts: NEXTCLOUD_SERVER
  gather_facts: no
  ignore_errors: yes
  become: yes
  become_user: root
  vars:
    ansible_python_interpreter: /usr/bin/python3
  roles:
    - 001_init_nextcloud_server

Initialization script

The 001_init_nextcloud_server/tasks/main.yml script contains the complete configuration. We install some helper tools as well as docker-ce and docker-compose, and then copy the structure needed for docker-compose. Some data is variable, so Ansible's Jinja templates are used. The docker-compose.yml.j2 template shows the components and their dependencies well:

  • db - The database remains unchanged except for the password parameter.
  • redis - I had to build a new image to be able to set the password in redis. Without a password, which must be identical in redis and in the App Server configuration, the system won’t boot. Otherwise it is the default redis image.
  • app - unchanged except for the variable data and the redis password in REDIS_HOST_PASSWORD. The modification of the config.php file is not done here, so I don’t have to rebuild this container. The changes are done after the startup, because the docker volumes are also accessible on the host. Not elegant but pragmatic.
  • cron - Unchanged
  • proxy - unchanged in the definition, except for an inserted dependency. Since the proxy always responded with a 503 - “Service not available” error, I had to change the nginx definition. It is generated from the template nginx.tmpl when the container starts, and a corrected copy of that template is included here in a new build.
  • letsencrypt-companion - Unchanged, there were no problems here.
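The service layout described in the list above can be sketched roughly like this; this is not the full template from the repository, and the Jinja variable names are my own illustrative assumptions:

```yaml
# Sketch of docker-compose.yml.j2 — structure only, not the complete file
version: '3'
services:
  db:
    image: mariadb
    environment:
      - MYSQL_PASSWORD={{ db_password }}        # assumed variable name
  redis:
    build: ./redis                              # custom image so a password can be set
  app:
    image: nextcloud:fpm
    environment:
      - REDIS_HOST_PASSWORD={{ redis_password }} # must match the redis password
    depends_on:
      - db
      - redis
  cron:
    image: nextcloud:fpm
  proxy:
    build: ./proxy                              # rebuilt with a corrected nginx.tmpl
  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
```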

The last call in the initialization script was planned as the start of docker-compose. With it, the server and web access already run fine, apart from some database deadlocks when deleting files; this seems to be a known problem, though.

It is important that the configured domain is already used for access at the first login, because the Nextcloud setup is only completed with the first login and uses the calling context.

However, I still had to fix one inconsistency: at the end of this project I adjusted the app server's config.php as described above, because the external Nextcloud clients did not work. The reason is that no https protocol was defined in the example, and the entries for overwriteprotocol and overwritehost are missing. Restarting the container makes these changes take effect, so mobile access is now also possible.
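The additions to config.php amount to two entries like the following sketch; the domain is a placeholder for whatever was configured for the proxy:

```php
<?php
// Additions to the Nextcloud config.php described above
$CONFIG = array (
  // ... existing configuration ...
  'overwriteprotocol' => 'https',
  'overwritehost'     => 'cloud.example.org',  // placeholder domain
);
```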

Conclusion

My holiday project has been implemented to my satisfaction, and our small initiative will be able to work well with it. For larger production deployments there is of course still a lot to be done, e.g. data backup, system hardening, etc. But here, a “Good Enough Deployment” is sufficient.

The inconsistencies in the sample code were a bit annoying; it took a lot of trial and error until everything worked. I'm glad I use Terraform and Ansible, so I was able to quickly rebuild the setup whenever something got fuzzy.

My first Terraform project outside the usual AWS biotope worked well. It is also very pleasing that the cost meter at Hetzner only shows 25 cents after one day of operation…


Martin Jahr

Digital Designer & life-long learner of computers & humans. Now up to create, coach and deliver learning deployment strategies in Germany where things are late.