Automatically Backing Up Zones Stored In ClouDNS
I recently moved DNS providers, migrating my domains to ClouDNS.
What I didn't write about in that post, though, was setting up local backups of my zones.
This was something that I previously had with DNSMadeEasy (though it had been a bit of a pain to set up), so I wanted to make sure that I retained that ability with ClouDNS.
This post describes how to easily automate fetching of ClouDNS records in BIND format.
* * *
#### Authentication Credentials
The ClouDNS API requires that you present some API-specific credentials.
To create these, you
* Login to ClouDNS.net
* Go to `API & Resellers`
* Under `API Users` click `Add New User`
You're then prompted to provide a password and (optionally) one or more IP addresses to restrict the user to.
The ability to restrict API access to specific IPs is a nice addition.
Once you've saved the user, it will show up in the API users table along with its `auth-id` and (if you set one) the IP address(es) that the user must connect from:
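Before building anything on top of the API, it's worth checking that the new credentials actually work. A minimal sketch, assuming ClouDNS's `login/login.json` test endpoint (check the API docs if in doubt):

```shell
# Sketch: quick credential check before wiring up any automation.
# Assumes ClouDNS's login test endpoint (login/login.json).
check_cloudns_auth() {
    curl -s -d "auth-id=${CLOUD_ID}&auth-password=${CLOUD_TOKEN}" \
        https://api.cloudns.net/login/login.json | jq -r '.status'
}

# Usage (requires real credentials in CLOUD_ID / CLOUD_TOKEN):
# [ "$(check_cloudns_auth)" = "Success" ] && echo "credentials OK"
```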
* * *
#### Making Requests
Once you've got an `auth-id` and password, making requests to the API is easy.
The credentials can be provided in either the `POST` body or the query string (tip: **never** put them in the query string, where they're far more likely to end up in logs).
For example, to list the DNS zones that ClouDNS hosts for us, we can do the following:
curl \
-d "auth-id=${CLOUD_ID}&auth-password=${CLOUD_TOKEN}&page=1&rows-per-page=50" \
https://api.cloudns.net/dns/list-zones.json
The response is a list of objects, each describing a configured zone:
[
{
"name": "bentasker.co.uk",
"id": "894172",
"type": "master",
"group": "None",
"hasBulk": false,
"zone": "domain",
"status": "1",
"serial": "2025022423",
"isUpdated": 1
},
...
]
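Note that `list-zones` is paginated, so if you host more zones than fit on one page you'd need to iterate. A sketch of how that might look, assuming the `dns/get-pages-count.json` endpoint (check the API docs before relying on it):

```shell
# Sketch: iterate over all pages of list-zones (the endpoint is paginated).
# Assumes the dns/get-pages-count.json endpoint returns a bare page count.
list_all_zones() {
    local rows=50
    local pages
    pages=$(curl -s -d "auth-id=${CLOUD_ID}&auth-password=${CLOUD_TOKEN}&rows-per-page=${rows}" \
        https://api.cloudns.net/dns/get-pages-count.json)
    for ((page=1; page<=pages; page++)); do
        curl -s -d "auth-id=${CLOUD_ID}&auth-password=${CLOUD_TOKEN}&page=${page}&rows-per-page=${rows}" \
            https://api.cloudns.net/dns/list-zones.json | jq -r '.[] | .name'
    done
}

# Usage (requires real credentials):
# list_all_zones
```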
Fetching records for a given zone is similarly easy:
curl \
-d "auth-id=${CLOUD_ID}&auth-password=${CLOUD_TOKEN}&domain-name=${domain}" \
https://api.cloudns.net/dns/records-export.json
With the response format looking like this:
{
"status": "Success",
"zone": "$ORIGIN bentasker.co.uk.\n@\t3600\tIN\tSOA\tpns61.cloudns.net. ..."
}
* * *
#### Backup Script
The following script expects that API credentials are provided in environment variables `CLOUD_TOKEN` and `CLOUD_ID`:
#!/bin/bash
#
# Backup records from ClouDNS into a
# BIND format zone file
#
# Backup ClouDNS zones
curl -s -d "auth-id=${CLOUD_ID}&auth-password=${CLOUD_TOKEN}&page=1&rows-per-page=50" https://api.cloudns.net/dns/list-zones.json | jq -r '.[] | .name' | while read -r domain
do
curl -s -d "auth-id=${CLOUD_ID}&auth-password=${CLOUD_TOKEN}&domain-name=${domain}" https://api.cloudns.net/dns/records-export.json | jq -r '.zone' > "${domain}.zone"
done
This uses `jq` to parse the JSON and write out a BIND format file:
$ORIGIN bentasker.co.uk.
@ 3600 IN SOA pns61.cloudns.net. support.cloudns.net. 2025022423 7200 1800 1209600 3600
@ 3600 IN NS pns61.cloudns.net.
@ 3600 IN NS pns62.cloudns.com.
@ 3600 IN NS pns63.cloudns.net.
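Because the output is standard BIND format, you can also sanity-check each backup with `named-checkzone` (part of BIND's utilities, so this is optional and assumes the tool is installed). The filename stem doubles as the zone name:

```shell
# Optional sanity check: validate each exported file with named-checkzone.
# Assumes files are named <zone>.zone, as produced by the backup script.
validate_zones() {
    for f in *.zone; do
        named-checkzone "${f%.zone}" "${f}" || echo "Validation failed: ${f}"
    done
}

# Usage: run in the backup directory after cloudns_backup.sh
# validate_zones
```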
* * *
#### Revision Control
I've long wrapped my DNS backups in revision control: it means that if I'm making a substantial change, I can trigger a backup and then write a commit message referencing the relevant ticket.
To do this, I create a `git` repo with a copy of the backup script in it:
git init dns_backup
cd dns_backup
# optional add remote: git remote add origin <url>
# Copy and commit the backup script
cp ~/cloudns_backup.sh ./
git add cloudns_backup.sh
git commit -m "feat: add backup script"
There's then a module in my backups which calls the script and commits any results:
cd ~/dns-backup/
./cloudns_backup.sh
git add .
git commit -m "chore: Auto changes detected"
git push origin main
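One possible refinement (a sketch, not the module itself): only commit and push when the backup actually changed something, so the history isn't littered with empty commits.

```shell
# Sketch: commit and push only when the backup changed something.
# Assumes the repo layout above; the commit message mirrors the one used there.
commit_if_changed() {
    git add .
    # `git diff --cached --quiet` exits 0 when the index matches HEAD
    if git diff --cached --quiet; then
        echo "No changes detected"
    else
        git commit -m "chore: Auto changes detected"
        git push origin main
    fi
}

# Usage:
# cd ~/dns-backup && ./cloudns_backup.sh && commit_if_changed
```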
* * *
#### Conclusion
Getting backups of ClouDNS data up and running is **really** straightforward. When I originally set them up for DNSMadeEasy, I had to spend quite a lot of time turning them into a format that'd be useful, whereas ClouDNS will happily spit a BIND format file straight out.