Virtual Environments: Servers¶
SecureDrop is a multi-server system, and you may need the full server stack available in order to develop and test some features. To make this easier, the project includes a Vagrantfile that can be used to create two predefined virtual environments: staging and prod.
This document explains the purpose of each environment and how to get started working with it.
If you plan to alter the configuration of any of these machines, make sure to review the Testing: Configuration Tests documentation.
If you see test failures due to "Too many levels of symbolic links" and you are using VirtualBox, try restarting VirtualBox.
Staging¶
The staging environment is a compromise between the development and production environments. It can be thought of as identical to the production environment, with a few exceptions:
- The Debian packages are built from your local copy of the code, instead of installing the current stable release packages from https://apt.freedom.press.
- The staging environment is configured for direct SSH access so it’s more ergonomic for developers to interact with the system during debugging.
- The Postfix service is disabled, so OSSEC alerts will not be sent via email.
This is a convenient environment to test how changes work across the full stack.
First build the app code Debian packages, then bring up the staging machines and log in:

make build-debs
make staging
# Use the proper backend for your developer environment:
molecule login -s virtualbox-staging-xenial -h app-staging
# or: molecule login -s libvirt-staging-xenial -h app-staging
sudo -u www-data bash
cd /var/www/securedrop
./manage.py add-admin
pytest -v tests/
To rebuild the local packages for the app code and update them on the Xenial staging servers:

make build-debs
make staging
The Debian packages will be rebuilt from the current state of your local git repository and then installed on the staging servers.
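To sanity-check that the rebuilt packages actually landed on the staging host, you can inspect the installed version from inside the VM; a minimal sketch, assuming the VirtualBox scenario and the securedrop-app-code package name:

# Log into the app staging host (use the libvirt scenario name if that is your backend).
molecule login -s virtualbox-staging-xenial -h app-staging
# Show which version of the app package is installed vs. available.
apt-cache policy securedrop-app-code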
If you are using macOS and you run into errors from Ansible such as OSError: [Errno 24] Too many open files, you may need to increase the maximum number of open files. Some guides online suggest a procedure to do this that involves booting into recovery mode and turning off System Integrity Protection (SIP). However, SIP is a critical security feature and should not be disabled. Instead, follow this procedure to increase the file limit: set the contents of /Library/LaunchDaemons/limit.maxfiles.plist to the following:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>65536</string>
      <string>65536</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>ServiceIPC</key>
    <false/>
  </dict>
</plist>
The plist file should be owned by root:wheel:

sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
This will increase the maximum open file limits system wide on macOS (last tested on 10.11.6).
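To apply the new limit without rebooting, you can load the daemon and confirm the limits took effect; both commands below are standard launchctl invocations:

# Load the LaunchDaemon so the limit applies immediately (and at boot).
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
# Verify the soft and hard maxfiles limits.
launchctl limit maxfiles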
The web interfaces and SSH are available over Tor. A copy of the Onion URLs for the Source and Journalist Interfaces, as well as for SSH access, is written to files in the install_files/ansible-base directory.
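For example, once the playbooks have run, you could list those files and read an Onion address from one of them; the exact filenames (such as app-ssh-aths) are an assumption here, so check what the playbooks actually wrote:

# List the Tor hidden service address files written by the playbooks.
ls install_files/ansible-base/*ths
# Print the Onion address (and auth cookie) for SSH access to the app
# server, assuming a file named app-ssh-aths exists.
cat install_files/ansible-base/app-ssh-aths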
For working on OSSEC monitoring rules with most system hardening active, update the OSSEC-related configuration in install_files/ansible-base/staging.yml so that you receive the OSSEC alerts.
Direct SSH access is available for staging hosts, so you can use molecule login -s <scenario> -h app-staging, where <scenario> is either virtualbox-staging-xenial or libvirt-staging-xenial, depending on your environment.
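If you are unsure which scenario names are available in your checkout, Molecule can list them along with their instances:

# Show all Molecule scenarios and the state of their instances.
molecule list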
Production¶
This is a production installation with all of the system hardening active, but virtualized rather than running on hardware. You will need to configure prod-like secrets, or export ANSIBLE_ARGS="--skip-tags validate" to skip the tasks that prevent the prod playbook from running with Vagrant-specific info.
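For example, to bring up the prod VMs without configuring prod-like secrets, you could skip the validation tasks like so:

# Skip the validation tasks that expect real prod secrets.
export ANSIBLE_ARGS="--skip-tags validate"
vagrant up /prod/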
You can provision production VMs from an Admin Workstation (most realistic),
or from your host. If your host OS is Linux-based and you plan to use an Admin
Workstation, you will need to switch Vagrant’s default virtualization provider
to libvirt. The Admin Workstation VM configuration under Linux uses QEMU/KVM, which cannot run simultaneously with VirtualBox.
Instructions for both installation methods follow.
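If you are unsure whether KVM is already active on your host (and would therefore conflict with VirtualBox), a quick check of the loaded kernel modules can tell you:

# If this prints kvm/kvm_intel (or kvm_amd), KVM is loaded and
# VirtualBox VMs will fail to start until the modules are removed.
lsmod | grep kvm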
Switching to the Vagrant libvirt provider¶
Make sure you’ve already installed Vagrant, as described in the multi-machine setup docs.
Ubuntu 16.04 setup¶
Install libvirt and QEMU:
sudo apt-get update
sudo apt-get install libvirt-bin libvirt-dev qemu-utils qemu virt-manager
sudo /etc/init.d/libvirt-bin restart
Add your user to the libvirtd group:
sudo addgroup libvirtd
sudo usermod -a -G libvirtd $USER
Install the required Vagrant plugins for converting and using libvirt boxes:
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-mutate
If Vagrant is already installed, it may not recognize libvirt as a valid provider. In this case, remove Vagrant with sudo apt-get remove vagrant and reinstall it.
Log out, then log in again. Verify that libvirt is installed and KVM is available:
libvirtd --version
kvm-ok
Debian 9 setup¶
Install Vagrant, libvirt, QEMU, and their dependencies:
sudo apt-get update
sudo apt-get install -y vagrant vagrant-libvirt libvirt-daemon-system qemu-kvm virt-manager
sudo apt-get install -y ansible rsync
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-mutate
sudo usermod -a -G libvirt $USER
sudo systemctl restart libvirtd
Add your user to the kvm group to give it permission to run KVM:
sudo usermod -a -G kvm $USER
sudo rmmod kvm_intel
sudo rmmod kvm
sudo modprobe kvm
sudo modprobe kvm_intel
Log out, then log in again. Verify that libvirt is installed and your system supports KVM:
sudo libvirtd --version
[ `egrep -c 'flags\s*:.*(vmx|svm)' /proc/cpuinfo` -gt 0 ] && \
  echo "KVM supported!" || echo "KVM not supported..."
Set libvirt as the default provider¶
Set the default Vagrant provider to libvirt:

echo 'export VAGRANT_DEFAULT_PROVIDER=libvirt' >> ~/.bashrc
export VAGRANT_DEFAULT_PROVIDER=libvirt
To explicitly specify the libvirt provider for a single invocation instead, use the --provider flag, for example:

vagrant up --provider=libvirt /prod/
Convert Vagrant boxes to libvirt¶
Convert the VirtualBox image for Xenial from the bento project to libvirt format:

vagrant box add --provider virtualbox bento/ubuntu-16.04
vagrant mutate bento/ubuntu-16.04 libvirt
You can now use the libvirt-backed VM images to develop against the SecureDrop multi-machine environment.
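You can confirm the conversion worked by listing your local boxes; the Xenial box should appear once per provider:

# Expect bento/ubuntu-16.04 to be listed for both virtualbox and libvirt.
vagrant box list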
Install from an Admin Workstation VM¶
In SecureDrop, admin tasks are performed from a Tails Admin Workstation. You should configure a Tails VM in order to install the SecureDrop production VMs by following the instructions in the Virtualizing Tails guide.
Once you've prepared the Admin Workstation, you can start each VM:
vagrant up --no-provision /prod/
At this point you should be able to SSH into both the app-prod and mon-prod VMs.
From here you can follow the server configuration instructions to test connectivity and prepare the servers. These
instructions will have you generate SSH keys and use
ssh-copy-id to transfer
the key onto the servers.
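As a sketch of that step (the key filename is arbitrary, and the vagrant user and IPs match the example site-specific settings shown later on this page):

# Generate a dedicated keypair for the prod VMs.
ssh-keygen -t ed25519 -f ~/.ssh/securedrop_prod
# Copy the public key to each server; "vagrant" and these IPs are
# the example values used elsewhere in this document.
ssh-copy-id -i ~/.ssh/securedrop_prod.pub vagrant@10.0.1.4
ssh-copy-id -i ~/.ssh/securedrop_prod.pub vagrant@10.0.1.5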
If you have trouble SSHing to the servers from Ansible, remember to remove any old ATHS files in install_files/ansible-base.
Now, from your Admin Workstation:

cd ~/Persistent/securedrop
./securedrop-admin setup
./securedrop-admin sdconfig
./securedrop-admin install
The sudo password for the app-prod and mon-prod servers is vagrant by default.
After the install, you can configure your Admin Workstation to SSH into each VM.
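A sketch of that step, assuming ./securedrop-admin tailsconfig creates SSH host aliases named app and mon on the Admin Workstation (treat the alias names as an assumption):

# Update the Admin Workstation's Tor and SSH configuration.
./securedrop-admin tailsconfig
# Then connect using the generated host aliases (an assumption here).
ssh app
ssh mon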
Install from Host OS¶
If you are not virtualizing Tails, you can manually modify the site-specific configuration and then provision the machines from your host. You should set the following options in that configuration:

ssh_users: "vagrant"
monitor_ip: "10.0.1.5"
monitor_hostname: "mon-prod"
app_hostname: "app-prod"
app_ip: "10.0.1.4"
Note that you will also need to generate Submission and OSSEC PGP public keys and provide email credentials for sending OSSEC alerts. Refer to the document on configuring prod-like secrets for more details on those steps.
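As a rough sketch of the key-generation step (the email address and output path are placeholders; never reuse a real SecureDrop key for development):

# Generate a throwaway test key for the Submission (and similarly the OSSEC) role.
gpg --gen-key
# Export the ASCII-armored public key; sdconfig typically prompts for this file.
gpg --export --armor "test@example.com" > ~/test-submission-key.pub.asc
# Note the full fingerprint; sdconfig prompts for it as well.
gpg --fingerprint "test@example.com"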
To create the prod servers, run:
vagrant up /prod/
vagrant ssh app-prod
sudo -u www-data bash
cd /var/www/securedrop/
./manage.py add-admin
A copy of the Onion URLs for the Source and Journalist Interfaces, as well as for SSH access, is written to the Vagrant host's install_files/ansible-base directory.
By default, direct SSH access is not enabled in the prod environment. You will need to log in over Tor after initial provisioning, or set enable_ssh_over_tor to "false" before installing. Afterwards, re-run ./securedrop-admin tailsconfig to update your Admin Workstation's SSH configuration. See Connecting to VMs via SSH Over Tor or Configuring SSH for Local Access for more info.
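A minimal sketch of the SSH-over-Tor flow for that era's authenticated v2 Onion services (the app-ssh-aths filename and the vagrant user are assumptions; adjust to your setup):

# Make the client's tor daemon aware of the authenticated hidden service.
sudo sh -c 'cat install_files/ansible-base/app-ssh-aths >> /etc/tor/torrc'
sudo systemctl restart tor
# Connect through Tor using the Onion address from the aths file.
torify ssh vagrant@<onion-address>.onion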