How to use Laravel with Docker

Why we’ve switched from Vagrant to Docker

We’ve recently started moving away from our Homestead-based Vagrant setup for developing Laravel apps. Docker is lightweight and quicker to start, and many of the projects we work on require additional PHP extensions or server configuration, which Docker makes much easier to manage.

Previously, for convenience, we were running multiple Laravel sites on one Vagrant box. Vagrant best practice is to use a separate box for each project, but that brings sysadmin overhead: keeping all the separate operating systems up to date, plus the disk space (we use SSDs) that each OS image takes up. This led us to using a single Vagrant box with multiple projects on it.

It’s not long before you end up with a bloated Vagrant box full of dependencies that aren’t project specific. A typical development workflow would be something like: the lead developer installs the dependencies for a project on their Vagrant box and communicates those changes to the rest of the team via Slack or email. These configuration changes weren’t stored in code, so the project had to rely on some sort of documentation. Suddenly, rebuilding the project from scratch meant reviewing that documentation, and we were no longer all developing on exactly the same Vagrant box, which leads to “it works on my machine” issues.

Now that we’re using Docker, we can alter our base containers as needed and commit the Docker Compose file to the repository for the Laravel app, which means everyone on the project has everything they need in one repo - no documentation to maintain. Collaborators on the project can simply run docker-compose up and be up and running with exactly the same setup, and as a bonus, we can use the same Docker setup as part of our automated testing workflow (we’ll cover this in a future blog post).
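As an illustration, a minimal docker-compose.yml for a project set up this way might look roughly like the sketch below. The service names, image names and credentials here are assumptions for illustration, not Dockervel’s actual file:

```yaml
# Hypothetical minimal compose file - illustrative only
version: '2'
services:
  web:
    image: dockervel/web          # assumed image name
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www            # the Laravel app lives in ./www
    environment:
      - VIRTUAL_HOST=laravel.docker.local
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=laravel
```

Because this file lives in the project repository, changing a PHP extension or service becomes a commit everyone pulls, rather than a Slack message.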

There was one issue though: one of the best features of our Vagrant setup was that it ran multiple Laravel websites, which meant we didn’t need to stop the box and start a different one if we needed to jump into another project for a few minutes. Since we want our Docker setup committed to the repository for each project, we wanted to avoid running multiple websites in one container. But we still wanted to be able to access more than one website at a time, so we needed a solution for this.

Core setup

We based the core containers we’re using on Dockervel but made a few changes:

  • We’re using MySQL instead of MariaDB

  • We’re using Ubuntu for the web server instead of Alpine since Alpine doesn’t support some of the extensions we commonly need to use (mainly php5-mssql, ext-xml and php5-ctype)

  • And since we’d need to provide these extensions in the composer image as well, we decided to use composer on the web server image instead of running a separate container for it

Dockervel comes with some helpful aliases, which I updated so they work with our setup, and I added some extra aliases such as dsh for shell access to the web server. I also added a check so that the aliases work in zsh and dash, since we don’t all use bash.
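For a flavour of what these look like, here is a sketch of function-style definitions. The container name dockervel_workspace and the exact command bodies are assumptions; the real definitions live in scripts/aliases.sh in the repo. Functions are used instead of alias because non-interactive dash doesn’t expand aliases, so the same file works under bash, zsh and dash:

```shell
# Hypothetical alias sketch - see scripts/aliases.sh for the real definitions.
# "dockervel_workspace" is an assumed container name.

# Shell access to the web server container
dsh() { docker exec -it dockervel_workspace bash "$@"; }

# Run artisan commands inside the container
dartisan() { docker exec -it dockervel_workspace php artisan "$@"; }

# Fix ownership of the www folder after the container creates files as root
dpermit() { sudo chown -R "$USER": www; }
```

With these loaded, `dartisan make:auth` from the section below is just a wrapper around docker exec.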

Access multiple apps simultaneously

To run multiple Laravel apps at a time, I decided to use an automated nginx reverse proxy. However, we still needed to add the virtual host name to our local /etc/hosts file for this to work. To avoid doing that manually, I created a script which runs the nginx reverse proxy container, then the containers required for Laravel, and then adds a line to your hosts file if it doesn’t already exist:

# Start the proxy
echo 'Starting the nginx reverse proxy'
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# Start the Laravel containers
echo 'Starting the Laravel containers'
docker-compose up -d
# Set the hosts file
echo 'Setting the hosts file'
VIRTUAL_HOST=$(grep 'VIRTUAL_HOST=' docker-compose.yml | tail -n1 | cut -d= -f2)
if docker-machine ip > /dev/null 2>&1; then
   IP_ADDRESS=$(docker-machine ip)
else
   # No docker-machine (e.g. native Linux), so the containers are on localhost
   IP_ADDRESS=127.0.0.1
fi
if grep -q "$IP_ADDRESS $VIRTUAL_HOST" /etc/hosts; then
   echo 'Virtual host already in hosts file'
else
   echo "$IP_ADDRESS $VIRTUAL_HOST" | sudo tee -a /etc/hosts
fi
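The “append only if missing” pattern at the heart of the script can be seen in isolation. This sketch uses a temporary file in place of /etc/hosts (so no sudo is needed) with a made-up IP and hostname:

```shell
# Demonstrate idempotent hosts-file appending against a temp file
HOSTS_FILE=$(mktemp)
ENTRY="192.168.99.100 laravel.docker.local"

add_host_entry() {
    if grep -q "$ENTRY" "$HOSTS_FILE"; then
        echo 'Virtual host already in hosts file'
    else
        echo "$ENTRY" >> "$HOSTS_FILE"
    fi
}

add_host_entry   # first run: appends the line
add_host_entry   # second run: detects it, does not duplicate
grep -c 'laravel.docker.local' "$HOSTS_FILE"   # prints 1
rm -f "$HOSTS_FILE"
```

Running the start script repeatedly is therefore safe: the hosts entry is only ever added once.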

Here’s how to use our setup

First of all, make sure you have docker and docker-compose installed. I’m on a Mac, so I use the Docker Toolbox, which is also available for Windows. It’s easier for Linux users: you simply need to run curl -fsSL https://get.docker.com/ | sh


  • Clone our Dockervel repo: git clone https://github.com/Webscope/dockervel.git

  • cd to the directory: cd dockervel

  • If you’re on Linux, run as su: su

  • Load the aliases: . ./scripts/aliases.sh

  • By default, the Laravel install will have the hostname laravel.docker.local. You can change the VIRTUAL_HOST in docker-compose.yml if you want it to be different, which you’ll definitely want if this isn’t your first Laravel install using Dockervel

  • Run dstart and you have a server running! Hit http://laravel.docker.local (or the VIRTUAL_HOST you defined in docker-compose.yml in the previous step) in your browser and you will see an nginx error message, because there is no www/public/index.php yet

  • Create a new Laravel project: dcomposer-create

  • Fix the permissions of the www folder: dpermit

  • Copy the .env file: cp .env www/.env

  • Run artisan commands: dartisan make:auth

  • Fix permissions again, as we have introduced new files: dpermit. You now have a registration system active. Go to http://laravel.docker.local (or the VIRTUAL_HOST you defined in docker-compose.yml) and register a new user to check that the database is working correctly.
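For the registration step to reach the database, the www/.env file copied above needs to point at the MySQL container rather than localhost. A sketch of the relevant lines follows; the host, database name and credentials here are assumptions, so use the values from your own docker-compose.yml:

```
DB_CONNECTION=mysql
DB_HOST=mysql          # the compose service name, not 127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD=secret
```

Inside the Compose network, containers resolve each other by service name, which is why DB_HOST is the service name rather than an IP.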

Additional useful commands:

  • npm install: dnodejs npm install

  • gulp install: dnodejs gulp install

  • gulp watch: dgulp-watch. There is now one container running gulp watch, monitoring file changes according to your gulpfile.js

  • For shell access to the web server, use dsh

More blog posts by Katie Graham
