
Dockerizing a Laravel application in 15 minutes

Docker is a very cool tool for development. In this post I will explain how you can have your Laravel application running inside a Docker container within 15 minutes. I will skip explaining what Docker is and why you would want to Dockerize your application, and instead just link to one of the many articles online. If you have not yet installed Docker and Docker Compose, please do so first.

Docker images can be built from a wide range of base images such as Ubuntu or (preferably) Alpine Linux. Although building your own image from scratch would ensure the right fit, it can also be cumbersome, especially when you want to start hacking on your next side project quickly. Therefore I created a base image called Alpine Artisan, based on the awesome Docker Webstack, which consists of Alpine Linux, PHP-FPM and Nginx. Alpine Artisan can be found on Docker Hub and GitHub.

I preinstalled a few PHP extensions for you, so you can work with, for example, MySQL and Redis (including Laravel Horizon). One final note: I assume you have the default Laravel structure. This means the root directory of your application resembles that of a newly created Laravel app; what happens inside the app or resources folders does not matter, only that they are there.

So... Let's start the timer!

Step one: the Dockerfile

Create a new file called Dockerfile in the root of your application. Note that the file has no extension. In that file, add the following statements:

FROM jeroeng/alpine-artisan:web7.3

COPY --chown=php:nginx . /www

FROM is basically an include statement: it pulls in another image on which the current image will be based. The COPY bakes your application files into the image, so the container always has them, even when no volume is mounted. The rest of the magic happens inside the Alpine Artisan image, so you do not have to do anything else.
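
If you want to double-check that the Dockerfile builds on its own before wiring up Docker Compose, you can build the image directly; the my-app tag below is just an example name:

# Build the image from the Dockerfile in the current directory
# ("my-app" is an arbitrary example tag)
docker build -t my-app .

# Confirm the image exists
docker image ls my-app

Let's move on!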

Step two: the docker-compose.yml file

Create a docker-compose.yml next to the Dockerfile. Docker Compose is a nifty tool for building and running multi-container setups, and it certainly makes working with Docker a lot more fun. In your Docker Compose file, add the following YAML:

services:
  app:
    build: .
    ports:
      - 80:80
    volumes:
      - .:/www:delegated

  database:
    image: percona:5.7
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:-secret}
      MYSQL_DATABASE: my_database
      MYSQL_USER: jeroen
      MYSQL_PASSWORD: ${MYSQL_PASSWORD:-secret}
    volumes:
      - mysql-data:/var/lib/mysql:rw
    ports:
      - 3306:3306

  queue:
    image: redis:5-alpine
    ports:
      - 6379:6379

volumes:
  mysql-data:

If you are unfamiliar with Docker Compose, it might be a lot to take in! Let's break it down. In this file we define three services: an app, a database and a redis instance. Feel free to leave out either of the latter two if you don't need them. Each service will be a container and is either built from a local Dockerfile or based on an image from Docker Hub. Lastly, we persist the database data in a named volume (it doesn't require further configuration). If you want to persist other data as well, you can create additional volumes and mount them to the corresponding containers.
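
For example, if you also want Redis to keep its data between restarts, you could add a second named volume; the redis-data name below is just an example (the official Redis image stores its data in /data):

  queue:
    image: redis:5-alpine
    volumes:
      - redis-data:/data
    ports:
      - 6379:6379

volumes:
  mysql-data:
  redis-data: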

The app container is built from the Dockerfile we just created in the same folder. Once the containers are running, the application is reachable at localhost:80. The database container is based on a Percona (MySQL-compatible) image; the environment variables we pass set the root password and create the database and user. The redis container is very basic and does not need any further configuration.

Step three: your .env file

In order to let your app communicate with Redis and the database, change the following variables in your .env to match the values in docker-compose.yml. Note that DB_HOST and REDIS_HOST refer to the Compose service names (database and queue):

DB_CONNECTION=mysql
DB_HOST=database
DB_PORT=3306
DB_DATABASE=my_database
DB_USERNAME=jeroen
DB_PASSWORD=secret

QUEUE_CONNECTION=redis
REDIS_HOST=queue
REDIS_PASSWORD=
REDIS_PORT=6379

Step four: bring it to life!

If you are unfamiliar with Docker and Docker Compose, these are the essential commands you will need to remember for now:

  • docker-compose up: starts your already built containers. Add the -d flag to run them detached.
  • docker-compose build: builds or rebuilds the images; alternatively, pass the --build flag to the up command.
  • docker-compose ps: gives you an overview of the currently running containers.
  • docker-compose down: stops and removes the containers.

Since this is the first time we run this setup, we need to build the images and then launch the containers. I would use the up command with both the detach and build flags: docker-compose up -d --build.
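
Once the containers are up, a typical first step is to run your migrations inside the app container, which also confirms that the database connection works:

docker-compose exec app php artisan migrate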

Step five: stop the timer!

When the command has finished, your application should be accessible at localhost:80. If this port interferes with another service, change the port mapping in docker-compose.yml to, for example, 9000:80. The second port is always the one inside the Docker network (so between containers), and the first is the one exposed on your machine.
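
As a concrete example, with the mapping below in the app service the application would be reachable at localhost:9000 on your machine, while the other containers still talk to it on port 80:

    ports:
      - 9000:80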

Lastly, if you want shell access to any of your containers, use docker-compose exec app sh (replace app with another service name if needed). I also recommend aliasing docker-compose to something shorter.
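
For example, a shell alias (dc is just an example name) added to your ~/.bashrc or ~/.zshrc saves a lot of typing:

alias dc='docker-compose'

# then, for example:
dc up -d
dc exec app sh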

That's it! I hope you made it within the 15 minutes I promised at the start ;)