Let’s get started with Docker

Essential Docker for ASP.NET Core MVC by Adam Freeman

We are allowed to spend time at work on a Friday afternoon exploring new technologies, so a colleague and I decided to work through this book. Microsoft have recently started supporting Docker running on Windows, and I thought this would be an interesting way to see how well the Windows Docker ecosystem has been progressing. Also, this book targets ASP.NET Core 1.1, and I wanted to see if things were easier with the newer 2.x releases of ASP.NET Core (we ended up targeting 2.0).

The first two chapters in the book are a really brief introduction to Docker, followed by a list of the docker utility’s commands.

Installing Docker on Windows was really easy, requiring us to run an installer. We did have to turn on Hyper-V for Docker to use, which clashed with the Oracle VirtualBox setup that we typically use for testing, but fortunately I had a spare machine on which I could leave Hyper-V turned on.
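
If you need to enable Hyper-V yourself, one way to do it (from an elevated PowerShell prompt, and it needs a reboot afterwards) is something like:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All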

In chapter four of the book you write a fairly simple ASP.NET Core application which you then publish.
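
If you don't have the book to hand, a roughly equivalent project can be scaffolded with the .NET CLI (the project name dockerplay is my assumption, chosen to match the DLL name used in the Dockerfile below):

dotnet new mvc -o dockerplay

Publishing then drops the compiled output into a dist folder: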

dotnet publish --framework netcoreapp2.0 --configuration Release --output dist

This application is then copied across into a Docker image as part of the Dockerfile:

FROM microsoft/aspnetcore:2.0.3
COPY dist /app
WORKDIR /app
EXPOSE 80/tcp
ENV ASPNETCORE_URLS http://+:80
ENTRYPOINT ["dotnet", "dockerplay.dll"]

which we can then use to build a Docker image.

docker build . -t apress/exampleapp -f Dockerfile
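
The resulting image can then be run as a container, mapping a host port onto port 80 that the Dockerfile exposes (the host port and container name here are arbitrary choices of mine):

docker run -d -p 3000:80 --name exampleapp apress/exampleapp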

The next chapter of the book deals with Volumes and Software Defined Networking. Volumes allow you to define some storage which can be attached to a container – this allows the container to run an application that writes its state to the file system, say a database. When we need to rebuild the container we can then re-attach the volume to the new container, and hence not lose any data.

This is where we diverged a little from the book. The book targets Linux and MySQL, whereas we wanted to use SQL Server running on Windows.

For this we pulled a pre-built image containing SQL Server Express.

docker pull microsoft/mssql-server-windows-express

And then used a volume to store the state.

docker volume create --name testdata

docker run -d -p 7002:1433 -e sa_password=ffddfdfdfdfd -e ACCEPT_EULA=Y -v testdata:c:\data microsoft/mssql-server-windows-express
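
To convince ourselves that the data really lives in the volume rather than inside the container, a quick check is to throw the container away and start a fresh one against the same volume (grab the container ID from docker ps; the password is the same throwaway value as above):

docker rm -f <sql container id>
docker run -d -p 7002:1433 -e sa_password=ffddfdfdfdfd -e ACCEPT_EULA=Y -v testdata:c:\data microsoft/mssql-server-windows-express

Anything written under c:\data survives the rebuild.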

The book moves on to SDN and the demo application uses two different network segments – one for the frontend and one for the backend. In the book, a proxy is used to load balance across the three servers that are set up.
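
The book walks through setting these networks up with docker network commands, along these lines (the names frontend and backend are my shorthand here; on Windows containers the default nat driver is used if you don't specify one):

docker network create frontend
docker network create backend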

Unfortunately there was no haproxy image that would run in a Windows container, so we decided to use NGINX. Again, we had to build our own image for this, and I couldn't base it on Nano Server (because my Windows drive had become corrupted), so it uses Windows Server Core instead.

FROM microsoft/windowsservercore
ENV VERSION 1.13.9

SHELL ["powershell", "-command"]
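
# Download nginx, unpack it to C:\ and rename the versioned folder to C:\nginx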
RUN Invoke-WebRequest -Uri http://nginx.org/download/nginx-1.13.9.zip -OutFile c:\nginx-$ENV:VERSION-win64.zip; \
	Expand-Archive -Path C:\nginx-$ENV:VERSION-win64.zip -DestinationPath C:\ -Force; \
	Remove-Item -Path c:\nginx-$ENV:VERSION-win64.zip -Confirm:$False; \
	Rename-Item -Path c:\nginx-$ENV:VERSION -NewName nginx

# Make sure that Docker always uses the default DNS servers hosted by dockerd.exe
RUN Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ServerPriorityTimeLimit -Value 0 -Type DWord; \
	Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ScreenDefaultServers -Value 0 -Type DWord; \
	Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ScreenUnreachableServers -Value 0 -Type DWord
	
# Shorten DNS cache times
RUN Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name MaxCacheTtl -Value 30 -Type DWord; \
	Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name MaxNegativeCacheTtl -Value 30 -Type DWord

COPY nginx.conf c:/nginx/conf

WORKDIR /nginx
EXPOSE 80
CMD ["nginx", "-g", "\"daemon off;\""]

We had to write an nginx config file that knew about the three instances that we wanted to load balance across:

#user  nobody;
worker_processes  1;

error_log  logs/error.log;
error_log  logs/error.log  notice;
error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    upstream myapp1 {
        server dockerplay_mvc_1;
        server dockerplay_mvc_2;
        server dockerplay_mvc_3;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
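
With the Dockerfile and nginx.conf in place, building and starting the load balancer looks roughly like this (the image tag, container name and host port are arbitrary choices of mine; the container needs to be attached to the frontend network so it can reach the MVC containers by name):

docker build -t local/nginx-lb .
docker run -d -p 8112:80 --network frontend --name loadbalancer local/nginx-lb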

We could run the various commands documented in the book to start the instances and add them to the right networks. We could then load balance using NGINX, and refreshing the web page showed that requests were being served by different containers at different times.
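
For one of the three instances, the commands were along these lines (the container name has to match one of the names hardwired into the nginx config above, the networks are the ones created earlier, and the image is the one built from our application):

docker run -d --name dockerplay_mvc_1 --network backend apress/exampleapp
docker network connect frontend dockerplay_mvc_1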

[There is a little too much hardwired in by name here for my taste. The SDN inside Docker runs a DNS server that lets you look up other containers by name to get their IP addresses.]

The next chapter of the book looks at Docker Compose. This gives you a way to wire things up using a single configuration file.

version: "3"

volumes:
  testdata:

networks:
  frontend2:
  backend2:

services:

  sqlexpress2:
    image: "microsoft/mssql-server-windows-express"
    volumes: 
      - testdata:c:\data
    networks: 
      - backend2
    environment:
      - sa_password=fddfdfdfsff
      - ACCEPT_EULA=Y

  dbinit:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - backend2
    environment:
      - INITDB=true
      - DBHOST=sqlexpress2
      - DBPORT=1433
    depends_on:
      - sqlexpress2

  mvc:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - backend2
      - frontend2
    environment:
      - DBHOST=sqlexpress2
      - DBPORT=1433
    depends_on:
      - sqlexpress2
    ports: 
      - 4020:4020 
      - 4021:4021

  loadbalancer:
    image: nginx
    build:
      context: ..\nginx
      dockerfile: Dockerfile
    ports: 
      - 8112:80
    networks:
      - frontend2

This is a really neat technology, allowing you to scale the various components up and down. Unfortunately for us, we didn't have an easy way to reconfigure the load balancer when the scaling happens. In the book, the load balancer configuration has "links" and "volumes" lines that allow Compose to pass details of the instances of the load-balanced service to the proxy. We didn't have time to look into this.
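
For reference, bringing the stack up and then scaling the mvc service looked something like this (the --scale flag needs a reasonably recent docker-compose):

docker-compose up -d --build
docker-compose up -d --scale mvc=3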

The next chapter in the book looks at Docker Swarm. There was no equivalent on Windows, so we didn’t try it.

The last chapter of the book looks at allowing debugger access into the container. Visual Studio can do this if you run the appropriate components, but we didn’t try too hard to get this working. Later versions of Visual Studio can build containers and automatically configure them to allow debugger access.

I think our main observation was that Docker on Windows seems to be a long way behind Docker on Linux.

The book was good as a set of instructions to follow, with the brief explanations helping a little to understand what was going on. Using a book that was a version behind was a good way of forcing us to debug and understand what was happening a little better.

On a related note, there's an interview that discusses Service Fabric, which is used to run loads of the Azure infrastructure.
