About this Node application
This application consists of three files: a package.json, a server.js, and a .gitignore, each simple enough to walk through in full.
.gitignore
```
node_modules/*
```
package.json
```json
{
  "name": "docker-dev",
  "version": "0.1.0",
  "description": "Docker Dev",
  "dependencies": {
    "connect-redis": "~1.4.5",
    "express": "~3.3.3",
    "hiredis": "~0.1.15",
    "redis": "~0.8.4"
  }
}
```
server.js
```javascript
var express = require('express'),
    app = express(),
    redis = require('redis'),
    RedisStore = require('connect-redis')(express),
    server = require('http').createServer(app);

app.configure(function() {
  app.use(express.cookieParser('keyboard-cat'));
  app.use(express.session({
    store: new RedisStore({
      host: process.env.REDIS_HOST || 'localhost',
      port: process.env.REDIS_PORT || 6379,
      db: process.env.REDIS_DB || 0
    }),
    cookie: {
      expires: false,
      maxAge: 30 * 24 * 60 * 60 * 1000
    }
  }));
});

app.get('/', function(req, res) {
  res.json({ status: "ok" });
});

var port = process.env.HTTP_PORT || 3000;
server.listen(port);
console.log('Listening on port ' + port);
```

server.js pulls in the dependencies and starts the application. The application stores session information in Redis and exposes a single endpoint that responds with a JSON status message. This is all very standard.
One thing to note is that the Redis connection information can be overridden with environment variables. This will matter later, when we migrate from the development environment (dev) to the production environment (prod).
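The fallback pattern server.js uses can be sketched in plain shell. The variable names below match server.js, but the override value is a made-up example: when a variable is unset, the development default wins; when it is set, it takes precedence.

```shell
# Unset variables fall back to the development defaults,
# mirroring the `process.env.REDIS_HOST || 'localhost'` pattern.
echo "Redis host: ${REDIS_HOST:-localhost}"
echo "Redis port: ${REDIS_PORT:-6379}"

# Setting the variable overrides the default
# (redis.example.internal is a made-up host for illustration):
REDIS_HOST=redis.example.internal
echo "Redis host: ${REDIS_HOST:-localhost}"
```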
Dockerfile
For development, we will run both Redis and Node inside the same container, using a Dockerfile to configure it.
Dockerfile
```dockerfile
FROM dockerfile/ubuntu
MAINTAINER Abhinav Ajgaonkar <[email protected]>

# Install Redis
RUN \
  apt-get -y -qq install python redis-server

# Install Node
RUN \
  cd /opt && \
  wget http://nodejs.org/dist/v0.10.28/node-v0.10.28-linux-x64.tar.gz && \
  tar -xzf node-v0.10.28-linux-x64.tar.gz && \
  mv node-v0.10.28-linux-x64 node && \
  cd /usr/local/bin && \
  ln -s /opt/node/bin/* . && \
  rm -f /opt/node-v0.10.28-linux-x64.tar.gz

# Set the working directory
WORKDIR /src

CMD ["/bin/bash"]
```
Let's go through it step by step.
```dockerfile
FROM dockerfile/ubuntu
```
This tells Docker to use the dockerfile/ubuntu image, provided by Docker Inc., as the base image for the build.
```dockerfile
RUN \
  apt-get -y -qq install python redis-server
```
The base image is fairly minimal, so we use apt-get to install everything the application needs to run. This instruction installs Python and redis-server. The Redis server is needed because we will store session information in it; Python is needed by npm to compile the C extension required by the redis node module.
```dockerfile
RUN \
  cd /opt && \
  wget http://nodejs.org/dist/v0.10.28/node-v0.10.28-linux-x64.tar.gz && \
  tar -xzf node-v0.10.28-linux-x64.tar.gz && \
  mv node-v0.10.28-linux-x64 node && \
  cd /usr/local/bin && \
  ln -s /opt/node/bin/* . && \
  rm -f /opt/node-v0.10.28-linux-x64.tar.gz
```
This downloads and extracts the 64-bit Node.js binaries, then symlinks node and npm into /usr/local/bin so they are on the PATH.
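The symlink step can be illustrated outside Docker with throwaway directories (the /tmp paths below are stand-ins for /opt/node/bin and /usr/local/bin, not the real container layout):

```shell
# Recreate the layout with stand-in paths and empty files.
rm -rf /tmp/node-demo
mkdir -p /tmp/node-demo/opt/node/bin /tmp/node-demo/usr/local/bin
touch /tmp/node-demo/opt/node/bin/node /tmp/node-demo/opt/node/bin/npm

# Same pattern as the Dockerfile: link everything in node/bin
# into the directory that is on the PATH.
cd /tmp/node-demo/usr/local/bin
ln -s /tmp/node-demo/opt/node/bin/* .

ls    # node and npm now appear here as symlinks
```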
```dockerfile
WORKDIR /src
```
This tells Docker that, once the container has started, it should cd into /src before executing whatever the CMD instruction specifies.
```dockerfile
CMD ["/bin/bash"]
```
As a final step, run /bin/bash.
Build and run containers
Now that the Dockerfile is written, let's build a Docker image:
```shell
docker build -t sqldump/docker-dev:0.1 .
```
Once the image is built, we can run a container with the following command:
```shell
docker run -i -t --rm \
  -p 3000:3000 \
  -v `pwd`:/src \
  sqldump/docker-dev:0.1
```
Let's take a look at what's happening in the docker run command.
-i will start the container in interactive mode (as opposed to -d, detached mode). This means the container will exit once the interactive session ends.
-t will assign a pseudo-tty.
--rm will remove the container and its file system when exiting.
-p 3000:3000 will forward port 3000 on the host to port 3000 on the container.
-v `pwd`:/src will mount the current working directory on the host (i.e., our project files) into the container at /src. We mount the current directory as a volume, rather than using the ADD instruction in the Dockerfile, so that any changes we make in a text editor are immediately visible inside the container.
sqldump/docker-dev:0.1 is the name and version of the Docker image to run, the same name and version we used when building the image.
Since the Dockerfile specifies CMD ["/bin/bash"], we are dropped into a bash shell as soon as the container starts. If the docker run command succeeds, it will look like the following:
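A side note on the -v argument above: the backquotes around pwd are command substitution. The shell runs pwd and splices the absolute path into the argument before docker ever sees it ($(pwd) is the modern equivalent). You can preview exactly what docker receives:

```shell
# Print the -v argument exactly as docker would receive it:
# an absolute host path, a colon, and the container path.
echo "-v `pwd`:/src"
```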
Start development
Now that the container is running, before we start writing code we need to take care of some standard, non-Docker-related setup. First, start the Redis server inside the container:
```shell
service redis-server start
```
Then, install the project dependencies and nodemon. nodemon watches the project files for changes and restarts the server automatically.
```shell
npm install
npm install -g nodemon
```
Finally, start the server with the following command:
```shell
nodemon server.js
```
Now, if you navigate to http://localhost:3000 in your browser, you should see something like this:
To simulate the development workflow, let's add another endpoint to server.js:
```javascript
app.get('/hello/:name', function(req, res) {
  res.json({ hello: req.params.name });
});
```

You will see that nodemon has detected the change and restarted the server:
And now, if you navigate your browser to http://localhost:3000/hello/world, you will see the following response:
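The handler simply echoes the :name segment of the URL back as the value of the "hello" key, so /hello/world produces {"hello":"world"}. A quick shell rendition of that substitution (the name value is just an example):

```shell
# The :name URL segment becomes the value of the "hello" key.
name=world
printf '{"hello":"%s"}\n' "$name"
```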
Production environment
The container in its current state is far from production-ready. Redis data does not persist across container restarts: if you restart the container, all session data is wiped out. The same thing happens if you destroy the container and start a new one, which is obviously not what you want. I will address this in the productionization discussion in part two.