Never buy another SSL cert.

tl;dr: bash <(curl -s bman.io/i/certall)

If you are like me and can't stand paying for certificates that are actually less secure than what you can generate on your own, keep reading.

SSL is embedded into everything on the Internet because people like to feel safe when they shop. I say feel because under the current system it's just a security blanket over your eyes.

If you know anything about how Secure Sockets Layer (SSL) actually works, it should scare you. It is meant to be "secure", but under the current system of Certificate Authorities (CAs) you are basically handing the keys to your encryption to every CA above you. They possess what are called "root keys".

The weakest link

Security is a chain; it's only as strong as the weakest link. The security of any CA-based system is based on many links and they're not all cryptographic. People are involved.

That is a quote from a great article called Ten Risks of PKI: What You're Not Being Told about Public Key Infrastructure by Carl Ellison and Bruce Schneier. I highly suggest you read it if you want to learn more.

So now that is out of the way, I can get off my soapbox and give you something useful.

The great people at the Electronic Frontier Foundation (sup jager) provide a tool called certbot, formerly known as the Let's Encrypt client.

What is certbot? It is a script that generates SSL certificates locally and makes them usable just like the ones you buy. No errors in your browser, no cost, no giving away the keys to your castle.

From the certbot FAQ
Question:

Will Certbot generate or store the private keys for my certificates on Let’s Encrypt’s servers?

Answer:

No. Never.

The private key is always generated and managed on your own servers, not by the Let’s Encrypt certificate authority.

OMG, yes! I get to keep my stuff secure, not pay a cent, and get a green padlock in the browser? Sign me up!

If you are interested in learning more, please check out the EFF Certbot site here https://certbot.eff.org/.

Bonus script

I went ahead and decided to use this to install certs for every site I had on my server. Of course, I can't just run the command 10-15 times; I have to automate all the things.

So here it is. Use at your own risk.

This will most likely work on any Debian-based distribution, but I only tested it on Debian 8 (jessie). It is written for nginx, so take note. I made one for Apache2 as well before I realized all my sites were proxied through nginx, but unfortunately I did not keep a copy before moving on.

All you have to do is stick this in a file somewhere called certall.

Next, you would change permissions and give it a run.

chmod +x certall && ./certall  

At that point it will ask whether you want to install certbot if it is not in your $PATH. Say yes and it bootstraps certbot onto most Linux distributions, done.
Note: you can run the script as root or a regular user; the install is done by an external script which calls sudo anyway.

root@bman:~/ssl/duuit# ./certall  
Certbot is not currently installed or in your $PATH.  
Install now? y  
Bootstrapping dependencies for Debian-based OSes...  

The next part runs what is called a "dry run", which simulates the certificate requests and surfaces any errors before anything real happens.

Creating certificate for a8.lc (--dry-run)  
Creating certificate for bman.io (--dry-run)  
Creating certificate for theearth.space (--dry-run)  
All domains have completed the dry-run.  

When that finishes, you will be prompted again to do the real certificate creation and installation. If you don't see any errors or foul messages above, it is OK to continue. Say yes again and that is it.

If you see no issues above, then it is probably ok to continue.  
Otherwise you can backout and fix the issues now.  
Continue? y  
Creating certificate for a8.lc  
Creating certificate for bman.io  
Creating certificate for theearth.space  
root@bman:~/ssl/duuit#  

Good stuff, now all your certs have been generated and you can find them in /etc/letsencrypt/live/$domain.
All you have to do is wire them into your nginx configs. I didn't automate that part so you don't break existing certs that already work.

The certbot docs say that rather than copying, you should point your (web) server configuration directly at those files (or create symlinks); during renewal, /etc/letsencrypt/live is updated with the latest necessary files.
https://certbot.eff.org/docs/using.html#where-are-my-certificates
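
To give an idea of the wiring, a minimal nginx server block can point straight at the live files. This is an illustrative sketch only; example.com and the document root are placeholders, not my actual config:

```nginx
# Illustrative only: substitute your own domain and document root.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/example.com;
}
```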

Note:

The Let’s Encrypt CA issues short-lived certificates (90 days). Make sure you renew each certificate at least once every three months.
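
If you want to check how close a cert is to its expiry date, openssl can read it straight off the file. Here is a sketch against a throwaway self-signed cert (purely for demonstration; on a real system point it at /etc/letsencrypt/live/$domain/cert.pem instead):

```shell
# Generate a throwaway self-signed cert just to have something to inspect.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -days 90 2>/dev/null

# Print the expiry date, the same way you would for a live cert.
expiry=$(openssl x509 -noout -enddate -in "$tmp/cert.pem")
echo "$expiry"
rm -rf "$tmp"
```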

Oh and about those renewals, it's not too hard to do:

certbot renew  

Whew, done. Here is some sample output:

root@bman:~/ssl/duuit# certbot renew  
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/bman.io.conf  
-------------------------------------------------------------------------------
Cert not yet due for renewal

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/a8.lc.conf  
-------------------------------------------------------------------------------
Cert not yet due for renewal

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/theearth.space.conf  
-------------------------------------------------------------------------------
Cert not yet due for renewal

The following certs are not due for renewal yet:  
  /etc/letsencrypt/live/bman.io/fullchain.pem (skipped)
  /etc/letsencrypt/live/a8.lc/fullchain.pem (skipped)
  /etc/letsencrypt/live/theearth.space/fullchain.pem (skipped)
No renewals were attempted.  

You can also throw that in cron to take care of it for you.

crontab -e  
# Automatically renew all the certificates monthly.
1 0 1 * * /usr/bin/certbot renew  

That will run monthly and take care of all your renewals. (certbot renew only touches certs that are close to expiry, so it is safe to run it far more often; the certbot docs suggest twice a day.)

Now for what we have all been waiting for, here is the script:

#!/bin/bash
# Benjamin H. Graham <bman@bman.io> @bhgraham
#
# Script to generate SSL certs for all nginx domains
#
# Usage: ./certall

# My webdir is in /home but the default is /var/www.
web='/var/www';  
nginxsites='/etc/nginx/sites-enabled';

# Check if certbot is already installed, ask to install if not.
hash certbot 2>/dev/null || {  
    echo "Certbot is not currently installed or in your \$PATH.";
    read -p "Install now? " -n 1 -r;
    echo;
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        x=$(mktemp) && \
            echo 'quiet "2"; APT { Get { Assume-Yes "true"; Fix-Broken "true"; }; };' > "$x" && \
            APT_CONFIG="$x" \
            bash <(curl -s https://dl.eff.org/certbot-auto) 2>/dev/null;
    else
        exit 0;
    fi;
}

for domain in $(ls ${nginxsites}/|cut -d/ -f4-); do  
    hosts="";
    docroot=$(grep root ${nginxsites}/${domain} -m 1|\
        cut -d' ' -f2|cut -d';' -f1|sed "s|/var/www|${web}|g");

    echo "Creating certificate for $domain (--dry-run)";
    for host in $(egrep 'server_name|alias' ${nginxsites}/${domain}|\
        grep -v fastcgi|awk -F"server_name " '{print $2}'|cut -d';' -f1); do
        hosts=${hosts}" -d ${host}";
    done;

    # have domains in server_name and aliases now as $hosts
    certbot certonly --webroot -w ${docroot} ${hosts} --dry-run -t -q;
done;

echo "All domains have completed the dry-run.";  
echo "If you see no issues above, then it is probably ok to continue.";  
echo "Otherwise you can backout and fix the issues now.";

read -p "Continue? " -n 1 -r;  
echo;  
if [[ $REPLY =~ ^[Yy]$ ]]; then  
    for domain in $(ls ${nginxsites}/|cut -d/ -f4-); do
        hosts="";
        docroot=$(grep root ${nginxsites}/${domain} -m 1|cut -d' ' -f2|\
            cut -d';' -f1|sed "s|/var/www|${web}|g");

        echo "Creating certificate for $domain";
        for host in $(egrep 'server_name|alias' ${nginxsites}/${domain}|\
            grep -v fastcgi|awk -F"server_name " '{print $2}'|cut -d';' -f1); do
            hosts=${hosts}" -d ${host}";
        done;

        # have domains in server_name and aliases now as $hosts
        certbot certonly --webroot -w ${docroot} ${hosts} -t -q;
    done;
fi;
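
If you are curious what the server_name extraction in the loops actually produces, here is a small standalone demonstration against a scratch config (example.com is a made-up placeholder, not one of my sites):

```shell
# Scratch nginx-style config to feed the same extraction pipeline
# the script uses (example.com is a placeholder domain).
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;
}
EOF

# Identical pipeline to the script: grab server_name/alias lines,
# drop fastcgi noise, and build the -d flag list certbot expects.
hosts=""
for host in $(egrep 'server_name|alias' "$conf" | grep -v fastcgi | \
    awk -F"server_name " '{print $2}' | cut -d';' -f1); do
    hosts=${hosts}" -d ${host}"
done
echo "certbot would be called with:${hosts}"
rm -f "$conf"
```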

detecting Linux.Ekoms.1

So the big news is a piece of Linux malware found in the wild, called Linux.Ekoms.1, designed to take screenshots every 30 seconds.

For more information see this link:
http://vms.drweb.com/virus/?i=7924647

To assist in detection, I wrote a quick script that tests for the malware's presence and notifies you.

You can run the script like this:

bash <(curl -s bman.io/i/detect_ekoms)  

The code is as follows:

#!/bin/bash
# Quick and dirty check for ekoms existence. - bman@bman.io
check_ekoms() {  
    if [ -e "$HOME/.config/autostart/%exename%.desktop" ]; then
        echo 'Possible infection found. You should run a full scan of all disk partitions.
To clean, you can download a free trial of Dr.Web Anti-virus for Linux here:  
        http://products.drweb.com/linux/?lng=en'; exit 1;
    else
        echo "Linux.Ekoms.1 not found.  System clean."; exit 0;
    fi
}
check_ekoms;  
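
If you want to convince yourself the check fires correctly, you can run the same existence test against a scratch directory containing the marker file; this is only a sanity check of the detector logic, no real infection involved:

```shell
# Build a scratch "home" with the autostart marker the script looks for,
# then run the same existence test the detector uses.
fake_home=$(mktemp -d)
mkdir -p "$fake_home/.config/autostart"
touch "$fake_home/.config/autostart/%exename%.desktop"

if [ -e "$fake_home/.config/autostart/%exename%.desktop" ]; then
    result="detected"
else
    result="clean"
fi
echo "Detector result: $result"
rm -rf "$fake_home"
```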

Forcing IPv4 with APT

So as you may or may not know, I am an adopter of IPv6 and I set it up on all my servers and desktop machines. Usually I forget it's even enabled until I see it in DNS resolution in the shell or visit a site that reports my IP address back to me.

Today I was installing i2p and noticed that I could not download their key with SSL over IPv6 (they should fix this).

bman@lightmyfire:~$ wget https://geti2p.net/_static/i2p-debian-repo.key.asc  
--2015-01-12 02:53:22--  https://geti2p.net/_static/i2p-debian-repo.key.asc
Resolving geti2p.net (geti2p.net)... 2a02:180:1:1:2456:6542:1101:1010, 91.143.92.136  
Connecting to geti2p.net (geti2p.net)|2a02:180:1:1:2456:6542:1101:1010|:443...  

After a few minutes, I ctrl-c'ed it and happened to remember that wget supports -4.

bman@lightmyfire:~$ wget -4 https://geti2p.net/_static/i2p-debian-repo.key.asc  
--2015-01-12 02:55:06--  https://geti2p.net/_static/i2p-debian-repo.key.asc
Resolving geti2p.net (geti2p.net)... 91.143.92.136  
Connecting to geti2p.net (geti2p.net)|91.143.92.136|:443... connected.  
HTTP request sent, awaiting response... 200 OK  
Length: 9127 (8.9K) [text/plain]  
Saving to: ‘i2p-debian-repo.key.asc’

100%[======================================>] 9,127       --.-K/s   in 0.004s  

2015-01-12 02:55:07 (2.02 MB/s) - ‘i2p-debian-repo.key.asc’ saved [9127/9127]  

So that was easy enough, but when I ran the old apt-get update, it hung just like the wget did, at this line:

100% [Connecting to dl.google.com (2607:f8b0:4000:80b::200e)]  

Just sitting there, no timeout that I could tell (I get impatient and ctrl-c before it occurs either way).
I would blame Google's cloud for this, but when I checked where dl.google.com resolved to, it was an Amazon CloudFront IP address.

Thankfully, in an update to apt in the last couple of years, they added the ability to force IPv4, as most other CLI programs can. Here is how to do it:

apt-get -o Acquire::ForceIPv4=true update  

Which results in:

Fetched 214 kB in 9s (22.8 kB/s)  
Reading package lists... Done  
bman@lightmyfire:~$  

Yay, the apt-get update was successful. Now if you want to make the setting persistent for all APT calls, create a file called /etc/apt/apt.conf.d/99force-ipv4 with the contents:

Acquire::ForceIPv4 "true";  

This forces apt to use IPv4 at all times, which I do not generally recommend, but it can help when working around extended routing issues.

As this option does not seem to be in a current man page, here is the closest thing to documentation I could find:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=611891

Getting started with Docker

So you want to know how to begin with Docker? This should get you through creating and running your first container.

First off, let's install it. I have previously posted my Docker installers, so we will use those for this exercise. I have broken out the instructions for Mac and Debian-based systems where needed.

Installation (sudo required)

Mac

curl -s bman.io/i/install_dockermac|sudo bash  

or

curl -s https://gist.githubusercontent.com/bhgraham/9dced0918dc20edd1484/raw/744e2f3ec355dc9a9310527c77234dbb2a192cde/install_dockermac | sudo bash  

Debian-based

curl -s bman.io/i/install_dockerdeb|sudo bash  

or

curl -s https://gist.githubusercontent.com/bhgraham/ed9f8242dc610b1f38e5/raw/58c162147be40c53a8a35b525e62dea86f49ebec/install_dockerdeb | sudo bash  

Git

To follow common best practices, we are going to use revision control, which means initializing a Git repository in a project directory. For this exercise we will use the project name exampledocker.

mkdir exampledocker && cd exampledocker  
git init  

Dockerfile

Using your preferred editor (e.g., nano or vim), create a file named Dockerfile with the following contents:

FROM    debian:stable

MAINTAINER MyName <me@wherever.com>

# Build dependencies
RUN apt-get -y update

# Install some common tools needed.
RUN apt-get install -y -q curl git-core apt-utils sudo libwww-perl vim htop wget

# Set up the timezone. Notice that in this example we
# perform multiple operations within the same RUN by
# ending the lines with \; this creates a single
# build step for the whole operation.
RUN \  
  cp /usr/share/zoneinfo/America/Chicago /etc/localtime && \
  echo "America/Chicago" > /etc/timezone;

# Bash / sh link
RUN ln -sf /bin/bash /bin/sh;

# Say you want a file with predefines set,
# like for a file in /etc, you can do it like so.
RUN echo moo > /tmp/moo

# When the container is run, which directory do 
# you want it dropped into?
WORKDIR  /opt/

# Now we are going to set the command executed when 
# the container is run.
CMD ["/bin/bash"]  

Save that file and then add it to git.

git add Dockerfile  
git commit -m "Initial Dockerfile commit"  

Docker build and run

Now we will test out our Dockerfile by building it with the following command.

docker build -t exampledocker .  

Next you will see a ton of output showing what's going on. It begins like this, but I won't show the guts of it.

Sending build context to Docker daemon  2.56 kB  
Sending build context to Docker daemon  
Step 0 : FROM debian:stable  
debian:stable: The image you are pulling has been verified  
798202714a7c: Downloading 78.93 MB/90.17 MB 7s  

...

Removing intermediate container 66a45bcf45ac  
Step 8 : CMD /bin/bash  
 ---> Running in aa69d44c9867
 ---> e885d8e03847
Removing intermediate container aa69d44c9867  
Successfully built e885d8e03847  

Now that it is built, let's check to be sure it exists with the docker images command.

docker images exampledocker  
REPOSITORY     TAG     IMAGE ID      CREATED         VIRTUAL SIZE  
exampledocker  latest  e885d8e03847  6 minutes ago  203.5 MB

As we have set the command and the directory at the end of the Dockerfile, running the next command should drop you into a bash shell in /opt.

docker run -i -t --rm exampledocker  

The -i is interactive, -t allocates a pseudo-tty and --rm removes the container when you exit. I would recommend using --rm while you are still developing or else it gets messy fast.

The result should look like this:

root@1d14c22b52ab:/opt#  

At this point you have successfully created and run your first container using Docker. Just type exit to quit the container and remove it.

Check to be sure the container ended with the docker ps command; just look for Exited (0).

docker ps -a | grep exampledocker  

That's it. You have now successfully created and run a docker container on your local machine.

I will be writing a future article going in depth on collaboration with Docker and using the Docker Hub; having it in Git from the start helps. As always, if you have any questions or comments, drop them on the post comments.

If you need any other help you may want to use man or check here:
https://docs.docker.com/reference/commandline/cli/

docker updates 12/2014

Just a couple of links with some things that have come out this month.

Docker 1.4.0 and 1.3.3

Looks like this release has an extreme emphasis on bug fixes and platform stability.

Pachyderm File System

Pfs is a distributed file system built specifically for the Docker ecosystem. You deploy it with Docker, just like other applications in your stack. Furthermore, MapReduce jobs are specified as Docker containers, rather than .jars, letting you perform distributed computation using any tools you want.

AWS EC2 Container Service

https://aws.amazon.com/ecs/preview/

I mentioned this in an earlier post and have a few more details now. It is integrated with Docker Hub and AWS infrastructure such as Elastic IPs. It looks like it is going to provide cluster management and a built-in scheduler that helps you spread your containers out across your cluster. Great for scaling, and it looks to be a more complete solution than what Google has offered.

http://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html

To use Amazon ECS, you need to install a custom version of the CLI (amazon-ecs-client), as this is the only available client.

docker install and setup for mac and debian

I have a couple of scripts to help get your servers and laptops set up to use Docker. install_dockermac is for Mac OS X installation, tested on Mavericks and Yosemite.

The second script (farther down the page) is for Debian-based systems. I have tested it with LMDE, Debian Wheezy, and Debian Testing; it should work on any Debian-based system.

I use these to ensure that my environment is identical across my Mac and my Debian machines. No issues reported so far, but please let me know if you see any ways to improve them in the comments below.

install_dockermac

# Quick Install for Docker on Mac OSX
# Author: Benjamin H. Graham <bman@bman.io>
# Must be run as root   
# Usage: curl -s bman.io/i/install_dockermac|sudo bash

cd ~  
echo "Downloading docker binary";  
curl -Ls "https://github.com/boot2docker/osx-installer/releases/download/v1.2.0/Boot2Docker-1.2.0.pkg" -o "Boot2Docker-1.2.0.pkg";  
echo "Installing docker service";  
installer -pkg Boot2Docker-1.2.0.pkg -target /;  
echo "Cleaning up downloaded files";  
#rm Boot2Docker-1.2.0.pkg;
echo "Initializing docker";  
boot2docker init;

echo "Configuring Docker VM to run at startup";  
mkdir -p /System/Library/StartupItems/docker;  
echo "boot2docker start" > /System/Library/StartupItems/docker/docker-start;  
chmod +x /System/Library/StartupItems/docker/docker-start;  
echo "export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375" >> /etc/profile;

# start it up now and test it out
echo "Starting docker...";  
boot2docker start;

echo "Testing docker with hello-world";  
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375;  
docker run hello-world;  
echo "You should see a hello world message above.";

echo "You will need to logout and back in to use docker. Otherwise run this: ";  
echo "DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375";  

Example usage:

curl -s bman.io/i/install_dockermac | sudo bash  

or

curl -s https://gist.githubusercontent.com/bhgraham/9dced0918dc20edd1484/raw/744e2f3ec355dc9a9310527c77234dbb2a192cde/install_dockermac | sudo bash  

install_dockerdeb

# Quick Install for Docker on Debian (LMDE, Mint, Ubuntu, etc)
# Author: Benjamin H. Graham <bman@bman.io>
# Usage: curl -s bman.io/i/install_dockerdeb | bash

# Add debian repo
echo "Adding the debian repository for docker"  
echo deb http://get.docker.io/ubuntu docker main | sudo tee /etc/apt/sources.list.d/docker.list  
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9  
echo "Updating apt repos"  
sudo apt-get update -qq  
echo "Installing docker"  
sudo apt-get install -y lxc-docker

echo "Enabling current user to use docker"  
sudo groupadd docker  
# Add the connected user "${USER}" to the docker group.
# Change the user name to match your preferred user.
# You may have to logout and log back in again for
# this to take effect.
sudo gpasswd -a ${USER} docker  
echo 'DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"'| sudo tee /etc/default/docker  
echo "export DOCKER_HOST=tcp://127.0.0.1:2375" | sudo tee -a /etc/profile;  
# Restart the Docker daemon.
echo "Restarting docker daemon"  
sudo service docker restart

echo "Testing docker with hello-world";  
export DOCKER_HOST=tcp://127.0.0.1:2375;  
docker -H tcp://127.0.0.1:2375 run hello-world;  
echo "You should see a hello world message above.";

echo "You will need to logout and back in to use docker. Otherwise run this: ";  
echo "DOCKER_HOST=tcp://127.0.0.1:2375";  
echo "Or you can test again with this: ";  
echo "docker -H tcp://127.0.0.1:2375 run hello-world";  

Example usage:

curl -s bman.io/i/install_dockerdeb | bash  

or

curl -s https://gist.githubusercontent.com/bhgraham/ed9f8242dc610b1f38e5/raw/58c162147be40c53a8a35b525e62dea86f49ebec/install_dockerdeb | sudo bash  

Happy 12/13/14

The date today is too good to waste. This will be the last date like this in my lifetime (the next is in 2103, when we start over with 01/02/03), and while 11/11/11 will be unbeatable, this is the last of these for me.

There are others to look forward to up until summer, here is the list:

  • 1/4/15 - 14 15 pretty close
  • 1/5/15 - 15 15 neat
  • 1/6/15 - 16 15 the countdown begins
  • 3/14/15 - Pi Day to 4 decimals
  • 5/1/15 - 5115 same backwards and forwards
  • 5/10/2015 - full date is same backwards and forwards

Just like all numbers, dates are fun. While I will always long for the binary dates of 00 and 01, here's to 4/3/21, the next extremely worthy date.

install any node.js version on debian

Any version of node.js on any Debian system, from source, the Debian way.

People have asked me for a script to install newer versions of node.js, so I felt it made sense to spend some time rewriting mine to post it. This has been tested on Debian Wheezy and Linux Mint Debian Edition, and should work on any Debian-based OS.

If you run the script with a command-line argument, like ./install_nodejs 0.10.25, it will install the v0.10.25-release branch of node.js.
If you run it without an argument, it will default to the latest release version according to http://nodejs.org.

There should be plenty of comments and echo statements to explain every part of the script, but feel free to comment at the bottom of the page if you see any room for improvement.

install_nodejs

#!/bin/bash
#
# Install any release version of node.js.
# Benjamin H. Graham <bman_at_bman.io>
# Download: http://bman.io/i/install_nodejs
# Usage: ./install_nodejs <release version>

# These 2 lines keep apt quiet. Very useful for automated installs.
export DEBIAN_FRONTEND=noninteractive;  
export APTOPT='-y -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold --yes --fix-missing -qq';

# Function to differentiate the script output.
a8echo () { echo -e "-*- $@" >&2; }  
a8echo "This program will build and install node.js for Debian based systems. sudo password required to install deb packages.";

# Set the version to what is supplied or the latest if no version given.
if [ -n "${1}" ]; then  
    version=${1};
else  
    version=$(curl -s nodejs.org|grep 'class="version"'|sed -e 's/.*Version: v\(.*\)<\/p\>.*/\1/');
fi;

# Set variable for architecture to build for.
arch=$(dpkg --print-architecture);

# Main install routine
a8echo "Updating apt repos and installing dependencies.";  
sudo apt-get update -qq && sudo apt-get $APTOPT install git-core curl build-essential openssl libssl-dev python g++ make checkinstall libc-ares2 libc6 libgcc1 libssl1.0.0 libstdc++6 libv8-3.14.5 zlib1g;

a8echo "Downloading node.js GIT repository.";  
git clone --depth 1 -b v${version}-release --single-branch https://github.com/joyent/node.git nodejs-${version} && cd nodejs-${version};

a8echo "Configuring node.js package.";  
# Configure seems not to find libssl by default so we give it an explicit pointer.
./configure --openssl-libpath=/usr/lib/ssl

echo "Debian package for node.js v${version}-release" > description-pak;  
a8echo "Building and installing nodejs package for release version, ${version}.";  
sudo checkinstall -D -y --install=yes --pkgname=nodejs --pkgversion=${version} --pkgarch=${arch} --pkgrelease='bman-1' \  
    --maintainer='Benjamin H Graham \<bman@bman.io\>' --default --pkglicense='Node License' \
    --requires=libc-ares2,libc6,libgcc1,libssl1.0.0,libstdc++6,libv8-3.14.5,zlib1g;

a8echo "Testing that nodejs package installed and node and npm binaries are working.";  
node -v; # it's alive!  
npm -v; # it's alive!  
a8echo "Installation complete. As long as you see the version information above, you are good to go.";  

Save the contents as install_nodejs and then run it with:

./install_nodejs <release version number (optional)>
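
A note on the version auto-detection: it scrapes nodejs.org for the version string. Here is what the sed expression pulls out of a canned sample line (hypothetical markup, matching what the site served at the time):

```shell
# Canned sample of the markup the script scrapes; the sed expression
# strips everything but the bare version number.
line='<p class="version">Version: v0.10.25</p>'
version=$(echo "$line" | sed -e 's/.*Version: v\(.*\)<\/p\>.*/\1/')
echo "Detected version: $version"
```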

new aws development services

Originally I had planned to attend re:Invent this year with one of the members of my team. Due to unforeseen circumstances I was unable to make it; however, I have reviewed some of the services that were announced and wanted to make some quick notes.

AWS CodeDeploy

AWS CodeDeploy is a service that automates code deployments to Amazon EC2 instances. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one EC2 instance or thousands.

AWS CodePipeline

AWS CodePipeline is a continuous delivery and release automation service that aids smooth deployments. You can design your development workflow for checking in code, building the code, deploying your application into staging, testing it, and releasing it to production. You can integrate 3rd party tools into any step of your release process or you can use CodePipeline as an end-to-end solution. CodePipeline enables you to rapidly deliver features and updates with high quality through the automation of your build, test, and release process. CodePipeline will be available in early 2015.

So even though they didn't use the buzzwords, this is a Continuous Deployment (CD) tool tied into their infrastructure, with an Ansible-like orchestrator to handle configuration management, scaled deployments, and minimal downtime. You can do these tasks with any number of CI/CD tools, whether in-house like Jenkins or hosted like Wercker; either way, they get the job done. I can't see using one without the other, and as neither is available yet, I will be revisiting this next year.

AWS CodeCommit

AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit eliminates the need for you to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to store anything from code to binaries, and it supports the standard functionality of Git allowing it to work seamlessly with your existing Git-based tools. Your team can also use CodeCommit’s online code tools to browse, edit, and collaborate on projects. CodeCommit will be available in early 2015.

As this one is not yet available, I can't say what it will be; however, more choices in the hosted Git arena can only be a good thing. Google added something similar to GCE earlier this year, and this seems like a case of maintaining parity and integration with the other new tools.

When making a decision about hosting your Git repositories, the real question comes down to in-house or external. If you have requirements that prevent you from using external hosting, then this won't help you; take a look at Indefero, GitBucket, or GitLab instead. All are great options, but make sure you have people internally to maintain and support them. Git support tends to be more in-depth than most tools, so having strong people on hand will allow this option to succeed.

As for the externally hosted route, you again have many options, but the question comes down to social coding or just hosting. For social, GitHub has a lock on the market, though Bitbucket seems to be everyone's backup preference due to its unlimited private repository offering. Google and Amazon are playing in this arena too, and I just don't see how either can gain market share with anyone but enterprises that want to limit the vendors they use.

Amazon RDS for Aurora

Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora joins MySQL, Oracle, Microsoft SQL Server, and PostgreSQL as the fifth database engine available to customers through Amazon RDS.

Once again, this is something Google quietly added to GCE this last year. We evaluated that, and the main complaint I heard came from someone a bit out of touch with SaaS/PaaS/IaaS: you can't edit the config files. This stems from the misunderstanding that these types of services are your grandma's MySQL. You may be able to connect with a mysql client and use it like you are used to, but the service behind it is something new and powerful, and the client compatibility is strictly for developer friendliness. In today's landscape, especially with startups, you don't always have someone with time to be a DBA or SysAdmin, even if they know how. And just like editing that config file, you may think you know how to tweak this or that, but there are people at AWS and GCE making it work at scale behind the scenes, and they are happy to support the service with better attention than you can. I can't speak highly enough of services like this and the other RDS products. Keeping the databases running is the easiest, hardest, and riskiest part of development; it's better to have support from the people who know how to do this correctly.

Now we come to my personal favorite, which was again a focus of the recent GCE additions on their roadshow.

Amazon EC2 Container Service

Amazon EC2 Container Service is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of Amazon EC2 instances. Amazon EC2 Container Service lets you launch and stop container-enabled applications with simple API calls, allows you to query the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features like security groups, EBS volumes and IAM roles. You can use EC2 Container Service to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. Amazon EC2 Container Service eliminates the need for you to operate your own cluster management and configuration management systems or worry about scaling your management infrastructure.

This is what I have been waiting for, and where AWS blew GCE away. While Google offered up Kubernetes and an orchestrator, there was no fleshed-out, ready-for-primetime, integrated tool to begin deploying at scale. I was in the process of creating a working solution for my own use when I first saw this; I now wish I had just waited six months and migrated all at once. For as much talk as there was at the GCE cloud roadshow about containers, Google just didn't have all the tools in place for an end-to-end solution to JustWork(tm) without spending another couple of weeks learning all the pieces. Now you can spend a day getting familiar with the Dockerfile and begin using this great technology today.

Overall, AWS and GCE seem to be racing to implement the same features in high demand. While Google focused heavily on container technology this year, Amazon rounded out the offerings with a complete suite of services that meet the needs of the typical developer of today. Rackspace has added an offering this year to be the DevOps team for you, but Amazon has given you the tools to do it all on your own and integrated in a way which allows your developers to maintain all aspects from code to server with the help from Amazon.

Feel free to take a look at the full list on the re:Invent new products and services page.

debian: phpmyadmin with nginx no apache

If you ever need phpmyadmin on a Debian machine running nginx, you may notice a bunch of Apache dependencies in the list when you try to install it with apt-get. This is foul if all you wanted was phpmyadmin.

Here's how to get around it.

sudo apt-get update  
sudo apt-get install php5-cli php5-fpm fcgiwrap  
sudo apt-get --no-install-recommends install phpmyadmin  

I will note that you do not need both php5-cli and php5-fpm, depending on your needs and setup, but you can always remove one later; together they satisfy the dependencies without pulling in apache2.
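
For completeness, here is a sketch of the nginx side. The /usr/share/phpmyadmin path and the php5-fpm socket location are assumptions based on Debian defaults, so adjust to your setup:

```nginx
# Illustrative only: assumes Debian's default phpmyadmin install path
# and php5-fpm socket. Place inside your server { } block.
location /phpmyadmin {
    root /usr/share/;
    index index.php index.html;

    location ~ ^/phpmyadmin/(.+\.php)$ {
        root /usr/share/;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```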