
Showing posts from March, 2010

DKIM setup with postfix and OpenDKIM

If sending email is a critical part of your online presence, then it pays to look at ways to improve the probability that messages you send will find their way into your recipients' inboxes, as opposed to their spam folders. This is fairly hard to achieve and there are no silver bullet guarantees, but there are some things you can do to try to enhance the reputation of your mail servers.

One thing you can do is use DKIM, which stands for DomainKeys Identified Mail. DKIM is the result of merging Yahoo's DomainKeys with Cisco's Identified Internet Mail. There's a great Wikipedia article on DKIM which I strongly recommend you read.

DKIM is a method for email authentication -- in a nutshell, mail senders use DKIM to digitally sign messages they send with a private key. They also publish the corresponding public key as a DNS record. On the receiving side, mail servers use the public key to verify the digital signature. So by using DKIM, you as a mail sender prove to yo…
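To make the mechanism concrete, here is a minimal sketch of signing and verifying in Python with the third-party dkimpy library. This only illustrates what DKIM does, not the Postfix/OpenDKIM setup this post is about, and the selector 'mail', the domain and the key path are placeholders:

import dkim  # pip install dkimpy

message = (b"From: sender@example.com\r\n"
           b"To: recipient@example.org\r\n"
           b"Subject: DKIM test\r\n"
           b"\r\n"
           b"Hello, signed world.\r\n")

# e.g. the private key generated by: opendkim-genkey -s mail -d example.com
with open("mail.private", "rb") as f:
    private_key = f.read()

# Sign: produces a DKIM-Signature header to prepend to the message.
sig = dkim.sign(message, selector=b"mail", domain=b"example.com", privkey=private_key)
signed_message = sig + message

# Verify: a receiver looks up the TXT record at mail._domainkey.example.com
# ("v=DKIM1; k=rsa; p=<base64 public key>") and checks the signature.
# This does a real DNS lookup, so it only succeeds once that record is published.
print(dkim.verify(signed_message))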

Automated deployments with Fabric - tips and tricks

Fabric has become my automated deployment tool of choice. You can find tons of good blog posts on it if you google it. I touched upon it in two blog posts, 'Automated deployments with Puppet and Fabric' and 'Automated deployment systems: push vs. pull'. Here are some more tips and tricks extracted from my recent work with Fabric.

If it's not in a Fabric fabfile, it's not deployable

This is a rule I'm trying to stick to. All deployments need to be automated. No ad-hoc, one-off deployments allowed. If you do allow them, they can quickly snowball into an unmaintainable, unreproducible mess.
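To make the rule concrete, here is a minimal fabfile sketch (Fabric 1.x API; the hosts, paths, release name and restart command are made-up placeholders, not a real setup):

# fabfile.py -- a minimal, illustrative deployment task
from fabric.api import env, put, run, sudo

env.hosts = ['web1.example.com', 'web2.example.com']
env.user = 'deploy'

def deploy(release='myapp-1.0.tar.gz'):
    """Push a release tarball to every host in env.hosts and restart the app."""
    put(release, '/tmp/%s' % release)
    run('tar xzf /tmp/%s -C /srv/myapp' % release)
    sudo('/etc/init.d/myapp restart')

With something like this in place, a deployment is always the same command: 'fab deploy' (or 'fab -H staging.example.com deploy' to target a different host), which is exactly what keeps one-off deployments from creeping in.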

Update (via njl): Only deploy files that are checked out of source control

So true. I should have included this when I wrote the post. Make sure your fabfiles, as well as all files you deploy remotely, are indeed what you expect them to be. It's best to keep them in a source control repository, and to be disciplined about checking them in every time you update them.

Along the s…

Automated deployment systems: push vs. pull

I've been immersed in the world of automated deployment systems for quite a while. Because I like Python, I've been using Fabric, but I also dabbled in Puppet. When people are asked about alternatives to Puppet in the Python world, many mention Fabric, but in fact these two systems are very different. Their main difference is the topic of this blog post.

Fabric is what I consider a 'push' automated deployment system: you install Fabric on a server, and from there you push deployments by running remote commands via ssh on a set of servers. In the Ruby world, an example of a push system is Capistrano.

The main advantages of a 'push' system are:
- control: everything is synchronous and under your control. You can see right away if something went wrong, and you can correct it immediately.
- simplicity: in the case of Fabric, a 'fabfile' is just a collection of Python functions that copy files over to a remote server and execute commands over ssh on that server; …

Getting past hung remote processes in Fabric

This is a note for myself, but maybe it will be useful to other people too.

I've been using Fabric version 1.0a lately, and it's been working very well, with an important exception: when launching remote processes that get daemonized, the 'run' Fabric command which launches those processes hangs, and needs to be forcefully killed on the server where I run the 'fab' commands.

I vaguely remembered reading something on the Fabric mailing list about the ssh channel not being closed properly, so I hacked the source code (operations.py) to close the channel before waiting for the stdout/stderr capture threads to exit.

Here's my diff:
git diff fabric/operations.py

diff --git a/fabric/operations.py b/fabric/operations.py
index 9c567c1..fe12450 100644
--- a/fabric/operations.py
+++ b/fabric/operations.py
@@ -498,12 +498,13 @@ def _run_command(command, shell=True, pty=False, sudo=False, user=None):
     # Close when done
     status = channel.recv_exit_status()
     …
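The gist of the change, sketched standalone with paramiko rather than Fabric's actual code (hypothetical host, user and command), is to close the channel once the exit status has been read, so the stdout/stderr capture threads see EOF and can be joined even if a daemonized child keeps the stream open:

import threading
import paramiko

def drain(recv, chunks):
    """Read from the channel until it reports EOF, collecting output."""
    while True:
        data = recv(4096)
        if not data:
            break
        chunks.append(data)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('server.example.com', username='deploy')  # placeholder host/user

channel = client.get_transport().open_session()
channel.exec_command('/etc/init.d/mydaemon start')  # daemonizes, may keep stdout open

out, err = [], []
t_out = threading.Thread(target=drain, args=(channel.recv, out))
t_err = threading.Thread(target=drain, args=(channel.recv_stderr, err))
t_out.start()
t_err.start()

status = channel.recv_exit_status()  # returns once the remote command exits
channel.close()                      # the workaround: force EOF for the capture threads
t_out.join()
t_err.join()
client.close()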