
Puppet Blog - Page 49

Say Hello to Puppet 3

Hi, I’m Eric Sorenson (eric0 on #puppet IRC), and in June 2012 I moved from being a community member and Puppet administrator in the field, to working at Puppet Labs as the Product Owner for our open source projects. At the time, my first goal was to help get a great release of the next major version of Puppet (code-named “Telly”) shipped to the world, which launched late last month. Now with the release of Puppet 3.0.1 — which addressed and fixed the biggest issues that our awesome community of early-adopters found in the 3.0.0 release — it seemed like a good time to blog from the rooftops.

I’m new to Puppet Labs, but I have been running Puppet in large-scale production operations since 2009 and, somewhat naïvely, felt like I had a good idea of what Puppet 3 was supposed to look like. There had been, after all, a few dozen bugs in Redmine over the past couple of years whose “Target Release” field I’d seen James or Nigel set to “Telly” … it was going to fix all the bad behavior we’d all reported over the years, right? Well, not exactly.

The tough thing about major releases of popular products is that the burden of expectations becomes so great, there’s no way reality can measure up. In some cases (The Phantom Menace, Guns n’ Roses “Chinese Democracy”) when the release does come, it’s universally panned on its own merits; other times, the release might have been fantastic if it had come when it was promised, but the timing was such that it had already gone stale (Duke Nukem Forever).

Mapping the Puppet Forge

A long time ago (well, June of this year) the Puppet Forge was running without a leader. In my role as community manager, I saw the Forge as having this awesome potential to be the resource for user-generated content surrounding the Puppet community. I knew it was getting more attention, but that was mostly anecdotal. My next step was to find some data that could tell a good story.

Puppet Modules are often the first way people learn and start using Puppet. We’ve had our Puppet Forge for a while, but I didn’t feel like I knew a lot about it. When we were getting ready to interview Product Owners for the Puppet Forge and Modules, I decided I wanted to know more to help me prepare for the interview, and maybe give me some insight into usage patterns that I hadn’t thought about.

Like any geek, I love data. I knew we had all sorts of data in our module download logs, but we had never really taken the time to transform that data into awesome information. I started with simple awk/sed/grep to find basic information, like which modules were popular. This worked for a time, but then I wanted to look up modules by name, find popular authors, and do things like ignore version number changes.

Building Application Stacks with Puppet

Managing Google Compute Engine Instances with Puppet

Puppet is an IT automation language that has traditionally been used to configure individual nodes. Puppet’s declarative language and dependency model are also well suited to describing entire application stacks on top of public cloud offerings.

This post will explain how Puppet can be used to model resources through Google Compute Engine’s API in order to describe application stacks as reusable and composable configuration files.

Google Compute Engine (GCE) is a service offering from Google that allows users to provision virtual machine instances that run on Google’s infrastructure. The one thing that really stands out about this service compared to similar offerings is how fast it is. Machine instances generally take seconds, not minutes, to spin up.

The GCE API allows users to create all of the resources needed to dynamically model application stacks, including: virtual machine instances, networks, firewalls, and persistent disks. It also allows you to specify a lot of the characteristics of a virtual machine instance like the image that should be used, and how much memory and CPU to allocate to that instance.

What this API can’t do is tell a machine how it should be configured. There is no way to say: “Use this image as a starting place, and then configure yourself to be a mysql database.” This is where Puppet comes in. It can be used with GCE in order to configure the roles that should be assigned to created instances. Puppet can also be used to perform ongoing management of those instances.

This post takes the concept one step further, explaining not only how Puppet can be used to assign roles to compute instances, but also how Puppet can be used to model and manage all of the GCE compute objects that make up an application stack.
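To make that concrete, here is a rough sketch of what such a stack description could look like in the Puppet language. The resource type and parameter names below (gce_network, gce_firewall, gce_disk, gce_instance, puppet_classes) are illustrative assumptions rather than the exact interface of any particular GCE module, so treat this as the flavor of the approach, not copy-paste code:

# Illustrative sketch only: type and parameter names are assumed stand-ins
# for whatever GCE provider module is actually in use.
gce_network { 'app-net':
  ensure      => present,
  description => 'Network shared by the web and database tiers',
}

gce_firewall { 'allow-http':
  ensure      => present,
  network     => 'app-net',
  description => 'Allow inbound HTTP to the web tier',
  allowed     => 'tcp:80',
}

gce_disk { 'db-data':
  ensure  => present,
  size_gb => 100,
  zone    => 'us-central1-a',
}

gce_instance { 'db-1':
  ensure         => present,
  zone           => 'us-central1-a',
  network        => 'app-net',
  disk           => 'db-data',
  # Hand the node a role so Puppet can configure it (for example as a
  # database server) once the instance boots.
  puppet_classes => ['mysql::server'],
}

Because the whole stack is expressed as ordinary Puppet resources with explicit dependencies, the same file can be reused and composed for other environments rather than replayed as a one-off sequence of API calls.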

Module of the Week: maestrodev/maven – Maven repository artifact downloads

This week’s Module of the Week is a guest post from Carlos Sanchez from MaestroDev.

Purpose: Manage Apache Maven installation and download artifacts from Maven repositories
Module: maestrodev/maven
Puppet Version: 2.7+
Platforms: RHEL5, RHEL6

The maven module allows Puppet users to install and configure Apache Maven, the build and project management tool, as well as easily use dependencies from Maven repositories.

If you use Maven repositories to store the artifacts resulting from your development process, whether you use Maven, Ivy, Gradle or any other tool capable of pushing builds to Maven repositories, this module defines a new maven type that will let you deploy those artifacts into any Puppet managed server. For instance, you can deploy WAR files directly from your Maven repository by just using their groupId, artifactId and version, bridging development and provisioning without any extra steps or packaging like RPMs or debs.

The maven type allows you to easily provision servers during development by using SNAPSHOT versions—using the latest build for provisioning. Together with a CI tool, this enables you to always keep your development servers up to date.
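To give a feel for it, here is a minimal sketch of how the module might be used. The class and parameter names below are assumptions based on the description above, so check the module’s own documentation for the exact interface:

# Sketch only: names and parameters are assumed from the description above.

# Install Maven itself (the version shown is just an example).
class { 'maven::maven':
  version => '3.0.4',
}

# Deploy a WAR straight out of a Maven repository by its coordinates.
maven { '/var/lib/tomcat6/webapps/myapp.war':
  groupid    => 'com.example',
  artifactid => 'myapp',
  version    => '1.0.0',   # or '1.0-SNAPSHOT' to track the latest CI build
  packaging  => 'war',
  repos      => ['http://repo.example.com/releases'],
}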

Module of the Week: inkling/postgresql – PostgreSQL Management

EDIT 10/24/12: The inkling/postgresql module is now owned by Puppet Labs, and has been moved to puppetlabs/postgresql. You can contribute to the module on GitHub.

Purpose: Manage PostgreSQL servers, databases, and users
Module: Previously inkling/postgresql, now puppetlabs/postgresql
Puppet Version: 2.7+ & PE 2.0+
Platforms: Tested on RHEL5, RHEL6, Debian6, Ubuntu 10.04

PostgreSQL is a powerful, high-performance, free, open-source relational database server. It hasn’t always enjoyed quite as much popularity as its cousin, MySQL; MySQL is enormously popular, as evidenced by its inclusion in the ubiquitous LAMP (Linux-Apache-MySQL-PHP) web development stack. However, these days there seems to be some increasing momentum behind PostgreSQL in many circles. At Puppet Labs, we are starting to use it more heavily—in fact, it’s a prerequisite for our new PuppetDB product.

With that in mind, it seemed important for us to make sure that there was a Puppet module out there that made PostgreSQL as easy to manage with Puppet as MySQL is. We searched around on the Puppet Forge to see if anyone had undertaken this yet, and found several useful Postgres modules, but it was important to us that the module API be familiar to users of the puppetlabs/mysql module.

We were particularly impressed with the functionality offered by the inkling/puppet-postgresql module, developed by Kenn Knowles of Inkling Systems, so we reached out to Kenn to see if he’d be amenable to us helping to refactor the module to leverage his existing functionality with an API similar to the puppetlabs/mysql module. He was, so we did!
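As a taste of what that mysql-style API looks like, here is a minimal sketch. The class and defined type names are assumptions modeled on the puppetlabs/mysql conventions described above, so the real module may differ in detail:

# Sketch only: names follow the mysql-style API described above.

# Manage the PostgreSQL server itself.
class { 'postgresql::server': }

# Create a database plus an owning user, mirroring mysql::db's interface.
postgresql::db { 'myapp':
  user     => 'myapp',
  password => 'supersecret',
}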

So here’s why you should check out the new 0.2.0 release of the inkling/postgresql module:

Why Puppet has its own configuration language

I was at O’Reilly’s Velocity conference back in June, giving a talk on hacking Puppet, and Puppet’s configuration language came up a lot. Most people love the language and find it the simplest way of expressing their configurations, but some are frustrated by how simple it is and wish they had a full, Turing-complete language like Ruby for specification. I thought it would be worthwhile to discuss why Puppet has a custom language, and dive into some of the benefits and costs.
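For readers who haven’t written any Puppet yet, here is a small, generic example of the sort of declaration the language is built around. It isn’t taken from the talk; it is just an illustration of the declarative, dependency-driven style under discussion:

# Describe the desired state; Puppet works out ordering from the
# declared relationships rather than from the order the code is written in.
package { 'openssh-server':
  ensure => installed,
}

file { '/etc/ssh/sshd_config':
  ensure  => file,
  source  => 'puppet:///modules/ssh/sshd_config',
  require => Package['openssh-server'],
}

service { 'sshd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ssh/sshd_config'],
}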