Saturday, November 12, 2016

#vDM30in30 - 11

I had some time yesterday, so I finally had the chance to start watching some of the PuppetConf presentations that I did not get a chance to attend in person. As a reminder, you can find the slides and videos here.

I may morph this into a more 'static' page on my site, and dive in with some thoughts about certain presentations.

Attended -

A Roadmap for a Platform: Mixing Metaphors for Fun and Profit – Eric Sorenson, Puppet (videos) (slides)
Proof of Concept to 30K+ Hosts with Puppet - Petersen Allen, Salesforce (videos) (slides)
Cloud, Containers & the Impact on IT - Jeffrey Snover, Microsoft (videos) (slides)

Puppet Troubleshooting - Thomas Uphill (videos) (slides)
Turning Pain Into Gain - A Unit Testing Story - Nadeem Ahmad & Jordan Moldow (videos) (slides)
Watching the Puppet Show – Sean Porter, Heavy Water Operations (videos) (slides)
Avoiding Toxic Technical Debt Derivatives – R. Tyler Croy, CloudBees, Inc. (videos) (slides)
Using HashiCorp's Vault With Puppet – Seth Vargo, HashiCorp (videos) (slides)
Running Puppet Software in Docker Containers – Gareth Rushgrove, Puppet (videos) (slides)
DevOps Where You Wouldn't Have Expected – Thomas Limoncelli (videos) (slides)
The Future of Testing Puppet Code – Gareth Rushgrove, Puppet (videos) (slides)

Watched -

Service Discovery and Puppet – Marc Cluet, Ukon Cherry (video) (slides)
Enjoying the Journey from Puppet 3.x to 4.x – Rob Nelson, AT&T (videos) (slides)
Collaboration and Empowerment: Driving Change in Infrastructure with Culture – Martin Jackson, Walmart (videos) (slides)
A Look at Looking in the Mirror: Actionable Retrospectives - J. Paul Reed (videos) (slides)
External Data in Puppet 4 – R.I. Pienaar (videos) (slides)
Scaling Puppet on AWS ECS with Terraform and Docker – Maxime Visonneau, Trainline (videos) (slides)
Case Study: Puppets in the Government – Kathy Lee (co-author: Glenn Bailey) (videos) (slides)

Want to Watch -

The Truth, Nothing but the Truth: Why Type Systems are Important to Configuration Management – Henrik Lindberg, Puppet (videos) (slides)
Scaling Puppet and Puppet Culture at GitHub - Kevin Paulisse (videos) (slides)
Implementing Puppet within a Complex Enterprise – Jerry Caupain, KPN (videos) (slides)
Moving from Exec to Types and Providers – Martin Alfke, example42 GmbH (videos) (slides)
A Year in Open Source: Automated Compliance With Puppet – Trevor Vaughan, Onyx Point, Inc. (videos) (slides)
Automating Datastore Fleets with Puppet – Joseph Lynch, Yelp (videos) (slides)
Closing the Loop: Direct Change Control with Puppet – Nick Lewis, Puppet (videos) (slides)
Continuous Delivery and DevOps with Jenkins and Puppet Enterprise – Carl Caum, Puppet & Brian Dawson, CloudBees (videos) (slides)
Debugging Diversity – Anjuan Simmons, Assemble Systems (videos) (slides)
Direct Puppet and Application Management for the Puppet Platform – Ryan Coleman, Puppet (videos) (slides)
Puppet 4.x: The Low WAT-tage Edition – Nick Fagerlund, Puppet (videos) (slides)
Puppet Best Practices: Roles & Profiles – Gary Larizza, Puppet (videos) (slides)
Puppet Design Patterns - Lessons From the Gang of Four - David Danzilio (videos) (slides)
Device-Based Modules: Making Them as Simple as a Light Switch – TP Honey, Puppet (videos) (slides)
There is No “I” in DevOps – Bart Driscoll, Dell EMC (videos) (slides)
Writing Custom Types to Manage Web-Based Applications – Tim Cinel, Atlassian (videos) (slides)
Successful Puppet Implementation in Large Organizations – James Sweeny, Puppet (videos) (slides)
Best Practices for Puppet in the Cloud – Randall Hunt, Amazon & Andrew Popp (videos) (slides)
Pulling the Strings to Containerize Your Life - Scott Coulton (videos) (slides)

Friday, November 11, 2016


#vDM30in30 - 9

I was reading through this week's Docker Weekly, and I saw something that caught my eye... docker-flow.  There was a great video demo about using it.

It fills a similar role to Registrator tied to HAProxy with consul-template, but it is directly compatible with docker swarm, and it can bridge to the docker network that you have created to run your swarm containers on.

This looks pretty interesting, and I think I will start using it for my configuration at home, and to facilitate any demos that I give for our local DevOps Meetup group!

youtube-dl followup - with more docker!

#vDM30in30 - 10

Following up about youtube-dl, I saw in this week's Docker Weekly that someone had a post about running youtube-dl in a container to avoid the messy setup.

I didn't have too much trouble installing it on my mac with homebrew, but keeping different applications running in docker (possibly with swarm and docker network bridging/proxying) is an appealing way to avoid that kind of local setup.

Wednesday, November 9, 2016

youtube-dl to watch youtube offline

#vDM30in30 - 8

youtube-dl -

As you may know, the PuppetConf videos, slides, and photos have been released. I have a friend who is on slow satellite internet with a data cap, but he is excited about watching the videos. I have a useful tool installed on my laptop called youtube-dl. It allows me to point at the 'playlist' URL for all of the PuppetConf videos, and it will download (in parts) all of the videos.

The below commands (plus time and bandwidth) are all that I need to pull down all of the videos. I can resume the download if the connection gets interrupted (which happened at about 32/94 for me on the first day).

$ mkdir -p ~/puppetconf2016
$ cd ~/puppetconf2016
$ brew install youtube-dl
$ youtube-dl --yes-playlist

After everything is finished downloading, I can transfer them to a USB drive for sharing with friends, and I can watch them while offline.

Tuesday, November 8, 2016

Hello from the other side... (inside the container)

#vDM30in30 - 7


I have used EC2 tags to gather information into facter, and I wanted to duplicate that same kind of logic using docker containers and labels... Unfortunately, it seems that labels are only visible 'externally', not from inside the container.
I found several issues which all end up pointing at this 'introspection' issue.

It looks like the only way currently is to have some sort of external service manage that information for you (etcd/consul/mesos, passing it in via environment variables to your container, or the Kubernetes downward API), OR to mount the docker socket inside your container and parse the JSON output (not very secure).

Reasons for wanting that information range from enabling service discovery (knowing what actual host/IP you are bound to) to knowing what size JVM heap args to set inside a container based on the actual container run environment.

It seems like memory information was recently added, which allows a read-only mount of the /sys/fs/cgroup location specific to the container...

$ docker run --rm -it --entrypoint /bin/bash puppet/puppet-agent-ubuntu
root@96acc60738a7:/# cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes
root@96acc60738a7:/# exit
$ docker run -m 2048m --rm -it --entrypoint /bin/bash puppet/puppet-agent-ubuntu
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
root@dab851113412:/# cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes
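One practical use of that read-only mount is the JVM-heap case mentioned above. Here is a minimal sketch of deriving a -Xmx flag from the container's memory limit; the helper name, the 75% heuristic, and the cgroup v1 path are my assumptions (cgroup v2 exposes memory.max instead), so treat it as illustrative, not canonical:

```shell
# Hypothetical helper: derive a JVM -Xmx flag from a cgroup memory limit file.
# Default path assumes cgroup v1; pass another path (e.g. memory.max) as $1.
jvm_heap_flag() {
  local cgroup_file="${1:-/sys/fs/cgroup/memory/memory.limit_in_bytes}"
  [ -r "$cgroup_file" ] || return 1
  local limit_bytes
  limit_bytes=$(cat "$cgroup_file")
  # Give the heap roughly 75% of the container limit, in megabytes
  echo "-Xmx$(( limit_bytes / 1024 / 1024 * 3 / 4 ))m"
}

# Inside a container started with `docker run -m 2048m ...` this would
# print -Xmx1536m; outside a container it may fail or report the host limit.
jvm_heap_flag || true
```

An entrypoint script could feed that straight into `exec java $(jvm_heap_flag) ...` so the heap tracks whatever `-m` the container was started with.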

Monday, November 7, 2016

Find non-utf8 characters in your puppet code!

#vDM30in30 - 6

I was checking out the #puppet channel on IRC this morning, and someone came in asking for help with the following error:

err: Could not retrieve catalog from remote server: Error 400 on SERVER: (<unknown>): control characters are not allowed at line 1 column 1

I couldn't find anything puppet specific, but I found several posts referencing sidekiq and that same error, and it seemed to be due to the presence of non-UTF-8 characters in their files/code.

I found a few answers on Stack Overflow for detecting non-UTF-8 chars.

And to add in the 'search hiera, puppet, and erb files' find option. Here is the 'match high-bit characters' command:

find $CODEBASE/ -type f \( -name '*.pp' -o -name '*.yaml' -o -name '*.erb' \) | xargs grep --color='auto' -P -n "[\x80-\xFF]"

Here is the 'match anything outside the expected ASCII range' command:

find $CODEBASE/ -type f \( -name '*.pp' -o -name '*.yaml' -o -name '*.erb' \) | xargs grep --color='auto' -P -n "[^\x00-\x7F]"

This is something that you could even add as a validation check to your CI system to prevent you from checking in invalid characters to your puppet codebase.
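As a quick sanity check that the pipeline behaves as a CI gate, here is a self-contained demo; the file names and contents are invented, and GNU grep's -P (PCRE) is assumed, so this is Linux-specific:

```shell
# Demo: plant one clean file and one file containing a stray high-bit byte,
# then run the same find/grep pipeline to flag the bad one.
codebase=$(mktemp -d)
printf 'class good { }\n' > "$codebase/good.pp"
printf 'class bad { } \x80\n' > "$codebase/bad.pp"

# -l prints only the names of offending files; LC_ALL=C keeps PCRE matching
# raw bytes rather than tripping over invalid UTF-8 input
find "$codebase" -type f \( -name '*.pp' -o -name '*.yaml' -o -name '*.erb' \) \
  | xargs env LC_ALL=C grep -P -l "[^\x00-\x7F]"
```

In a CI job you would run this against $CODEBASE and fail the build if grep exits 0 (i.e., if it found any matches).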

Sunday, November 6, 2016

Roles and Profiles - an official example

#vDM30in30 #5

After first learning about the roles and profiles pattern, I searched long and hard for some good examples of it.

Today, I stumbled onto some official docs in the PE 2016.4 pages that give a well-thought-out example of setting up Jenkins masters, and the refactoring process that led them to the profile that they use now.

Saturday, November 5, 2016

Cyclomatic Complexity - A puppet lint plugin by Danzilio

#vDM30in30 Post #4!

One of the awesome things to come out of the Contributor Summit was David Danzilio and Henrik Lindberg collaborating on an Assignment Branch Condition (ABC) metric for Puppet code.  David recently released it as a plugin for puppet-lint!

I recently gave a little micro-demo of the plugin at our local DevOps Meetup.

There are two ways to run the plugin: gem install it and run the command, or add it to the Gemfile in your module, and it will automatically be added to your rspec lint check.

The problem that I had when demoing the plugin was that the warning/error threshold level could only be set in your Rakefile.  So I had two options: find a sufficiently complex module, or lower the complexity warning level in the Rakefile for an existing module.

Fortunately, there exists a complex module in the voxpupuli ecosystem: puppet-staging!

$ git clone
$ cd puppet-staging
$ find ./ -name '*.pp' | xargs puppet-lint | grep 'branch condition'
./manifests/file.pp - WARNING: assignment branch condition size is 76.69 on line 0
./manifests/extract.pp - WARNING: assignment branch condition size is 47.37 on line 0

If you wanted to modify the module to include those checks, and set a lower complexity threshold, you can make changes similar to the below.

$ git diff
diff --git a/Gemfile b/Gemfile
index 0571378..31f4ffa 100644
--- a/Gemfile
+++ b/Gemfile
@@ -22,6 +22,7 @@ group :test do
   gem 'puppet-lint-classes_and_types_beginning_with_digits-check',  :require => false
   gem 'puppet-lint-unquoted_string-check',                          :require => false
   gem 'puppet-lint-variable_contains_upcase',                       :require => false
+  gem 'puppet-lint-metrics-check',                                  :require => false
   gem 'metadata-json-lint',                                         :require => false
   gem 'puppet-blacksmith',                                          :require => false
   gem 'voxpupuli-release',                                          :require => false, :git => ''
diff --git a/Rakefile b/Rakefile
index d00f247..0a00e25 100644
--- a/Rakefile
+++ b/Rakefile
@@ -18,6 +18,7 @@ exclude_paths = %w(
 PuppetLint.configuration.ignore_paths = exclude_paths
+PuppetLint.configuration.metrics_abc_warning = 1
 PuppetSyntax.exclude_paths = exclude_paths

 desc 'Run acceptance tests'

Which will result in the following output when you run rake lint

$ bundle exec rake lint
examples/deploy.pp:0:abc_size:WARNING:assignment branch condition size is 2.24
examples/extract.pp:0:abc_size:WARNING:assignment branch condition size is 11.66
examples/file.pp:0:abc_size:WARNING:assignment branch condition size is 7.07
examples/init.pp:0:abc_size:WARNING:assignment branch condition size is 1.0
examples/scope_defaults.pp:0:abc_size:WARNING:assignment branch condition size is 3.32
examples/staging_parse.pp:0:abc_size:WARNING:assignment branch condition size is 12.53
manifests/deploy.pp:0:abc_size:WARNING:assignment branch condition size is 24.37
manifests/extract.pp:0:abc_size:WARNING:assignment branch condition size is 47.37
manifests/file.pp:0:abc_size:WARNING:assignment branch condition size is 76.69
manifests/init.pp:0:abc_size:WARNING:assignment branch condition size is 6.71
manifests/params.pp:0:abc_size:WARNING:assignment branch condition size is 25.73

Overall, it is a good idea to start applying ideas that have historically been present in software development to Infrastructure as Code.  Thinking about the complexity of your code gives you insight into the testability and maintainability of your codebase.

Thursday, November 3, 2016

DevOps Nov3rd Followup links

#vDM30in30 number 3!

Here are some links of interest from my DevOps Meetup talk tonight!

Slides and videos are now posted on the PuppetConf Schedule page
Videos from PuppetConf 2015 - Gareth's presentation with examples of running a full version of puppetserver/puppetdb/puppetdashboard via docker.
Troubleshooting Puppet - Thomas Uphill
HashiCorp Vault with Kubernetes
Seth Vargo didn't really have any actual slides; everything was based on examples that he has in this git repo.

Here is a link to my slides; you can clone the repo and open the HTML file in your browser.

Getting double use out of your Serverspec tests with Sensu

Hey Everyone! Second post for #vDM30in30!

I wanted to mention something that I learned at the Watching the Puppet Show presentation at #PuppetConf 2016, given by Sean Porter.

I had heard of Sensu before, but didn't know most of the details about its operation.  You can find out more about Sensu on their website.

The most interesting part of the talk was that there exists a Sensu Serverspec integration... This allows you to use your acceptance tests written in Serverspec as monitoring rules for your agents/nodes.  This keeps you from rewriting the same 'logic' in both your acceptance tests and your monitoring scripts.  If it is important enough to test for, you probably want to know whether your operational app is behaving correctly as well!

Tuesday, November 1, 2016

TIL - xargs tricks!

Hello everyone!  Today is the first day of #vDM30in30.  I was inspired to participate by rnelson0 in the #voxpupuli channel on freenode.

I have been using xargs for a while, but I learned something new!  Today I Learned (TIL) about the -I, or replace-str, argument.

What IS xargs?  Straight from the xargs man page

 xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input.

What can xargs do for us?  If you have the result of a command that you wish to pass in as arguments to another command, xargs is a more natural way to flow from one command to the other using a pipe operator ( | ) instead of embedding the results in a for loop.
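As a tiny illustration of the default echo behavior described in the man page excerpt above (the strings here are made up), -n 1 runs the command once per item, which makes the item boundaries visible:

```shell
# Blanks and newlines delimit items, and the double quotes protect the
# embedded space; -n 1 echoes each parsed item on its own line.
printf 'one\ntwo three\n"four five"\n' | xargs -n 1
```

Note that "four five" comes out as a single item on one line, while the unquoted `two three` is split into two.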

A good set of xargs examples can be found here.  The feature that I didn't know about was the -I, or replace-str, argument. From the man page...

       -I replace-str
              Replace occurrences of replace-str in the initial-arguments with names read from standard input.  Also, unquoted blanks do not terminate input items; instead  the  separator is the newline character.  Implies -x and -L 1.

What that allows you to do is to use the value from stdin in an arbitrary (and possibly repeated) position in your command.

If you wanted to create a backup file of all files in a directory except for a few that you wish to exclude, you can do something like the following..

$ ls
a  b  c  dime  penny  quarter
$ mkdir archive
$ ls | grep -v -e dime -e penny -e archive # This will print the list of files, excluding dime, penny, and the archive directory
$ ls | grep -v -e dime -e penny -e archive | xargs -I{} cp {} archive/{}.bak
$ ls archive/
a.bak  b.bak  c.bak  quarter.bak

You specify the symbol for the -I argument, {} in this case, which will be replaced by the results from the previous command.

Where else have I used xargs lately?  If you want to find all of the files of a particular type that contain a particular string, and edit them one by one:

$ find ./ -name '*.pp' | grep -v params | xargs grep -l params | xargs -n 1 vi

What that will do is find all files that match the pattern *.pp, remove any files that contain params in the name (grep -v is an inverse match); xargs passes all of the listed files into the grep command, -l prints the name of any matching files, then the final xargs sends each file, one at a time, into vi for editing.
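If the edit is mechanical rather than interactive, the same pipeline can end in sed instead of vi for a scripted batch change. A minimal sketch; the class names, file names, and GNU sed's -i flag (which differs on macOS) are my assumptions:

```shell
# Set up a throwaway "module" with one file referencing a params class
workdir=$(mktemp -d)
printf 'class demo { include demo::params }\n' > "$workdir/init.pp"
printf 'class demo::params { }\n' > "$workdir/params.pp"

# Same find | grep -v | xargs grep -l chain, but finishing with a
# non-interactive rename via GNU sed -i instead of opening vi
find "$workdir" -name '*.pp' | grep -v params \
  | xargs grep -l params \
  | xargs sed -i 's/demo::params/demo::settings/g'

cat "$workdir/init.pp"
```

Only init.pp is touched: params.pp is filtered out of the candidate list by `grep -v params` before the edit ever runs.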

Before someone learns about xargs, they may end up embedding a bash for loop into their commands, something like

$ for i in $(ls | grep -v archive); do cp $i archive/$i.bak; done

With a short example like that, it may not be too hard to follow, but as your commands get much longer, it can be hard to tell which parts belong where.  It is usually much easier to read left to right.

From left to right using xargs..
$ ls | grep -v archive | xargs -I{} cp {} archive/{}.bak