Friday, September 25, 2015

Photon!

Hi everyone!

I finally got a chance to play with my Particle Photon.

Photon chipset (left), Photon dev board (center), Electron (right)

I've had some prior experience with microcontrollers, so I decided to give the Photon a try.  It's a pretty neat platform.  It comes with onboard firmware/OS that, once set up, connects out to Particle's servers and is accessible through their API.

Right out of the box, I hooked it up to power, downloaded the Particle app to my iPhone, and pointed my iPhone's wifi at the network broadcast by the Photon.  I was able to use the app to configure the Photon with my home wifi settings.

My phone connected, and I selected pin D7 (the device has a built-in LED on that pin).  I set the pin to digitalWrite, which let me toggle it between LOW and HIGH.  When toggled low, the LED was off.  When toggled high, the LED was on.  Success!

I happened to have some buttons wired up to my small breadboard, and I wanted to verify the digitalRead action of another pin.  I hooked pin D0 up to my button (which was connected between GND and VDD with a pull-up resistor).  

The entire time, the D0 indicator on the app showed HIGH.  Unplugged, plugged in, plugged in with the button pressed or released.  Was this not working?  I tried connecting directly from D0 to GND, and D0 to VDD.  Every time it reported HIGH.

I tried my analog pin 0 (A0) and set it to analogRead.  It reported 4095 (which I think is the equivalent of HIGH on the D0 pin).  I hooked it up to ground, and the value didn't change!

Eventually, in my plugging and unplugging of things, I ended up tapping the A0 button in the iPhone app, and I saw its value change.  I plugged D0 back into ground, tapped D0 in the app, and it changed from HIGH to LOW.  Aha!  So it turns out the app doesn't live-update the values; you have to tap the button to see the update!

So... TL;DR:  The iPhone Particle app doesn't live-update; you have to tap on a pin set to digitalRead or analogRead to read the new value of the pin!
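If tapping in the app gets old, the same reads can be scripted against Particle's cloud API, which the stock Tinker firmware exposes.  A minimal sketch, assuming the device is still running Tinker and where YOUR_DEVICE_ID and YOUR_TOKEN are placeholders:

```shell
# Base URL of the Particle cloud API (v1 at the time of writing).
PARTICLE_API="https://api.particle.io/v1/devices"

# Build the endpoint for one of Tinker's exposed cloud functions
# (digitalread / digitalwrite / analogread / analogwrite).
tinker_url() {
  device="$1"; func="$2"
  echo "${PARTICLE_API}/${device}/${func}"
}

# Read pin D0 from the command line instead of tapping in the app:
# curl "$(tinker_url YOUR_DEVICE_ID digitalread)" \
#      -d access_token=YOUR_TOKEN -d params=D0
```

The curl call is left commented since it needs a real device ID and access token.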

Now, time to start actually building something with it!

Friday, September 18, 2015

s3 ETag and puppet

Hello faithful readers!

Today, I ponder a problem... how to know if I have the latest version of a file from an s3 bucket, and how to get that logic into puppet?

Amazon s3 buckets contain objects, and each object has an ETag... which is sometimes the md5 of the content, and sometimes not.

In trying to figure out how to know if I should download an object or not, I was inspired by the s3file module out on the puppet forge.

Notably, this slick shell command to pull out the ETag and compare it to the md5sum of the file:
"[ -e ${name} ] && curl -I ${real_source} | grep ETag | grep `md5sum ${name} | cut -c1-32`"
However, in my use case we upload with aws s3 cp /path/to/file s3://bucketname/key/path/, and those uploads happen as multipart uploads at the low level.  This splits our ETag into multiple parts for files much smaller than the rest of the internet seems to deal with (I think at either 5MB or 15MB boundaries), which makes it not so keen for the above comparison.

Also, we don't have a publicly available bucket to curl, so we have to replace the curl -I call with an aws s3api head-object --bucket bucketname --key key/path/file.

While playing with the s3api command, I looked at a file I had uploaded previously.  When I uploaded it, I had used s3cmd's sync option.  I saw the following output.

$ aws s3api head-object --bucket bucketname --key noarch/epel-release-6-8.noarch.rpm
{
    "AcceptRanges": "bytes",
    "ContentType": "binary/octet-stream",
    "LastModified": "Sat, 19 Sep 2015 03:27:25 GMT",
    "ContentLength": 14540,
    "ETag": "\"2cd0ae668a585a14e07c2ea4f264d79b\"",
    "Metadata": {
        "s3cmd-attrs": "uid:502/gname:staff/uname:~~~~/gid:20/mode:33188/mtime:1352129496/atime:1441758431/md5:2cd0ae668a585a14e07c2ea4f264d79b/ctime:1441385182"
    }
}

When I manually uploaded the file using aws s3 cp, I saw the following header info on the s3 object
$ aws s3api head-object --bucket bucketname --key epel-release-6-8.noarch.rpm
{
    "AcceptRanges": "bytes",
    "ContentType": "binary/octet-stream",
    "LastModified": "Sat, 19 Sep 2015 03:39:13 GMT",
    "ContentLength": 14540,
    "ETag": "\"2cd0ae668a585a14e07c2ea4f264d79b\"",
    "Metadata": {}
}

So, the MD5 IS in there... IF I use a program/command that sets it in the metadata.  
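One way to guarantee the md5 is always in the metadata is to stamp it there ourselves at upload time.  A sketch, assuming a version of awscli that supports --metadata on aws s3 cp, and with hypothetical bucket/key names:

```shell
# Print the 32-character md5 hex digest of a file.
file_md5() {
  md5sum "$1" | cut -c1-32
}

# Upload with the md5 stamped into the object's user metadata:
# aws s3 cp file.rpm s3://bucketname/key/path/ --metadata md5="$(file_md5 file.rpm)"
```

The aws call stays commented since it needs real credentials and a real bucket.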

Back to my problem at hand... how would I know if I already downloaded the file?  What are my options?
  1. Figure out some sort of script to calculate the md5sum of the file.  This would require assuming the number of multi-part chunks
  2. Always make our upload process either avoid multi-part uploads (keeping the ETag equal to the md5sum) or use a tool that sets (or manually set) the metadata to contain an md5sum
  3. Pull out the ETag value when I download the file, and store an additional file alongside my downloaded s3 file (e.g. if I have s3://bucket/foo.tar.gz, pull out its ETag and write it to foo.tar.gz.etag after a successful download)
All three have disadvantages.

#1 - This script has to assume or calculate the multi-part upload chunk size.  It is explored in some of the answers at this StackOverflow post.
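For the curious, the multipart ETag can be reproduced locally: it is the md5 of the concatenated binary md5s of the parts, with -&lt;part count&gt; appended.  A sketch of option 1, which still assumes you know the chunk size the uploader used:

```shell
# Reproduce a multipart-style ETag for a file, given the uploader's
# chunk size (e.g. "8m").  Output looks like "<md5hex>-<parts>".
multipart_etag() {
  file="$1"; chunk="$2"
  dir=$(mktemp -d)
  # carve the file into parts the same way the uploader would have
  split -b "$chunk" "$file" "$dir/part_"
  parts=$(ls "$dir" | wc -l | tr -d ' ')
  # md5 each part, turn the hex digests back into raw bytes, md5 the result
  etag=$(for p in "$dir"/part_*; do md5sum "$p" | cut -c1-32; done \
         | tr -d '\n' | xxd -r -p | md5sum | cut -c1-32)
  rm -rf "$dir"
  echo "${etag}-${parts}"
}
```

Guessing the wrong chunk size gives the wrong answer, which is exactly the disadvantage above.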

#2 - This requires our uploads to have happened a certain way.  If someone circumvents the standard process for uploading (uses a multipart upload, or uploads with the aws s3 cp instead of aws s3api put-object), then we would always download the item. 

#3 - This requires us to save a separate metadata file alongside the downloaded file.  If the application we are downloading the file for is sensitive to additional files hanging around, this may interfere.  There IS a big plus in this case, though.

aws s3api get-object has a helpful argument...

--if-none-match (string) Return the  object  only  if  its  entity  tag
       (ETag) is different from the one specified, otherwise return a 304 (not
       modified).
If I use that, the s3 call would skip downloading the file if the ETag matches.  
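Option 3 could then look something like the sketch below (bucket, key, and paths are all made up); the helper just hands aws s3api an ETag to compare against:

```shell
# Print the saved ETag for a previously downloaded file, or "" if we
# have never downloaded it (s3api wants *some* string to compare).
stored_etag() {
  if [ -f "$1.etag" ]; then cat "$1.etag"; else echo '""'; fi
}

# Sketch of the download step; get-object exits non-zero on the 304
# when the ETag still matches, so we only refresh the sidecar on success:
# aws s3api get-object --bucket bucketname --key foo.tar.gz \
#     --if-none-match "$(stored_etag /opt/foo.tar.gz)" /opt/foo.tar.gz \
#   && aws s3api head-object --bucket bucketname --key foo.tar.gz \
#        --query ETag --output text > /opt/foo.tar.gz.etag
```

The aws calls are commented since they need real credentials and a real bucket.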

Also, with option 3... how do I do it in puppet?  Do I have some clever chaining of exec resources?  Do I create a resource type that utilizes the AWS Ruby SDK?
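As a starting point for the exec idea, something like this might work; a rough sketch where the bucket, key, paths, and the two wrapper scripts (which would contain the aws s3api calls) are all hypothetical:

```puppet
# Download-if-changed via an exec with an "unless" guard.
exec { 'fetch foo.tar.gz':
  command => '/usr/local/bin/s3_fetch.sh bucketname key/path/foo.tar.gz /opt/foo.tar.gz',
  # skip the download when the stored .etag still matches the object's ETag
  unless  => '/usr/local/bin/s3_etag_match.sh bucketname key/path/foo.tar.gz /opt/foo.tar.gz',
  path    => ['/usr/bin', '/usr/local/bin'],
}
```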

Hrm, another post with ideas and options but no solutions.... I'll have to ponder how I should move forward with this, and fuel a future blog post!

Thanks for reading!

Monday, September 7, 2015

that's yum mmmy!

Hello everyone!

I've been pretty sparse lately, and today's topic is pretty light, but Today I Learned (TIL) that puppet has a yum repository resource!  While I was trying to figure out if there was an augeas lens for the ini-style format the repo files are stored in, I was very happy to find a native type in base puppet!

One feature that caught my eye is that there is a value of s3_enabled...


(Property: This attribute represents concrete state on the target system.)
Access the repository via S3. Valid values are: False/0/No or True/1/Yes. Set this to absent to remove it from the file completely.
Valid values are absent. Values can match /^(True|False|0|1|No|Yes)$/i.

This has me intrigued, since we are working in AWS.  Does this mean that we can create a repository as flat files in an s3 bucket?
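Wiring that property up in a manifest would presumably look something like this; a sketch where the repo name and baseurl are made up:

```puppet
# Hypothetical S3-backed repo definition; name and baseurl are examples.
yumrepo { 'mybucket-repo':
  descr      => 'my s3-hosted yum repo',
  baseurl    => 'https://mybucket.s3.amazonaws.com/noarch',
  enabled    => 1,
  gpgcheck   => 0,
  s3_enabled => 1,
}
```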

I found this amazing article on setting up s3 based yum repos using IAM authorization...

So, does it work!?

Yes and no...  According to this Pull Request, the plugin only supports IAM Signature version 2, which is only in place for older AWS regions.  Newer regions, like China and Frankfurt, only support IAM Signature version 4.  So only certain regions will be able to use the plugin as is.

I have tried duplicating the steps of the article on both an N. Virginia and a Frankfurt instance; everything worked correctly in N. VA, and in Frankfurt I hit the same error as the pull request (a 400) when trying both yum-s3-iam and yum-s3-plugin.  I have a really rough github project here which I used to aid my setup.  I had to 'yum install -y git puppet3', clone my repository, run my init script, and then run sudo puppet apply ~/s3-repo-sandbox/s3_plugin.pp to apply my changes.  I had pre-set-up a bucket named dawiest-repo, in which I placed a noarch repository created similarly to the above article.

Assuming you can get the above plugin to work for your region..... how can you make sure your repos are set up before any packages are installed?  You could directly add require parameters to each package that needs it, but it is probably better to use the 'spaceship operator' to collect all the references and create the require entries for you!
Although stages can handle this, and so can specific yum repo dependencies, it is better to declare the relationship generically.
Just put Yumrepo <| |> -> Package <| provider != 'rpm' |> in your puppet manifest.
node default {
  Yumrepo <| |> -> Package <| provider != 'rpm' |>
}
This makes it so that all yumrepo types will get processed before any packages that don't have 'rpm' as their provider. That latter exclusion is so that I can use the (for example) epel-release RPM package to help install the yum repo.

Also, this article gives a good description of the process needed for setting up gpg keys for your repo!

Friday, September 4, 2015

Hi everyone...

Felt like installing macvim... according to the internet, it would be as simple as 'brew install macvim'.

bash-3.2$ brew install macvim
macvim: A full installation of Xcode is required to compile this software.
Installing just the Command Line Tools is not sufficient.
Xcode can be installed from the App Store.
Error: An unsatisfied requirement failed this build.

Oh, it wants the full Xcode package, but I've just installed the CLI tools...  Let's see if there is a binary package in Cask?

bash-3.2$ brew cask install macvim --override-system-vim
==> Downloading
######################################################################## 100.0%
==> Symlinking App '' to '/Users/admin/Applications/'
==> Symlinking Binary 'mvim' to '/usr/local/bin/mvim'
🍺  macvim staged at '/opt/homebrew-cask/Caskroom/macvim/7.4-77' (1906 files, 35M)

Bingo... thanks Cask folks!