pianobar +1: Pandora CLI

If you’re like me, you jam to Pandora while writing code. Currently my favorite station is “Indie Electronic Radio.” Are you also a home row hero? @Nathan, I’m lookin’ at you. 🙂

As a web developer I make heavy use of the browser, often multiple browsers. My current go-to is Chrome, but I’ll often run admin tools in Safari and/or run tests in Firefox.

One problem with my workflow is that I constantly create and destroy tabs, and sometimes I accidentally close the browser tab that Pandora is running in. How do you get over this annoying little productivity suck?
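The answer, as the title gives away, is pianobar, an open-source terminal client for Pandora. A minimal sketch of getting started, assuming macOS with Homebrew:

brew install pianobar   # assumes Homebrew; use your platform's package manager otherwise
pianobar                # log in, pick a station, and control playback from the keyboard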


TV-less January

tl;dr: We watched no TV in January 2016 and got more sleep, read books, went on dates, worked on puzzles, and are pretty stoked with the results.

My wife, Nicole, suggested sometime in late December that we try not to watch any TV in January. Since our boys were born, we’ve both really enjoyed the time between when they fall asleep and when we head to bed. It’s our quiet time. We typically watch a few episodes of TV; I’ll work from my laptop and she’ll knit. In terms of our media consumption, we watch more TV than movies. Here are some of our favorites (some are hers, some are mine, but most we watch together):

  • Bones
  • Californication
  • Downton Abbey
  • Glee
  • Graceland
  • Grey’s Anatomy
  • Homeland
  • House of Cards (2013)
  • House of Lies
  • How I Met Your Mother
  • How to Get Away With Murder
  • Limitless
  • Madam Secretary
  • Modern Family
  • Mr. Robot
  • Nashville
  • New Girl
  • Parenthood (2010)
  • Person of Interest
  • Quantico
  • Rookie Blue
  • Royal Pains
  • Scandal (2012)
  • Scorpion
  • Silicon Valley
  • Sons of Anarchy
  • Suits
  • Switched at Birth
  • The Big Bang Theory
  • The Blacklist
  • The Good Wife
  • The Mindy Project
  • The Newsroom (2012)
  • Vice
  • White Collar

Phew. Quite a list! These are all shows that we watched while they were still airing, or that continue to air. We’ve also watched lots of series on Netflix and Amazon Prime Instant Video, most recently Breaking Bad. Roughly annually, I re-watch The Wire and she re-watches Grey’s Anatomy.

All this to say: We like to watch TV.

That said, we both agree that watching TV is not the best use of our time. There are hundreds of other things we could be doing to relax and enjoy our time together. Additionally, our kids were starting to watch a few shows here and there for entertainment, and we would love to minimize that. So we agreed to give it a shot: no TV for all of January. That includes Netflix, Amazon Prime, Hulu, YouTube, downloads, etc. We made one special exception to go see Star Wars as a date night (we had recently watched the first six episodes back to back).

Looking back on the month, I’d say it was a huge success. There were a few moments, when the kids were sick and/or it was raining or snowing outside, that we struggled to stick to our guns.

What did we do instead? I’m glad you asked!

Read/listened to lots of books. As many of you know, I’m a big fan of audiobooks and podcasts. During January I listened to:

  • The Martian
  • Eat That Frog!
  • Never Go Back (Jack Reacher)
  • Spy the Lie
  • Elon Musk: Inventing the Future
  • Essentialism: The Disciplined Pursuit of Less
  • Persuasion IQ
  • You Are a Badass

I keep my reading list fairly up to date if you’re interested in any of these: https://www.goodreads.com/user/show/34788469-cj-avilla

We started listening to the second season of the Serial podcast.

The awesome thing about listening to audiobooks is that you can speed them up if the story is moving too slowly, and you can multitask.

Puzzles. This adventure jump-started a puzzle kick. The kids started doing tons of puzzles, and Nicole and I started working through a massive one of our own.

Date nights. We went on more dates in January than in the better half of 2015.

Sleep. Not binge-watching TV until 3 AM does wonders for your sleep cycle. I’ve even gotten up at 5:30 a few mornings to go to the gym!

I’m definitely looking forward to watching The Martian movie in February, but I think for the most part we’ve broken the habit of jumping straight for the TV. Nicole plans to continue a modified version of the No TV rule moving forward.

February is a social media cleanse 🙂 no Facebook, Twitter, or LinkedIn. Look forward to a report!


So you wanna work remote, huh?

Increasingly, companies are either fully remote or at least considering remote workers. There are really two approaches to becoming a remote employee. In “The 4-Hour Workweek,” Tim Ferriss talks about how to transition to remote work within your current job. This post focuses on the other approach: getting a remote job, with some specifics about the remote job search. It is not about transitioning from onsite to offsite with the same company, and it is not about why remote is good or bad for the company or the employee. The reasons people want to work remotely differ a lot based on situation and circumstance.

General Developer Job Search Advice

Job hunting is a roller coaster. There will be some lag between sending your first applications and hearing back; then you will get some positive news and some negative news. As things progress, the news will be more intensely positive or negative. Don’t toss your cookies ;). When things aren’t going well, it might help to vent to an understanding friend.

Write a blog. Putting yourself out there is super hard. You will likely experience imposter syndrome and at times feel like you have nothing valuable to say. Push through and f’ing do it. The biggest benefit is that employers will see that you’re interested in and passionate about something. Every time you write a bit of code that isn’t cookie cutter, make a note or shoot yourself an email, because that’s likely a great topic for a blog post. I recommend using either Tumblr or WordPress. No need to roll your own (we all know you want to roll your own blogging engine; it’s just procrastination, don’t!).

Attend and SPEAK at meetups. What do you think the most valuable resource you have is? Mad Ruby skills? You can build things at scale? You’re a micro-service artisan? Sweet! Those are great, but none of them comes close to how valuable your NETWORK is. Who you know and who you’re connected to is hands down the most valuable resource for building an incredible career. One of the best ways to expand your social graph is meetups. Especially if you don’t live in SF or NYC, meetups are a great way to meet other people who are also working remotely.

Give back. Everyone has some body of knowledge that would be of value to someone else, if only they shared it. Engage on Twitter, StackOverflow, and LinkedIn. Volunteer at a RailsBridge, NodeSchool, or Hour of Code event in your town. You never know when one of the people you help now will be a hiring manager later ;). More motivation: giving back feels damn good.

Job hunting is a numbers game. At any point in time there is a finite set of open jobs and a finite number of people searching for them. Indeed has a couple of neat tools for analyzing the job market; here’s one of my favorites: http://www.indeed.com/jobtrends/unemployment. It shows the ratio of unemployed job seekers to job openings. In the majority of cities and job markets today, there are more people looking for jobs than there are openings. Let’s say there are 100 openings and 200 unemployed people. Some of those people will apply to one job; others will apply to more. In the end, companies will receive some number of applicants, and your goal is to play the odds: if the average job hunter applies to 50 places, you want to be damn sure you apply to 51. It’s super hard to get an idea of exactly what that application rate is, so shoot as high as possible. It’s turtles all the way down, folks.


The more applications you send, the more responses and initial screenings you’ll have. The more screenings you pass, the more onsite interviews you’ll be invited to. The more onsite interviews you get, the more offers will stack up in your inbox. And the more offers you get, the pickier you can be about where you work.


Remote Job Specific Search Advice

Optimistically assume that all companies are willing to try remote. Even if a job posting does not mention remote work, apply anyway. Some postings are explicit about not allowing it; skip only those that say “No Remote” and apply to all the others.

Use remote job sites and resources dedicated specifically to remote listings.

Consider working onsite temporarily. One reason companies don’t like to hire remote workers is lack of trust. If the team has never worked with you, how can they trust that you’re not just sitting at home binge-watching House of Cards? How can they trust that you’re faithfully executing your duties and working as hard as you can for them? One possibility is to work in person, onsite with the company, for a period of maybe 1-3 months. Suggesting this upfront will go a long way.

Consider working onsite for short periods. When working remotely, it’s super easy to be forgotten. You don’t want that: if you’re forgotten, the next step is to be laid off and forgotten forever. One way to stay top of mind is to plan monthly or quarterly trips to the office. Spend a few days or a week at a time working onsite. This will help you build relationships with your onsite coworkers and can refresh and reinvigorate your passion for whatever projects you’re working on.

Make a plan. If you’ve never worked remotely, it may come as a surprise to you that the environment you create for work is critical to being productive. When I switched from working onsite to working remotely, I had already started a membership at a co-working space and squared away Internet and teleconferencing equipment.

I have been working remotely at my new gig for about six months. I started with six weeks onsite, and I now visit the office in San Francisco a few days a month.

I’d love to hear your feedback and help you with your search.

Happy Hunting!


Extreme Validation

I’ve observed an interesting trend in some new companies over the past few years. Companies like IFTTT (which consolidates small pieces of functionality exposed via tons of different APIs), Buffer (which aggregates the creation functions available in social media APIs), CoverHound (which aggregates searchable insurance platforms), and Zapier all integrate tons of third-party APIs to provide a consolidated platform. I happen to work on one of these integration projects right now, and this type of integration development brings with it an interesting little problem: validation.

We integrate with many APIs that provide similar, but not identical, data, and we need to validate user input against not only our own business rules but also, optionally, the rules of many third parties. One of the biggest downsides of some integrations is that third parties will often sync data only once daily rather than immediately when it changes. If a user changes data in a way that violates the business rules of a partner that syncs once per day, then by default there is a long delay between the change and the failed validation.

One of the most common things a developer does is validate user input. Whether it’s an email, a photo, or a file, you’ve likely got some special validations for size, dimensions, or format. Let’s say that your business requires that the user’s address or lat/lng be present and in a valid format. If you’re using Rails, there are simple built-in ActiveRecord validations, or you could write a custom one-off method to validate this rule.

When integrating and syncing with multiple APIs that each have requirements specific to their business, the problem gets much more interesting. Let’s spice up our example: suppose we want to sync our users’ data to partner A, GCorp, and/or partner B, LBoss.

GCorp requires that the address is present and follows RFC 5774.

LBoss requires that the lat/lng is present and falls within the standard decimal ranges (-90 to 90 for latitude, -180 to 180 for longitude).

We anticipate (and would love) integrating with more partners in the future, and not all users will set up syndication with both partners. So in some cases we want to be strict about requiring certain information from customers, and in other cases we’re more flexible about what we’ll accept.

After several iterations this is roughly the model that I came up with to solve this problem:

First: the base validation system is built out of `Validator` objects, each of which consists of a set of `Validation`s. Each `Validation` is a callable that returns either nothing, a `ValidationError`, or an array of `ValidationError`s.
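Here’s a minimal sketch of those primitives in Ruby. The names mirror the ones above, but the signatures are illustrative, not our actual implementation:

# Illustrative sketch only, not the real system
ValidationError = Struct.new(:field, :message)

# A Validation is any callable that returns nil, a ValidationError,
# or an array of ValidationErrors.
LAT_LNG_PRESENT = lambda do |user|
  if user.lat.nil? || user.lng.nil?
    ValidationError.new(:lat_lng, 'lat/lng must be present')
  end
end

# A Validator runs its set of Validations and collects the failures.
class Validator
  def initialize(validations)
    @validations = validations
  end

  def call(record)
    @validations.flat_map { |v| Array(v.call(record)) }.compact
  end
end

LBOSS_VALIDATOR = Validator.new([LAT_LNG_PRESENT])
errors = LBOSS_VALIDATOR.call(user) # => [] when the record is valid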

Second: each user account has a collection of `Notification`s. For the purposes of our exercise, their job is to display to the user a list of all the issues with their data.

Third: signals. Each third-party integration registers a set of signal handlers that fire when the important models change. In our case, when the User model is saved and the address changes, a signal handler that we registered runs the User-specific validators for GCorp. A separate signal handler runs the LBoss validators.

If any of the validators run by the signal handlers fail, a Notification is created so that we can flash the user with third-party-specific validation information.

The flow is something like this:

User updates data for model X -> POST to our server -> update model X -> signal handlers for each partner run for model X -> if any validations fail, `Notification`s are created -> response includes the usual 200 OK. Subsequent requests for the user’s account include the associated notifications for all failed validations (we have a separate mechanism for busting this cache).

  • Validation: encapsulates the logic for a business rule
  • Validator: encapsulates the logic for running a set of Validations and collecting their results
  • ValidationError: encapsulates data about a failed validation
  • Notification: created when a validation fails
  • Signal/Callback: a convenient mechanism for running validations in a decoupled way, allowing different rules to run depending on which third parties the user has integrated with

The biggest takeaway: if you’re validating your own business rules in addition to those of integrated third parties, one approach is to split the third-party validations out into their own modules and run them in a post-save/post-delete phase via a signal, trigger, or callback (signals are great for this in Django; in Rails I would consider using some of the ActiveRecord callbacks), as sketched below.
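To make the Rails flavor of that concrete, here’s a hypothetical sketch of running partner validators from an ActiveRecord callback. GCORP_VALIDATOR carries over from the sketch above; synced_with_gcorp? and the Notification attributes are assumed names, not a real API:

# Hypothetical sketch, not production code
class User < ApplicationRecord
  after_save :run_gcorp_validations

  private

  def run_gcorp_validations
    return unless account.synced_with_gcorp?       # assumed helper
    return unless previous_changes.key?('address') # only when the address changed

    GCORP_VALIDATOR.call(self).each do |error|
      account.notifications.create!(source: 'GCorp', message: error.message)
    end
  end
end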


Rails + Sitemap + Heroku + AWS

tl;dr: Generate the sitemap files, push them to AWS, and set up a Rails route that redirects to those files.

While exploring Google Webmaster Tools and inspecting some aspects of Insider AI’s SEO, I recognized a missing piece of the puzzle: a sitemap! There are a few options out there for generating sitemaps for Rails, most of which generate a set of XML files and drop them in your public directory. That won’t work for Insider AI, which has dynamic blog content that I want mapped so that it’s indexed by search engines. If you’ve worked much with Heroku, you know that it’s not a static file server. In fact, if you generate files on Heroku or attempt to store uploads there, they’ll get stomped out :(.

Goal: Generate dynamic sitemaps.
Problem: Heroku doesn’t play nice with generated static files.
Solution: Upload generated sitemaps to AWS.

The gem I landed on is called sitemap_generator. The wiki on their GitHub page has some examples for getting up and running with Fog and CarrierWave.

Those solutions were a bit heavyweight for me, so I ended up modifying this code to arrive at a nice setup for generating sitemaps and uploading them to AWS.

Here’s everything you need to know:

1. Sign up for AWS
2. Create an IAM User (note the KEY_ID and ACCESS_KEY)
3. Create a bucket on S3 (note the bucket name as BUCKET)
4. Add a policy to the bucket to allow uploading (they have a policy generator, or you can use this overly promiscuous one)

{
	"Version": "2012-10-17",
	"Id": "Policy1",
	"Statement": [
		{
			"Sid": "Stmt1",
			"Effect": "Allow",
			"Principal": {
				"AWS": "*"
			},
			"Action": "s3:*",
			"Resource": "arn:aws:s3:::YOUR_AWS_BUCKET_NAME/*"
		}
	]
}

5. Add these gems to the Gemfile (I use figaro for key management)

# Gemfile
gem 'aws-sdk', '< 2.0'
gem 'figaro'
gem 'sitemap_generator'

6. Install figaro (this creates config/application.yml and git-ignores it; safety first!)

bundle exec figaro install

7. Make the keys and bucket name available to the environment in config/application.yml

AWS_ACCESS_KEY_ID: KEY_ID
AWS_SECRET_ACCESS_KEY: ACCESS_KEY
AWS_BUCKET: BUCKET

8. Create config/sitemap.rb to define what gets mapped

# config/sitemap.rb
SitemapGenerator::Sitemap.default_host = "https://insiderai.com"
SitemapGenerator::Sitemap.create_index = true
SitemapGenerator::Sitemap.public_path = 'public/sitemaps/'
SitemapGenerator::Sitemap.create do
  add '/welcome'
  add '/blog'
  add '/about'
  Post.find_each do |post|
    add post_path(post), lastmod: post.updated_at
  end
end

9. Create lib/tasks/sitemap.rake to define the rake task that uploads the sitemap files to S3

require 'aws'

namespace :sitemap do
  desc 'Upload the sitemap files to S3'
  task upload_to_s3: :environment do
    s3 = AWS::S3.new(
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    )
    bucket = s3.buckets[ENV['AWS_BUCKET']]

    # Upload every generated file from public/sitemaps/ to the bucket
    Dir.entries(File.join(Rails.root, 'public', 'sitemaps')).each do |file_name|
      next if ['.', '..'].include?(file_name)

      path = "sitemaps/#{file_name}"
      file = File.join(Rails.root, 'public', 'sitemaps', file_name)

      bucket.objects[path].write(file: file)
      puts "Saved #{file_name} to S3"
    end
  end
end
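With that in place, refreshing and uploading is a two-step affair. sitemap:refresh:no_ping is one of the rake tasks sitemap_generator provides; it generates the files from config/sitemap.rb without pinging search engines:

rake sitemap:refresh:no_ping
rake sitemap:upload_to_s3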

10. Redirect requests for your sitemap to the files stored on AWS. (Needs improvement, but works.)

# config/routes.rb
get "sitemap.xml.gz" => "sitemaps#sitemap", format: :xml, as: :sitemap

# app/controllers/sitemaps_controller.rb
class SitemapsController < ApplicationController
  def sitemap
    redirect_to "https://s3.amazonaws.com/#{ ENV['AWS_BUCKET'] }/sitemaps/sitemap.xml.gz"
  end
end

Hope this helps! Let me know if you get stuck somewhere and I’ll do my best to help you out 🙂


Where the F is JST coming from?!?

If you’ve built a Rails + Backbone app, you know that a common way to use templates is to write files with the .jst.ejs extension. You may also take for granted that these templates are precompiled, _.template style, for you. As you know, they are made available in the JST namespace via properties named after the file path (including the directory).

Recently I received these questions: “Where does JST come from? Which javascript file is adding that namespace?”

I had to stop and think on it for a bit. Where do these come from? Are they added to application.js? No. Are they injected into the HTML as template script tags? No! The secret is in the asset pipeline. The sprockets gem has a JST processor which slurps up the .jst files and transpiles them into .js files. In development, your assets directory gets fatter by one file per template in app/assets/templates. In production these all get concatenated into application-fingerprint.js. Each generated JS file contains an IIFE which memoizes the definition of the JST namespace, then adds to it the result of running the EJS compilation step, which returns the compiled template function.
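To make that concrete, here is roughly what the generated JavaScript for a hypothetical app/assets/templates/users/show.jst.ejs looks like (a sketch from memory; treat the details loosely):

// Generated output for a hypothetical users/show.jst.ejs template
(function() {
  this.JST || (this.JST = {});        // memoize the JST namespace
  this.JST["users/show"] = function(obj) {
    // ...compiled template function returned by the EJS gem...
  };
}).call(this);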

First, check out the JST processor.

Then read every line (all 100 of them) of the ejs gem.

This gist is pretty good at explaining the resulting IIFE.

Going even deeper down the rabbit hole!

How does the asset pipeline even know what to do with the .ejs file? Where is that section of the pipeline added?

It turns out that the sprockets gem is yet another step ahead of us. Check out the EJS template and the EJS processor. Sprockets looks for the EJS Ruby constant to be defined and, if it is, calls EJS.compile when the template is evaluated.

So now you know! When Sprockets loads and starts processing a file with the extension .jst.ejs, it calls the EJS processor, which calls the EJS template, which calls into the EJS gem to get back the compiled result of the EJS template. The result is then processed by the JST processor, which wraps the compiled template in an IIFE and sets up the JST namespace.


Push Database to Heroku using Dropbox

One question I’m often asked is how to get started quickly in production with the data populated during development. Putting aside the fact that this is generally a bad idea, I’d like to discuss a few options and show you how I moved my 118GB Postgres database to Heroku.

One really great option, if you’ve got a reasonable amount of data, is the seed_dump gem. Filling out seed files by hand is often a pain, especially if you’ve got a ton of complex data. That said, seeds are extremely valuable when you have predefined datasets that must be in the database before getting started. seed_dump exports your current database as Ruby statements that can be used in your db/seeds.rb file. It works like this:

Add to Gemfile

# Gemfile
gem 'seed_dump'

Run this to get the output

rake db:seed:dump
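For a sense of what it produces, the dump is just plain Ruby create! calls written to db/seeds.rb. The model and attributes here are made up for illustration:

# db/seeds.rb (sample seed_dump output; model and attributes are illustrative)
Post.create!([
  { title: "Hello World", body: "First post!", published: true },
  { title: "Second Post", body: "More content.", published: false }
])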

Another option when using Postgres is Heroku’s import/export tooling. In the documentation you’ll see that Heroku recommends using AWS to store your database dump. AWS is a good option, especially for Heroku, since they live in the same ecosystem. The easiest way for me to get my locally exported Postgres database into Heroku, however, was via Dropbox: I simply put the compressed export into a Dropbox folder, copied the public link, and used that as the basis for the restore.

Here are the steps you’ll need.

The -Fc flag compresses the dump so that you aren’t given plaintext SQL statements as output.

pg_dump -Fc --no-acl --no-owner -h localhost mydb > mydb.dump
mv mydb.dump ~/Dropbox/backups/

Wait a few hours for the file (~8GB in my case) to upload to Dropbox.

Once the file has been uploaded to Dropbox, you can right-click it and select Share. The dialog that follows contains a public link to the file.

IMPORTANT: The link shown has a query string of `?dl=0`. Change this to `?dl=1` so the link serves the file directly.

heroku pg:backups restore 'https://www.dropbox.com/s/somehash/mydb.dump?dl=1' DATABASE_URL

Hopefully this helps someone out there! 🙂
