Crash Reporting with Bugsnag, Play 2.6 and Scala

Over the past year I’ve transitioned several personal projects away from Groovy on Grails to Scala on the Play Framework.  I won’t go into detail on that process, except to say that it’s been a net positive change.  There have been challenges though, and one of them has been finding a suitable crash report aggregator.  Historically, I’ve used the free tier of New Relic, as it combines crash reporting with performance and uptime monitoring.  Unfortunately, New Relic doesn’t support Akka HTTP 10.  Support is supposedly in the works, but Akka HTTP 10 was released nearly a year ago, and I prefer not to be forced to defer infrastructure upgrades because my analytics integrations will break.

And so I decided to look for a replacement.  My basic criteria were these:

  • Must have a perpetually free tier with reasonable usage limits (these are unproven personal projects, after all)
  • Must support JVM languages, ideally Scala
  • Must include full stack traces
  • Must have a usable interface

After several hours of research I settled on Bugsnag.  I had previously been impressed by it when I used it in an Android app, but had initially written it off because it lacked Scala support.  Having failed to find a better alternative, I decided to give it a shot.  And I’m so glad I did!

The integration process was extremely simple and took only minutes from account creation to logging my first app exception.  The Bugsnag Java documentation is pretty straightforward: add the dependency, instantiate a Bugsnag instance and log events.  You’ll likely want to spend some time tuning your setup, but it really is as simple as that to get started.  In the context of my Play / Scala app, here’s what I did:

Add the dependency to build.sbt:

libraryDependencies += "com.bugsnag" % "bugsnag" % "3.+"

Create a custom error handler:


import javax.inject.{Inject, Provider, Singleton}

import com.bugsnag.callbacks.Callback
import com.bugsnag.{Bugsnag, Severity}
import play.api.{Configuration, Environment, OptionalSourceMapper, UsefulException}
import play.api.http.DefaultHttpErrorHandler
import play.api.mvc.{RequestHeader, Result}
import play.api.routing.Router

import scala.concurrent.Future

class BugsnagErrorHandler @Inject()
(env: Environment,
 config: Configuration,
 sourceMapper: OptionalSourceMapper,
 router: Provider[Router]) extends DefaultHttpErrorHandler(env, config, sourceMapper, router) {

  private val bugsnag = new Bugsnag("ce66137b4a563bee1e03d23108cc9383")
  // only report exceptions in staging and production environments
  bugsnag.setNotifyReleaseStages("staging", "production")

  private def notifyBugsnag(request: RequestHeader, exception: UsefulException) = {
    val callback: Callback = report => {
      // include requestId and exceptionId if available:
      request.headers.get("X-Request-ID").foreach { requestId =>
        report.addToTab("request", "requestId", requestId)
      }
      report.addToTab("request", "exceptionId", exception.id)
    }
    bugsnag.notify(exception, Severity.ERROR, callback)
  }

  override def onProdServerError(request: RequestHeader, exception: UsefulException): Future[Result] = {
    notifyBugsnag(request, exception)
    super.onProdServerError(request, exception)
  }
}

The important piece above is the onProdServerError override.  From there it’s just a matter of passing the data to Bugsnag via bugsnag.notify(…).  When you’re implementing this for the first time you may find it useful to override onDevServerError instead, so your local dev exceptions will be reported.

I’ve also added some extra detail to my logs by adding a report callback.  It’s not necessary, but if your app infrastructure generates request IDs etc., it’s handy to include them in the crash reports so you can use Elasticsearch or similar tools to get additional detail about what went wrong.

Enable the custom error handler in application.conf:

play.http.errorHandler = "BugsnagErrorHandler"

And that’s it!  With the above setup you’ll get stack traces from staging and production environments but not local dev builds etc.

Object Oriented Proguard using Marker Interfaces

There’s no question that Proguard is an indispensable tool when it comes to Java development (Android development in particular) but because Proguard’s configuration is completely text based, there’s a lot of room for human error.  To make matters worse, the compiler can’t tell us if we got something wrong.  Instead we’re left to discover the effects at runtime.

This is where marker interfaces come in.  Just in case you aren’t familiar, a marker interface is an empty interface used to provide hints about how a given class should be handled in a given context.  You could, for example, use a marker interface to signal that a certain class should be serialized using a specific method.

Marker interfaces can also be leveraged to make Proguard a little more OO friendly.  Let’s say we have a package containing a number of models used for REST communication via JSON, and we’d like to reuse the Java classes’ field names in our JSON objects.  Assume that all of our models extend FooModel.

Normally, we’d either add a rule not to obfuscate anything in the package, or not to obfuscate anything that extends FooModel.  But maybe next week we need to do the same for another class hierarchy.  While it’s not hard to copy/paste a rule for this scenario, as mentioned above, there’s no compile-time check to tell us whether we fat-fingered the package/class names, and it just adds clutter to what’s typically an already cluttered config file.

A better solution is to create a marker interface:


/**
 * Empty interface to allow quick disabling of all obfuscation of implementors.
 */
public interface Unobfuscable {}

And add a rule into Proguard for that interface:

# adjust the package prefix to wherever Unobfuscable actually lives:
-keep interface com.example.Unobfuscable
-keep class * implements com.example.Unobfuscable { *; }

Note the first rule, which protects the interface itself.  This is necessary because Proguard will otherwise remove the interface in its shrink phase, effectively disabling the second rule.

With these pieces in place, we can reuse our Proguard rule anywhere we need simply by implementing the Unobfuscable interface.  We can apply it to a base class such as FooModel, causing extending models to inherit FooModel’s obfuscation rules, or we can apply it to only a few specific classes.  Using this approach we can create additional interfaces that map to different obfuscation rules as needed.
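
To make the pattern concrete, here’s a minimal sketch in plain Java (the model classes and their fields are hypothetical, and visibility modifiers are omitted for brevity):

```java
// The marker interface itself: no methods, it exists only to give
// the Proguard rule something to match on.
interface Unobfuscable {}

// Base class for our REST models.  Implementing Unobfuscable here
// means every subclass is covered by the keep rule as well, so class
// and field names survive obfuscation and can double as JSON keys.
abstract class FooModel implements Unobfuscable {
    String id;
}

// Subclasses need no Proguard-specific code or config of their own:
class UserModel extends FooModel {
    String userName;
}
```

UserModel never mentions Unobfuscable directly; it picks the marker up through FooModel, which is exactly the inheritance behavior described above.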

It’s now also clearer to other developers that special behavior is present, and we can be confident our rule contains no typos as we apply it to new classes.

Signing Android APKs with CircleCI

I don’t have to tell you that CircleCI is an amazing CI platform, but it does have its pain points, and if you’re trying to set up an Android build job, getting your APK artifact(s) signed might be one of them.

In most other CI environments, particularly those hosted in-house, there are numerous ways to securely store sensitive files for use at build time.  With CircleCI, no such facilities currently exist.  CircleCI’s own advice on the subject, while straightforward, is not exactly step-by-step instructions: download what you need from a “secure location” during the build using secure environment variables.

There are probably hundreds of ways to do this, but I was looking for a relatively easy yet moderately secure solution.  The approach I came up with, described below, is to use a private Dropbox share link to download the signing keystore at build time.  I initially tried Google Drive but was unable to successfully download Google Drive share links with either wget or curl.  While it’s possible for someone to intercept or guess the download URI, the likelihood is low enough that it’s not a concern for me.  If you are worried, however, these instructions can easily be adapted to work with a secure remote host and SCP instead of Dropbox.  Having said that, here are the basic steps to get things working with the Dropbox method:

  1. Upload your private key to Dropbox and generate a share link.
  2. Add an environment variable to your CircleCI project: KEYSTORE_URI = <dropbox link uri>
  3. Create a download shell script in your project.  You can name and put this file wherever you want, but for the sake of this example let’s create it as misc/download_keystore.sh (the name is arbitrary) with this content:
        #!/usr/bin/env bash
        # use curl to download a keystore from $KEYSTORE_URI, if set,
        # to the path/filename set in $KEYSTORE.
        if [ -n "${KEYSTORE_URI}" ]; then
            echo "Keystore detected - downloading..."
            # we're using curl instead of wget because it will not
            # expose the sensitive uri in the build logs:
            curl -L -o "${KEYSTORE}" "${KEYSTORE_URI}"
        else
            echo "Keystore uri not set.  .APK artifact will not be signed."
        fi
  4. Add the $KEYSTORE environment variable to your CircleCI config:
        KEYSTORE: ${HOME}/${CIRCLE_PROJECT_REPONAME}/signing.keystore
  5. Call the download script from your CircleCI config:
        - bash ./misc/download_keystore.sh
  6. Use the $KEYSTORE environment var in the signingConfigs section of your build.gradle:
        signingConfigs {
            release {
                storeFile file(System.getenv("KEYSTORE"))
                storePassword System.getenv("KEYSTORE_PASSWORD")
                keyAlias System.getenv("KEY_ALIAS")
                keyPassword System.getenv("KEY_PASSWORD")
            }
        }
  7. Profit!

For a fully functional, real-world example of a project using this configuration, check out the nick branch of Androidplot on GitHub.

How to Set Up a Mac Mini with a Bluetooth Keyboard

NOTE: This article shows how to set up a Bluetooth keyboard; however, it should also be possible to use these steps to enable the soft keyboard and avoid the need for a physical keyboard at all.

Recently I was tearing my hair out trying to figure out how to do this, and every forum thread I came across claimed it was impossible.  The problem is that when you initially boot the Mini, it will detect a Bluetooth mouse but for whatever reason *will not* detect a Bluetooth keyboard.  Once you get to the initial account creation screen, a keyboard is required to enter a name and password, otherwise you can’t proceed.

I came across a method for setting up OS X Server headlessly with VNC, but since I wasn’t running Server it didn’t help.  After reading through what I could find through Google searches, I decided to do some experimentation of my own.

My first idea was to try to copy text from wherever I could find copy-able text and then paste it into the name / password fields.  I did find some copy-able text on the error dialog that appears after attempting to submit the empty account/password form, but to my dismay, I discovered that the paste option was disabled for input fields.

My next idea was to plug in a headset and use the text-to-speech options that come up when you right-click the form input fields.  Unfortunately, even with a headset plugged in, these options never lit up.  While I was idly clicking around, I noticed a Substitutions option:


Inside the resulting dialog was a Text Preferences button which took me to the Text Preferences section of System Preferences:


If you look closely, you’ll see a back arrow in the top left corner of the screen, which will take you to the main System Preferences screen.  From there, it’s a simple matter of going to the Keyboard section and adding a Bluetooth keyboard.  Success!


How quickly do Android app users upgrade?

Recently, I wanted to find out what percentage of users would upgrade once a new version of an app became available.  I didn’t find any good answers to this question, so I decided to try to answer it myself.  Looking at the historical stats available for a few different apps (all < 100k installs), the data is heavily concentrated in the first few days after each release.  Here’s a pretty representative example:

Screen Shot 2014-08-20 at 12.10.13 PM

Those grey indicators show when a new upgrade was pushed to the Play Store.  There are actually 2 upgrades in this image, with ~30 days of history showing for the one on the right.  After 30 days the number of upgrades becomes negligible.  While I didn’t go so far as to create a distribution plot, I think it’s reasonable to say that, based on the available data, roughly 65% of all upgrades occur within the first 3 days and 90%+ within the first 2 weeks of a new release.  It’s probably also reasonable to conclude that the average Android user upgrades his/her apps within 3 days of the upgrade becoming available.  If I were ambitious I might even take a stab at estimating the % of users that have auto upgrade enabled, but I’ll leave that for another day.

It’s worth pointing out that these statistics show what percentage of upgrades will occur within a given time frame but do not take into account the number of derelict installs (users who still have the app installed but for whatever reason never upgrade).  You can roughly figure out how many “derelict” users you have by comparing the sum of installs over a 30 day period (or longer if you prefer) against the total number of active users the Play Console claims your app has at the end of that period.  Or, if you’re impatient and willing to trust my conclusions, you can use the 3-day statistic:

D = "derelict users"
S = sum of day 1, day 2 and day 3 upgrades
T = Play Store reported "active" installs as of day 3 above

# since ~65% of upgrades happen within the first 3 days,
# S / 0.65 = S * 1.54 estimates the total number of users
# who will ever upgrade:
D = T - (S * 1.54)
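
The arithmetic is easy to sanity check in code.  Here’s a quick sketch deriving from the 3-day statistic above (if ~65% of eventual upgrades land in the first 3 days, S / 0.65 estimates total eventual upgraders; the install counts below are made-up numbers):

```java
public class DerelictEstimate {
    // Estimate "derelict" installs: users who keep the app installed
    // but never upgrade.  If ~65% of eventual upgrades happen in the
    // first 3 days, then threeDayUpgrades / 0.65 approximates the
    // total number of users who will ever upgrade.
    static long derelictUsers(long threeDayUpgrades, long activeInstalls) {
        long estimatedUpgraders = Math.round(threeDayUpgrades / 0.65);
        return Math.max(0, activeInstalls - estimatedUpgraders);
    }

    public static void main(String[] args) {
        // hypothetical app: 6,500 upgrades in the first 3 days against
        // 12,000 active installs reported by the Play Console:
        System.out.println(derelictUsers(6_500, 12_000)); // prints 2000
    }
}
```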

Deep dive code reviews with Bitbucket

The Problem

Bitbucket is great.  I use it as often as possible, both for my own private work and for collaborative efforts.  It’s easy to use and has most of the features small teams are likely to need.  One shortcoming, however, is its code review functionality.  Back in 2012 Bitbucket introduced “lightweight” code reviews, but when they say lightweight they really mean it.  Essentially, code reviews are bound to pull requests, so your only opportunity to review code is when it is originally written or when it changes.

At a high level this seems reasonable; if all pull requests are reviewed, then every line of production code has been reviewed, approved and deemed safe for release, right?  The problem is that pull requests are generally small and homogeneous, addressing one bug or new feature in isolation.  They can look great in and of themselves while still being a bad fit for the overall design of the product or module to which they belong.

My favorite illustration of products that evolve this way has to be the one of the cat with the elephant trunk and the human hand sticking out of its back.  When the pull request containing the “hand” functionality was being reviewed it probably made perfect sense.  Same with the “elephant trunk” piece, especially to reviewers who were unfamiliar with the architect’s design goals and may not have known that the mammal being built was intended to resemble a cat.  This sounds contrived, I know, but it happens all the time.  Just think back to the last time you found yourself reviewing code for a product on which you don’t typically work.

The argument could be made that the architect should be catching these kinds of issues, but the reality is that it doesn’t always happen.  Or maybe you just got a code drop from a contractor who insisted on developing in stealth mode until the last minute, and now you need to critique the whole thing.  Whatever path got you there, sometimes you just need to review an entire product or module.  There are tools to do this, such as Atlassian’s Crucible and Codifferous, but they may end up costing money (particularly if your repos are private) and may require you to run your own servers…not the ideal scenario for private work or small, unfunded collaborations.

The Solution

Here’s a workflow that allows any file in a Bitbucket repository to be reviewed as part of a single code review and requires no extra tools:

  1. Architect (or repo owner, etc.) branches the repository of interest at the desired revision and names it something like FullCodeReview_07_10_2014.
  2. Architect checks out the new branch and defines the scope of the review by adding comments to the source files.  I’d suggest formatting the comments like this:
    // CREVIEW <Reviewer's Initials>: <Reviewer's comment(s)>

    (The utility of the CREVIEW prefix becomes obvious in step 8)

  3. Architect pushes changes back into the remote repo and issues a Pull Request back to whatever branch the code review branch was created off of, adding code review participants to that Pull Request.
  4. Participants either check out the code and add comments to new source files or sections (if they want to increase the scope of the code review), or comment directly on the sections tagged by the Architect in the Pull Request using Bitbucket’s built-in code review tools.
  5. Participants push changes made to the source back to the remote repo for FullCodeReview_07_10_2014.  Changes should generally consist only of CREVIEW comments but could also include actual code changes if the goal of the review is to make changes as you go.
  6. Repeat steps 4-5 as many times as necessary to conclude the review.
  7. If the goal of the code review was to generate work, create bugs, stories or whatever your team uses to track work to be done, then either tag and delete the code review branch or, if you aren’t worried about branch clutter, leave the branch there and go back to working on another branch.  If you don’t care about preserving the history of the code review, you could also just delete the FullCodeReview_07_10_2014 branch.
  8. If the final state of the code review branch includes changes that should be kept, the Architect should first remove all CREVIEW comments (this should be very easy to do since they all share the same unique prefix) then commit and then merge the original Pull Request.  At this point it should be safe to just delete the FullCodeReview_07_10_2014 branch.
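
Because every review comment shares the unique CREVIEW prefix, step 8’s cleanup is scriptable.  As an illustration (a plain grep/sed pass would work just as well; the class below is a hypothetical throwaway), here’s a small utility that deletes CREVIEW lines from every .java file under a directory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StripCreview {
    // Delete every line containing the CREVIEW marker from each .java
    // file under root.  Files are collected up front so we aren't
    // rewriting files while still walking the tree.
    static void strip(Path root) throws IOException {
        List<Path> sources;
        try (Stream<Path> files = Files.walk(root)) {
            sources = files
                    .filter(Files::isRegularFile)
                    .filter(p -> p.toString().endsWith(".java"))
                    .collect(Collectors.toList());
        }
        for (Path p : sources) {
            List<String> kept = Files.readAllLines(p).stream()
                    .filter(line -> !line.contains("// CREVIEW"))
                    .collect(Collectors.toList());
            Files.write(p, kept);
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length != 1) {
            System.err.println("usage: StripCreview <source-root>");
            return;
        }
        strip(Paths.get(args[0]));
    }
}
```

Run it against the code review branch’s source root before the final commit and merge.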

There is a caveat: this workflow will not work across multiple repositories…a limitation that includes submodules.  In practice this is not typically a problem, but it’s still worth pointing out.

Beta Testing with Google Play

From the perspective of a quality-minded application developer, Google’s addition of Alpha / Beta testing facilities to the Play Console is huge.  I recently had the opportunity to test out the new feature, and while the experience was mostly pleasant, there were a few warts.

The biggest negative I encountered was by far the lack of a means to unpublish a production release without also unpublishing the alpha and beta releases.  This is especially dangerous during the initial development of an app, the time when Alpha / Beta testing is most important.  Here’s the scenario:

You’ve recently completed your first working prototype of your app and want to start alpha testing it using Google Play.  You’ve got your application configured in Google Play with a status of “unpublished”, the prototype .APK is uploaded as an alpha and you are ready to go.  This is where you realize that your alpha testers will not actually see your app until you “publish” your app.  Why this is necessary I cannot say but it is.  Once the app has been published, both the Alpha and Beta .APKs should become available to those on your white list via the Play Store and so long as there is no Release .APK defined, access will be limited to those on the white list.

Here’s where it starts to get dangerous.  While not difficult to navigate, I found the process of deploying a new .APK poorly documented and extremely error prone.  It was only through trial and error that I ultimately discovered what appears to be the intended workflow; not only can an .APK be directly uploaded into the Alpha, Beta and Release tracks, but promotion of .APKs is also possible via multiple methods.  If you do happen to make the mistake of uploading and deploying an Alpha .APK as a Beta, it appears possible to revert, so long as you’ve got a previous release.  If, however, you’ve NEVER released a public .APK and you accidentally release an Alpha or Beta .APK, there is no way to unpublish only the Release APK.  At this point you have 3 options:

  1. Unpublish the entire app and lose use of Alpha / Beta testing via the Play Console
  2. Create a dummy .APK with the most restrictive compatibility requirements possible such as compatibility ONLY with devices running Android 1.6.
  3. Move up your release schedule date to today and hope for the best.

Sadly, none of these are very good options.  For whatever it’s worth, it does appear to be possible to enter “Advanced Mode” on the Play Console and “Deactivate” the Production .APK:

Screen Shot 2013-07-30 at 10.22.11 PM

Attempting to save such a change fails with the message “The application could not be saved.  Please check the form for errors.” but mysteriously no form errors are displayed.  It’s possible that this functionality will be fixed sometime in the future.  But then again, it’s also possible that the visibility of the “Deactivate” button is the bug, and that is what ends up being fixed.

At the end of it all, the moral of the story is be very careful when deploying new Alpha / Beta .APKs because there is currently no forgiveness from Google for those who make mistakes.