Friday, March 2, 2018

Success with Serverless

Note: This post was cross-posted on

In this post, I'd like to share the success story of our recent testing of serverless computing. We've been having some issues with a print-proxy service, and the situation gave us the perfect opportunity to experiment with serverless computing. This new(ish) technology has some exciting advantages over traditional solutions, and we've been looking for an excuse to try it out. We weren't disappointed.

The Problem

AGRC's base maps (including the Google imagery) are served via a custom server application called Giza. Part of the advantage of using Giza is that it allows you to secure and track usage via quad-words. These unique words are assigned to a specific user and are locked down to a specific domain or IP address. For example, if my quad-word is locked down to <code></code>, then requests originating from any other domain or IP address are blocked by the server. This prevents unauthorized access of licensed content, as well as allows AGRC to track analytics.

This quad-word system works great . . . until you try to use one of Esri's out-of-the-box print services (here's an example of one). When you send a web map to one of these print services, the service reconstructs all of the layers on the server. This causes requests for base maps to be sent from your ArcGIS Server box rather than your user's browser (with your domain as the referrer). Now, we do allow wide-open quad-words (i.e., quad-words not locked down to any domain/IP) to be used by those who need to make requests from servers or other local machines. However, the wide-open quad-words can't be used in web applications because they could be copied and used by unauthorized users. This is a problem.

Swing and a Miss

Our original solution to this problem was a custom geoprocessing service that was deployed via ArcGIS Server. Basically, this service acted as a proxy to a traditional print service, switching out the secured quad-word with a wide-open one, allowing the traditional print service to successfully make requests.
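At its heart, that proxy is little more than a string substitution performed before the request is forwarded. A minimal sketch of the idea (the function name and quad-word values are hypothetical placeholders, not the actual service code):

```javascript
// Hypothetical sketch of the quad-word swap at the core of the proxy.
// The quad-word values used below are placeholders, not real credentials.
function swapQuadWord(url, lockedQuadWord, openQuadWord) {
    // Replace the domain-locked quad-word with a wide-open one so that
    // requests originating from the print server are not blocked.
    return url.split(lockedQuadWord).join(openQuadWord);
}
```

The real service also has to forward the modified request and relay the response back to the print service, but the credential swap is the essential trick.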

While we did make this solution work, it was not ideal. Geoprocessing services, in general, are a pain to work with, and debugging them is particularly challenging. Because of the potential strain on our server from the custom geoprocessing service, we asked our users to publish this service on their own servers. But this presented its own issues, as it added additional technical debt for them to deploy and maintain (assuming that they had an ArcGIS Server instance at all).

In the end, we decided that there was no need to incur all of the heavy overhead of ArcGIS Server for a simple proxy service.

Serverless to the Rescue

This is where serverless computing came in.

Serverless computing centers around a simple concept: abstracting away all of the pain that comes from managing systems and allowing developers to focus on building software. With serverless computing, you simply write the code and let the experts (Google, Amazon, Microsoft, and others) take care of all of the headaches associated with deploying and hosting. And even better, you only pay when your service is actually invoked. Consequently, the cost ends up being pennies on the dollar compared to standing up a traditional server. In fact, AGRC, to date, has not crossed the threshold of the free tier.

Many of the major vendors provide command-line utilities to help you get up and running quickly. We decided to give the provider-agnostic project, Serverless, a try. Getting started was fairly simple:
  1. Choose a programming language. (And make sure it's supported by your provider. We chose Node.js on Google Cloud Functions.)
  2. Create a template-based project.
  3. Set up credentials.
  4. Deploy!

Once I wrapped my head around this new paradigm and got everything wired up, I was left with nothing to focus on but the business logic of my service. Nirvana!

Another huge win with this solution was automated testing and deployment via TravisCI. Each time I push a commit to <code>master</code>, Travis runs all of my tests and deploys if they all pass. This would have been impossible with our previous ArcGIS Server-based solution.
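A Travis configuration along these lines does the job (this is a sketch, not our exact file; it assumes the Serverless CLI is available as a project dependency):

```yaml
language: node_js
node_js:
  - "8"
script:
  - npm test
deploy:
  provider: script
  script: npx serverless deploy
  on:
    branch: master
```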

In the end, we have a stable, scalable, and highly available service hosted on world-class architecture that is kept up-to-date. (And, most importantly: it's maintained by someone other than me. :) ) I can't wait to find another excuse to use this technology.

Monday, April 4, 2016

Why I Speak at Conferences and You Should Too

Recently, I tried to gather all of the presentations that I have participated in during my career up to this point (~10 years). I was able to find materials for almost 20 different presentations or workshops that I have been a part of. This caused a question to come to mind: "Why did I put myself through all of this pain?" Speaking in public is not easy for me, especially when it comes to technical topics in front of a group of people, the majority of whom, I believe, are smarter than I am.

After pondering this question for a few weeks, I've come up with a few reasons.

Teaching Something is the Best Way to Learn It

It's not a big secret that the best way to learn something is to teach it to someone else. However, I think that sometimes we forget this. The knowledge that I've gained from teaching workshops has been invaluable, and I don't believe that I would have been as successful without it.

Give Back

I defy you to give me an example of another industry that shares its knowledge as freely and openly as web development. I'm continually impressed by the willingness of most developers to share technical knowledge with anyone. As a self-taught developer, I have been especially dependent on this principle and consequently have felt motivated to contribute to it. One of the ways that I have done this is by speaking at conferences.

More than a Spectator

Attending a conference is about connecting with others and the technology that you are learning about. If you just sit and listen the whole time then you are missing out. Feeling like you are a part of the conference changes the entire experience.


Inspire Others

I know that it sounds corny and cliché, but your story may be just the inspiration that someone in your audience needs to hear. Too often we assume that what we are working on isn't interesting to anyone else. Yet I believe that most people are wondering if what they are doing is correct, and seeing someone else's work helps with that.

So I hope that when you register for your next conference you decide to submit a paper as well. You won't regret it!

Monday, March 28, 2016

Converting Dojo-AMD Project To TypeScript

At some point in every TypeScript introduction that I have been to, the presenter says something to the effect of:

Since TypeScript is a superset of JavaScript, all JavaScript is valid TypeScript. Getting started is easy. Just change the file name extensions from .js to .ts and then incrementally upgrade your code to TypeScript.

For Dojo/AMD-based projects, I’ve found this statement a little too good to be true. Following are the changes that I had to make (after changing the file extensions) to get the project back up and running again.

Module Imports

The first issue that I encountered was that my AMD module declaration did not work. While TypeScript can output AMD modules, I couldn't find a way to author .ts files using AMD-style declarations. So the first step was to convert all of my modules to the ES6 style that TypeScript uses. For example, this AMD module:

define(['dojo/_base/declare', 'dijit/_WidgetBase'], function (
    declare,
    _WidgetBase
) {
    return declare([_WidgetBase], {...});
});

Would need to be changed to something like this:

import * as _WidgetBase from 'dijit/_WidgetBase';
import * as _TemplatedMixin from 'dijit/_TemplatedMixin';
import ToasterItem, { ToasterItemType } from './ToasterItem';
import * as aspect from 'dojo/aspect';
import * as dojoDeclare from 'dojo/_base/declare';

export default dojoDeclare([_WidgetBase, _TemplatedMixin], {...});

The understanding that I worked from was that the <code>import * as ModuleName from 'path/to/Module'</code> format was for importing non-TypeScript/AMD modules (no default export) and <code>import ModuleName from 'ModuleName'</code> was for importing TypeScript modules.

Notice that I did not use <code>declare</code> as the import name for <code>dojo/_base/declare</code>. This is to prevent collisions with TypeScript's <code>declare</code> keyword.

Note: If you are going to be exporting your TypeScript class to AMD modules then non-TypeScript consumers will need to update their code to use the <code>default</code> property of the returned module parameter (e.g. <code>new Module.default(...);</code>).

AMD Loader Plugins

The next problem that I encountered was trying to use the dojo/text! AMD plugin. The root of the problem is that the current version of TypeScript doesn't support globbing of AMD modules. There is an issue that you can follow that shows promise of a resolution to this problem in the future but for now we need a workaround.

The workaround to the problem is a bit of a pain. You need to write an ambient module declaration for each URL that you want to use with dojo/text!. For example:

declare module 'dojo/text!./templates/ToasterItem.html' {
    const ToasterItem: string;
    export = ToasterItem;
}

declare module 'dojo/text!./templates/Toaster.html' {
    const Toaster: string;
    export = Toaster;
}

Exporting Types in Modules

For TypeScript modules that I used in other TypeScript modules I had to export the types in order to make the transpiler happy. So this meant a lot of duplicate property names and types between my dojo/_base/declare call and the type exports. For example:

export type ToasterItemType = dijit._WidgetBase & dijit._TemplatedMixin & {
    show(): void;
};

export default dojoDeclare([_WidgetBase, _TemplatedMixin], {
    duration: 5000,
    show() {...}
});

These were the major gotchas that I ran into when trying to convert a project to TypeScript. Here's a link to a simple project that I recently ported to TypeScript. It has almost no TypeScript upgrades (yet) other than what it took to get the project to run.

The dojo/typings repository is the source for ambient declarations for Dojo 1.x code and also has a lot of great resources to help convert Dojo-based projects to TypeScript.

Thursday, September 17, 2015

Mock your Dojo AMD modules with StubModule.js

When testing an AMD module it is sometimes necessary to verify how it interacts with its dependencies. For example, you might be writing a module that makes XHR requests using dojo/request and you want to make sure that it's passing the correct parameters. How would you test this? Creating a wrapper around the request method in your module and then spying on that wrapper would work. You could also store the request method as a property of your module and spy on that in your tests. However, both of these solutions lead to messy code, and there's something that feels wrong to me about adding code to production modules just for testing purposes.

You might think that it would be as easy as adding a map config to the Dojo loader and pointing dojo/request to a mocked module. While this is a possible solution it means that you have to create a separate file for each mock that you use and it gets very messy if you want to mock the same module multiple times within a single test page (since modules are cached by the loader).
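For reference, the map-config approach looks something like this (the module ids and mock file path are hypothetical):

```javascript
// Point 'dojo/request' at a mock module, but only when it is requested
// by 'app/Module'. This requires a separate mock file on disk for every
// module you want to stub.
var dojoConfig = {
    map: {
        'app/Module': {
            'dojo/request': 'tests/mocks/request'
        }
    }
};
```

Because the loader caches the mock module after the first load, reusing the same mapping with different mock behavior within one test page gets awkward fast.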

StubModule.js provides a cleaner way to solve this problem. It allows you to stub modules with no dependencies on external files and no side effects to pollute your other tests. It does this by using the map config mentioned above as well as require.undef which is a Dojo-specific method that removes a module from the cache.

Using this tool is fairly straightforward. stub-module.js returns a single method that accepts two parameters. The first is the module identifier (MID) of the module that you want to test. The second is an object whose keys are the MIDs of the dependencies that you want to mock and whose values are the mocked return values. The method returns a promise that resolves with the stubbed module. For example (using Jasmine):

it('this is a demo', function (done) {
    var stub = jasmine.createSpy('request');
    stubModule('test/Module', {'dojo/request': stub}).then(function (StubbedModule) {
        var testObject = new StubbedModule();

        // exercise testObject, then make assertions against the stub
        expect(stub).toHaveBeenCalled();
        done();
    });
});
To be honest I was surprised that I couldn't find an existing project that met my use case before I wrote this project. Did I miss something? Also, I wonder if the API of this project could be simplified. Any suggestions?

Monday, August 24, 2015

Boost Your Productivity With Vim

I was surprised to realize today that I have never written about one of my favorite tools that I use to write code. It's something that absolutely transformed my day-to-day coding. If it was suddenly taken away from me, I would feel like I had gone back to the dark ages. That's right, I'm talking about Vim. Or, more specifically, Vim key bindings. Vim (Vi IMproved) is an old text editor that was first released in the '90s as an improvement to an even older editor called Vi. The intriguing part of Vim for me was not the 20-year-old piece of software but the system that it uses to edit and navigate text. It's very efficient, requiring the coder to almost never reach for the mouse.

Lest you think that I've abandoned my favorite text editor, the real power of Vim for me is not the actual software. In fact, I've only opened it up a few times out of curiosity. The power of Vim is the standard that it has set. There are Vim emulator plugins for every major text editor out there, including Sublime, Atom, and even JSBin. This means that if you invest the time into learning Vim commands, they will be almost universally applicable across your development tools.

Want to quickly go to the end or beginning of the current line? Change everything within the quotes? Delete everything from your cursor to the end of the line? Quickly go to a line number? Change the casing of the selected text? This and much, much, much more can be done with just a few Vim commands.

Here are a few of the commands that convinced me that I should learn Vim:

  • <code>A</code> Go to the end of the line and start inserting new text.
  • <code>I</code> Same as <code>A</code> but go to the start of the line.
  • <code>ci"</code> Delete everything within the quotes and start inserting new text.
  • <code>C</code> Delete everything from the cursor to the end of the line and start inserting.
  • <code>545gg</code> Go to line number 545.
  • <code>ct,</code> Delete everything until the "," and start inserting new text.

These are just a few of the commands that I use every day. While it has a significant learning curve, the time investment is worth it to me.

There are endless tutorials available for you to learn Vim. After learning just a few of the basics I made it my practice to add one or two new commands to my personal reference on a regular basis. After a few weeks you'll wonder how you ever lived without it.

There are a few drawbacks that come to mind. Firstly, after a few months of using Vim, your fingers will start automatically typing commands into non-Vim interfaces. This can get annoying. Also, you've probably already realized that the learning curve is pretty steep. If you are not in a code editor on a daily basis then it's probably not worth the investment.

But if you're in the mood to boost your productivity and give your poor mouse a break you may want to play some vim golf and see how it goes. :)

Monday, May 25, 2015

Staying in the Zone with AMD Butler

A few months ago, I built a simple plugin for Sublime Text 3 for managing AMD dependencies called AMD Butler. Now it's hard for me to picture coding without it. If/when I make the switch to Atom this will be the first thing that I port over from Sublime.

AMD Butler is all about staying in the zone. First, let's take a look at life without it:
  1. Get a great idea
  2. Start coding
  3. Decide to add an AMD dependency
  4. Stop coding
  5. Scroll to the top of your file
  6. Remember and type the exact module id
  7. Scroll down to the factory function parameters
  8. Remember the order of the dependencies
  9. Think of what to name the return parameter
  10. Scroll back to where you were working
  11. Completely forget what you were doing
Now let's look at life with AMD Butler:
  1. Get a great idea
  2. Start coding
  3. Decide to add an AMD dependency
  4. Execute the AMD Butler add command
  5. Type the first few letters of the module id and hit enter
  6. Continue coding in the zone

AMD Butler dynamically crawls your existing modules and builds a quick list. It only takes a few keystrokes to find the correct one and then it automatically adds it to your list of dependencies with an appropriate associated factory function argument. All without affecting the position of your cursor. This is especially nice to use after slurping ESRI JS modules. No more scrolling, no more trying to remember module names or preferred argument aliases. Just quickly add a dependency and get back to what you were doing.

There are also commands for sorting, removing, and pruning unused dependencies.

AMD Butler can be installed via Sublime Package Control. Head over to its GitHub page to check out the code or report any issues.

Monday, April 6, 2015

Windows Scheduler: Get Your Priorities Straight

At AGRC we have a variety of tasks (usually Python scripts) that need to be run on a schedule. These are usually workflows that scrape and ETL data for web applications. Currently we use Windows Scheduler to run these scripts. Recently I've had problems with scripts taking way too long to complete. After a bit of digging I discovered that, by default, Windows Scheduler assigns a process priority of "Below Normal" to all tasks. The pain point is that it provides no UI to change this setting. The following steps work around the problem by hand-editing the XML export of a task.
  1. Right-click on the task and export it as an XML file.
  2. Open the file in a text editor and search for the "Priority" element.
  3. Change the value of this element to the desired priority level. See this page for a list of possible values. Usually 6 is what you want.
  4. Save your changes and close the XML file.
  5. Delete the original task and re-import the modified XML file as a new task.
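The relevant portion of the exported task XML looks something like this (other elements omitted):

```xml
<Task xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <Settings>
    <!-- 7 ("Below Normal") is the scheduler's default;
         lower numbers mean higher priority -->
    <Priority>6</Priority>
  </Settings>
</Task>
```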