Raising Money When You Have No Idea What You’re Doing

People somewhat regularly ask me for advice on raising money, typically at very early stages (angel or seed rounds). I was lucky to be able to raise $7.1M in two rounds at Runscope, so I guess that gives me some level of credibility with the process. I don’t actually think I was very good at fundraising — my success rate was probably something less than 1% if you take into account how many pitches didn’t lead to an investment.

I went through seven major revisions of My Deck™ and about 50 meetings before we got a term sheet for the seed round. Each revision attempted to resolve the typical issues you hear in feedback from failed pitches:

  • Not enough market size data.

  • Not compelling enough problem statement.

  • Too focused on technical solutions over business viability.

  • Underspecified go-to-market strategy.

  • Poor competitive differentiation, etc.

I eventually settled on creating a narrative instead of trying to convince investors with an avalanche of facts, logic and pie-in-the-sky projections. The deck became an accessory to the narrative, not vice-versa.

I’ve only sat on the other side of the table a few times, so I’m not sure how investors in general would feel about this, but I know from the dull pitches I’ve sat through that a compelling narrative would have gone a long way. Below is a framework for thinking about your pitch as a story that I send to people when they ask for fundraising advice. It worked for us. If you’re struggling with your pitch, it may work for you too.

There is a timeline that runs from when you first had the idea for your company to its eventual conclusion as a big company. This is the story to tell. Whenever you are pitching, you are at some point on that timeline, and your deck should reflect that position in time.

Start by quantifying the problem (in real terms! e.g. “Companies are losing $X to [the inadequate solution you’ll replace]”), how you recognized it, and why you’re the best team to solve it. Pick only the most important points for each. Talk about traction only as reinforcement of the previous points. Do not overwhelm with info; investors will remember basically nothing specific.

Once you’ve established that, the next part is working forward in time. The further out you get, the more “blurry” it will need to be. But you need to pick milestones. If your next step after the seed round is an A round, describe the resources you need to achieve the business milestones that will set you up for the A round, THEN mention that it will take $X in funding to get there. Work backwards from the next milestone. Pick only the 1–3 most important things you’re going to accomplish.

When you structure a pitch like this, here are the things you don’t have to beat investors over the head with: market size, competition, business model. If you’ve made a compelling case at the beginning of the story, those will all be readily apparent. You should absolutely have answers for those questions, but you don’t need to focus on them.

Instead, focus on:

  • This is a big problem.

  • There’s nobody better to solve it.

  • You should get in now to help us continue on to the inevitable next milestone and ultimate success.

Get them excited about the story, not any specific number.

Remember, investors will remember next to nothing (they get pitched A LOT), so pick your spots and over-emphasize the few key things that need to be recalled. Repeat them at each stage, showing how they apply to the past, present and future. Imagine that after your meeting they’re going to describe it to another investor in the hallway. What is the one thing you’d want them to say about you? Repeat that over and over in your pitches. Have answers for everything, but keep the deck focused on the key points. In a way, the less info the better, because there’s less chance you’ll get pulled off on random tangents.

Lastly, this is the single piece of advice that changed my approach to fundraising: Find true believers, not converts. Think of the process as eliminating anyone who does not believe in what you’re doing. When you find your first believers, you’ll wonder why you spent so much energy trying to convince people. The difference in behavior on their part is striking.

Thank You

Today we announced that Runscope has been acquired by CA Technologies. This was made possible by the contributions of so many people. I'm eternally grateful for your support.

To our customers: I can't overstate how obsessed we are with building a tool you can trust and rely on to solve real technology problems. Thank you for pushing us to always be better, and for being patient when we can't meet your needs exactly. We'll continue to use your feedback as the primary driver in setting our direction.

To the API and developer community: So many people in the community have helped spread the word about Runscope that it would be impossible to name everyone, but I want to give special thanks to Jeff Lindsay, Kin Lane, Kenneth Reitz, Jason Harmon, Cory Benfield, Marty Alchin, Matt Bernier, Hung Truong, Kristen Womack, Bruno Pedro, Keith Casey, Steve Willmott, Adam Duvander, James Higginbotham, Phil Sturgeon, Mehdi Medjaoui, Jeremiah Lee, and David Hayes.

To our investors and advisors: Without your belief in our vision, Runscope wouldn't exist. In particular, I'd like to thank Nat Friedman, Jon Dahl, Steve Herrod, Ullas Naik, Tom Drummond, James Lindenbaum, Justin Mares, Greg Burkus, Robert Benner, David Cohen, Steve Schlafman, Taylor Greene, Frank Chen, Chris Dixon, Peter Werner, and Amanda Busch.

To True Ventures: You set an example for how a VC should be. To say you are "founder friendly" would be a major understatement. You've been a true partner from the very beginning and have stood by Frank and me every step of the way. To Adam, Puneet, Phil, Christiaan, Jon and everyone else at True, thank you.

To Runscopers past and present: People, product, profit. The people of Runscope have always been my favorite part of this venture. Working with and learning from you has been an honor and the highlight of my career. Thank you for taking a chance on us. I hope we are a worthy line on your resumé.

To Frank: There is no movement without the first follower. Your steady hand was a calming force in the startup chaos. You remained dedicated through the highs and lows and consistently pushed me to be better. I couldn't ask for a better co-founder.

To Emily: I couldn't have run this company without your counsel. Your unwavering support means the world to me. Thank you, I love you.

Twilio: A Retrospective

It’s been over 4 years since I left Twilio but no career experience outside of Runscope has had a bigger impact on me. With the impending Twilio IPO, I’m finding myself especially nostalgic about my time there. I was only there two years, but I learned so much.

Discovering Twilio

In 2008 I was a web developer in the Twin Cities. I ran a big softball league, and when it would rain I needed to notify 600+ people that the games were canceled. I used my cell phone’s voicemail as a hotline for a while, but that rendered my phone useless on game days. I thought it was ridiculous that everyone had to ‘poll’ the hotline for updates. I wanted push notifications via phone.

My first attempt at this involved trying to set up Asterisk. Being a Windows user, though, I didn’t even get past setting up a Linux distro (kids, learn *nix). I gave up pretty quickly.

One night during the off season I was reading TechCrunch when I came across a post about Twilio. It sounded like the perfect fit. I went to the site, read the docs and a few minutes later my phone was ringing. I don’t know how to describe the feeling I had. I recall running around the house. I think I scared my wife.

I was hooked. In just a few minutes a whole new set of capabilities opened up to me. I was consumed with ideas of telephony apps to build. My eyes were opened to the power of APIs. Stumbling on that TechCrunch post literally changed my life.

From Fanboy to Employee

I started blogging about Twilio, writing sample apps and helper libraries, giving talks at user groups and regularly participating in their weekly IRC office hours, where I virtually got to know Danielle Morrill, Jeff Lawson, Evan Cooke and other members of the team. Not too long after, I won the infamous Netbook contest, which I was ridiculously proud of. Ironically, when I later took over running the contest I went to the master spreadsheet and found out there was only one other entry that week, and it didn’t even work. I didn’t care; a win is a win.

In early 2010 I was in Las Vegas for a trade show for the company I was working for. I saw that Danielle was tweeting about being at a different show there and so we started planning to meet in person for the first time. I was able to leave my show early and go over to the conference center where Twilio was exhibiting. The security guard wouldn’t let me in without a badge. I hung out near the entrance hoping to catch them on the way out but I noticed an exit-only side door people were leaving the show through. When the guard wasn’t looking, I snuck through the door. Conveniently, the Twilio booth was just inside where I found Danielle and told her how I got in much to her amusement (if you know Danielle, you know this is her kind of thing).

We chatted a bit and then she mentioned she wanted to introduce me to Jeff Lawson. When we found him, she introduced me by saying, “Jeff, this is John Sheehan, my first choice for our evangelist job opening.” This was news to me on two fronts: that such a job existed and that I was a candidate for it (I had only previously applied for a PHP developer job I was not qualified for).

Jeff and I chatted and he invited me to dinner with some Twilio customers, investors and other team members. It felt like a dry run of doing an evangelist’s job. After dinner they told me they wanted me to come out to SF for an interview. A few days later, without even visiting, I got an offer that I quickly accepted. I started April 5th, 2010 as Twilio’s first full-time Developer Evangelist and Twilio’s 10th employee.

Being an Evangelist

I didn’t really have a job description. I knew my job was to help bring developers to Twilio by doing the things I was already doing but I didn’t know what success looked like. We ended up mostly focusing on registered developers as our success metric but found the best way to drive that was not to evangelize (UGH STILL HATE THAT WORD) Twilio but to help developers be more successful. Period. Eventually those relationships would benefit the company when it benefited the developer.

I really loved doing the job. I met so many great people, many of whom I still consider friends. I got way better at writing and presenting. We were blazing a trail for how small companies could do developer outreach (remember that prior to 2010 most Developer Evangelists were at companies like Microsoft and Sun). The sign-ups were piling up and accelerating. It was exhausting and exhilarating. The team was growing, and I eventually took it over. Shortly after, a new opportunity within the company emerged that I had to go after.

Working for Danielle

Danielle is the best manager I’ve ever had. No one has ever pushed me harder. She gave me a ton of autonomy and trusted me to do what I thought was best for the company and our community. She made sure I was recognized for my successes and took the fall for my failures.

She knew how competitive I was and used it to push me further. We had a 1–1 meeting once where she told me whoever got the most blog page views that month would win something. I found out the next month (after “winning”) she hadn’t included anyone else on the team in the contest. I was really happy with the results and too impressed with being hoodwinked to be upset.

Danielle is a polarizing figure but is one of the sharpest people I know and someone I will never, ever bet against.

The API Debate

Shortly after I moved to SF to take over the evangelism team, we were gearing up for a new product launch. Early in development, the team was having trouble coming up with a friendly API (JavaScript, in this case). I made a proposal that got adopted. Having that level of impact on product design from outside the product team, on a major new product, was a big personal win.

Closer to launch, there were still some details to be worked out on the design. Under the pressure of shipping many people became involved in a debate on a final issue. PMs, devs, support, and even Jeff were trying to work it out. I felt very invested in the outcome considering my earlier contribution and having just moved to SF literally to be more involved in moments like this. It got a little heated and I ultimately ‘lost’. I was pretty devastated. My reaction didn’t sit well with a few people. I stewed a little bit but calmed down and got on with the launch. There was too much to do. Unfortunately this would come up again later.

Becoming a Product Manager

As the product line grew it became difficult for every team to focus on developer experience. The developer console, docs, helper libraries, etc. were starting to suffer a bit from a lack of attention. A new position opened up on the product team to be the Product Manager for developer experience. It felt like a perfect and obvious next step for me given the amount of direct customer feedback I had been exposed to the previous 18 months.

I don’t know what was going on behind the scenes, but it took a long time (around 3 months) to go from expressing interest to getting the job (earlier design debate incident perhaps?). I’m sure it was complicated but it was really frustrating. I didn’t really feel like I had a lot of momentum once the switch was official. My team was small, the projects were unsexy (the ones we wanted to do wouldn’t drive the business) and I was new at being a PM. It didn’t go particularly well.

The Breakdown

While I was failing at being a PM, some new initiatives were being explored in the company that I felt undermined our developer-friendly ethos. I understood the potential upside, but it felt like the company was going through its biggest change yet (change was not uncommon; things felt very different at 10, 20, 50 and 100 people).

As a PM with a backlog a mile long, it was difficult to watch resources get diverted to things I didn’t believe in. I was also a bit burned out from the intensity of the previous 18 months. This all culminated in a meeting with Jeff (my manager at the time) where I broke down and cried over where things were going. I’ve never been more embarrassed in my career. I left the meeting with a sense that my time at Twilio was coming to an end soon.

I don’t blame Jeff or others in the company for pursuing those opportunities. I probably would have done the same in their position. While those projects were ultimately scuttled, I think the company learned a lot. I was really excited the first time I saw the “Ask Your Developer” billboard years later. It felt like a return to the Twilio I loved.


I left in March of 2012. The company had gone from 10 to over 100 employees. Our registered developers had gone from single-digit thousands to over 100,000. The evangelism team was around 10 people strong and becoming a machine (Rob and crew have done an amazing job since taking it to over 1M developers).

On my 2nd to last day I accidentally spilled coffee on my laptop and destroyed it.

Quitting was a mixed bag of emotions. I missed the people. I missed the sense of urgency. I missed the prestige that came with being a part of a startup that was making it. But I was also relieved to not be so stressed all the time.

Do I wish I had stayed? The short answer is “no” but I’m definitely experiencing a bit of FOMO right now. I went to Twilio because I truly believed it had a chance to become a successful company. There was always a really strong sense of mission and endless opportunity. It was palpable sometimes. I would have loved to have seen that play out directly, but leaving was still the right choice. In the 4 years since, I’ve been incredibly fortunate. I owe my entire career since to having been at Twilio. I will probably be banking on that for years to come.

I have only one regret: I should have negotiated for more options.

Other Random Memories

Opening the first red track jacket, Jeff waiting on Andrew, missing every company photo, the Montana trip (and subsequent Owl reward), Super Startup Weekend, Epic Sax on a string, the Chicago demo fail, johnsheehanneedsajacket.com, Andre Bempton, Mark’s laugh, Frank taking a stand on ‘presents’, the fake Twitter feedback, 95% Taylor Swift, Bad Romance jam session, 4am Whataburger, London tube singing guy & Hawksmoor, Twilio’s first “acquisition” via expense report.

60+ Tools and Services for API and Webhook Logging, Debugging, Testing, Monitoring, Documentation and Discovery

Pretty much every day someone asks me if there's a tool for solving a specific API problem. While Runscope solves many API problems (particularly debugging, testing and monitoring) for app developers, there are many things we don't do. My goal with this post is to list out all the API tools that I know of and frequently recommend to people. Some even compete with us, but the more the merrier.

I've left out code libraries and frameworks since that would make the post too long.

Webhook Debugging

  • RequestBin - The original POST catcher, formerly postbin.org. Inspect HTTP requests. (Free)
  • Webhook Inbox - Receives HTTP requests and captures the data for later inspection. (Free)
  • RespondTo.it - Debug web hooks like a pro. Supports custom responses. (Free) 
  • InspectBin - Similar to RequestBin with live updates. (Free)
  • Runscope Request Captures - HTTP request inspection with sharing, live updates, SSL, persistent URLs, search, comparisons and more. (Free + Paid)
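All of these capture tools share the same core mechanic: accept any HTTP request, store it for later inspection, and respond with a simple acknowledgment. A minimal sketch of that idea using only Python's standard library (purely illustrative; not how any of the tools above is implemented):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # requests received so far, oldest first

class CaptureHandler(BaseHTTPRequestHandler):
    """Accept any request, record it, and reply 200 so the sender is happy."""

    def _capture(self):
        length = int(self.headers.get("Content-Length", 0))
        captured.append({
            "method": self.command,
            "path": self.path,
            "headers": dict(self.headers),
            "body": self.rfile.read(length) if length else b"",
        })
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"captured\n")

    # Treat every verb the same way.
    do_GET = do_POST = do_PUT = do_PATCH = do_DELETE = _capture

    def log_message(self, *args):
        pass  # keep stderr quiet

# To try it: HTTPServer(("", 8000), CaptureHandler).serve_forever(),
# then point a webhook at http://localhost:8000/anything-you-like.
```

Each delivery then shows up in `captured` with its method, path, headers and body intact.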

Webhook Utilities

  • Hookify - Filter, manipulate, combine and forward webhook events. (Private Beta)
  • Webhooks.io - Fast, reliable, scalable webhook delivery platform. (Private Beta)
  • Webscript - Webhook receiver and scripting engine using Lua. (Free + Paid)
  • Torpio - JavaScript micro scripting platform for joining up & extending cloud apps. (Free + Paid)

Local Tunneling

API Monitoring

  • Runscope Radar - Global API monitoring and testing with flexible assertions, chained requests and OAuth support. Integrates with PagerDuty, New Relic, HipChat, Slack, Keen IO and more. (Free + Paid)
  • APImetrics - Real-time API tests & performance analysis. (Free + Paid)

Response Mocking

  • JsonStub - Fake the backend while you develop the front end. (Free, Beta)
  • Mockable - Create REST and SOAP services which mimic your external providers. (Free, Preview)
  • mocky.io - Mock your HTTP responses to test your REST API. (Free)
  • httpbin.org - HTTP responses for common scenarios like status codes, delays, streaming, etc. (Free)
  • pathod - A pathological web daemon. (Free)

JSON Utilities

  • JSON formatter and validator - Paste in raw JSON, get a formatted, validated version back. (Free + Donations)
  • JSON format - Paste or load JSON from a URL to format it. (Free)
  • JSON Schema Lint - JSON schema validator to assist you in the writing and testing of JSON Schemas that conform with the Draft 03 specification. (Free)
  • JSON Generator - Generate sample JSON data with a comprehensive templating system. (Free)
  • JSON editor online - Edit JSON via a GUI. (Free)
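The format-and-validate pattern behind the first two entries takes only a few lines with Python's standard library (the function name here is mine, for illustration):

```python
import json

def format_json(raw: str, indent: int = 2) -> str:
    """Validate raw JSON (json.loads raises ValueError on bad input)
    and return a consistently indented, key-sorted rendering."""
    return json.dumps(json.loads(raw), indent=indent, sort_keys=True)

# format_json('{"b": 1, "a": [1, 2]}') yields:
# {
#   "a": [
#     1,
#     2
#   ],
#   "b": 1
# }
```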

OAuth Utilities

API Directories

API Documentation

  • Apiary - Collaborative design, instant API mock, generated documentation, integrated code samples, debugging and automated testing. (Free + Paid)
  • API Blueprint - API documentation format used by Apiary and other tools. (Free)
  • RAML - RESTful API Modeling Language. (Free)
  • Swagger - Specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. (Free)
  • I/O Docs - Fast, powerful API documentation & test calls. (Free)
  • Embedcurl.com - Embeddable curl commands for your web site, blog or API documentation. (Free)

API Testing

  • Runscope Radar - HTTP/RESTful API testing in the cloud. Run your tests on every commit, build or deploy with email and webhook notifications for failures. (Free + Paid)
  • SoapUI - SoapUI is a free and open source cross-platform Functional Testing solution. (Free + Paid)
  • StopLight - A powerful HTTP testing tool. (Free, Beta)
  • pathoc - A perverse HTTP client. (Free)

Cloud-based Debugging Proxies

  • Runscope Traffic Inspector - Log and inspect HTTP API calls from any language or framework to any API. Share, search, retry, compare requests and more. (Free + Paid)
  • API Tools - Track, transform and analyze the traffic between your app and the APIs you use. (Free)

Desktop Debugging Proxies

  • Charles - Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. Supports OS X, Linux and Windows. (Paid)
  • Fiddler - The free web debugging proxy for any browser, system or platform. (Free)
  • mitmproxy - Intercept, modify, replay and save HTTP/S traffic. (Free)
  • Burp Suite - Integrated platform for performing security testing of web applications. (Paid)
  • HTTP Scoop - The HTTP sniffer for Mac OS X. (Paid)

API Gateway (Self-Hosted)

  • Tyk - An open source, lightweight, fast and scalable API Gateway. (Free + Paid)
  • ApiAxle - A proxy that sits on your network, in front of your API(s) and manages things that you shouldn't have to, like rate limiting, authentication and analytics. (Free)
  • 3scale - "3scale provides a hybrid (API traffic management on-premise, API administration, traffic reports and developer portal cloud based) full featured API Management solution that is highly scalable, secure and flexible to match the most demanding requirements." (Free + Paid)

Load Testing

  • Loader.io - Stress test your web apps/APIs with thousands of concurrent connections. (Free + Paid)
  • Blitz.io - Performance testing for websites, web apps and REST APIs. (Paid)
  • BlazeMeter - JMeter load testing. Test the performance of any mobile app, website or API. (Free + Paid)
  • LoadUI - API load testing solution. (Paid)

Command Line Tools

  • HTTPie - HTTPie is a command line HTTP client, a user-friendly cURL replacement. (Free)
  • jq - Lightweight and flexible command-line JSON processor. (Free)

GUI HTTP Clients

  • Hurl.it - Make HTTP requests. (Free)
  • Postman for Chrome - There are many Chrome apps for making HTTP requests; this one is the best. (Free + Paid)
  • Paw 2 - OS X HTTP client. (Paid)
  • Echo - Another OS X HTTP client. (Paid)

Did I miss something?

I almost certainly did. Let me know in the comments!

Everything is a remix: making the best use of conventions to build Runscope.


One of the most influential books of my software career is Steve Krug's Don't Make Me Think. While the book is a little dated now, it contains a plethora of software design gems. The one that stuck with me the most over the years is "Conventions are your friends" in Chapter 3.

Short of copying and pasting the whole section, here are my three favorite excerpts, starting with how an idea becomes a convention:

All conventions start life as somebody’s bright idea. If the idea works well enough, other sites imitate it and eventually enough people have seen it in enough places that it needs no explanation.

Why conventions are useful (emphasis mine):

As a rule, conventions only become conventions if they work. Well-applied conventions make it easier for users to go from site to site without expending a lot of effort figuring out how things work. (...) There’s a reassuring sense of familiarity...

Krug's ultimate recommendation:

Innovate when you know you have a better idea (and everyone you show it to says “Wow!”), but take advantage of conventions when you don’t.

Reduce, Reuse, Recycle

About a year ago I started on the initial product design that would become Runscope. Knowing we were building something completely new, I wanted to introduce as few new concepts as possible. With Krug's voice in my head, I was determined to make the best use of existing conventions.

I thought it would be fun to look back at the concepts we borrowed from other services with the benefit of hindsight. Which ones worked out? Which do we regret? Let's find out.


Buckets

Buckets are the term we use for organizing your requests by project, app, customer or whatever else works for you. The name was borrowed from S3, but my initial idea for how they would work was more like Gmail’s filters and labels. Using a well-known name from one developer tool with the behavior of a different one ended up being pretty confusing. Some early external feedback drove this point home, and in the end buckets ended up much closer to their S3 brethren. Verdict: right decision after an adjustment.


Stars

In a fast-moving stream of API calls, we were looking for a way to single out requests to save and easily reference later. Gmail’s stars seemed like a good model for this. That turned out not to be the case. Stars forced people to fall back on buckets for lighter-weight organization, and buckets weren’t a good fit for that for a variety of reasons. We needed a middle ground: a simple organization method within a bucket. Verdict: not a useful convention for us.


Collections

We replaced stars with collections. They’re conceptually similar to Gmail’s labels: lightweight, user-defined and optionally one-to-many. With starred requests you had to attribute your own meaning to what starring meant; with collections, you define the meaning in the name you give them. It’s much more flexible. Verdict: right decision.

Shareable Links

In need of a way to share API calls across account boundaries (e.g. to send to an API’s support team), I immediately thought of Dropbox’s shareable links. We implemented it just like Dropbox: an explicit click makes a request public and creates a public, shareable URL. Public links can be revoked at any time; revoking and resharing creates a new, unique URL.

A funny thing happened. When asking people to share requests with us, people would click the link to go to the share preview page and then send us the URL for that page, even after clicking the 'Create Share Link' button and being presented with the public URL. Even making the button and resulting URL much more prominent didn't fix it. It took a little bit of history.replaceState() cleverness to mitigate the confusion.

If I could go back, I'd simplify the feature. Every request would have a single non-guessable URL with a simple public/private visibility toggle (like Sentry). We may still do this, though revokability is an issue. Verdict: Right idea, but a little over-engineered.
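That simplified design is easy to sketch: one permanent, non-guessable token per request plus a visibility flag, where revoking rotates the token so old links die. A hypothetical Python sketch (none of these names come from Runscope's actual code, and the domain is a placeholder):

```python
import secrets
from dataclasses import dataclass, field

def _new_token() -> str:
    # 16 random bytes -> a 22-character URL-safe token; not guessable.
    return secrets.token_urlsafe(16)

@dataclass
class SharedRequest:
    public: bool = False
    token: str = field(default_factory=_new_token)

    @property
    def share_url(self) -> str:
        # Every request has exactly one URL, valid only while public.
        return f"https://example.com/shared/{self.token}"

    def revoke(self) -> None:
        """Hide the request and rotate the token so old links stop working."""
        self.public = False
        self.token = _new_token()
```

Flipping `public` is the whole sharing flow, and `revoke()` handles the revocability concern by minting a fresh token.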


Teams

The first version of our identity database had Accounts and Users tables, with every user belonging to a single account. Frank wisely deleted that as soon as he saw it. People very rarely have a single organizational association. We ended up using the GitHub model of users and organizations (we call them Teams). This has given us and our customers a lot of flexibility. Verdict: one of the best early decisions...thanks Frank!

Many More

These examples are just from our initial Traffic Inspector product. In Runscope Radar (our new product for automated testing) we've also re-used concepts like integration testing assertions and variables with a Mustache-like template syntax.
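Mustache-style variables amount to little more than a regular-expression substitution. A minimal sketch of how {{name}} placeholders in a test request might be filled in (illustrative only, not Runscope's implementation; the URL is made up):

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace each {{name}} with its value; leave unknown names intact."""
    def lookup(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lookup, template)

render("https://{{host}}/orders/{{order_id}}",
       {"host": "api.example.com", "order_id": 42})
# -> "https://api.example.com/orders/42"
```

Leaving unknown names untouched makes missing variables easy to spot in a failed test.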

Next time you're designing a new feature, listen to your inner Steve Krug:

Sometimes time spent reinventing the wheel results in a revolutionary new rolling device. But sometimes it just amounts to time spent reinventing the wheel.

How Runscope raised a $1.1M seed round without customers, revenue or even a product.

I’ve been seeing a lot of posts lately on how to raise money from venture capitalists for your startup. I used to gobble that stuff up, reading every post I could find to glean any insight into the process. Now that I’ve been through the process with the benefit of hindsight, I’ve concluded there’s no universally applicable advice other than this: your mileage may vary.

Fundraising is a crazy experience, and every founder who’s tried will love to tell you just how crazy it was for them. Your story always feels like the worst possible case until you hear someone else’s. For instance, another founder I knew was raising around the same time we were. They had a product, revenue and had just graduated from an accelerator. I felt like we had no chance compared to where they were at. They didn’t get funded and the company is gone. We did, and we’re here.

I’ve been asked a lot, “How did you pull it off?” We had three things working for us: the right people, a ripe market and a good “story.” My approach to pitching was to home in on those three things and ignore everything else.

The Right People

When I left Twilio, I really wanted to get into consumer web stuff. IFTTT was the perfect hybrid between dev tools and consumer services. Yet I repeatedly found myself drawn back to building dev tools. My co-founder Frank is the same way. Everywhere we go, we build tools, because it’s what we love to do.

Between the two co-founders, we also had the right mix of talent: my background as a developer, product manager and evangelist, and Frank’s as a lead engineer on a popular API. We had the skills and track record. Selling the team was always the easiest part of pitching.

A Ripe Market

APIs are blowing up. App developers (especially mobile) are building and relying on more APIs (even if they don’t call them that) than ever. The trend is clear. Existing API tools are all focused on API providers. App developers need tools too.

The most common question I got was “How big a market is it?” How many apps can you imagine not talking to a remote web service in the future? That’s how big it is. This was also easy to pitch.

A Good Story

Investors hear from good teams, with good ideas in good markets all the time. That was the minimum bar for getting in the door (being relatively well-connected also accelerated the process). But I think what made the biggest difference was the story we told about getting from where we are now to becoming a huge business.

I tried to paint as vivid a picture as possible. Like any good story it had a beginning, middle and end. I explained how product development and marketing would progress and dovetail with each other over time. What the target audience was for each phase. How each phase set the stage for the next one. What we learned from working at a company that went through the process already. After the first few meetings I proactively addressed concerns before they were asked.

Knowing our story inside and out gave me a lot of confidence. The more people I told, the more confident I got that we were on to something. Paul Graham covers this in ‘How to Convince Investors’:

Investors are not always that good at judging technology, but they’re good at judging confidence. If you try to act like something you’re not, you’ll just end up in an uncanny valley. You’ll depart from sincere, but never arrive at convincing.

Raising money was not fun. It was stressful and time-consuming. I feel incredibly fortunate to have had it work out. We ended up with great investors and never felt like we had to “sell out.”

It’s still early for Runscope. We are finding our way and we’re a long way from successful. So far I’m happy with how it’s going. We’re on track to meet our goals, we have a great team (what I’m the most proud of) and we’re having a lot of fun.

A Survey of the Localhost Proxying Landscape

In late 2009, soon after coining the term ‘webhooks’ and starting to evangelize them to the world, Jeff Lindsay ran into a problem.

I had another idea while thinking about webhooks. It would be great if I could expose a local web server to the Internet with a friendly URL. It should just be a simple command. There would have to be a server, but there could just be a public server that you didn’t even have to think about.

Jeff’s idea grew into localtunnel, a friendly wrapper around an SSH reverse tunnel that makes your localhost available via a public URL. The first time I used it, it felt like magic. It completely changed the nature of working with webhooks. I was, well, hooked.
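
Under the hood, the idea is simple: an SSH reverse tunnel asks a public server to forward one of its ports back to your machine. As a rough sketch of the command such tools automate (the user, host and ports here are hypothetical placeholders, not localtunnel's actual defaults):

```python
# Build the SSH invocation that localtunnel-style tools automate: ask a
# public server to forward its port 80 back to a local web server.
# The user and host are placeholder values.
def reverse_tunnel_command(local_port, remote_port=80,
                           user="tunnel", host="public-server.example"):
    # Equivalent to running: ssh -N -R <remote>:localhost:<local> user@host
    return ["ssh", "-N", "-R",
            f"{remote_port}:localhost:{local_port}", f"{user}@{host}"]
```

Running that command forwards requests hitting the public server back to your localhost; localtunnel's contribution was wrapping this, plus the server side and the friendly URL, into a single command.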

What was novel at the time has grown into a little bit of a cottage industry amongst API tool makers. Many similar services have cropped up, specializing in different versions of the same problem. Here’s an overview of all the services I know of.

Localtunnel

The original. The initial SSH-based version with the Ruby client eventually ran into some issues, and Ruby and SSH are a bit precarious on Windows. Jeff later released a beta of version 2 with a Python client and a new wire format. Localtunnel has since been discontinued in favor of ngrok (see below).

localtunnel.me

A Node.js copy of the original localtunnel that couldn't come up with an original name.

Forward (formerly Showoff)

Forward was another early entrant, focusing more on allowing developers to show off web sites they were working on from their local machine. In early 2011 they rebranded from Showoff to Forward. They offer a few different pricing plans for various needs.

ProxyLocal

Another Ruby-based solution, ProxyLocal was started in late 2010 and was last updated a year ago, though the web service still seems to be operating. It distinguishes itself from localtunnel by letting you choose the subdomain for your public URL. The service is free.

PageKite

PageKite is another commercial service started around the same time independently of the others (this comment suggests localtunnel predated it by only a few months). PageKite is Python-based, open source, offers end-to-end encryption and can tunnel protocols other than HTTP. Here’s a post comparing it to Showoff. They offer subscriptions, pay-what-you-want and a free plan for OSS devs.

Ultrahook

Ultrahook is a new entrant to the scene, focusing on webhook debugging. Created by Vinay Sahni from SupportFu, Ultrahook consists of a client distributed via Ruby gem. The service is free, but the source is not available. By signing up for a free account you’ll receive an API key that “gives you an exclusive namespace. All endpoints you create will be subdomains under your own namespace. This ensures that you can always reconnect to the same endpoints at some point in the future.” Ultrahook does not proxy responses back and only supports POST requests.

Finch

Another Node.js client/server, currently in beta as of May 2014. Paid plans will be offered when it officially launches. As it is in beta at the time of this update, the full feature set is unknown.

Vagrant Share

Vagrant Share is a solution for sharing web servers (or any other TCP/UDP connection like SSH) running inside your Vagrant environments. Uses TLS end-to-end. Requires a Vagrant Cloud account.

ngrok

The current king of localhost proxying is ngrok from Alan Shreve (formerly of Twilio). ngrok is written in Go (both client and server) and has, by far, the easiest client installation options, with single-file executables for Windows, OS X and Linux. No more fighting with Ruby gems, Windows users! In addition, ngrok adds an introspection layer so you can see the traffic that was passed back and forth over the tunnel and even retry requests. ngrok supports SSL, password-protected tunnels, reserved subdomains and TCP/UDP tunneling. It’s free and the source is on GitHub.

Runscope Passageway

My company Runscope also offers a version of ngrok called Passageway. We’ve added tight integration with the Runscope API Traffic Inspector and some other enhancements to create a more seamless experience with our other products.

My Recommendation

If you’re just getting started, I recommend ngrok. Client setup is the easiest, it offers the most features of any of the options and it’s completely free.

Did I miss any? If so, let me know in the comments.

Update 5/18/2014: Added Finch, localtunnel.me, Vagrant Share and updated details about Runscope Passageway using ngrok instead of localtunnel.

Photo courtesy of Tina Carlson

API Changelog: API Documentation Change Notifications

My company Runscope just launched this free service for tracking and sending notifications for changes to the docs for 23 popular APIs including Facebook, Twitter, Stripe, Twilio, Dropbox and more. Follow the APIs you depend on and receive a daily email summary if anything changes.


Today we opened the doors on Runscope, our new set of tools for debugging, testing and inspecting your API integrations. In addition to that, we announced that we’ve received a seed round investment of $1.1M from True Ventures, Lerer Ventures, David Cohen, Andreessen Horowitz, Nat Friedman, Jon Dahl and Ullas Naik.

While launching to the public is a lot of fun and the culmination of a lot of hard work, it’s just the starting line for the next stage of what we’re trying to accomplish. Even so, getting here required a lot of help and support from a great many people and I’d like to thank those that were instrumental in helping us get this far.

First off, my wife Emily. When I told her I wanted to quit my job to start a company she didn’t even flinch. When fundraising hit rough patches she was unwaveringly supportive. Thank you honey.

Next up, my co-founder Frank and the rest of the team that has joined us in the past few months: you’ve done amazing work in a short period of time and you’re a blast to work with. Honored to be working with you.

To Nat Friedman, Jon Dahl, Ullas Naik, Adam D’Augelli and the True crew, David Cohen, Ari Newman, Max Stoller, Steve Schlafman, Chris Dixon, and Frank Chen: I couldn’t ask for a better set of investors. Thank you all for your support.

And lastly to everyone else who helped with intros, reference calls, testing out the preview, advice, lawyering, finding office space, candidate referrals, etc: thanks for playing a small part in helping us get this far.

Now, the hard part.

API Digest 2.0

Today I’m happy to release a brand new API Digest. Previously the digest was email-only. Starting today all links will be available via apidigest.com and @APIDigest on Twitter and App.net. You can also subscribe to the RSS feed. Lastly, be sure to check out the new ‘Events’ list.

Designing APIs for Humans

Earlier this month I gave a talk at Devs Love Bacon on an API design philosophy I picked up while working at IFTTT that I’ve been calling ‘APIs for Humans’. Sadly, it’s not about this:

At IFTTT I identified three major areas for how non-developers end up consuming API data in significant numbers. In this talk I cover what those areas are and how you can tailor your API designs to encourage developers to build human-friendly integrations.

Since I had extra time I threw in a few points on developer experience too. After all, developers are humans too.

Authentication: Don’t be Clever

My contribution to the new API UX blog:

By using standardized, common authentication schemes you can reduce the cognitive overhead for the developer consuming your API and avoid getting into unfamiliar, untested security situations. Authentication is the first thing any developer using your API will have to deal with and it’s those first few moments that are crucial to their success using your API to solve their problem. Take care to make the first impression a good one.

The Good and the Bad of OAuth 2.0 Authorization Implementations

While testing out a new tool I’m working on that uses a variety of OAuth2 providers, I thought I’d catalog some of the quirks I came across. This covers just the authorization flow, not actually making requests once you’ve secured a token.

Now that the OAuth2 spec has been finalized, we should start seeing fewer and fewer of these issues.
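
All of these quirks show up around the same step: POSTing the authorization code back to the provider’s token endpoint. Here’s a minimal sketch of building that request body. The parameter names come from the OAuth2 spec; the optional state argument exists only because some providers reject extra parameters while others expect them:

```python
from urllib.parse import urlencode

def build_token_request(code, client_id, client_secret, redirect_uri,
                        state=None):
    """Build the POST body for the OAuth2 authorization-code exchange.

    Some providers reject requests containing parameters they don't
    expect (such as an extra 'state'), so it's only included on request.
    """
    params = {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }
    if state is not None:
        params["state"] = state
    return urlencode(params)
```

Keep that shape in mind as a baseline; the notes below are mostly deviations from it.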

Google

Good: The APIs Console is one of the best out there. My favorite feature: allowing you to specify multiple callback URLs for a single app. This makes testing different environments way easier because you don’t need to go back and constantly edit the callback URL value.

Good: The OAuth 2.0 Playground is fantastic. It will show you all the HTTP requests that are made for a standard auth flow. This makes it really easy to debug issues on your end by comparing the requests. I used it to diagnose the following problem.

Bad: When requesting an access token, the request will fail if you include any parameters Google is not expecting. Google does not require a state or type parameter when getting the token like some other APIs do, and will return a 400 Bad Request with an invalid_request error if they are included.

Bad: The scopes options are not immediately obvious. There’s a huge list of services and you must enable them individually (good) in the ‘Services’ section of your project in the APIs Console. That page, though, does not list the scope values to use. The Playground does have what appears to be a complete list.

Facebook

Bad: If you just want to use OAuth with the Graph API, you still need to enable “Website with Facebook Login” in order to set the Site URL. This is hidden by default and took me a little while to find. This setting restricts which domains they will redirect back to.

Good: The Site URL only requires a domain instead of a full URL which means you can change your callback URL path without breaking your app.


Bad: When you create the app you select which services you want your app to have access to but during the auth flow only one of the services is displayed.

Bad: There’s no support for limiting access to read-only via scopes. The only option is full read/write for all of the apps selected.

Stripe

Bad: Stripe does not use a client_secret. Most client libraries and OAuth examples use it so it may be confusing for experienced developers wondering where that value is.

Bad: This is not technically wrong, but it isn’t common. When exchanging the code for an access token you are required to send an Authorization header with a value of Bearer and your Stripe account secret key (either test or live). I’d rather they just tell you to use the secret key as the client_secret and not require the additional header.
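
To make the difference concrete, here’s a sketch of the common pattern next to Stripe’s variant (the key names are illustrative, and the secret key value is a placeholder):

```python
# Common pattern: the client secret travels in the POST body.
def common_token_request(code, client_id, client_secret):
    headers = {}
    body = {"grant_type": "authorization_code", "code": code,
            "client_id": client_id, "client_secret": client_secret}
    return headers, body

# Stripe's variant: no client_secret in the body; the account's secret
# key goes in an Authorization header instead.
def stripe_token_request(code, secret_key):
    headers = {"Authorization": "Bearer " + secret_key}
    body = {"grant_type": "authorization_code", "code": code}
    return headers, body
```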


Bad: The docs say that the scope parameter is optional but will give you an error (400 OAuthException - Invalid scope field(s)) if it isn’t specified. You should specify at least ‘basic’.

Box

Bad: The redirect URL setting requires HTTPS, which can be difficult if you’re trying to test locally (for instance, my test app runs on http://localhost:5001, which is accepted everywhere else). Box has informed me this will be resolved soon.

Bad: Does not use scopes for read-only or read/write access (this is configured with the application instead). Box has also told me they will be changing this once they have more than one scope.

Microsoft Live Connect

Good: Permissions are set via an extensive and clear list of scopes. Very nice.

Stack Exchange

Good: Callback URL validation is set via domain only (similar to Facebook).


Bad: Scopes must be specified comma-separated, contrary to the OAuth2 spec. This is on the roadmap to be fixed.


Good: The application manager allows you to generate a token for yourself with custom scopes without having to go through the flow. This is extremely helpful during development.


Here are some others I tried that I didn’t have any notable issues with: MailChimp, Foursquare, WordPress.

Here are the services I desperately wish would move from OAuth 1.0a to OAuth 2.0: Twitter, Dropbox and LinkedIn.

If you’ve come across a unique implementation quirk of your own, post it below in the comments.

Related Posts


This post is a little more personal and not at all technical. I realized I’ve been wanting to write about this for a while.

On the latest episode of Hanselminutes two of my favorite internet friends had a discussion on age as they are nearing their 40th and 45th birthdays. I happened to come up a couple times as a reference point for The Young’uns.

Age has always been a sensitive issue for me. When I was 5, my mom homeschooled me. When she couldn’t do it anymore and it was time to go to public school, I was effectively done with first grade, so she bumped me ahead to second grade. The rest of my primary and secondary school years I trailed all my classmates in age by a year. I got my license my junior year, graduated at 17, etc. I played sports on teams consisting of all older players. I was always very determined to prove that my age didn’t matter.

When I started running companies at 15, my age was an advantage. It got me attention and customers. When I started working real jobs at 19 it was also an advantage because people couldn’t believe how mature and experienced I was “for his age.” At about 25, when all my peers had experience too (with degrees to boot), it wasn’t as advantageous.

Now I’m 31. In the SF tech scene I frequently feel old. My experience frequently gets deferred to (which often makes me slightly uncomfortable). Some of the younger engineers I’ve worked with lately have made me feel really old. They’re insanely smart, but they make me realize how much experience I actually have now (going on 16 years of doing computer stuff for money). 30 under 30 lists drive me crazy, not because making one is my goal in life, but because it’s now impossible regardless. My wife consoles me by saying there’s plenty of time for 81 under 81.

Back to the Hanselminutes episode. Rob talks about a recent dinner we had together where he was giving me some advice (that I appreciated) and my reaction. I (not surprisingly) reacted by giving him a look that said I have it all under control. Almost certainly I was overcompensating for my age insecurities. I rarely feel like I do have everything under control and I crave insight from people more experienced than me.

At the same time, I do know some things now. More importantly I know what I don’t know. All of that experience has led me to a position where I can be a founder of a startup. I’m not doing this because I’m young and impetuous and want to ride the wave of the current ‘bubble.’ I’m doing this because I know what it will take for me to be happy in my work and I’m not going to spend any more time not pursuing it. I’ve known that for a long time, but I’m at a point where my age is an asset in making this happen. So I’m going to.


I’m very excited to announce that I have co-founded a company. If you use APIs in your mobile or web applications request an invite to our early preview. We’re still very early in this and will be talking about more details as we progress.

The plan when I moved out to San Francisco was always to eventually start something of my own. I learned a ton working at Twilio and IFTTT and now hope to apply that knowledge to build a great company. Now’s the time. We’ve got a great team and a giant problem to solve and I can’t wait to show it to all of you.

Follow Runscope on Twitter to track our progress.

Box API v2: Less is More

Box has updated their API to version 2 and has simplified it along the way. The highlights:

  • Consistent request/response models
  • Convention-based attribute names (please do this in your APIs)
  • No more XML, huzzah!
  • Better events streams (I hope that Webhooks follows)
  • Settled on OAuth2 standard

My favorite line:

There’s an inherent impedance mismatch between XML and regular humans.

These are all very developer-friendly changes. If you’re designing an API, there’s a lot to learn from Box’s experience.

Source: http://developers.blog.box.com/2012/12/14/...

Traffic and Weather: My new Cloud/API Podcast

Steve Marx and I had such a great time with the conversation I posted here that we decided to do it every week. Check out Traffic and Weather, a (mostly) weekly news and commentary podcast about all things cloud and API. We’ve already done two episodes and I’m really happy with how they’ve turned out.

Subscribe to the RSS feed or subscribe in iTunes.

Lastly, thanks to my friends at SendGrid for sponsoring our first few episodes.

Craving Conversation

Twitter is my water cooler. I blow off steam, talk about what I’m working on, learn new things from others, etc. For the first few years I spent using the service it seemed like every night there was an awesome conversation going on. Lately it seems like people have been conversing less and retweeting more. I don’t know if it’s because I’ve shifted away from following people in a specific tech community (the .NET crew seems to have focused their conversing there more than the others I’ve observed) or just a general behavioral shift in how people use Twitter (or both) but lately I’ve been missing those great conversations.

Other social networks never really filled the gap. Google+ was OK, but activity tapered off pretty quickly after it launched. App.net is still promising, but nascent and hasn’t grabbed me yet. I almost started a site to share links and discuss APIs just to have somewhere to do it. I’m not famous enough to bootstrap a discussion community though (learned that lesson once before).

Then Google+ Communities were launched a few days ago. You can create them by topic. They’re moderated. The posts work like any other G+ post so links, etc. are first-class and each item has comments. To me, this is the perfect application of the G+ stream structures and having them scoped by topic solves the problem of having to follow someone who may post a lot about stuff you’re not interested in.

Ultimately, it’s another message medium like many before. The quality will be determined by the people that contribute to the community. But it’s looking promising. You can find me chatting about APIs over there.