You will optimize whatever metrics you choose to measure, which is why if you evaluate programmers on lines of code, you’ll end up with gigantic comment blocks, and if you evaluate them on bugs closed, you’ll see a lot more initially-buggy code.

So how do you measure something fuzzy like “built a generation of webmakers”?

Until we’ve installed mind-reading microchips in everyone’s brain (patent pending), we’re probably going to have to settle for indirect measurement: measuring side-effects, close approximations and proxies.

Here are some examples of things we might be able to measure. Thanks to Ben Simon for brainstorming this stuff with me.


Total authorship “levels” on the web

Imagine that you had some reasonable way to figure out who authored a document on the web, and some reasonable statistic about the number of personas an average person has on the web.  Now all you need is to run those numbers across every document, and you can collect a reasonable figure for the number of authors online.

There’s an extra step to make this figure even more compelling.  Divide the web into several content “types”:

  • Commented on someone else’s blog
  • Created a wordpress blog and posted something
  • Created a wordpress blog and manually changed the template
  • Created an HTML page from scratch
  • etc.

Could we partner with search engines (e.g. Google) to count the unique number of authors in each bucket over time?  That might let us see whether people are “advancing” from one bucket to the next.
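To make the bucketing idea concrete, here’s a minimal sketch of how you might roll per-author signals up into “levels” and count unique authors at each one. All the signal names (`commented`, `posted`, `edited_template`, `wrote_html`) are hypothetical placeholders for whatever an actual crawl or search-engine partnership could detect.

```python
# Illustrative sketch only: bucket authors into webmaking "levels",
# assuming we already have boolean signals per author (names hypothetical).
from collections import Counter

LEVEL_NAMES = [
    "commented on someone else's blog",      # level 0
    "created a blog and posted something",   # level 1
    "created a blog and changed the template",  # level 2
    "created an HTML page from scratch",     # level 3
]

def author_level(signals):
    """Return the highest level an author has reached, or -1 for none.

    `signals` is a dict of booleans, e.g. {"wrote_html": True}.
    """
    if signals.get("wrote_html"):
        return 3
    if signals.get("edited_template"):
        return 2
    if signals.get("posted"):
        return 1
    if signals.get("commented"):
        return 0
    return -1  # no authorship detected

def level_counts(all_signals):
    """Count unique authors at each level across a whole crawl."""
    return Counter(author_level(s) for s in all_signals.values())
```

Running `level_counts` over successive crawls and diffing the histograms would be one way to see whether the population is shifting toward the higher buckets.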


Post-event sampling

After attending one of “our” events, could we do a focus-group-style sampling of the people who leave the event and what they go on to do afterward?  I put “our” in quotation marks because I don’t just mean Mozilla events, but any webmaking-style event that is affiliated with us, belongs to our community, etc.


Measuring deltas of public participants

Some of our participants (like journalists) are going to have very public artifacts (i.e. their articles) before and after their participation in a program like Open News.  We could measure the difference in their output, or the complexity of the elements they use, to see whether they learned anything and are putting those skills to use.
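As one crude proxy for that “complexity of elements” idea, you could count the distinct HTML tags an author uses and compare before and after the program. This is just a sketch using Python’s standard-library parser; a real measurement would want something much more nuanced than raw tag counts.

```python
# Crude complexity proxy: how many distinct HTML tags does a document use?
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects the set of distinct start tags seen while parsing."""
    def __init__(self):
        super().__init__()
        self.tags = set()

    def handle_starttag(self, tag, attrs):
        self.tags.add(tag)

def distinct_tags(html_source):
    """Return the number of distinct HTML tags in a document."""
    parser = TagCollector()
    parser.feed(html_source)
    return len(parser.tags)

def complexity_delta(before_html, after_html):
    """Positive delta = the author is reaching for more kinds of elements."""
    return distinct_tags(after_html) - distinct_tags(before_html)
```

For example, `distinct_tags("<p>hi <a href='#'>x</a></p>")` is 2, and comparing an article from before the program to one from after gives a rough, if noisy, “learned something” signal.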


Drop-off in our tools

This might be the easiest thing to measure, though we’d need to couch the tool usage in some sort of opt-in measurement.  When our tools are being used to teach these skills, are the “lessons” completed?  If not, where does the drop-off happen?
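Once you have opt-in telemetry, drop-off is just a funnel calculation: count how many people reached each lesson step, then compute the fraction lost at each transition. A minimal sketch, assuming the event pipeline already gives us ordered per-step reach counts:

```python
# Funnel drop-off sketch: step_counts[i] = number of users who reached step i,
# in lesson order. Returns the fraction lost at each step-to-step transition.
def dropoff(step_counts):
    losses = []
    for reached, advanced in zip(step_counts, step_counts[1:]):
        # Guard against an empty step so we never divide by zero.
        losses.append(1 - advanced / reached if reached else 0.0)
    return losses
```

So `dropoff([100, 80, 20])` says we lose 20% of learners after the first step and a whopping 75% after the second, which tells us exactly which lesson to go stare at.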


Measure bottlenecks to making

Are you not making webpages because you don’t know how?  Because you’re scared of breaking shit?  Because you can’t see how it has anything to do with your life?  Because you tried and failed?  Unless we know where the bottlenecks are, it’s going to be hard for us to remove them.  Ben suggested that we could do old-school phone surveys to get a general, population-wide metric.  As an engineer who hates the telephone, this made me cry a little on the inside, but it’s not a bad idea, especially since if we want a “before” metric, we should be jumping on this train now.


Measure what we actually want to measure

This is an interesting point: why do we want to build a generation of webmakers?  If we go up a level and tackle that “why” (forthcoming blog entry, I suspect…), then maybe that’s what we should be measuring.  For example, if we believe that making things on the web will convince people that the open web should be protected, then let’s actually measure that.


So much food for thought!!  I’m gonna end up with a brain tummyache at this rate…