I dunno about you, but I suck at wild guessing.
You know that contest where you have to guess the number of jelly beans in a jar? I’m that person who sits there pondering: “if I can see 16 yellow jelly beans from where I’m standing, and there are 6 colors of beans, and the ratio of half the surface area of a cylinder divided by the volume of the cylinder is, …” I like to refer to this type of process as making an educated guess. People who play Clue with me like to refer to it as “just guess a damn murder weapon and stop making all those charts”.
There are times, though, when the problem space is so fuzzy that even an educated guess isn't going to yield fantastic results.
Consider the example problem: “Which internet kitten is cutest?”
I could probably do better by consulting some experts. My friend Yuka certainly seems like she’d have a lot of smart things to say about the topic. Or I could reach out to the lolcat folks, who no doubt come across a wealth of adorable kittens and could offer some expert testimony.
The problem is that this question doesn't consider whose opinion matters, nor does it offer any solution to the fact that there are millions of cat photos and no one person is ever going to evaluate them all.
A much better solution is to develop a methodology to help discover the answer. Here’s an example plan: consult some “experts” and put together a bucket of really cute kittens. And then make a “which is cuter, A or B?” system where the world can evaluate kittens for cuteness. Anyone can add a kitten they think is even cuter than the current options, and thus the system will eventually converge on the answer.
Right now, kittenwar has done just that. At no point can you definitively claim that kittenwar has the "right" answer (for example, my two kittens aren't in their database yet, and until you see them sleepily grooming each other, the contest is still on!), but it's continually getting "righter".
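(For the curious: how does a stream of A-or-B votes actually converge on a ranking? One common approach, and this is just an illustrative sketch, not necessarily what kittenwar does under the hood, is an Elo-style rating, the same scheme used to rank chess players. Every kitten starts with the same score, and each vote nudges the winner up and the loser down, with bigger nudges for upsets.)

```python
# Sketch of an Elo-style pairwise rating system (illustrative only;
# kittenwar's actual mechanism may differ).

def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_winner, rating_loser, k=32):
    """Return new (winner, loser) ratings after one matchup.

    k controls how big a nudge each vote gives; an unexpected win
    (low expected score) moves the ratings more than a predictable one.
    """
    exp_w = expected_score(rating_winner, rating_loser)
    rating_winner += k * (1 - exp_w)
    rating_loser -= k * (1 - exp_w)
    return rating_winner, rating_loser

# Two kittens start equal; one wins three straight votes and pulls ahead.
a, b = 1000.0, 1000.0
for _ in range(3):
    a, b = update(a, b)
print(a > b)  # the repeated winner now outranks the repeated loser
```

The nice property for a fuzzy problem like "cutest kitten" is that no single vote is final: the ranking just keeps getting "righter" as more comparisons come in.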
So what does this have to do with web literacy? Am I just looking for an excuse to look at kitten videos?
The answer to the question “what should every person know to be considered ‘web literate’?” is a damn fuzzy problem. I can do my best to consult experts I know, and well-known experts, and there’s a lot of information in the field already out there. But no static guess (Plan A above) is going to be better in the long term than developing a methodology for testing a hypothesis set of skills and evaluating whether or not it’s superior to another set (Plan B above).
This simple process of guess, test, repeat also allows us the flexibility to be wrong without disaster striking. There’s room for changing later. And when it comes to a domain as constantly-in-motion as the web, this is not just a feature, but actually essential. Today’s skills may be irrelevant in 2, 5, 10 years.
So this is what I’m going to be spending a bit of time working on. Talking to Really Smart People, developing a set of initial skills, and then testing and iterating.
(Now those of you who are wearing your science pants today may be wondering about this whole "test" stage. How do you determine which set of skills is superior? What are your "superior set of skills" metrics? Aha, great question. You're way ahead of me. The quick answer is: I don't know yet. The slightly longer answer is: I suspect we won't have great metrics at first. Instead we'll have to start with the best metrics we can think of, test them in practice, then repeat. 😉 But I'll clearly be touching on this a lot more in the weeks to come…)