I’ve long maintained that the deal struck between computer makers and computer users about security is very one-sided. It assumes that the user always has the nous to deal with their end of the bargain, and that’s simply not always the case. One has to remember that, by definition, half of all people are of below-average intelligence. I call it the ‘paradox of expertise’: computer makers are incapable of empathising with a rather large contingent of users.
When I was studying for my BSc, I had one lecturer in particular who quite clearly knew his stuff. But he seemed — at least from where I sat — unable to impart his knowledge.
I believe that he was so familiar with his subject that he had forgotten what it was like to be unfamiliar with it (as I was). And so, on the whole, his lectures were, for me at least, more or less inscrutable.
Another example comes from a game called ‘Hand of Fate’, in which there are various beasties one has to overcome. Some of them have weaknesses to certain things. For instance, lizardmen have a particular vulnerability to cold. Some weapons and artifacts in the game have effects that tie in with such vulnerabilities, making it easier to defeat certain beasties — if you use them.
In this case, Frost Fang and Feathered Ice are effective against lizardmen. The problem is that these effects are only ever presented to the player once, when the items are first discovered. The reason for this is, to my mind, clear: the game’s developers are very familiar with the game. Too familiar; they’ve lost the ability to put themselves in a new player’s shoes, so they don’t see that their game doesn’t equip the new player with enough information to handle certain encounters. (Suggest such things to expert players who are as familiar with the game’s mechanics as the developers, and all you’re likely to get is advice to ‘git gud’.)
I have used the term ‘paradox of expertise’ for many years to describe this inability on the part of experts to effectively pass on their knowledge. Unsurprisingly, others have also used the term, as a quick Internet search will reveal, although some uses aren’t quite the same as mine. (Here, for instance, it’s used to describe the inability of experts to continue learning; and the CIA, no less, uses it to describe how experts are no better at predicting the future than simple statistical models.)
An article from creativethinking.net illustrates the problem succinctly:
The figure below illustrates a series of progressively modified drawings that change almost imperceptibly from a man into a woman.
When test subjects are shown the entire series of drawings one by one, their perception of this intermediate drawing is biased according to which end of the series they started from. Test subjects who start by viewing a picture that is clearly a man are biased in favor of continuing to see a man long after an “objective observer” (an observer who has seen only a single picture) recognizes that the man is now a woman. Similarly, test subjects who start at the woman end of the series are biased in favor of continuing to see a woman. Once an observer has formed an image–that is, once he or she has developed an expectation concerning the subject being observed–this influences their future perceptions of the subject.
Wikipedia describes ‘my’ paradox as a ‘conceptualization paradox’. I think my term is a more descriptive label — but being a self-professed expert in the field, I’m bound to think that :)