Okay, so after months of overheated hype, Wolfram Alpha launched a week or so ago, and nearly everyone agrees it is terrible. The question is not how it could be so terrible; it is why everyone didn’t know from the start that it would be. Stephen Wolfram is a brilliant software engineer, but his horizons lie in the abstract, infinite spaces of computational logic, and so he misses the trees for the forest, the earth for the heavens. Let’s be blunter: Wolfram Alpha is terrible because Stephen Wolfram doesn’t respect and fear data.
Wolfram Alpha depends upon data: thousands of different electronic reference sources, “about nine-tenths of what you’d see on the main shelves of a reference library,” according to Wolfram. The assumption that this data foundation is adequate fails in two ways. First, the amount of information locked in online databases, information that even Google cannot access, is vastly larger than what one would find on the shelves of a reference library. Second, even integrating the data on a reference library shelf for the purposes of logical computation requires surmounting basic database relationship challenges that no one has yet resolved.
Those who toil in vast and lonely data mines quickly learn that structured data is no one’s friend. Most data does not talk easily or work cooperatively with other data. Yoking database tables together is painstaking, difficult, and often requires hacks and tricks and legerdemain worthy of Houdini. Issues of timeliness, reliability, and performance stymie nearly every ambitious data development effort. At the end of the day, the week, the month, and the year, the creation of useful and powerful database applications requires careful, intelligent attention to the details of the data. Working with databases is artisanal, not mathematical. It is like brewing fine craft beer, not building the Starship Enterprise.
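A minimal, purely illustrative sketch of the kind of mundane mismatch described above (the datasets, figures, and alias table here are invented for the example, not drawn from Wolfram Alpha): two tiny “reference shelf” tables describe the same countries but spell the join key differently, so a naive join matches nothing, and the fix is a hand-maintained alias mapping curated by someone who knows the data.

```python
# Two reference sources describing the same entities, keyed by country name.
# Figures are rough illustrative values, not authoritative data.
population = {"United States": 307_000_000, "Cote d'Ivoire": 21_000_000}
gdp = {"USA": 14_400_000_000_000, "Côte d'Ivoire": 24_000_000_000}

# A naive join on the raw keys silently matches nothing: the sources
# disagree on spelling, abbreviation, and accents.
naive = {k: (population[k], gdp[k]) for k in population if k in gdp}
print(naive)  # {} -- zero matches, despite describing the same countries

# The working fix is an alias table built and maintained by hand -- the
# careful, detail-level, artisanal attention the paragraph above describes.
aliases = {"United States": "USA", "Cote d'Ivoire": "Côte d'Ivoire"}
joined = {k: (population[k], gdp[aliases[k]]) for k in population}
print(joined)
```

Multiply this by thousands of sources and millions of entity names, and the scale of the curation problem becomes clear: no amount of computational machinery removes the need for a human to reconcile the keys.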