Actually, he was writing here (in the mid-1940s) about the art of translation in Bend Sinister, his second novel composed in English (a reviewer in 1947 dismissed it—I paraphrase from memory—approximately as follows: “Here is a kind of novel that might enjoy ephemeral success for a season and then be forgotten. So why not forget it now?”). Interestingly, this theory of translation was one Nabokov came to repudiate when he undertook his unmusical but fiercely literal translation of Eugene Onegin a decade later. Here are the lines I have in mind:
It was as if someone, having seen a certain oak tree (further called Individual T) growing in a certain land and casting its own unique shadow on the green and brown ground, had proceeded to erect in his garden a prodigiously intricate piece of machinery which in itself was as unlike that or any other tree as the translator's inspiration and language were unlike those of the original author, but which, by means of ingenious combination of parts, light effects, breeze-engendering engines, would, when completed, cast a shadow exactly similar to that of Individual T—the same outline, changing in the same manner, with the same double and single spots of sun rippling in the same position, at the same hour of the day.

But I think the passage has some application as we contemplate the creation of "artificial intelligence." A future machine intelligence might be capable of a close mimicry of human behavior, but the underlying architecture will inevitably be very, very different. Some would maintain that this is, ipso facto, proof that silicon, or gallium arsenide, or whatever inorganic material it might please you to imagine, cannot serve as a substrate for sentience. Daniel Dennett refers to these people as "carbon chauvinists," and as you might imagine, I league myself with him. AI skeptics cite the phenomenal complexity of human wetware as an insuperable barrier to our understanding, or creating, machine consciousness. For my part (speaking here, of course, as a layman whose acquaintance with hardware and with code falls far below that of the regulars here), I anticipate the creation of systems that will pass the Turing test in daily use by early in the next decade. At some point before 2040, it will be recognized that certain of these systems possess volition, and I'll further predict that this recognition will dawn only a few years after the fact.
BTW, given the rapid advances in the field, ought AI be given its own forum in Area 51?
cordially,