Monday, November 21, 2022

'The Singularity'


I tend to be wary of Ted Chiang's books because they belong firmly to the genre of sci-fi. That is NOT a genre I'm keen on. I grudgingly read 'Exhalation' (2019) last year. Hahaha. His short stories are amazing, but they're still not my preferred read. That said, I'm sure everyone has watched the film 'Arrival' (2016), directed by Denis Villeneuve and starring Amy Adams, Jeremy Renner and Forest Whitaker. It's adapted from the author's novella 'Story of Your Life' (1998).

At a Singapore Writers Festival (SWF) keynote titled 'Time Travel in Fiction and Physics', the 55-year-old Ted Chiang spoke about prophecies and how they were essentially the concept of time travel in the past, and pointed to the tragic hero of Greek mythology, Oedipus, who fulfilled the prophecy made at his birth no matter how hard he tried to avoid it. The SWF moderator Huzir Sulaiman observed that the author's stories talk a lot about free will and the lack of a 'state'. They also deal with language and determinism, and, yes, fatalism.

Of course Ted Chiang talked about time-travel films: the iconic 'Back to the Future' (1985) and its sequels, and all the Terminator films. I liked that he mentioned 'Looper' (2012)! That was such a good one. He didn't mention Shane Carruth's annoying and mind-boggling 'Primer' (2004), though.

We fear and yearn for “the singularity.” But it will probably never come.

In an essay titled 'Why Computers Won't Make Themselves Smarter', published in The New Yorker on March 30, 2021, Ted Chiang opened the piece by setting the argument for the existence of God proposed by the 11th-century Italian-born monk St. Anselm of Canterbury against the 20th-century mathematician Irving Good's 1965 hypothesis of an 'intelligence explosion' brought about by 'an ultraintelligent machine'.

Our computers get faster and more capable every year. Well, as long as we have electricity, our undersea internet cables withstand the pressure, and we remember to replace them every 25 years or so. Oof. Tech companies are all building and refining their own AI programs. Can a smart machine ever design a successor? Well, our mainstream/indie/weird dystopian movies have been running that storyline for decades. If it ain't zombies, it's AI taking over the world. Machines vs Humans.

Ermmm... I don't think we're there yet. The apocalyptic scenario, or doomsday, isn't anywhere in sight. At least not in my lifetime. The author suggested:

In the same way that only one person in several thousand can get a Ph.D. in physics, you might have to generate several thousand human-equivalent A.I.s in order to get one Ph.D.-in-physics-equivalent A.I. It took the combined populations of the U.S. and Europe in 1942 to put together the Manhattan Project. Nowadays, research labs don’t restrict themselves to two continents when recruiting, because building the best team possible requires drawing from the biggest pool of talent available. If the goal is to generate as much innovation as the entire human race, you might not be able to dramatically reduce that initial figure of eight billion after all.

We’re a long way off from being able to create a single human-equivalent A.I., let alone billions of them. For the foreseeable future, the ongoing technological explosion will be driven by humans using previously invented tools to invent new ones; there won’t be a “last invention that man need ever make.” In one respect, this is reassuring, because, contrary to Good’s claim, human intelligence will never be “left far behind.” But, in the same way that we needn’t worry about a superhumanly intelligent A.I. destroying civilization, we shouldn’t look forward to a superhumanly intelligent A.I. saving us in spite of ourselves. For better or worse, the fate of our species will depend on human decision-making. 
