Should I Learn Coding as a Second Language?

I can’t program, and it bothers me because, with so many books, courses, and boot camps out there, there are so many opportunities to learn these days. I think I would understand the machine revolution better if I spoke its language. Should I at least try?

—Decoder


Dear Decoder,
Your willingness to learn the “language” of machines reminds me of Ted Chiang’s short story “The Evolution of Human Science.” The story imagines a future in which nearly all academic disciplines have been taken over by “metahumans” whose understanding of the world vastly outstrips that of human experts. Reports of new metahuman discoveries, though ostensibly written in English and published in scientific journals anyone can read, are so complex and technically opaque that human scientists have been relegated to a role akin to that of theologians, interpreting obscure texts much as medieval scholastics interpreted the will of God. Instead of conducting original research, these would-be scholars now practice the art of hermeneutics.

There was a time, not so long ago, when programming was seen as among the most forward-looking of skills, one that admitted a person to the technological elite who would define our future. Chiang’s story, first published in 2000, was prescient about the limits of this expertise. In fields such as deep learning and other forms of advanced artificial intelligence, many technologists already look more like theologians or alchemists than “experts” in the modern sense of the word: although they write the underlying code, they are often unable to explain the sophisticated capabilities their software develops while training on datasets. (One still recalls the shock of hearing David Silver, principal research scientist at DeepMind, insist in 2016 that he could not explain how AlphaGo, a program he designed, arrived at its winning strategy: “It figured this out itself,” Silver said, “through its own process of reflection and analysis.”)

Meanwhile, algorithms like GPT-3 and GitHub’s Copilot have learned to write code, sparking debates about whether software developers, whose profession was once considered a quiet island amid the coming tsunami of automation, may soon become irrelevant, and raising existential concerns about self-programming machines. Scenarios of runaway AI have long hinged on the possibility of machines learning to evolve on their own, and while coding algorithms are not about to launch a Skynet takeover, they nonetheless raise legitimate concerns about the growing opacity of our technologies. AI does have an ingrained tendency, after all, to arrive at inscrutable solutions and to devise custom languages that are unintuitive to humans. Many are understandably beginning to wonder: What happens when humans can no longer read code?

I mention all this, Decoder, not to belittle your ambitions, which I find commendable, but to acknowledge the stark facts. For what it’s worth, the prevailing concerns about programmer obsolescence strike me as alarmist and premature. Automated code has existed in some form for decades (recall the 1990s web editors that generated HTML and CSS), and even the most advanced coding algorithms are, for now, prone to minor errors and require human oversight. It also seems to me that you are not looking to make a career of programming so much as you are motivated by a deeper sense of curiosity. Perhaps you are drawn to the creative pleasures of the hobbyist: contributing to open source projects or proposing fixes for minor bugs in software you use regularly. Or maybe you are intrigued by the possibility of automating tedious aspects of your work. What you most desire, if I’m reading your question correctly, is a fuller understanding of the language that underlies so much of modern life.

There is a compelling case to be made that coding is now an essential form of literacy: that understanding data structures, algorithms, and programming languages is just as critical as reading and writing when it comes to grasping the larger ideologies in which we are enmeshed. It is common, of course, to distrust the dilettante. (Hobbyist developers are often disparaged for knowing just enough to make a mess, having mastered the syntax of programming languages but little of the foresight and vision needed to create successful products.) But this deficit of expertise might be seen as a lesson in humility. One benefit of amateur knowledge is that it tends to spark curiosity simply by impressing on the novice how little they know. In an age of streamlined, user-friendly interfaces, it’s tempting to take our technologies at face value without asking about the incentives and agendas lurking beneath the surface. The more you learn about the underlying architecture, though, the more fundamental questions will occur to you: How is code translated into electrical impulses? How does software design subtly shape users’ experience? What is the value of principles such as open access, sharing, and the digital commons? To the average user, for example, social platforms may appear to be designed to connect you with your friends and deliver useful information. An awareness of how a site is structured, however, inevitably leads one to think more critically about how its features are arranged to maximize attention, build robust data pipelines, and monetize social graphs.

Ultimately, such knowledge has the potential to inoculate us against fatalism. Those who understand how and why software is built are less likely to accept its design as inevitable. I have spoken of the machine revolution, but it’s worth noting that the most celebrated historical revolutions (those initiated by humans) resulted from mass literacy combined with technological innovation. The invention of the printing press and the demand for books from a newly literate public laid the groundwork for the Protestant Reformation, as well as for the French and American Revolutions. Once a large swath of the populace could read for themselves, they began to question the authority of priests and kings and the inevitability of ruling assumptions.

The cadre of technologists weighing our most pressing ethical questions (about data fairness, automation, and AI values) frequently stresses the need for greater public debate, but nuanced dialogue is difficult when the general public lacks basic knowledge of the technologies in question. (One need only watch a recent US House subcommittee hearing, for example, to see how far lawmakers are from understanding the technologies they seek to regulate.) Advanced AI models are being developed “behind closed doors,” notes New York Times technology columnist Kevin Roose, and curious laypeople are increasingly forced to puzzle over esoteric reports of their inner workings, or to take expert explanations on faith. “When information about [these technologies] is made public,” he writes, “it is often watered down by corporate PR or buried in obscure scientific papers.”

If Chiang’s story is a parable about the importance of keeping humans “in the loop,” it also makes a pointed case for keeping the circle of knowledge as large as possible. As artificial intelligence grows more proficient in our languages, astounding us with its ability to read, write, and speak in ways that feel convincingly human, the need for humans to understand the dialects of programming becomes all the more urgent. And the more of us who are able to speak that language, the more likely we are to remain the authors, rather than the translators, of the machine revolution.

Faithfully,

Cloud


Be advised that CLOUD SUPPORT is experiencing higher-than-normal wait times and appreciates your patience.


This article appears in the March 2023 issue. Subscribe now.

Let us know what you think about this article. Submit a letter to the editor at mail@wired.com.

