
You Can Keep Your Job, but It Won’t Be the Same Job

I recently devoted three posts to my reluctant study for the OCP-17 Java exam, offering advice on how to make the effort less of an ordeal. I haven’t passed it yet. With every new advance in AI coding assistance, honing your skills as a human compiler seems more anachronistic than ever. It always was an act of masochism, but I am increasingly convinced that there is no professional advantage in becoming good at something the machine is superior at. I concede that any pursuit can be beneficial or enjoyable for reasons other than mere utility, but as a developer, I am paid to be productive. Having a good time on the job is a nice-to-have, and the skills the OCP calls for are not my idea of fun.

Many intellectual tasks that are hard for humans are easy for computers (chess, arithmetic, rote learning) and have been for decades. We invented higher-level programming languages and garbage collection because human beings are terrible at flipping bits and managing memory. The roadmap of computer languages and tooling points towards ever greater abstraction. GitHub Copilot and its like are only the next inevitable step in removing accidental complexity.
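
To make that concrete, here is a minimal sketch of my own (plain Java, purely illustrative) of accidental complexity we have already delegated to the machine. In C, every allocation below would need a matching free; in Java, the garbage collector quietly takes care of it:

import java.util.ArrayList;
import java.util.List;

public class GarbageCollectionDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            List<int[]> batch = new ArrayList<>();
            batch.add(new int[1024]); // allocate freely, with no matching free()
        } // each batch becomes unreachable here; the GC reclaims it for us
        System.out.println("Done, without a single line of memory management.");
    }
}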

Techno-optimists like Marc Andreessen believe AI will save the world. Read his essay, but also consider that he’s a billionaire whose creature comforts are safe from upheavals in the developer job market. You and I, with a mortgage to pay and some years left until retirement, must wonder how we can keep our skills several clever steps ahead, preferably in a niche that AI can never encroach on. Does such a niche exist, and will it stay off-limits for long? I believe it does: it’s within human language, where humans still have a massive competitive advantage. That’s what this post is about. I won’t deny that I’m impressed by ChatGPT and Google Translate, but as a former translator, I am not unduly worried that they can replace us.

Cylons Would Never Speak English

You’d assume that sentient computers in a distant galaxy would speak some form of computer language to each other. As young sci-fi fans, my brother and I were served very well by Dutch TV in the late seventies: Star Trek, Buck Rogers, Blake’s 7 and, goofiest of all, the original Battlestar Galactica. Being the bright pre-teens we were, we quickly noticed the serious design flaws in the evil Cylon robot race. Besides their frankly pathetic flying skills, why in Zeus’ name would these fridges communicate with each other in American English!? At a rate of two syllables per second, using sound waves, no less. No general intelligence would evolve in such a clumsy and anthropocentric direction.

In his essay, Andreessen gives the following description of AI: “The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.”

This sounds plausible, but the inner workings of any AI aren’t similar to how people do it at all. A chess algorithm doesn’t emulate what goes on in Magnus Carlsen’s brain. It achieves a similar (by now superior) effect by computational force, using large amounts of energy. All our inventions are energy-hungry compared to the biological functions they emulate. Aeroplanes do not mimic the way birds fly. You can’t carry 300 holidaymakers plus their luggage by flapping your wings, so we invented powerful propellers and jet engines, which work better with rigid, non-biological materials.

Large language models are much more wasteful and less successful than a chess engine because the rules of the game are vastly more complex. Fair enough, you might say: just expose the model to more source data and Moore’s Law will do the rest. But exposure to more data is not the solution here. Human language is intractable to machines because it is used by humans, for humans. Infants do not acquire it by being locked in a library for a year, but from using it in context for very particular and pressing purposes, like asking for milk or a diaper change, building up to more sophisticated uses like a salary negotiation 25 years later. AI misses that context entirely and has no such purpose.

Universal Grammar

A brief lesson in linguistics may be in order. Historically, the study of language belonged to the arts faculty of universities. Scholars studied literature, ancient scripts, and dialects, and compiled grammars and dictionaries for minority languages. In the 20th century (notably through the work of Noam Chomsky), scientists became interested in what is unique about the human capacity for language and started asking new questions. Can we study a corpus of linguistic utterances (speech and writing) and deduce laws that hold for all human languages? Is there a universal grammar from which all current languages derive, and can we phrase it as precisely as the laws of physics, without exceptions? Could we then convert these rules into code and have it produce correct French, Navajo, or Swahili sentences?
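
As a toy illustration (my own sketch, not anything from the linguistics literature), a single hard-coded rewrite rule is enough to churn out impeccably well-formed English:

import java.util.List;
import java.util.Random;

public class ToyGrammar {
    // One rewrite rule: Sentence -> Adjective Adjective Noun Verb Adverb
    static final List<String> ADJECTIVES = List.of("colorless", "green", "furious");
    static final List<String> NOUNS = List.of("ideas", "fridges", "grammars");
    static final List<String> VERBS = List.of("sleep", "negotiate", "evolve");
    static final List<String> ADVERBS = List.of("furiously", "politely", "quietly");
    static final Random RANDOM = new Random();

    static String pick(List<String> words) {
        return words.get(RANDOM.nextInt(words.size()));
    }

    public static void main(String[] args) {
        // Every output is grammatically flawless, e.g.
        // "colorless green ideas sleep furiously."
        System.out.println(String.join(" ", pick(ADJECTIVES), pick(ADJECTIVES),
                pick(NOUNS), pick(VERBS), pick(ADVERBS)) + ".");
    }
}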

Perhaps, but those sentences still wouldn’t make much sense. Language operates at multiple layers. At the syntactic layer, there are rules that tell us when a sentence is well-formed, but they have nothing to say about meaning. A sentence can be syntactically correct yet quite meaningless; Chomsky’s famous example is “Colorless green ideas sleep furiously.” Conversely, we have an effective fault tolerance for syntactic errors and can still understand a foreign speaker’s broken attempt to make themselves understood. This remarkable feature is known as the duality of structure: syntax and meaning (semantics) seem to operate by their own rules. Native speakers have acquired the linguistic intuition to know both when a sentence is syntactically correct and when it makes sense. It’s the ultimate challenge to make these mental mechanisms explicit. We’re not there yet. Not by a long stretch.

Do You Mind, I’m Eating My Tea

The science gets even fuzzier when you zoom out and include usage and intent. Words need to make sense within the sentence, but the message as a whole must also be appropriate to the situation. “Do you mind, I’m eating my tea” makes perfect sense in British English, where the evening meal is called tea. But it’s also informal: you don’t call it tea at a fancy restaurant. The study of pragmatics looks at language usage and considers all these cultural sensitivities. You don’t learn this implicit knowledge from a book; you acquire it from years of exposure. LLMs are entirely clueless and oblivious to this. How could it be otherwise? If ever we managed to pour the full richness and messiness of human language into working code and computers grew conscious, they’d probably hate it.

Machine translation does a fine job on anything predictable and unimaginative, like a weather forecast or a recipe for chocolate fudge. Literature, poetry, or anything that calls for originality and creativity: not so much. Try this for a prompt: rewrite the hit musical Hamilton in Afrikaans, set in the 1980s, with Nelson Mandela and Frederik de Klerk as the lead characters, keeping all the meter, rhyme, and humour intact. The output will be unintentionally funny in places, but generally atrocious and unusable.

In a recent podcast interview, the same Marc Andreessen gushed to Sam Harris that you can have a meaningful philosophical discussion with ChatGPT. No, Marc, you can’t. You have been bamboozled. The machine is still flying blind. It is making statistically informed guesses, from having ingested everything from Plato to Bertrand Russell with infusions of neo-Nazi hate speech, and has cooked you up an elaborate bluff.

Meanwhile, What About Coding?

Writing computer code is analogous to the syntactic layer of language. AI is perfectly equipped to help you with warnings and useful suggestions in real time, beyond simple compiler correctness. It can even write the code for you. But is the code appropriate to our human goals? Are we building the right thing? Is this code useful or harmful? Should we have written it in the first place? There’s no IntelliJ plugin for that; only a human being can answer such questions.
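
To illustrate, here is a hypothetical sketch (the class and its business rule are invented): the method compiles, is trivially unit-tested, and could have been autocompleted by any AI assistant. Whether it should exist at all is not a question the tooling can answer.

public class LateFeeCalculator {
    // One percent compound interest per day overdue: syntactically
    // flawless and mechanically correct.
    static double lateFee(double principal, int daysLate) {
        return principal * Math.pow(1.01, daysLate) - principal;
    }

    public static void main(String[] args) {
        // Prints 34.78: a 35% surcharge after a single month in arrears.
        // Is that useful or harmful? No type checker will tell you.
        System.out.printf("%.2f%n", lateFee(100.0, 30));
    }
}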

Your competitive edge lies in answering these questions. You should take an interest in the fuzzy and messy layers where code touches the world of human affairs. Yes, your role may become a hybrid between business analyst and developer. You may not like it, because you will write less and less code, and coding is such darn fun. Keep it as a hobby, then. AI already wins at Advent of Code, and I’m sure it would ace the OCP. You’re out of your depth already, and things can only get worse. That’s not a disgrace. Nobody can win at arm-wrestling against a gorilla. Choose your opponent wisely.

Let AI deal with the accidental complexity and let us get to the essence. Using computers to solve problems for humans was never about writing more code. Much of the essence of building a program is the debugging of the specification, as the late Fred Brooks wrote as early as 1986.