Is computer science worth it with AI taking over?

Robert’s Answer

Short answer: No, "AI" is not "taking over." Yes, it is worth it. If you have an aptitude for coding, you will be able to find a job coding.

Long answer:
The things people are calling "AI" are not thinking. They are not alive. They do not have a brain, and they are not problem solvers. They are glorified auto-complete engines, working on the same statistical principle as what computer science calls a Markov chain, doing essentially what your phone does when it suggests words and phrases as you type. All of that is predictive guessing: comparing what you have written against a large sample database of what others have typed, and picking the statistically likely words and phrases that tend to follow.
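
To make that concrete, here is a toy sketch of that kind of statistical next-word prediction in Python. (Real LLMs use huge neural networks rather than lookup tables, but the core task, predicting the next word from the words seen so far, is the same.)

```python
import random
from collections import defaultdict

# Toy bigram ("Markov chain") next-word predictor -- a crude sketch of
# the statistical idea behind autocomplete. Real LLMs use huge neural
# networks rather than lookup tables, but the core task is the same:
# predict the next word from the words seen so far.

def train(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Pick a likely next word, weighted by how often it was seen."""
    followers = counts.get(word)
    if not followers:
        return "<end>"
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

model = train("the cat sat on the mat and the cat ate the fish")
print(predict_next(model, "the"))  # likely "cat" -- no understanding, just counting
```

Train something like that on more text and the guesses get eerily fluent, but there is still no understanding anywhere in there.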

A true "AI" like the Star Trek computer would require an actual, thinking, positronic "brain" capable of independent thought without input. We are nowhere near knowing how to do that in computer science. There are merely hard problems, like the NP-complete Traveling Salesman Problem, or full self-driving, and then, far beyond those, there is true AI (known in many science circles as "The Singularity"). We don't even know how the human brain works. That's why surgeons keep you awake during brain surgery and talk to you while they operate: they have to know immediately if they poke the wrong thing. So if we don't know how our biological brain works, how would we realistically create an artificial one? Most artificial things humans make are modeled after things in nature (boats after large animals that float on water, planes after birds in flight, a pacemaker after the beating of the actual human heart, etc.). If we don't know how a real brain works, how would we model a technological one after it?

Generative Pre-trained Transformer (GPT)-type Large Language Models (LLMs) have sampled large sets of data (mostly by crawling the web and sucking up everything that wasn't nailed down behind a paywall), analyzed it, and now they have a decent set of data (at this point 0.5-2 years old, depending on whether you are a paying customer) that lets them look at a prompt you give them and spit out something that statistically matches what other people wrote in response to similar prompts.

That's it. That's all they do. Everything else people are claiming LLMs ("AI") can do is mostly hype: marketing departments, startups trying to inflate the next crypto-like bubble, true believers convinced this is the invention that will finally make them rich, or people who think these tools will make the world a better place. Can LLMs ("AI") do all that? No. Can they do some of it? A little bit.

Several serious researchers have been testing the GPT-like LLMs with prompts like this:
"Steve has 9 apples, 7 pickles, 13 pears, and 11 cherries. Mariko gives Steve 2 strawberries, 8 radishes, and fourteen lemons. Jessica takes 2 apples, and 9 pickles. How much fruit does Steve have left?"

The LLMs usually come back with an answer like "Steve has 53 fruits left" or "Steve has 39 fruits left."
The answers tend to miss the fact that some of the items are fruits and some are not, and some models also fail to pick out "fourteen" written as a word rather than a numeral. (The correct answer, by the way, is 47.)
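
For comparison, here is the whole "reasoning" the puzzle requires, written out as a few lines of Python just to show the steps:

```python
# The fruit puzzle worked step by step. The only "hard" parts are
# exactly the ones the LLMs fumble: knowing which items are fruit,
# and reading "fourteen" as 14.
steve = {"apples": 9, "pickles": 7, "pears": 13, "cherries": 11}
steve["strawberries"] = 2   # from Mariko
steve["radishes"] = 8       # from Mariko (not a fruit)
steve["lemons"] = 14        # "fourteen", written out as a word
steve["apples"] -= 2        # Jessica takes 2 apples
steve["pickles"] -= 9       # Jessica takes 9 pickles (not a fruit anyway)

FRUITS = {"apples", "pears", "cherries", "strawberries", "lemons"}
print(sum(count for item, count in steve.items() if item in FRUITS))  # 47
```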

The problem with LLMs is that they have been built to be "confidently wrong." There was a case that hit the tech press recently about a lawyer who used GPT to write a legal brief (search for the headline "Lawyer cited 6 fake cases made up by ChatGPT; judge calls it 'unprecedented'"). The lawyer knew enough to ask the GPT LLM whether the cases cited in the brief were real, and GPT confidently confirmed they were (spoiler alert: they weren't). When the judge read the brief, the lawyer got into a lot of trouble.

The GPT models hitting the market right now are being marketed correctly as CHAT-bots. Not coding bots. Not legal bots. CHAT bots. Think about it: when you chat with friends and family, do people make things up to tell a better story? Sure. Do they back down when someone calls them on it? Of course not. That's part of what makes chatting fun. The GPT models on the market right now make excellent fake chatting partners. They predict and generate text likely to mesh well with the text they are interacting with. That's all they do: they chat. They could, 100%, be hooked up to a game engine to provide better NPC background chatter than the 10-15 canned phrases NPCs cycle through now. But can they code? LOL, no.

Don't get me wrong, the GPT-type LLMs on the market right now sucked up all of the coding knowledge (both right and wrong) on StackOverflow and GitHub, and they can spit out code. Simple code will even run. Ask a GPT-type LLM to code you a loop which prints out a multiplication table, and it can do that.
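
For example, this is the level of self-contained snippet they handle reliably:

```python
# A short, self-contained task of the kind current LLMs generate fine:
# print a 10x10 multiplication table.
for i in range(1, 11):
    print(" ".join(f"{i * j:4d}" for j in range(1, 11)))
```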

But if you ask a GPT-type LLM to write you a program which will integrate X process with Y web service from Z group, using A security protocol and a backend database connection to B data server, it will try mightily to produce something like that, and on the surface it might even look somewhat passable. But it takes someone with experience to find the 11 minor flaws, 7 serious flaws, 3 security bugs, and 1 critical issue in the code. And once that code has been generated, now what? Who is going to know enough to deploy it? Are you going to give OpenAI the production passwords to your hosting server? What happens when it breaks? Who is going to fix it? Who is going to change it when the business wants something added or removed? I 100% guarantee you that no GPT-style LLM writes code with any sort of warranty. If a company fired all of its programmers and tried to replace them with GPT "AI", who is going to be there, knowing how it is all supposed to work, when it breaks? (And trust me, code always breaks. The internet is 100 million connected computers held together with twine, chewing gum, spinning plates, and swearing -- lots of swearing ^_^)

Long term, I see GPT-style "AI" being integrated into coding IDEs much the same way IntelliSense and similar predictive coding tools were integrated into Visual Studio and Eclipse a while ago; that is all the current "AI" trend is right now, just on steroids. Generating a code snippet is only a single step in the process. The rest of the steps involve knowing how the code works, knowing how all of the surrounding systems work, and using that knowledge to come up with a solution to a problem. THEN the code is written, tested, deployed, maintained, etc. GPT "AI" can only help with one of those steps.

So yes, coding will continue to be a viable career path in the near term, and likely the long term too. Heck, even if true AI arrives tomorrow, there will still be a need for coders to maintain the "AI"s taking over everything. AI doesn't maintain itself, after all. If The Matrix taught us anything, it taught us that ;)

Anna’s Answer

The answer to this question actually comes from a sci-fi book I read recently: God Emperor of Dune, the fourth Dune book by Frank Herbert. In that fictional world, lots of people are concerned about the human race eventually being replaced by technology. To paraphrase one of the characters, "Computers increase the NUMBER of things we can do without thinking, but what we need to be concerned about is WHAT we do without thinking." That is where computer scientists and programmers come in: they are the ones who translate real-world problems into something a computer can comprehend. There will always be new problems to solve, even with an AI to help us solve them faster.

Sean’s Answer

I have been a programmer for about 40 years, and AI cannot even come close to doing my job. The job is not just about writing code. I have not seen AI figure out why a program has a bug or how to fix it, and yes, AI writes code with bugs too. Another challenge is software engineering, which is different from just writing code. Often a library you develop is used by many other people, and making changes will break their code. Finding ways to minimize that breakage while still getting features and improvements into the library is not something AI is good at yet. To me, AI will just be another tool that software developers use.

Jonathan 〰️’s Answer

How would AI exist without computer scientists?

Really, AI is currently just a good pattern matcher. It learns from what people have written, drawn, and programmed, and then takes a best guess at what you want it to do. It's not really smart, and it can make very bad mistakes (especially when it tries to write code).

It's a very useful tool when programming, since AI can fill in a lot of boilerplate code for you and even give you good starting points for unit testing. AI-assisted coding tools are starting to become a booming business.
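
As an illustration, here is the kind of test scaffold an assistant can rough out in seconds (the `slugify` function is a made-up example, defined inline so the file runs):

```python
import re
import pytest

def slugify(raw: str) -> str:
    """Made-up example function under test, defined inline so the file runs."""
    return re.sub(r"[^a-z0-9]+", "-", raw.strip().lower()).strip("-")

# The kind of boilerplate test scaffold an AI assistant can rough out
# quickly; a human still has to decide which cases actually matter.
@pytest.mark.parametrize("raw, expected", [
    ("Hello World", "hello-world"),
    ("  trim me  ", "trim-me"),
    ("Already-Slugged", "already-slugged"),
])
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```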

In its current state, and probably for the foreseeable future, AI is just a really good assistant. It can't replace even basic engineers, and there will always need to be a computer scientist involved.

Prashanth’s Answer

It is your passion for computers and coding that should drive this decision. AI is a technology change, and it won't stop computer programming or the demand for code designers and algorithm experts.
If you enjoy studying computer science and get good at coding, you will find plenty of new technologies that interest you and carry you forward.

Prashanth recommends the following next steps:

Understand what you like in computer science and learn more about it
Master any one coding language and start coding
Learn more about AI and how it is developing. See how computer science experts are contributing to it

Yelena’s Answer

AI is just a new tool that makes research, analysis, programming, and other creative work easier. My advice is to study computer science with an emphasis on AI.