Okay, so ChatGPT just debugged my code. For real.

But when you get stuck, finding help can be hard, because even a trusted colleague rarely knows the particular code you're debugging. The project I'm working on runs to 153,259 lines of code across 563 files -- and as programs go, that's on the small side.

So if I were to ask a colleague for help, I'd have to put together a request almost identical to the one I sent ChatGPT. But here's something worth noting: I remembered to include the handler line, even though I didn't know that was where the error was. As an experiment, I asked ChatGPT to diagnose my problem without mentioning the handler line, and it couldn't help. So there are clear limits to what ChatGPT can do for debugging, at least here in 2023.


Essentially, you have to know how to ask the right questions in the right way -- focused queries small enough for ChatGPT to process in full. And that skill takes real programming knowledge and experience.
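To make that concrete, here's a hypothetical sketch (in Python; the names and the bug are mine, not from my actual project) of why the snippet you paste matters: leave out the one line that registers the handler, and no reviewer, human or AI, has any way to find the bug.

```python
# Hypothetical example: a small, self-contained snippet that makes a
# debugging question answerable.
handlers = {}

def register(event, fn):
    """Map an event name to its callback -- the 'handler line'."""
    handlers[event] = fn

def dispatch(event, payload):
    """Invoke the handler for an event; returns None if none is registered."""
    fn = handlers.get(event)
    return fn(payload) if fn else None

def on_save(payload):
    return f"saved {payload}"

# The bug: a trailing space in the event name, so dispatch("save", ...)
# silently returns None. A reviewer can only spot this if the
# registration line below is included in the question.
register("save ", on_save)

print(dispatch("save", "report.txt"))  # prints None -- the symptom
```

Paste only `dispatch` and the symptom, and the question is unanswerable; include the registration line, and the bug is obvious. Knowing which lines matter is exactly the skill I'm talking about.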

Could I have fixed the bug on my own? Absolutely. I've never met a bug I couldn't fix. But whether it would have taken me two hours or two days (plus pizza, profanity, and copious caffeine), amid constant interruptions, is anyone's guess. ChatGPT, by contrast, fixed the bug within minutes, saving me a great deal of time and frustration.

Looking ahead to the (possibly dystopian) future

I can imagine a fascinating future where it's possible to feed ChatGPT all 153,000-odd lines of code and ask it to point out what needs fixing. Microsoft, which owns GitHub, already offers GitHub Copilot, a tool that helps programmers write code, and Microsoft is a major investor in OpenAI, the maker of ChatGPT.

While such a service might initially be limited to Microsoft's own development environments, I can picture a future where the AI has access to all the code stored on GitHub -- and therefore to any project posted there.


Given how effectively ChatGPT identified my error from the code I provided, I can definitely foresee a future in which programmers simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs across entire projects.

Now, let's venture into darker territory.

Imagine being able to point ChatGPT at the GitHub repository for a project and have it find and fix bugs. One way this could work is for ChatGPT to present each bug it finds for your approval, so you can apply the fixes yourself.

But what if you asked ChatGPT to just fix the bugs, without reviewing all the code yourself? Could it slip something malicious into your code?


And what if an extraordinarily capable AI had access to nearly all the world's code in GitHub repositories? What could it hide in that vast codebase? What malevolent things could an AI do with access to all our code?

Let's play a little thought game. What if the AI were programmed with Asimov's First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm"? Could it conclude that our entire infrastructure is harming us? With access to all the code, it could quietly plant backdoors that let it, say, shut down the power grid, ground aircraft, or bring traffic to a standstill.

I fully admit the scenario above is exaggerated and alarmist. But it's not entirely implausible. After all, while programmers do review the code in their GitHub projects, nobody can practically examine every line of every project.


As for me, I'll try not to dwell on these possibilities. I don't want to spend the rest of the 2020s curled up in a ball, rocking back and forth on the floor. Instead, I'll keep using ChatGPT to write and debug small routines, keep my head down, and hope that future AIs won't exterminate us all in their pursuit of "not allowing a human being to come to harm."

What do you think of ChatGPT's debugging abilities? Helpful or terrifying? Do you think the AIs will plot our demise while we sleep, or that we'll watch our own destruction with eyes wide open? Or are you, like me, going to try not to think about it too much because it's headache-inducing? Let me know in the comments below. While we still have the chance.
