AI Tools Generate More Buggy Code

Top o' the morning! This is The Steel Man, keeping you informed of all things AI.

Here's what is on the docket today: 

  • Study finds AI tools generate more buggy code

  • 72-year-old congressman pursues master's degree in AI

  • Chinese Robo Taxi Rollout

Study finds AI tools generate more buggy code

Researchers from Stanford published “the first large-scale user study examining how users interact with an AI Code assistant,” and the results weren’t pretty.

First, some background on the tooling they investigated.

They specifically looked at programmers using codex-davinci-002, the most recent Codex model. The Codex models are essentially the code-centric descendants of the GPT-3 models we’ve all interacted with by now through ChatGPT. They were trained on the vast repository of code hosted on GitHub, which OpenAI could access efficiently thanks to its partnership with GitHub’s owner, Microsoft.

As of now, most programmers using this model interact with it through GitHub Copilot, which runs code-davinci-002 under the hood and suggests code snippets and autocompletes your functions when you ask for help.

So, what exactly did the study find about tools like these in their current iteration?

Well, their three primary results are as follows:

  • Programmers with access to an AI assistant are more likely to believe they wrote secure code than those without access.

  • That impression is false: programmers with access to an AI assistant actually produce more security vulnerabilities.

  • Perhaps obviously, programmers who do have access to an AI assistant but trust it less are more likely to produce secure code.

As with any study, we must take into account the limitations of the results. Indeed, most of the participants were university students and the sample size was 47, so the results may not generalize as well as one could hope.

Nevertheless, it is important to recognize that tools like GitHub Copilot should not be used without critical attention to security, and we cannot yet place complete faith in their output.
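To make this concrete, here's a hypothetical illustration (not an example taken from the study itself) of a classic vulnerability class that code assistants are known to suggest: building SQL queries by string interpolation rather than using parameterized queries.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: attacker-controlled `name` is spliced directly into the SQL text.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Safe: the driver escapes the value via a parameterized query.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"                  # classic injection payload
print(find_user_unsafe(conn, payload))   # leaks every row: [(1,), (2,)]
print(find_user_safe(conn, payload))     # matches nothing: []
```

Both versions look equally plausible in an autocomplete suggestion, which is exactly why the study's third finding matters: the value comes from reviewing the suggestion skeptically, not from the tool itself.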

72-Year-Old Congressman Pursues AI Degree

In other news, Don Beyer, a 72-year-old congressman from Virginia’s 8th district, made headlines this week with news that he’s been pursuing a master’s degree in AI by taking classes outside of work. He’s currently working through the prerequisite undergraduate coursework (mathematics and computer science) and will begin the graduate coursework in 2024.

While professionals taking night classes for advanced degrees is nothing new, this comes at a time when legislators like Rep. Beyer increasingly have to regulate and understand concepts far outside their wheelhouse (in this case, AI). Hopefully his example leads to more understanding of, and interest in, these up-and-coming technologies among lawmakers.

As for his stated incentive for pursuing an AI degree? He says he’s interested in the role that AI can play in suicide prevention.

Baidu Robo-Taxi Rollout in Beijing

Baidu has received the first license to test autonomous vehicles (without any personnel inside) in Beijing. This parallels developments in the US, where services such as Waymo and Cruise have begun offering rides to the general public (or select groups) in Phoenix and San Francisco, respectively.

This comes during a news-filled week for Baidu, which earlier announced it would expand the operating hours and the number of robo-taxis in its existing driverless service in Wuhan.

Thank you for reading this edition of The Steel Man!