
highplainsdem

(56,462 posts)
Tue Jun 3, 2025, 09:16 AM

AI 'vibe coding' startups burst onto scene with sky-high valuations

Source: Reuters

Two years after the launch of ChatGPT, return on investment in generative AI has been elusive, but one area stands out: software development.

So-called code generation or “code-gen” startups are commanding sky-high valuations as corporate boardrooms look to use AI to aid, and sometimes to replace, expensive human software engineers.

-snip-

Windsurf, a Mountain View-based startup behind the popular AI coding tool Codeium, attracted the attention of ChatGPT maker OpenAI, which is now in talks to acquire the company for $3 billion, sources familiar with the matter told Reuters.

Its tool is known for translating plain English commands into code, sometimes called “vibe coding,” which allows people with no knowledge of computer languages to write software. OpenAI and Windsurf declined to comment on the acquisition.

-snip-

Read more: https://www.reuters.com/business/ai-vibe-coding-startups-burst-onto-scene-with-sky-high-valuations-2025-06-03/



These coding tools are already eliminating jobs. The article cites a study showing that hiring of software engineers with less than a year's experience fell 24% last year. Last month Microsoft announced worldwide layoffs of 6,000 people, 40% of them software engineers in Washington state.

This is hype-driven insanity. The AI-coding companies are still operating at a loss because of the cost of computing. The entire generative-AI industry is a bubble. See Ed Zitron's scathing analysis at https://www.democraticunderground.com/100220066913.

And using AI for coding results in errors that might not be caught.

From Barron's, May 23, 2025:

Microsoft Is Rolling Out AI Agents. Developers Are Finding Coding Errors.
By Adam Levine
Updated May 23, 2025, 4:40 pm EDT / Original May 23, 2025, 1:16 pm EDT

-snip-

The agent is still in development, but potential problems have emerged in the days since launch. Microsoft engineers have assigned bug fixes to the agent as a test, and the results are uneven, according to public repositories of the coding process. (Some of Microsoft’s software is open source, so the records are public.)

In one case, the AI agent made a bug fix in some software for iPhones late on Monday. On Tuesday morning, a Microsoft engineer came to work and discovered that there was an error in the agent’s code. That engineer tried to get the agent to fix the problem with simple prompts like, “fix the build error on apple platforms,” as is intended by Microsoft.

But those prompts failed to solve the issue, and other Microsoft engineers jumped in to try to get the agent’s code to work. The prompts became increasingly complex, and further from how Microsoft would like this agent to work. In all, 11 prompts to the agent from four different Microsoft employees didn’t fix the problem.

Early on Wednesday, non-Microsoft developers on GitHub got wind of this and other issues with GitHub’s coding agent. “How concerned should I be about AI agents being let loose on codebases like this?” one developer asked on GitHub.

-snip-



From InfoWorld, March 17, 2025:

https://www.infoworld.com/article/3844363/why-ai-generated-code-isnt-good-enough-and-how-it-will-get-better.html

-snip-

“Let’s be real: LLMs are not software engineers,” says Steve Wilson, chief product officer at Exabeam and author of O’Reilly’s Playbook for Large Language Model Security. “LLMs are like interns with goldfish memory. They’re great for quick tasks but terrible at keeping track of the big picture.”

As reliance on AI increases, that “big picture” is being sidelined. Ironically, by certain accounts, the total developer workload is increasing: according to The 2025 State of Software Delivery report, the majority of developers spend more time debugging AI-generated code and resolving security vulnerabilities.

-snip-

AI code completion tools tend to generate new code from scratch rather than reuse or refactor existing code, leading to technical debt. Worse, they tend to duplicate code, missing opportunities for code reuse and increasing the volume of code that must be maintained. “Code bloat and maintainability issues arise when verbose or inefficient code adds to technical debt,” notes Sreekanth Gopi, prompt engineer and senior principal consultant at Neuroheart.ai.

GitClear’s 2025 AI Copilot Code Quality report analyzed 211 million lines of code changes and found that in 2024, the frequency of duplicated code blocks increased eightfold. “Since AI-authored code began its surge in mid-2022, there has been more evidence every year that code duplication keeps growing,” says Bill Harding, CEO of Amplenote and GitClear. In addition to piling on unnecessary technical debt, cloned code blocks are linked to more defects—anywhere from 15% to 50% more, research suggests.
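To make the duplication pattern described above concrete, here is a hypothetical sketch (not from the GitClear report) of the kind of near-identical blocks AI completion tools tend to emit, next to the shared helper they miss:

```python
# Hypothetical illustration of duplicated AI-generated code blocks.

# What a completion tool often emits: two copy-paste near-twins,
# both of which must now be maintained (and can drift apart).
def total_order_price(items):
    total = 0
    for item in items:
        total += item["price"] * item["qty"]
    return round(total, 2)

def total_refund_amount(items):
    total = 0
    for item in items:
        total += item["price"] * item["qty"]
    return round(total, 2)

# The missed reuse opportunity: one shared helper both callers could use.
def line_total(items):
    return round(sum(item["price"] * item["qty"] for item in items), 2)
```

Each duplicated block is a separate place where a future bug fix can be forgotten, which is one reason cloned code correlates with higher defect rates.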

-snip-


The article notes that AI tools often waste more time than they save, in addition to making code less secure.

And it links to another article on the security problems, focusing on PII (personally identifiable information) and payment information being much less secure when AI is used for coding:

https://www.developer-tech.com/news/ai-coding-tools-productivity-gains-security-pains/

Apiiro’s Material Code Change Detection Engine revealed a 3x surge in repositories containing PII and payment details since Q2 2023. Rapid adoption of generative AI tools is directly linked to the proliferation of sensitive information spread across code repositories, often without the necessary safeguards in place.

This trend raises alarm bells as organisations face a mounting challenge in securing sensitive customer and financial data. Under stricter regulations like GDPR in the UK and EU, or CCPA in the US, mishandling sensitive data can result in severe penalties and reputational harm.


-snip-

Perhaps even more worrisome is the rise in insecure APIs. According to Apiiro’s analysis, there has been a staggering 10x increase in repositories containing APIs that lack essential security features such as authorisation and input validation.
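To show what the two missing safeguards named above actually look like, here is a hypothetical sketch (the function and token names are invented, not from Apiiro's analysis) of an API handler with and without authorisation and input validation:

```python
# Hypothetical sketch: the same endpoint logic with and without the
# safeguards Apiiro's analysis found missing (authorisation, input validation).

VALID_TOKENS = {"secret-token-123"}  # stand-in for a real credential store

def insecure_update_email(user_id, new_email):
    # The risky pattern: no caller check, no validation of inputs.
    return {"user_id": user_id, "email": new_email}

def secure_update_email(token, user_id, new_email):
    # 1. Authorisation: reject callers without a valid credential.
    if token not in VALID_TOKENS:
        raise PermissionError("missing or invalid token")
    # 2. Input validation: reject malformed identifiers and addresses
    #    before they reach storage or downstream systems.
    if not isinstance(user_id, int) or user_id <= 0:
        raise ValueError("user_id must be a positive integer")
    if "@" not in new_email or " " in new_email:
        raise ValueError("new_email does not look like an email address")
    return {"user_id": user_id, "email": new_email}
```

The insecure version happily accepts any caller and any payload, which is how PII and payment data end up exposed through repositories of generated code.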

ToxMarz

(2,443 posts)
1. "And using AI for coding results in errors that might not be caught."
Tue Jun 3, 2025, 10:17 AM

The time and cost of having experienced programmers review, catch, and fix the errors is pretty much the same as just having professional programmers write the code in the first place. It has to be done, and you would probably get a better solution. There is an allure to calling everything AI (personally I find it off-putting) that commands $$$.
