Artificial intelligence is no longer just assisting developers — it’s becoming the developer. 

A new wave of “software engineering agents” — powered by large language models and running on specialized neocloud infrastructure — is reshaping how software gets built.

AI agents aren’t new, but they have matured considerably: they are now capable of autonomously analyzing their environment, making decisions, taking actions and achieving goals.

These agents, or coding partners, help developers work more efficiently by taking on some of the more mundane activities. They also improve the developer experience, freeing devs to focus on more creative and challenging tasks.

Research firm Gartner noted that this is why AI agents and coding partners are transforming the way software engineering is being done, and why enterprises need to pay attention.

This is perhaps why so many companies are in on it, from startups like Anysphere, Augment, Qodo, Reflection AI, Replit and Sourcegraph to big players like Cognition, ByteDance, OpenAI and Google.

“In the last year, people are starting to realize how powerful agentic AI is,” said Michele Catasta, Replit’s vice president of AI. “All the products and demos that we’ve seen on social media are either making humans more productive, or they are substituting parts of the labor. There is also a class of product, including Replit, that allows people to create software. That's a major unlock.”

AI agents: Errors make the engine

Most AI coding agents start out making plenty of mistakes. Early versions write buggy code, misinterpret documentation or fumble at deployment. That’s not failure; it’s how the agents learn.

The trick is writing a good prompt, said Guy Gur-Ari, co-founder and chief scientist of Augment Code. For example, the company has a Prompt Enhancer product that takes into account the customer’s code base, so even a half-baked prompt can be turned into a mini plan of what the agent needs to do, he said.

“The prompt box is deceptively simple, because, especially with all the hype around these tools, people expect to type any sentence and get the agent to do it,” Gur-Ari said. “When a prompt lacks context or is not detailed enough, the agent starts off on the wrong track, and it’s hard to steer it back. That leads to people thinking the agent doesn’t work.”
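A prompt-enhancement step like the one Gur-Ari describes can be sketched in a few lines. This is a hypothetical illustration, not Augment’s actual Prompt Enhancer: the keyword retrieval and the plan format are stand-ins for whatever codebase-aware retrieval a real product would use.

```python
def enhance_prompt(raw_prompt, codebase_index):
    """Expand a terse prompt into a numbered mini plan using repo context."""
    # Toy retrieval: keep files whose paths share a keyword with the prompt.
    keywords = set(raw_prompt.lower().split())
    relevant = [path for path in codebase_index
                if any(kw in path.lower() for kw in keywords)]
    # Turn the retrieved context into concrete steps for the agent.
    steps = [f"Review {path} for related logic" for path in relevant]
    steps.append(f"Implement: {raw_prompt}")
    steps.append("Run the existing tests and fix any regressions")
    return "\n".join(f"{i + 1}. {step}" for i, step in enumerate(steps))


if __name__ == "__main__":
    plan = enhance_prompt("fix login timeout",
                          ["auth/login.py", "billing/invoice.py"])
    print(plan)
```

Even this toy version shows the idea: a half-baked prompt (“fix login timeout”) comes back as a small plan grounded in the files that matter, giving the agent a better starting track.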

Developers and teams that can harness these mistakes, using them as fuel in a tight, rapid feedback loop, quickly refine both their agents and their products.

Every AI agent, whether focused on writing code, deploying software, or automating workflows, gets better through feedback. Rapid iteration loops give agents the data and context to improve. That loop looks like this:

  1. The agent attempts a task

  2. System or human reviews the output

  3. Errors or inefficiencies are identified

  4. Agent retrains, retries or is guided with new prompts

  5. Repeat, faster and faster
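The five steps above can be sketched as a simple loop. The `attempt`, `review` and `refine` callables here are hypothetical stand-ins for an agent run, an automated or human review, and a prompt-refinement step — not any particular product’s API.

```python
def run_feedback_loop(attempt, review, refine, prompt, max_iterations=5):
    """Repeat attempt -> review -> refine until the output passes review."""
    for i in range(max_iterations):
        output = attempt(prompt)           # 1. the agent attempts the task
        errors = review(output)            # 2-3. review identifies errors
        if not errors:
            return output, i + 1           # success: result and tries used
        prompt = refine(prompt, errors)    # 4. guide with a new prompt
    return output, max_iterations          # 5. stop after the budget runs out


if __name__ == "__main__":
    # Toy "agent" that only succeeds once the prompt mentions the fix.
    attempt = lambda p: "fixed" if "fix the bug" in p else "buggy"
    review = lambda out: [] if out == "fixed" else ["output is buggy"]
    refine = lambda p, errs: p + " and fix the bug"
    result, tries = run_feedback_loop(attempt, review, refine, "write a parser")
    print(result, tries)  # the toy loop converges on the second try
```

The point of the sketch is the shape, not the internals: whatever the agent and reviewer actually are, tightening this loop is what turns errors into training signal.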

Fast iteration is vital

Time-to-iteration is the heartbeat of progress in software development. With the rise of AI agentic tools, how quickly systems can learn, adapt and deploy improvements makes the difference between pushing the edge and falling behind.

That said, some of the clunkiness of earlier tools is guiding the developer experience of today, which demands low-latency inference, scalable training and deep customization.

Before ChatGPT, software development was focused on latency, said Dedy Kredo, Qodo’s co-founder and chief product officer. User interface frameworks were “snappy and clean.” Then came ChatGPT with its chat experience.

“A lot of people thought chat was dead before ChatGPT came out,” Kredo said. “People hated chat, but we don't remember it. The experience with a chat was always ‘this bot that doesn't understand you.’”

There also wasn’t much speed associated with the application or software experiences around chat interfaces, he said. Then large language models came and made chat more useful.

“All of a sudden we started being okay with waiting a little bit longer for the response, and we got a lot more value,” Kredo said. “We're now in a process that, as these LLMs become more capable, the agents are becoming more capable and they can do longer horizon tasks. And for software development, we are okay with waiting, except if the agent is messing up. The next phase we'll need to see is very big tasks only taking days or hours, but I don't think we're quite there yet.”

What this means for developers

So what does this mean for developers? Well, according to Catasta, Gur-Ari and Kredo, it means more productivity and the ability to delegate and mold the AI agents. And as a result, we are likely to see a software boom.

“These agents are operating at the level of interns,” Gur-Ari said. “You can call them eager interns. That's roughly where they're at. They're improving rapidly, though.”

Now developers can be the tech lead of their agent, having them run in the background and even run a few of them in parallel to achieve tasks, he said.

Catasta is even more optimistic, saying that the mission is to “empower the next billion software creators.”

“Rather than building and competing in a space for tools and making a small amount of people even more productive, we are creating basically the new Excel for knowledge workers — a software that can be used by more than a billion people,” he said. “There are a lot of good ideas not implemented today because people lack software engineering skills. AI agents will give them those skills.”
