Perhaps not Boring Technology after all
Executive Summary: The Evolving Role of LLMs in Programming
Summary
The article examines how large language models (LLMs) influence technology choices in programming, arguing that they may no longer favor widely used tools as strongly as previously assumed. It draws on recent experiences with coding agents that adapt well to new and lesser-known technologies.
Key Points
- Concerns exist that LLMs may bias technology choices towards popular tools based on their training data.
- Earlier models performed noticeably better with heavily represented languages such as Python and JavaScript than with less common ones.
- Newer coding agents are showing promising results with tools not widely represented in training data.
- The article references a study by Edwin Ong and Alex Vikati, who ran Claude Code more than 2,000 times and found a bias towards building rather than buying, along with a preference for specific tools such as GitHub Actions, Stripe, and shadcn/ui.
- The skills mechanism is being adopted rapidly by coding agent tools, extending their functionality with official skills published by various projects.
- Examples of projects releasing official skills include Remotion, Supabase, Vercel, and Prisma.
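To make the skills pattern above concrete: in Claude Code-style agents, a skill is typically a folder containing a SKILL.md file whose YAML frontmatter (name and description) is loaded into context, with the full instructions read only when the skill is invoked. The example below is a hypothetical sketch of that file format, not an official skill from any of the projects listed:

```markdown
---
name: pdf-report
description: Generate a PDF report from a CSV file. Use when the user
  asks for a printable or shareable report.
---

# PDF report skill

1. Read the input CSV with the agent's file tools.
2. Summarise the key columns in a short table.
3. Render the summary to PDF (for example, with whatever
   HTML-to-PDF converter is available in the environment).
4. Save the result next to the source file as `report.pdf`.
```

Because only the short frontmatter sits in context until the skill is actually needed, the mechanism stays cheap, which helps explain why projects can afford to ship many official skills.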
Analysis
The findings suggest that coding agents are evolving beyond the limitations of their training data, allowing for a more diverse range of technology choices. This adaptability could lead to more innovative solutions in programming, challenging the conventional advice to stick with 'boring technology'.
Conclusion
IT professionals should explore the capabilities of newer coding agents and consider integrating them into their workflows, especially for projects that involve less common technologies. Staying informed about the evolving landscape of AI-assisted programming tools will be crucial to leveraging their full potential.