AI tools for software engineers, but without the hype – with Simon Willison (co-creator of Django)

Ways to use LLMs efficiently as a software engineer, common misconceptions about them, and tips and hacks for interacting better with GenAI tools. The first episode of The Pragmatic Engineer Podcast.

The first episode of The Pragmatic Engineer Podcast is out. Expect similar episodes every other Wednesday. You can add the podcast to your favorite podcast player to have future episodes downloaded automatically.

Listen now on Apple, Spotify, and YouTube.

Brought to you by:

Codeium: Join the 700K+ developers using the IT-approved AI-powered code assistant.

TLDR: Keep up with tech in 5 minutes

On the first episode of the Pragmatic Engineer Podcast, I am joined by Simon Willison.

Simon is one of the best-known software engineers experimenting with LLMs to boost his own productivity: he’s been doing this for more than three years, blogging about it in the open.

Simon is the creator of Datasette, an open-source tool for exploring and publishing data. He works full-time developing open-source tools for data journalism, centered on Datasette and SQLite. Previously, he was an engineering director at Eventbrite, joining through the acquisition of Lanyrd, a Y Combinator startup he co-founded in 2010. Simon is also a co-creator of the Django Web Framework. He has been blogging about web development since the early 2000s.

In today’s conversation, we dive deep into GenAI and talk about the following:

  • Simon’s initial experiments with LLMs and coding tools

  • Why fine-tuning is generally a waste of time—and when it’s not

  • RAG: an overview

  • Interacting with ChatGPT’s voice mode

  • Simon’s day-to-day LLM stack

  • Common misconceptions about LLMs and ethical gray areas 

  • How Simon’s productivity has increased, and his generally optimistic view of these tools

  • Tips, tricks, and hacks for interacting with GenAI tools

  • And more!

I hope you enjoy this episode.

In this episode, we cover:

(02:15) Welcome

(05:28) Simon’s ‘scary’ experience with ChatGPT

(10:58) Simon’s initial experiments with LLMs and coding tools

(12:21) The languages that LLMs excel at

(14:50) Should you start with LLMs by learning the theory, or by playing around?

(16:35) Fine-tuning: what it is, and why it’s mostly a waste of time

(18:03) Where fine-tuning works

(18:31) RAG: an explanation

(21:34) The expense of running tests on AI models

(23:15) Simon’s current AI stack 

(29:55) Common misconceptions about using LLM tools

(30:09) Simon’s stack – continued 

(32:51) Learnings from running local models

(33:56) The impact of Firebug and the introduction of open source

(39:42) How Simon’s productivity has increased using LLM tools

(41:55) Why most people should limit themselves to 3-4 programming languages

(45:18) Addressing ethical issues and resistance to using generative AI

(49:11) Are LLMs plateauing? Is AGI overhyped?

(55:45) Coding vs. professional coding, looking ahead

(57:27) The importance of systems thinking for software engineers 

(1:01:00) Simon’s advice for experienced engineers

(1:06:29) Rapid-fire questions

Some takeaways:

  • If you are not using LLMs in your software engineering workflow, you are falling behind. So use them! Simon outlined several reasons that hold many devs back from using these tools, such as ethical concerns and energy concerns. But LLM tools are here to stay, and those who use them become more productive.

  • It takes a lot of effort to learn how to use these tools efficiently. As Simon puts it: “You have to put in so much effort to learn, to explore and experiment and learn how to use it. And there's no guidance.” In related research we did at The Pragmatic Engineer about AI tools, with about 200 software engineers responding, we saw similar evidence: those who had not used AI tools for 6 months were more likely to have a negative perception of them. In fact, a very common piece of feedback from engineers not using these tools was: “I used it a few times, but it didn’t live up to my expectations, so I’m not using it anymore.”

  • Use local models to learn more about LLMs. Running local models has two big benefits:

    • You figure out how to run them! It’s less complicated than you might think, thanks to tools like Hugging Face. Go play around with them and try out a smaller local model (see the sketch after this list).

    • You learn a LOT more about how LLMs work, because local models are less capable, so they feel less like “magic”. As Simon said: “I think it's really useful to have a model hallucinate at you early because it helps you get that better mental model of what it can do. And the local models hallucinate wildly.”
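
For a concrete starting point, here is a minimal sketch of what “try out a smaller local model” can look like, using the Hugging Face transformers library in Python. The model name, prompt, and settings are illustrative assumptions rather than anything prescribed in the episode: swap in whichever small instruct model fits your hardware.

```python
# Minimal sketch: run a small local chat model via Hugging Face transformers.
# Assumes the `transformers` and `torch` packages are installed. The model name
# below is only an example of a small instruct model; any similar model from
# the Hugging Face Hub works the same way.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # illustrative small model, not from the episode
)

messages = [
    {"role": "user", "content": "Explain what a SQLite index does, in two sentences."},
]

# The pipeline applies the model's chat template, runs generation locally,
# and returns the conversation with the model's reply appended.
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```

Running a prompt like this a few times, and watching a small model confidently get details wrong, is exactly the kind of hands-on calibration Simon describes.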

Where to find Simon Willison:

• X: https://x.com/simonw

• LinkedIn: https://www.linkedin.com/in/simonwillison/

• Website: https://simonwillison.net/

• Mastodon: https://fedi.simonwillison.net/@simon

Referenced:

• Jeremy Howard’s fast.ai: https://www.fast.ai/

• jq programming language: https://en.wikipedia.org/wiki/Jq_(programming_language)

• Datasette: https://datasette.io/

• GPT Code Interpreter: https://platform.openai.com/docs/assistants/tools/code-interpreter

• OpenAI Playground: https://platform.openai.com/playground/chat

• Advent of Code: https://adventofcode.com/

• Rust programming language: https://www.rust-lang.org/

• Applied AI Software Engineering: RAG: https://newsletter.pragmaticengineer.com/p/rag

• Claude: https://claude.ai/

• Claude 3.5 Sonnet: https://www.anthropic.com/news/claude-3-5-sonnet

• ChatGPT can now see, hear, and speak: https://openai.com/index/chatgpt-can-now-see-hear-and-speak/

• GitHub Copilot: https://github.com/features/copilot

• What are Artifacts and how do I use them?: https://support.anthropic.com/en/articles/9487310-what-are-artifacts-and-how-do-i-use-them

• Large Language Models on the command line: https://simonwillison.net/2024/Jun/17/cli-language-models/

• Llama: https://www.llama.com/

• MLC chat on the app store: https://apps.apple.com/us/app/mlc-chat/id6448482937

• Firebug: https://en.wikipedia.org/wiki/Firebug_(software)

• NPM: https://www.npmjs.com/

• Django: https://www.djangoproject.com/

• SourceForge: https://sourceforge.net/

• CPAN: https://www.cpan.org/

• OOP: https://en.wikipedia.org/wiki/Object-oriented_programming

• Prolog: https://en.wikipedia.org/wiki/Prolog

• SML: https://en.wikipedia.org/wiki/Standard_ML

• Stable Diffusion: https://stability.ai/

• Chain of thought prompting: https://www.promptingguide.ai/techniques/cot

• Cognition AI: https://www.cognition.ai/

• In the Race to Artificial General Intelligence, Where’s the Finish Line?: https://www.scientificamerican.com/article/what-does-artificial-general-intelligence-actually-mean/

• Black swan theory: https://en.wikipedia.org/wiki/Black_swan_theory

• Copilot workspace: https://githubnext.com/projects/copilot-workspace

• Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems: https://www.amazon.com/Designing-Data-Intensive-Applications-Reliable-Maintainable/dp/1449373321

• Bluesky Global: https://www.blueskyglobal.org/

• The Atrocity Archives (Laundry Files #1): https://www.amazon.com/Atrocity-Archives-Laundry-Files/dp/0441013651

• Rivers of London: https://www.amazon.com/Rivers-London-Ben-Aaronovitch/dp/1625676158/

• Vanilla JavaScript: http://vanilla-js.com/

• jQuery: https://jquery.com/

• Fly.io: https://fly.io/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.
