Why I Took Stanford’s LLM Class Without Knowing Python
As a designer, attending a rigorous 6 week class of building AI agents and apps using LLM in python, sounds counter intuitive, but hear me out...
Imagine This…
At the end of a 6-week class on large language models (LLMs) at Stanford, students—many with little or no coding background—showcased amazing projects with working code. Here are a few examples:
1. Personal Wine Sommelier
This project was an app that learns your wine tastes. Simply point your phone’s camera at a restaurant menu, and the app recommends wines you’ll like, complete with a simple description.
2. Feynman GPT
An AI chatbot trained on lectures by the famous physicist Richard Feynman. You can ask it questions about physics, and it answers in Feynman’s signature style.
3. AI-tuber
This tool helps aspiring YouTubers create video scripts based on their favorite creator’s style. Just tell the chatbot your video topic, and it will generate a script in the style of popular YouTubers like MrBeast.
4. Ask Neethi
Our team created a chatbot that answers questions about Indian government policies. It’s designed for educators, farmers, and policymakers who need quick, clear answers on specific topics.
There were many more ideas as well.
What amazed me most was that, in just six weeks, we all built projects that actually worked—even those of us who had never used Python before.
Why I Took the Class
I joined the class to get over my fears about Gen AI. I wanted to see how these tools really work, and learn how to create useful projects myself. The class focused on making LLMs easier to use for everyone, not just expert programmers. The instructors—Charlie Flanagan, Dima, and Anja—were incredibly supportive, making the whole experience even better.
Key Learnings
LLMs Are Becoming More Accessible and Affordable
Each week, we saw new models popping up. They’re not just cheaper and easier to use, but also more specialized and powerful than ever before.
Using LLMs for Skills, Not Just Knowledge
Modern LLMs are great at more than giving answers. We use them for tasks—like writing, summarizing, or even coding.
“Vibe Coding” Is Real
You don’t need to be a coding expert. Sometimes, it’s about experimenting, learning as you go, and getting the AI to help fill in the gaps.
Enterprise Uses of LLMs
We also learned about how big companies use these models, including:
AI agents (digital assistants)
Retrieval-Augmented Generation (RAG) for better answers
Tools like LlamaIndex and LangChain for building with LLMs
Organizing information using chunking, embeddings, and vector databases
Context Matters
How you phrase your input—known as prompt engineering—makes a big difference in what the AI produces. We discussed frameworks like RISEN, as well as protecting against “prompt injection” (when users try to trick the AI).
Hugging Face
We explored this platform, which makes AI models and tools available to everyone.
Some Reflections
AI and LLMs are changing unbelievably fast. Every week, new tools and ideas show up that make last week’s breakthroughs feel old. As Ethan Mollick says, “The AI you’re using now is the worst you’ll ever use.” If you think AI can’t do something, try again—it probably can now.
Who Will Succeed in the Age of AI?
Some industries will move slowly, but others could be transformed almost overnight, especially by startups. It’s up to us to lean in, experiment, and learn—not just with well-known models like OpenAI, but also others like Claude. And as we do, we should start thinking about “guardrails”—rules and guidelines to use these technologies safely and responsibly.
