Two years after my 7 Andrew Ng courses: the 2026 path I'd actually take
People still message me asking if the 7 Andrew Ng courses I recommended in 2023 are the right path today. Short answer: most of them, yes, but with a different roadmap around them. Here is my 2026 update, with per-course verdicts and a forked path for builders and operators.
The most common question, almost word for word: "is this still the right path in 2026?" I have to admit, I have been putting off a full answer, because the honest answer is layered: most of Andrew Ng's courses still hold up, but the roadmap I built around them does not. This post is the update I owe you.
If you are starting from zero today, like I was in 2023, the foundation part of this roadmap still applies to you. You just need less of it than I did, for reasons I will get into.
A heads up before we start: I might get some of these verdicts wrong, and I would genuinely like to hear back if your experience disagrees with mine. I am still a student in this subject. I just happen to have spent the last two years trying to ship things with what I learned.
1. What those 7 courses actually gave me
Looking back with two more years of context, I think the 2023 courses gave me two things very clearly, and left me short on a third.
They gave me vocabulary. Prompt engineering, retrieval-augmented generation, embeddings, function calling, chain-of-thought reasoning. These are still the terms I use in almost every technical conversation I have. When I am explaining a tricky bug to a teammate who is newer to AI, we share a language. That language came from Andrew.
They gave me confidence. I came from a non-technical background. Without the structure of those courses, I am not sure I would have had the nerve to start building. A good course can do that. Not teach you everything, but convince you that the next step is reachable.
What they could not give me was taste: the sense of when a prompt is going to be brittle, when an evaluation is actually measuring the thing you care about, when a cost pattern is about to explode on you in production. That only came from breaking things in front of real users. I wrote about the first year of that in this 2024 post — three months in and still getting stuck. Two years later, I get stuck on different things, but I still get stuck.
The courses got me to "I can read the docs without panicking." Everything after that came from shipping.
2. Why the old roadmap ages faster now
Here is the part of the 2023 post I would most want to rewrite.
I framed the seven courses as a complete on-ramp. They are not, and they never were. They are a foundation layer. The roadmap around them has changed more in the last two years than the courses themselves have.
My short thesis: AI pair-programming has compressed execution faster than it has compressed judgment. That is why the foundation courses still teach good things (judgment ages slowly) and why almost nothing else in my learning stack looks the way it did in 2023.
Here is my own timeline, in plain language:
- Late 2022: ChatGPT launches. Everyone becomes a prompter.
- March 2025: Google Gemini 2.5 Pro becomes my daily driver for coding. The model starts writing code I would actually ship.
- Around March 2025: I subscribe to Claude Max, Anthropic's premium plan. That gets me Claude Code, a terminal-based AI coding assistant that writes and edits code alongside you, inside your own project. It quickly takes over a meaningful share of my daily work.
- March 2026: I start dual-wielding Codex, OpenAI's equivalent coding assistant, with Claude Code.
- April 2026: I cancel Claude Max after 13 months and move primarily to Codex. Thirty-day experiment. Jury is out.
Every step on that list was a workflow change, not a course I took. The thing you cannot get from a browser lecture is the experience of watching an AI assistant refactor your own repository while you are reading the suggestion. That is closer to code review under pressure than to programming, and it is where a lot of the 2026 learning actually happens.
For what it is worth, I also did the building side of it. Between the 2023 post and now I shipped three things: DIALOGUE, an AI podcast generator; STRATUM, a 9-agent marketing platform; and the course platform on this site. I also kept taking the occasional course, mostly foundation fills like Google's IT Automation with Python and the Cybersecurity Specialization. The courses kept me literate. The products made me competent.
One practical implication of all of this: where your learning money goes is different now. In 2023 you paid for structured lectures and built on the free tier of a handful of APIs. In 2026, the lectures are a small part of the bill; the tools you write code with are a larger ongoing cost. If you are budgeting to learn AI now, budget for the tools too, not just the courses.
If you are not a builder, if you are a marketing leader or an operator who will never run Claude Code yourself, the implication is the same stated differently: what you are really buying from a 2026 learning path is judgment about the tools, not fluency with any particular one. The tools will churn. The judgment about when to trust them, when to challenge their output, and when to put a human in the loop, is the durable part.
3. The 2026 per-course verdict
This is the section I think will be most useful to you, so I am going to be blunt, and I will split the verdict wherever the honest answer depends on who you are.
Who this is for: anyone deciding what to take today. Whether you are a marketing operator who needs enough AI literacy to lead a team, or a builder who wants to ship, the verdicts below distinguish the two where it matters.
| 2023 course | 2026 verdict | Why |
|---|---|---|
| Machine Learning Specialization | Time-box: 1–2 weeks, skim the math | Vocabulary only for most readers. The math-heavy derivations are worth it if you are heading into research. Otherwise, skim. |
| Generative AI for Everyone | Still take, for everyone | Best non-technical framing of generative AI I have seen. Ages gracefully. Genuinely give it to your CEO. |
| ChatGPT Prompt Engineering for Developers | Still take, but pair with the cookbooks | Core patterns still apply. Pair with the Anthropic and OpenAI cookbooks for 2026-era APIs. |
| Building Systems with the ChatGPT API | Take for the mental model; do not trust the API specifics | Moderation, chain-of-thought, chained prompts, output checks. Still right. The specific API surface has moved on more than once. |
| Neural Networks and Deep Learning | Skip unless you are heading into research | I flagged the redundancy with the ML Specialization back in 2023, and I would flag it harder now, for both builders and operators. |
| Functions, Tools, and Agents with LangChain | Check the current state before committing | When I built a 9-agent platform in 2025, I did not use LangChain. Earlier that year I tried a LangGraph agent and hit performance ceilings that pushed me to a simpler orchestration. The agentic patterns are the real lesson; the specific framework choice is yours, and the field has moved since my 2025 experience. I would not rule LangChain in or out without a fresh look. |
| Vector Databases: from Embeddings to Applications | Still take, keep it short | These are the patterns powering the search on this site right now. Skip provider-specific chapters that have aged out. |
Take these as one builder's view, not a universal ranking. If you are coming in with a different goal (research, a very specific stack), your table might look different.
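If you have not built with embeddings yet, the vector-database row above boils down to one pattern: turn text into vectors, store them, and rank by similarity. Here is a minimal sketch of the ranking step. The document names and vectors are made up for illustration; in a real system the vectors come from an embedding model and live in a vector store:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings". In practice, an embedding model produces these,
# and a vector database stores and indexes them for you.
docs = {
    "pricing page": [0.9, 0.1, 0.0],
    "refund policy": [0.1, 0.9, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

def search(query_vec, k=1):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05]))  # ranks "pricing page" first
```

Everything a vector database adds, indexing, filtering, scale, sits on top of that comparison.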
4. What I would add now, and how the path forks
At this point the path forks. An operator and a builder need different second layers.
If you are a marketing leader or operator
Tier 1. Take the "Still take" rows above. Focus on Generative AI for Everyone and Prompt Engineering for Developers. You are buying vocabulary and instinct.
Tier 2. Learn enough about evaluations (a way to measure whether an AI output is actually good, not just plausible) and agent design (how to stitch multiple AI steps into a reliable workflow) to make team decisions. You do not need to build these yourself. You do need to know what questions to ask. What exactly are we measuring? What does failure look like? Where do the traces live? How often are we reviewing real outputs? If nobody on your team can answer those clearly, you are probably looking at a demo, not a durable system.

And if someone tells you "the AI will figure it out," ask what raw materials it is being grounded in. A strong model can help turn real traces, accepted outputs, failure cases, and internal documents into draft eval criteria and a seeded dataset. That is useful. It still needs a human to review the rubric and calibrate the standard.
Tier 3. Redesign one workflow. Pick the smallest real thing in your team's week, a weekly report, a brief intake, a QA review, and rebuild it with AI in the loop. I come back to how I would frame that rebuild at the end of this post, but the work itself is yours either way.
If you are a builder
Tier 1. Same foundation courses from the table above.
Tier 2.
Start by building, not studying. Set up a free GitHub account and create your first repository. Learn enough Git to make small commits and roll back cleanly when you break something. Then start building on a real project with one of the leading coding assistants. I have used both Claude Code with Opus 4.7 and OpenAI Codex with GPT-5.4. The walkthrough is the work, and if you wait to feel ready before you start, you will not start. Read the docs for the tool in front of you when you get blocked, and do not turn "studying the stack" into another delay.
Once something is running, start learning evals the practical way. Save real inputs and outputs. Gather some ground-truth materials. Decide what good and bad look like for each step in the workflow. Then use a strong coding assistant, Claude Code with Opus 4.7 on xhigh thinking, or Codex with GPT-5.4 on xhigh thinking, to help you scaffold the eval framework, propose criteria, and generate an initial dataset grounded in those materials. The AI can do a lot of that setup work. What it should not do is silently define your standard for you. Review the rubric yourself.
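To make the shape of that concrete, here is a minimal sketch of what an early eval harness can look like. Everything in it is illustrative: `generate()` is a stand-in for your real model call, and the checks are examples of human-decided criteria, not a recommended rubric. The point is the structure, a fixed dataset, explicit pass/fail checks, and a score you can track across prompt and model changes:

```python
def generate(prompt: str) -> str:
    # Stand-in for your real model call (Claude, GPT, local model...).
    return "Subject: Q3 report is ready\n\nHi team, ..."

# Saved real inputs, plus what "good" means for each, decided by a human.
DATASET = [
    {
        "input": "Draft a subject line and opener for the Q3 report email.",
        "checks": [
            lambda out: "Q3" in out,             # stays on topic
            lambda out: len(out.split()) < 120,  # respects the length budget
            lambda out: "Subject:" in out,       # follows the required format
        ],
    },
]

def run_evals() -> float:
    """Run every check against every case and return the overall pass rate."""
    passed = total = 0
    for case in DATASET:
        output = generate(case["input"])
        for check in case["checks"]:
            total += 1
            passed += bool(check(output))
    return passed / total

if __name__ == "__main__":
    print(f"pass rate: {run_evals():.0%}")
```

Once this exists, every prompt tweak or model swap gets a number instead of a vibe, which is the whole point of the exercise.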
Then learn the basics of MCP, the Model Context Protocol: the layer that lets a tool like Codex talk directly to the rest of your stack. In my own workflow on this repo, that currently means Chrome DevTools, Playwright, Supabase, GitHub, Stripe, Resend, and Cloudflare. MCP did not exist in 2023, and it is now part of how I build.
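If you want to see what wiring one of those up looks like, here is a sketch of a project-level MCP configuration. I am following the general shape of Claude Code's `.mcp.json` file; the exact file name, schema, and package names vary by tool and change often, so treat every identifier below as an assumption to verify against current docs:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```

Each entry registers a local server process the assistant can launch and call as tools; the assistant discovers what each server exposes when it connects.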
Tier 3. Build something someone else uses. Not a tutorial. Not a copy of a demo. Something with a real user, even if that user is one person on your own team.
Both paths share one rule: you are not done until something real is running.
A practical note, because money matters: if $20 a month on a coding assistant is out of reach for you right now, skip the tooling tier for the moment. The foundation courses plus free-tier API keys still work. That is how I started in 2023, and it is still a real path.
5. What I would skip or time-box now
Three traps, all easy to fall into because they feel like progress.
- Cert-collecting as procrastination. I have done this. It feels productive. It is not a substitute for shipping. Take the foundation certs, then stop counting them.
- Framework-of-the-month courses. If a course is tightly bound to a specific framework that is less than two years old, be cautious. Read the framework's own docs instead, and come back to courses when the field has settled.
- Math-heavy deep-learning theory, unless you are moving into research. A builder does not need to derive backpropagation. A leader does not need to either.
6. The course I ended up building
After the foundation courses, the gap I kept hitting was not technical. It was operator-level judgment. How do you redesign a marketing team around AI, instead of just calling a few APIs? How do you decide what humans should still do, and what the machine has finally earned? Nobody was teaching that in a way that matched what I was seeing at work.
So I built the course I wish I had been able to take after Andrew Ng's seven. It is called AI-Native Media Operations, and it lives on this site. 7 modules, 16 templates, roughly 3 hours of video, yours to keep. It is the framework I point the operator track toward, because I believe in it.
One link, one pitch. If the seven Andrew Ng courses did their job for you, and the verdicts above helped you trim the roadmap, the next rung is real work, whether you take my course or not.
If you took the 2023 list, I would genuinely love to know which one paid off most for you and which one you wish you had skipped. If you are starting today, what is making you hesitate?
That's it from me for now.
Cheers, Chandler





