Getting Hands-On with Windsurf
Learn how to build a simple web app using Windsurf.
Building a functional web app in ten minutes might sound unrealistic, but in this lesson, we'll show exactly how it works. Step by step, you'll see how Windsurf and Cascade can help create, debug, and refine a basic to-do list app using just vanilla HTML, CSS, and JavaScript. No frameworks, no setup shortcuts: just clear, transparent development inside the IDE.
Everything in this example happened inside the editor we set up in the last lesson. The process is quick, just a few minutes end-to-end. The key is knowing how to communicate with Cascade effectively. Instead of vague or incomplete prompts, give it clear, specific instructions, just as you would when collaborating with another experienced engineer.
How to build a simple web app with Windsurf
We start exactly where we should: staring at an empty Windsurf workspace. If you've used VS Code, your fingers already know the lay of the land; Windsurf is a fork, so every extension and key combo you love came along for the ride. We pressed the "Code with Cascade" button, which in our case was ⌘+L. The Cascade panel slides in on the right, demarcating a space that feels less like a search bar and more like the colleague who always has a whiteboard marker handy. This distinction is psychological gold: a search bar is where you look for answers; a colleague is someone you brief confidently and expect results from.
A lot of people dump long paragraphs into AI chats, but verbosity hides the actual ask. We prefer the opposite: one dense sentence packed with constraints. Here's the exact text we pasted:
Create a minimal Todo-List web app using only vanilla HTML, CSS, and JavaScript. Provide index.html, styles.css, and script.js in the project root; let users add tasks, mark them complete, delete them, filter All / Active / Completed, toggle light-and-dark themes, and persist everything to localStorage. Don't use any external libraries or CDNs. When finished, run the app, add two sample tasks, and confirm all features work.
That's roughly sixty words, yet it delivers purpose, scope, file structure, feature list, a hard prohibition, and a self-test. It's the kind of micro-spec you'd hand a competent coworker before a coffee break. Notice that we are not micromanaging CSS colours or HTML semantics; if we hire a pro, we don't specify how many tabs of indentation they must use. We state outcomes and rely on professional defaults. That trust is an important psychological signal: it tells Cascade we believe it can handle autonomy, but we'll still inspect the result.
We press "Enter" and lean back. Cascade spends maybe twelve seconds thinking; the spinner flicks, and suddenly three files appear in our Explorer. It then asks whether we want to start a local server to test the app, and suggests running python3 -m http.server 8000.

If we now open localhost:8000 in any browser, we see a centered to-do box already waiting with demo items. One click flips to the light theme; another click on a checkbox strikes out a task. Total keystrokes from us: precisely the prompt. This took around five minutes, counting the time it took us to decide on the wording of our initial prompt.
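To make the "persist everything to localStorage" requirement concrete, here is a minimal sketch of the kind of persistence logic a generated app like this typically contains. The task shape, function names, and storage key are our own illustration, not Cascade's actual output; we inject the storage object so the logic also runs outside a browser.

```javascript
// Hypothetical persistence layer for a vanilla-JS to-do app.
// In the browser, `storage` would be `window.localStorage`; passing it
// in as a parameter keeps the functions testable anywhere.
const STORAGE_KEY = "todos";

function saveTasks(storage, tasks) {
  // Tasks are plain objects like { text: "Buy milk", done: false }.
  storage.setItem(STORAGE_KEY, JSON.stringify(tasks));
}

function loadTasks(storage) {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : []; // empty list on first visit
}

// A tiny in-memory stand-in for localStorage, for demonstration:
const fakeStorage = {
  data: {},
  setItem(key, value) { this.data[key] = value; },
  getItem(key) { return key in this.data ? this.data[key] : null; },
};

saveTasks(fakeStorage, [{ text: "Buy milk", done: false }]);
console.log(loadTasks(fakeStorage)); // → [ { text: 'Buy milk', done: false } ]
```

Serializing the whole task array on every change is crude but entirely adequate at this scale, which is exactly the kind of pragmatic default you should expect from a prompt that never mentioned performance.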
Here's the important thing: we are not clapping yet. Fast output isn't automatically good output. A senior engineer knows "works on first run" is just the opening handshake; the real test is whether it works correctly under the variations you care about.
Beyond first impressions
When we inspected the light mode, we immediately spotted a classic oversight: the "TODO" text was nearly invisible, rendered as pale gray on a white background. Cascade had nailed the dark-mode styling, then phoned it in on the flip side. We also tried the drag-and-drop reordering feature it had spontaneously added, and noticed items jumping unpredictably, landing in the wrong place, or sometimes refusing to move at all. Classic AI optimism: ambitious bonus features, skipped QA. We weren't annoyed, though; this scenario was expected. When a teammate ships a rushed feature, you don't rewrite it yourself; you clearly communicate what's broken and let them handle it. Peer-to-peer accountability.
Cascade, please review the current Todo-List app: in light mode the task text blends into the background; adjust colors or contrast so each item remains clearly readable while preserving dark-mode styling. Additionally, the drag-and-drop re-ordering either fails to trigger or drops items in the wrong position; debug the event handlers and update the logic so tasks can be smoothly dragged to any index without duplication or loss. After fixes, rerun the app, demonstrate correct light-mode visibility and successful re-ordering with three sample tasks, then summarize the changes made.
Again, note the clarity here: we precisely identify what's wrong and specify exactly how we'll verify the fixes. We're not dictating low-level implementation details (Cascade handles that), but we do demand proof of correctness. Seeing is believing.
Less than twenty seconds later, Cascade had updated our files. It adjusted a CSS variable for contrast and completely reworked the JavaScript drag-and-drop logic. Did we immediately deep-dive into the diff to scrutinize every line? Nope; we opened the live preview first. Instantly, tasks were readable in both themes, and drag-and-drop worked as intuitively as you'd expect. Cascade provided a neat summary of exactly what it changed, but we didn't obsess over the exact JS method names or specific CSS hex codes. The outcomes mattered more: no glitches, no surprises, just clean functionality.
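A common root cause of reordering bugs like the one we reported is index arithmetic: removing the dragged item shifts every later index before the re-insert happens. A correct fix usually reduces the drop handler to a small pure move function. The sketch below is our own illustration of that pattern, not Cascade's exact code:

```javascript
// Move a task from one index to another without duplication or loss.
// Pure function: returns a new array and leaves the input untouched,
// so the UI can simply re-render from the result.
function moveTask(tasks, from, to) {
  const next = tasks.slice();           // copy, so callers keep the original
  const [moved] = next.splice(from, 1); // remove the dragged item first
  next.splice(to, 0, moved);            // then insert it at the drop index
  return next;
}

const tasks = ["write intro", "fix contrast", "ship it"];
console.log(moveTask(tasks, 0, 2)); // → [ 'fix contrast', 'ship it', 'write intro' ]
```

Isolating the reorder into a pure function is also what makes "demonstrate successful re-ordering with three sample tasks" trivially checkable, which is exactly the kind of proof our follow-up prompt demanded.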
Human time investment for these fixes: typing exactly one short, clear paragraph. Total session time at this point: still comfortably under ten minutes. We now had a robust to-do list app with full persistence, theming, and polished drag-and-drop, built entirely by clearly communicating our goals to Cascade.
By now, the key pattern here should feel blindingly obvious: the difference between a clean, impressive first draft and an error-filled mess always comes down to your original instructions. AI isn't a magic crystal ball; it's a brutally literal executor of your words. If you feed Cascade fuzzy, ambiguous ideas ("maybe it could have this..."), you'll end up babysitting it all afternoon. But approach Cascade with clear, specific expectations, exactly as you'd brief a professional dev, and it rewards you with speed and quality.
The real lesson here isn't that AI makes everything trivial. It's that when your own thinking is clear, specific, and organized, an AI assistant becomes an incredible multiplier of your productivity. Vague ideas in, vague code out. Tight, well-defined prompts in, production-grade results out, fast. Cascade's strength mirrors your clarity.
Note: The examples demonstrated in this course are specific to our particular sessions and should be treated as illustrative rather than definitive. AI code generation is inherently non-deterministic: the exact output, styling choices, functionality, and even the bugs you encounter when using Windsurf will likely differ from what we show here, and you may see entirely different approaches to the same problem. The core principles and workflows we demonstrate remain consistent, but expect your own Windsurf sessions to produce unique results each time.
We've demonstrated how quickly you can launch something basic. But now imagine extending this further. Within the next few minutes, you could prompt Cascade to add keyboard shortcuts, color-coded priorities, JSON export/import, or even automated unit tests. Each incremental enhancement is one tight, precise prompt away, and the feedback loop shrinks dramatically when you're confident in what you're asking. Windsurf's entire ecosystem, built on familiar VS Code foundations, means you spend zero time relearning environments or switching contexts. You just focus on clearly communicating your next step.
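Several of the features in our original prompt, like the All / Active / Completed filter, come down to one small pure function, which is what makes "prompt Cascade to add automated unit tests" such an easy next step. Here is an illustrative sketch of that filter logic (the names and task shape are our own assumptions, not the generated code):

```javascript
// Return the subset of tasks matching a filter: "all", "active", or "completed".
// Pure and side-effect free, so it is trivial to unit-test.
function filterTasks(tasks, filter) {
  switch (filter) {
    case "active":
      return tasks.filter(task => !task.done);
    case "completed":
      return tasks.filter(task => task.done);
    default:
      return tasks; // "all" (or any unrecognized filter) shows everything
  }
}

const sampleTasks = [
  { text: "buy milk", done: false },
  { text: "call mom", done: true },
];
console.log(filterTasks(sampleTasks, "active").map(t => t.text)); // → [ 'buy milk' ]
```

When the app is factored this way, each enhancement you prompt for, from priorities to JSON export, tends to be another small function like this one, and that is what keeps the feedback loop tight.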
What's next?
So what did we actually accomplish with this simple to-do list? More than just checking off features: we demonstrated how clear, structured prompts can drive meaningful progress. Cascade didn't just fill in code snippets; it helped with debugging, design decisions, and implementation by following our intent.
This is more than autocomplete. It's a tool that responds to your thinking, one that improves as you get better at defining problems and communicating your goals clearly. The value isn't in typing less, but in working smarter with systems that understand your project and context.