Yesterday, we ran an in-person unconference with a big question at its heart: what does software engineering look like in the age of large language models?
We weren’t so much interested in prompt hacks or productivity tools. We weren’t asking how to pass LeetCode with ChatGPT, or automate our Jira boards. We were trying to reckon with something deeper: if the act of writing code is being transformed by tools that can now write it alongside us—or for us—then what does it mean to be a software engineer?
Roughly thirty people joined us for the afternoon: engineers, team leads, researchers. Folks who’ve built things and broken things. Folks who’ve mentored and hired and shipped and debugged under pressure. And folks who are deeply curious—and perhaps a little unsettled—about what AI is doing to the discipline they’ve spent their careers practicing.
We ran things under Chatham House Rule, so nothing here is attributed. But here’s what emerged from the sessions.
Interested in exploring this idea more? Join our related LinkedIn Group.
Hiring Is Already Being Disrupted
One of the topics we kept coming back to was hiring. AI is already disrupting every part of the pipeline: resumes are machine-written, cover letters are increasingly indistinguishable from AI paste, and even interviews are becoming something you can prep for with an LLM.
Some participants shared their unease at how many early-career applicants are submitting code they likely didn’t write or even understand. Others talked about the awkwardness of banning AI tools during interviews, knowing full well these tools are part of the modern workflow.
What emerged was a shared sense of tension: if everyone is using AI, how do you distinguish between skill, potential, and dependency?
It’s not just about catching people out—it’s about figuring out what we’re even assessing. Technical skill? Critical thinking? Prompt literacy? Cultural alignment? The conversation didn’t yield answers, but it surfaced a lot of sharp, necessary questions.
Key takeaways:
Traditional hiring signals are eroding fast
Interviewing practices need to evolve alongside developer workflows
We’re not sure what we’re actually measuring anymore
Questions to reflect on:
What do we really want to understand about a candidate?
How can we responsibly evaluate developers in an AI-native environment?
What new hiring practices might emerge that treat AI literacy as a positive signal?
Juniors Are Being Left Behind
Another topic covered extensively was the question of what happens to junior developers when AI becomes the default assistant.
There’s a real risk that foundational learning—how to read an error message, how to trace a bug, how to design something from scratch—is being bypassed. Not maliciously, but subtly, over time. And without those foundations, growth stalls.
Some teams are trying to adapt mentoring models. Others are rethinking interview processes altogether: asking candidates to critique AI-generated code rather than write it from scratch. But no one’s nailed this yet.
How do we teach the craft when the tools obscure the craft? That question hung over much of the day.
Key takeaways:
Juniors may not be gaining deep foundational skills
AI is changing what early-career learning looks like
Teams need to intentionally redesign mentorship structures
Questions to reflect on:
What should “learning by doing” look like when AI is doing the work?
How can we support juniors in building lasting intuition and judgment?
What mentorship models are suited to AI-assisted development?
We’re Still Not Sure What Makes a “Good Engineer” Anymore
This thread connected many of the conversations: what even is a software engineer now?
In an LLM-enabled workflow, raw output speed isn’t impressive. Knowing a framework inside-out might be less useful than knowing what’s worth delegating to an AI. Some participants suggested the best engineers now are the ones who ask better questions. Or who can filter good AI output from bad. Or who understand how to synthesise ideas, not just implement solutions.
But how do you interview for that? And more importantly, how do you support people to develop it?
Key takeaways:
Prompt strategy, synthesis, and critique are emerging as core skills
Traditional engineering archetypes are being redefined
Teams need new ways to support and evaluate evolving competencies
Questions to reflect on:
How has your own definition of “good engineer” changed?
What traits are you seeing become more valuable in your team?
How can we design career growth around these new skillsets?
Speed, Metrics, and the Illusion of Progress
A few sessions drifted (intentionally) into uncomfortable territory. As AI tools accelerate our output, they also invite bad incentives. If you can ship five PRs an hour, are you adding value—or just motion?
Several folks spoke about metric bloat: counting commits, tickets closed, words generated. There was quiet anxiety that AI is making us faster, but not necessarily better. That performance might be flattening into productivity theatre.
At the same time, there was curiosity: could we use AI to surface more meaningful signals? Like code clarity? Design evolution? Collaborative impact? The room didn’t land on answers—but the questions were sticky.
Key takeaways:
Output speed isn’t a proxy for quality
Common engineering metrics may become misleading in the AI era
There’s a need for better ways to evaluate meaningful work
Questions to reflect on:
What are you measuring today that might be misleading tomorrow?
Can AI help us track quality, not just quantity?
What would a healthier, more honest set of performance signals look like?
This Is a Cultural Shift, Not a Technical One
Maybe the most important thread of all: AI isn’t just a tooling change. It’s a cultural one.
Teams need space to process that. We heard stories of junior engineers embracing LLMs eagerly, seniors resisting them outright, and staff-level folks cautiously experimenting. That dynamic alone creates tension.
There was a lot of appreciation for teams building space to learn together—brown bags, internal prompt libraries, “failure file” talks. Not because it’s a productivity boost, but because it makes sense of the new landscape together.
Key takeaways:
Cultural adaptation is lagging behind technical adoption
Teams need psychological and conversational space to adjust
Shared learning rituals help normalize and refine practice
Questions to reflect on:
What are the cultural signals around AI use in your team?
How do you make room for learning without pressure to perform?
What rituals or routines help your team evolve together?
It’s clear we’re only getting started
If there was one thing this unconference made clear, it’s that the hard questions are only beginning.
What does it mean to be a “senior engineer” when the machine can do your job faster?
How do we preserve mentorship when junior devs never struggle in the same ways?
How do we design teams where humans and machines collaborate—but humans still grow?
What kinds of engineering values matter in a world of AI scaffolding?
These aren’t hypotheticals. They’re starting to hit teams, workflows, and careers. And if we want to shape what comes next, we need space—like this unconference—to sit with those questions honestly.
I’m grateful to everyone who came, who shared, and who listened. This wasn’t a conference. It was a conversation. And we need a lot more of them.
A quarter of a century ago this week, A List Apart published A Dao of Web Design, an essay I wrote about how to think about the web, and how to design for it.
I’m working on a piece or two revisiting this, and have the privilege of speaking on this topic at CSS Day in June–the essay seems to have stayed around in folks’ minds ever since, something gratifying and humbling.
A quarter of a century older and hopefully at least a little wiser, I feel the central idea holds up–for better or worse:
What I sense is a real tension between the web as we know it, and the web as it would be. It’s the tension between an existing medium, the printed page, and its child, the web. And it’s time to really understand the relationship between the parent and the child, and to let the child go its own way in the world.
I’d argue that tension still exists–though perhaps its focus has shifted. We no longer so much see the Web as essentially an extension of print technology and culture–instead, we’ve made it into an application platform, aping all the conventions of other app platforms.
The Web is its own thing–but we’ve still yet to really discover what that is. Don’t ask me–I don’t know either. But a quarter of a century on, I’m still just as interested in finding out.
More soon…
Meanwhile, on to this week’s reading. It appears it’s MCP (Model Context Protocol) week here on the newsletter, with several MCP-related posts (not sure what MCP is? We’ve got you covered!). Plus there’s plenty of CSS, performance, JavaScript, and more.
Forced reflows on a website happen when running JavaScript code depends on style and layout calculations. For example, if website code reads the width of a page element, that can cause a forced reflow.
Ideally, JavaScript code does not depend on layout recalculations that happen while the code is running. Instead, the browser performs layout calculations when it needs to display content to the user.
Especially before we got sophisticated layout features in CSS like grid and flex, it wasn’t uncommon for developers to do a lot of layout with JavaScript manipulating the DOM directly. And like any habit that’s formed over time, we tend to keep doing it even when better ways come along. So if you need another reason to move away from using JavaScript to lay out part or all of a web page, here it is.
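To make the trap concrete, here’s a minimal sketch (the .item selector is purely illustrative) of the read-after-write pattern that forces a reflow, and the batched version that avoids it:

```js
// Bad: each pass writes a style then immediately reads offsetWidth,
// so the browser must run a synchronous layout on every iteration.
const items = document.querySelectorAll('.item');
items.forEach((el) => {
  el.style.padding = '8px';
  console.log(el.offsetWidth); // read after write => forced reflow
});

// Better: do all the reads first, then all the writes, so layout runs once.
const widths = [...items].map((el) => el.offsetWidth);
items.forEach((el) => {
  el.style.padding = '8px';
});
console.log(widths);
```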
VUCA Revisited: Acting Skillfully in Uncertain Times
Now that we’re again living through uncertainty, I wanted to share what I said at the time. Turns out, I never wrote about the framework itself. Let’s correct that. When contexts shift, it’s harder to act skillfully. The end of the Cold War was such a time. The tense order that emerged after World War II had ended; military leaders had to make decisions in unfamiliar territory.
In response, the U.S. Army War College produced VUCA, a framework for describing unsettling contexts. It’s an acronym of their four main characteristics: volatility, uncertainty, complexity, and ambiguity.
There’s little doubt we live in volatile times–and not just in terms of the impact of new, dramatically higher tariffs on world trade. Technological changes give rise to uncertainty, and their impacts can take much longer to play out than we imagine. The futurist Roy Amara observed in the 1960s (in a remark often attributed to Bill Gates) that “Most people overestimate what they can do in a year and underestimate what they can do in ten years”. And I would suggest that the changes LLMs will bring to many fields fit this model.
Here Jorge Arango considers how to respond to times of uncertainty, using a model from the US military. His advice to have a “compelling picture of where you’re heading beyond the current turmoil” very much resonated with me.
‘An Overwhelmingly Negative And Demoralizing Force’: What It’s Like Working For A Company That’s Forcing AI On Its Developers
We’re a few years into a supposed artificial intelligence revolution, which could and should have been about reducing mundane tasks and freeing everyone up to do more interesting things with their time. Instead, thanks to the bloodthirsty nature of modern capitalism and an ideological crusade being waged by the tech industry, we’re now facing a world where many people’s livelihoods–like video game developers–are under direct threat.
I’ve been generally optimistic about the impact of AI on software engineering–but not all applications or adoptions of the technology are judicious. As we’ve noted several times, Ted Chiang observed that Fears of Technology Are Fears of Capitalism, and that’s very much at play here.
Ideas of what makes for “good” typography are deeply rooted in eras when type was set by hand using metal, wood, or ink. Typesetters took great care when deciding if a word should go on the end of one line, the beginning of the next, or be broken with a hyphen. Their efforts improved comprehension, reduced eye-strain, and simply made the reading experience more pleasant. While beauty can be subjective, with disagreements about what’s “better”, there are also strongly-held typographic traditions around the globe, representing various languages and scripts. These traditions carry people’s culture from one generation to the next, through the centuries.
Typography on the Web has always been challenging, since, among other things, line lengths change due to numerous factors. If you’ve ever typeset print, you’ll likely have laboriously set lines of text to reduce rivering, something software struggles to address–which is why fully justified text is not recommended for web content.
CSS’s text-wrap property improved things significantly, and text-wrap: pretty promises to make typography even better on the Web. It’s supported in Chrome and now in Safari Technology Preview, and it’s straightforward to use in a progressively enhanced way. Here Jen Simmons gives a history of typography on the Web and takes a look at text-wrap.
Cascade Layers, Container Queries, Scope, and More
I chat with Bruce Lawson about all things CSS. We geek out over the latest and greatest features like Cascade Layers, @Scope, Mixins, and Container Queries – exploring how these features impact web design.
For your listening pleasure, Miriam Suzanne and Bruce Lawson discuss all sorts of new exciting things happening with CSS, many covered on Conffab, some by Miriam herself.
At The Future of Software conference in London, Patrick Debois shared how mastering the latest AI tools and understanding the principles and patterns can help you navigate the space of AI Native development.
These patterns include “Producer to manager,” “Implementation to intent,” “Delivery to discovery,” and “Content to knowledge.”
Since just about the arrival of the second browser, web developers have debated this question: how do we handle differences in browser support for features?
Do we revert to the lowest common denominator?
Do we build with the latest technologies and slap a “best viewed in” badge on the site? (This was a surprisingly common strategy for a long time, though among its many other flaws, it makes no sense in an age of WebKit-only iOS browsers.)
When we talk about the unfair practices and harm done by training large language models, we usually talk about it in the past tense: how they were trained on other people’s creative work without permission. But this is an ongoing problem that’s just getting worse.
The worst of the internet is continuously attacking the best of the internet. This is a distributed denial of service attack on the good parts of the World Wide Web.
While it’s very clear we’re positive about the impact of LLMs on software development, we entirely share Jeremy’s position here. As we noted earlier, Ted Chiang observed, “Fears of Technology Are Fears of Capitalism“.
if you aren’t redlining the LLM, you aren’t headlining
There’s something cooked about Windsurf/Cursors’ go-to-market pricing – there’s no way they are turning a profit at $50/month. $50/month gets you a happy meal experience. If you want more power, you gotta ditch snacking at McDonald’s. Going forward, companies should budget $100 USD to $500 USD per day, per dev, on tokens as the new normal for business, which is circa $25k USD (low end) to $50k USD (likely) to $127k USD (highest) per year.
If you don’t have OPEX per dev to do that, it’s time to start making some adjustments… These tools make each engineer within your team at least two times more productive. Don’t take my word for it—here’s a study by Harvard Business School published last week that confirms this.
Geoff Huntley has been writing extensively about working with LLMs recently. He proposes companies should be heavily investing in these tools for their developers. But as Mark Pesce observed to me in response–perhaps companies should be investing in the hardware themselves, and running their own models?
MCP, short for Model Context Protocol, is the hot new standard behind how Large Language Models (LLMs) like Claude, GPT, or Cursor integrate with tools and data. It’s been described as the “USB-C for AI agents.” It allows agents to:
Connect to tools via standardized APIs
Maintain persistent sessions
Run commands (sometimes too freely)
Share context across workflows
But there’s one big problem… MCP is not secure by default.
And if you’ve plugged your agents into arbitrary servers without reading the fine print — congrats, you may have just opened a side-channel into your shell, secrets, or infrastructure.
Is vibe coding going to create a sea of crappy software? Are software engineers doomed? Let’s look at the evolution of photography again. Each of us have cameras in our pocket, that are with us 24/7, and are dramatically higher quality than Polaroid cameras. The number of images created per day is staggering, and once again you can argue that one person’s slop is another’s gem. The photo of my kid at the Eiffel Tower is highly personal, and truly not interesting beyond a small family cohort. On the other hand, some amazing photos are captured from someone else’s phone, due to skill, luck, and the fact that these cameras are impressive and constantly improving.
The legendary Dion Almaer weighs in on the impact of AI on software development, likening it to the impact of Polaroid cameras on the practice of photography.
I don’t know what MCP is and at this point I’m too afraid to ask
It feels like everyone’s talking about MCP (Model Context Protocol) these days when it comes to Large Language Models (LLMs), but hardly anyone is actually defining it. Let’s go deeper!
If you still think of AI-based code-autocompletion suggestions as the primary way programmers use AI, and/or you are still measuring Completion Acceptance Rate (CAR), then you are sitting on the vaguely dinosaur-shaped curve representing Traditional Programming in Figure 1. This curve super-slides into obsolescence around 2027. I have bad news: Code completions were very popular a year ago, a time that now feels like a distant prequel. But they are now the AI equivalent of “dead man walking.”
In May 2024, Steve Yegge wrote The death of the junior developer, speculating that LLM-based programming would have a very negative impact on the prospects of junior developers (and juniors in a lot of knowledge-based careers and professions). He’s followed that up with The revenge of the junior developer.
A Model Context Protocol Server (MCP) for Microsoft Paint
Why did I do this? I have no idea, honest, but it now exists. It has been over 10 years since I last had to use the Win32 API, and part of me was slightly curious about how the Win32 interop works with Rust.
Anywhoooo, below you’ll find the primitives that can be used to connect Microsoft Paint to Cursor or ClaudeDesktop and use them to draw in Microsoft Paint. Here’s the source code.
The Web Share API allows us to present users with choices that matters to them, because it triggers the share mechanism of their device. For example, I can easily share links to the group chat, Bluesky or even Airdrop on my phone because the share sheet on my phone is relevant to me, not what a publisher thinks is relevant to me.
The other web platform tool — the Clipboard API — allows us to create a nice, simple “Copy Link” button. Copy and paste makes sense, mainly because it’s easy. Copy link buttons also give users complete choice in terms of where they share your URL.
The Web platform is full of features that make developers’ lives easier, many of which developers may not be familiar with. One such feature is the Web Share API. What does this do? Everyone is familiar with the share icon in iOS, Android, macOS and so on. It makes it simple to share, say, a web page to Messages, Signal, AirDrop and other services on your device. It’s very straightforward to implement, widely supported, and can be used with progressive enhancement, as Andy Bell explores here.
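As a rough sketch of that progressive enhancement (the #share button id is an assumption, not something from Andy’s article): use the share sheet where the Web Share API exists, and fall back to copying the link with the Clipboard API.

```js
const shareButton = document.querySelector('#share');

shareButton?.addEventListener('click', async () => {
  const url = location.href;
  if (navigator.share) {
    // Opens the native share sheet (Messages, Signal, AirDrop, and so on).
    await navigator.share({ title: document.title, url });
  } else if (navigator.clipboard) {
    // Fallback: a plain "copy link" experience.
    await navigator.clipboard.writeText(url);
    shareButton.textContent = 'Link copied';
  }
});
```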
What I’m Doing Instead of Chasing Frameworks
I haven’t sworn off new frameworks entirely (they’re still fun to explore!). But I have changed my approach:
Double down on fundamentals: I invest more time sharpening my core JavaScript/TypeScript, HTML, and CSS skills. It’s amazing how many “magical” framework features are just clever applications of language fundamentals. Understanding closures and the module pattern can help you write simple state management without any library.
Use frameworks as tools, not dogma: Now I choose frameworks based on project requirements and team expertise, not novelty. If my team knows React well and we need to build quickly, we stick with React. If I’m adding a small feature to a mostly vanilla JS project, I might not use a framework at all.
Build real projects (even small ones): Nothing teaches better than creating something end-to-end. Instead of rewriting the same demo in 5 frameworks, I focus on building useful projects with tools I already know. Working on my own app, Kollabe, taught me more about software development and teamwork than framework migrations ever could. By building a real product, I learned about managing growing complexity, handling user feedback, and performance optimization – lessons that stick.
Keep an eye on trends, but bet on proven tech for serious work: I stay aware of new libraries and developments (I’m a developer after all!). I’ll experiment with them in small projects or study their concepts. But for production code or anything with deadlines, I rely on stable, well-documented technology. It’s less about being conservative and more about being pragmatic. I can always adopt new tools once they prove their value and address a specific need.
This shift in focus has brought more peace of mind and productivity. I no longer panic that “FrameworkXYZ 1.0 just launched, I’m falling behind!” Instead, I evaluate new tech calmly and often find that my existing skills are sufficient to do the job well.
Do people still write CSS in CSS files any more? I honestly don’t keep up with the trends as much as I did back when I started my career. Partly because I’ve started to feel that the web development community online has become a lot more “this(my) way is the best way” than when I started out at the tail-end of the HTML tables era and the start of the floats era.
Isaiah Berlin was something very rare these days, a public intellectual. An academic, he also wrote more popular books and essays, including The Hedgehog and the Fox. The hedgehog, he writes, ‘view[s] the world through the lens of a single defining idea’, while the fox ‘draw[s] on a wide variety of experiences and for whom the world cannot be boiled down to a single idea’. Or, as he quotes the Greek poet Archilochus, “a fox knows many things, but a hedgehog knows one big thing”.
From the nature of this newsletter you can probably tell I identify more as a fox than a hedgehog.
I was reminded of this when reading one of our choices for this week, ‘Why Generalists Own the Future’. In an era that will be defined by LLMs, for good and otherwise, Dan Shipper argues, perhaps counter-intuitively, that it’s foxes, who know many things, who are best placed to take advantage of the tools that are LLMs. Perhaps it’s my biases, but my intuition suggests this may be true.
There’s also a piece from design legend Mike Davidson, who draws on his 30 years of experience designing for the Web, and the early days of web and digital design. He argues similarly that ‘The Future Favors the Curious’–and provides encouragement and advice for those (existing or new designers) who might want to focus on design for AI. Advice I think is more broadly applicable.
Today, once again many things. Quite a bit of Performance and CSS, and a fair bit at the intersection of AI, LLMs and software engineering.
Correlation charts: Connect the dots between site speed and business success
If you could measure the impact of site speed on your business, how valuable would that be for you? Say hello to correlation charts – your new best friend. Here’s the truth: The business folks in your organization probably don’t care about page speed metrics. But that doesn’t mean they don’t care about page speed. It just means you need to talk with them using metrics they already care about – such as conversion rate, revenue, and bounce rate. That’s why correlation charts are your new best friend.
As has come up many times in our presentations, making the case for investment in performance isn’t always easy. Money and effort spent on something other than new features can be a hard sell. Here Tammy Everts explores correlation charts, “a powerful data visualization that shows you the relationship between your page speed metrics and your business and user engagement metrics”, that can help demonstrate the value of that investment to business decision makers.
The systemic failure of implementing CSS principles
Before CSS came along, HTML did not have a version specifically dedicated to styling. But, back in the days when I started creating webpages around 1996, I still managed to style and layout pages using HTML’s attributes and tags. I remember doing a lot of copy-pasting to get the right look by reusing those inline styles and trying to tweak everything. It was a nightmare, but it worked for the time.
In this article, we’ll explore how Chrome determines the request priority for images. We will explain how prioritization works and what optimization techniques you can use to ensure important images load faster.
Image requests are low-priority by default
By default, image requests are Low priority. The browser will instead prioritize requests that are render-blocking or part of the critical request chain. In this request waterfall we can see several low-priority image requests.
Modern web pages will typically include links to dozens of resources or more–images, JS, CSS, and god knows what. HTTP now enables numerous simultaneous requests, but how do browsers prioritise these? Conor McCarthy looks at how Chrome prioritises image loading.
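As a hedged sketch of the kind of nudging the article covers (the selector and image URL below are placeholders), the fetchpriority hint can be applied from script as well as in HTML:

```js
// Lift the likely-LCP image above the default Low priority.
const hero = document.querySelector('img.hero');
if (hero) {
  hero.fetchPriority = 'high';
}

// Or hint an important image the parser hasn't discovered yet with a preload.
const preload = document.createElement('link');
preload.rel = 'preload';
preload.as = 'image';
preload.href = '/img/hero.avif';
preload.setAttribute('fetchpriority', 'high');
document.head.append(preload);
```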
This article explains how to use dedicated, modern HTML and CSS features together to create fully-customized elements. This includes having full control over styling the select button, drop-down picker, arrow icon, current selection checkmark, and each individual element.
Following on from a recent post about new features coming to Chrome (and hopefully elsewhere soon too) that enable rich styling of the select element, this tutorial covers new ways of styling select elements.
A common refrain I hear is that in the age of AI, you don’t want to be a “jack of all trades and a master of none.”
For example, my good friend (and former Every writer) Nat Eliason recently argued:
“Trying to be a generalist is the worst professional mistake you can make right now. Everyone in the world is getting access to basic competence in every white-collar skill. Your ‘skill stack’ will cost $30/month for anyone to use in 3-5 years.”
He makes a reasonable point. If we think of a generalist as someone with broad, basic competence in a wide variety of domains, then in the age of AI, being a generalist is a risky career move. A language model is going to beat your shallow expertise any day of the week.
But I think knowing a little bit about a lot is only a small part of what it means to be a generalist. And that if you look at who generalists are—and at the kind of mindset that drives a person who knows a lot about a little—you’ll come to a very different conclusion:
This seemingly counter-intuitive concept–that it’s generalists, not specialists (foxes, not hedgehogs, in Isaiah Berlin’s formulation), who will be most valuable in an AI-assisted world–appeals to me as a self-identified fox. Does this apply in software engineering? My instinct is yes. But time, I guess, will tell.
Shift Left on Security: Empower Full-Stack Devs to Build Safer Code
For most developers, we see securing secret keys and requiring authentication for API endpoints as basic coding, but AI won’t. The same way that someone new to the industry wouldn’t know these best security practices either. We need to treat AI the same way we treat onboarding and training interns and juniors; we need to be explicit with the requirements and create guardrails and checkpoints as far left into the process as possible. That way, the vibe can stay high as we avoid becoming the next tech meme on Reddit.
As the capability of (and demand for) LLM-generated code grows, there’ll be pressure to ship more and more quickly–focussing on new features and whole new products. But as recent episodes have shown, LLM-generated code may not be as focused on security as you might hope–after all, it’s trained on a lot of code that itself is unlikely to be entirely secure. So what’s to be done? Valerie Phoenix considers tools and approaches that can help–but above all it is about a Security-First Mindset.
Declarative Web Push allows web developers to request a Web Push subscription and display user visible notifications without requiring an installed service worker. Service worker JavaScript can optionally change the contents of an incoming notification. Unlike with original Web Push, there is no penalty for service workers failing to display a notification; the declarative push message itself is used as a fallback in case the optional JavaScript processing step fails.
Apple’s Webkit team have added declarative Web push to Safari. Why?
Declarative Web Push is more energy efficient and more private by design. It is easier for you, the web developer to use. And it’s backwards compatible with existing Web Push notifications.
The CSS Cascade seems simple enough when you’re dealing with a single stylesheet. But in real world applications, we use bundlers like webpack or vite to load thousands of files. How can we ensure that our CSS always loads in the correct order?
The @layer CSS rule allows devs to “declare a cascade layer and can also be used to define the order of precedence in case of multiple cascade layers” (MDN). This is exceptionally useful for design systems maintainers who often don’t control the environment in which their components are used.
I don’t think Bert Bos and Håkon Wium Lie, when they first proposed CSS, imagined the scale of its use today. The cascade and inheritance, which make CSS so powerful are also challenging when a page might draw style from dozens of style sheets, first and third party. This is made even more challenging when the load order of style sheets may be difficult to determine when using bundlers.
The Cascade and inheritance (often confused for each other), along with specificity, do cause challenges for developers, and numerous methodologies (Nicole Sullivan’s OOCSS, BEM) and technologies (CSS-in-JS, Tailwind) have emerged to tame these admittedly at times complex aspects of CSS.
CSS Cascade Layers is the first large-scale attempt to address the complexity of developing large-scale sites and applications with CSS. Now Baseline, and widely supported in most browsers since early 2022 (late 2024 in Android browsers), it provides a mechanism to create different ‘layers’ of CSS, within which styles cascade independently of other layers.
Mae Capozzi introduces these here, and for a more in-depth look there are a couple of talks on the topic here on Conffab: Cascade Layers from Bramus Van Damme at Hover ’22, and CSS Architecture with Layers, Scope, and Nesting, also from Bramus Van Damme at Summit ’22.
OK, we have to get one thing straight. When Andrej Karpathy coined the term ‘vibe coding’ eons ago (actually less than 2 months ago) he did not mean, as Simon Willison has observed, ‘all code generation with LLMs’. Karpathy wrote (sorry, I am no longer linking to X/Twitter, but Simon does if you want the original context)
“There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard.”
But that naming ship has probably sailed. So what Patrick Smith is referring to here, where he expressly couples Test Driven Development with LLM generated code is by Karpathy’s definition not vibe coding. But it is very interesting.
WebAIM: The WebAIM Million – The 2025 report on the accessibility of the top 1,000,000 home pages
For the seventh consecutive year, WebAIM conducted an accessibility evaluation of the home pages for the top 1,000,000 web sites. The evaluation was conducted using the WAVE stand-alone API (with additional tools to collect site technology and category data). The results provide an overview of and insight into the current state of web accessibility for individuals with disabilities as well as trends over time.
How are the most popular sites on the Web, and the technologies they are built with, faring in terms of accessibility? Turns out, not so well–and it is really core things like color contrast and alt text that lead the way. WebAIM has been tracking this for several years now, and their latest WebAIM Million report has just been released.
Running design at Microsoft AI and having done a decent amount of hiring lately, I can tell you that the patterns emerging now are exactly as they were in 1995. There is a giant population of designers who have a bunch of really great skills. Some of those designers will decide they are content doing the same sort of design they have done for their whole careers. Others will decide to learn as much as they can about AI and prepare for an industry that will look very different in 5 to 10 years. Finally, there is another group of people who have no design experience whatsoever but are so enamored with this new technology that they will teach themselves very useful skills in a short amount of time. Do not underestimate this third group as it’s easier than ever to fake it ’til you make it right now.
Mike Davidson is a giant in the design profession, now Corporate Vice President, Design & User Research @ Microsoft AI. He’s been doing design for the Web for as long as just about anyone, and draws on his experience to provide encouragement and advice for those (existing or new designers) who might want to focus on design for AI.
The Thankless Complexity of Custom Form Validations
One of the least exhilarating but most common development tasks is building forms, and form validations. While important, they’re often over-designed and easy to over-engineer as a result.
I believe we’ve gone too far with trying to accomodate all kinds of form validation while building reusable input fields –– especially for component libraries and design systems.
As developers, we can have a tendency to over-engineer our form validation–adding all manner of extraneous messages and cases–but perhaps we’ve gone too far, suggests Jen Chan.
I’ve been mulling this topic for months now, and I’m pretty firmly of the opinion if you are attempting to do some layout in CSS, you should reach for display:grid first, followed by display:block, followed by display:flex. Grid allows the layout element to be in control of how things get placed, where as flex really relies on the children to define their widths, which most of the time is not how layout should function at all.
Now I rarely use grid and use flex all the time–perhaps it’s because my common use cases are different to Alex’s, perhaps because I’m familiar and comfortable with flex (it took some time to transition from floats to flex)–but perhaps it’s time for me to reconsider.
The February TC39 meeting in Seattle wrapped up with significant updates and advancements in ECMAScript, setting an exciting trajectory for the language’s evolution. Here are the key highlights, proposal advancements, and lively discussions from the latest plenary.
If you want to know where JavaScript is headed, this recent roundup from the TC39 meeting is a good place to start (TC39 is the technical committee of ECMA International that oversees the standardisation of JavaScript).
Item Flow – Part 1: A new unified concept for layout
CSS Grid and Flexbox brought incredible layout tools to the web, but they don’t yet do everything a designer might want. One of those things is a popular layout pattern called “masonry” or “waterfall,” which currently still requires a Javascript library to accomplish.
The masonry layout feature in CSS was first proposed by Mozilla and implemented in Firefox behind a flag in 2020. From the beginning, there’s been a hot debate about this mechanism. The WebKit team at Apple picked up where Mozilla left off, implemented the drafted spec, and landed it in Safari Technology Preview 163. This reinvigorated the conversation. By October 2024, there were two competing ideas being debated — to “Just Use Grid” or to create a whole “New Masonry Layout”. We wrote extensively about these ideas in a previous article on this website.
A lot has happened since October. Now, a third path forward is emerging — a path that would mean the CSS Working Group doesn’t choose either “Just Use Grid” or “New Masonry Layout”. It merges ideas from both with a completely-new idea to create a unified system of Item Flow properties. This article explains what Item Flow is, and its impact on both Flexbox and Grid. In Part 2, another article will more fully explain the implications for the future of masonry-style layouts.
Shadows in CSS can be multi-directional, layered, and are animate-able. On top of being all that, they don’t affect the layout and computed size of an element even though they can make it appear bigger or smaller, which makes them an efficient tool for making visual changes.
Shadows have been a staple of Web (and screen based) design for decades. Even a simple one pixel line can give the impression of depth to the otherwise flat UI of a screen. But CSS shadows can do much more than add a little depth as Preethi Sam explores here.
We’ve covered the issue of AI and accessibility a number of times now. There are broad claims made by some (typically not accessibility experts) for AI’s revolutionary opportunity to solve accessibility challenges.
But folks with deep accessibility knowledge are more circumspect. That’s not to say there aren’t valuable places where AI and other forms of automation can definitely help improve accessibility. Here’s a comprehensive roundup of WCAG Success Criteria and how testing them might be automated (and what to watch out for).
How are developers really using AI, and what do they think about it? WIRED asked a bunch of them, across levels of experience and different kinds of employers.
Data Fetching Patterns in Single-Page Applications
When a single-page application needs to fetch data from a remote source, it needs to do so while remaining responsive and providing feedback to the user during an often slow query. Five patterns help with this. Asynchronous State Handler wraps these queries with meta-queries for the state of the query. Parallel Data Fetching minimizes wait time. Fallback Markup specifies fallback displays in markup. Code Splitting loads only code that’s needed. Prefetching gathers data before it may be needed to reduce latency when it is.
This detailed article from Juntao QIU looks at 5 data fetching patterns in single-page applications (though doubtless more broadly applicable beyond SPAs).
To understand this identity crisis, we need to look at how deeply the craft of coding has shaped who we are. At its core, writing code is about mastery and control – skills we’ve spent years perfecting. Modern programming languages are much higher-level than those of days gone by, but they still require deep technical understanding. Few developers today deal with the nitty-gritty of pointers and memory management, yet we still take pride in knowing how things work under the hood. Even as frameworks do more heavy lifting, we’ve maintained our identity as artisans who understand our tools intimately.
Programming today is much more about stitching together APIs, frameworks, and libraries in creative ways to build something meaningful. In fact, recent research at Google suggests that creativity in software engineering centres on the concept of clever reuse over pure novelty. This makes sense to me – I’ve often commented that we’re all just ‘integration’ engineers nowadays, really.
Annie Vella explores how the advent of AI coding assistants is reshaping the role of software engineers. Traditionally, engineers have identified as builders, deriving satisfaction from hands-on coding and problem-solving. However, AI tools are shifting this dynamic, transitioning engineers from creators to orchestrators—roles that resemble management, or curation, more than craftsmanship.
This evolution challenges the core identity of software engineers, as they find themselves overseeing AI-generated code rather than directly crafting it. Vella highlights the irony that, while the industry has long emphasized that software engineering encompasses more than just coding, the increasing reliance on AI may diminish the hands-on aspects that many engineers cherish.
She underscores the importance of adaptability, suggesting that engineers must navigate this transformation by balancing their technical expertise with emerging skills in AI orchestration.
Nearly two dozen articles, talks and more this week, from new CSS that will finally, finally allow us to style the select elements, to a 3 and a half hour video on how LLMs work from someone who really knows.
We’ve also got a couple of bonus talk videos on Conffab (no signup required)
There’s a heavy dose of let’s call it ‘AI Native Software Engineering’ (I’m avoiding the V word, and this is about much more than coding), as I genuinely believe if you develop software this is something you should be thinking deeply about.
This is a general audience deep dive into the Large Language Model (LLM) AI technology that powers ChatGPT and related products. It covers the full training stack of how the models are developed, along with mental models of how to think about their “psychology”, and how to get the best use of them in practical applications. I have one “Intro to LLMs” video already from ~a year ago, but that is just a re-recording of a random talk, so I wanted to loop around and do a lot more comprehensive version.
Got 3 and a half hours and keen to get a solid idea about how LLMs work? Andrej Karpathy was a founding member of OpenAI, and there are few more knowledgeable about the topic than he is.
Polyfills have long been a part of the web developer experience, as they attempt to provide support for web features that aren’t supported in all browsers. It would seem that polyfills are an indispensable tool in the web developer’s toolkit, but it’s nearly impossible to distill such a complex area of concern into a single, definitive statement. Knowing when to use a polyfill for a feature depends on its availability across browsers, and Baseline can be helpful in making that determination. While Baseline doesn’t tell you whether or not you should use polyfills, the clarity it brings to the availability of web platform features gives you an opportunity to be more selective, as excessive polyfills in an application can have significant drawbacks.
The Web doesn’t have versions; it is continually evolving. Progressive enhancement is one approach we’ve developed to address this challenge–polyfills are another. But how do we decide when to polyfill? Jeremy Wagner outlines some ideas.
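One way that selectivity can look in practice–a sketch only, with an illustrative feature and polyfill path rather than a recommendation–is to feature-detect first and load the polyfill only when it’s genuinely missing:

```js
// In a module: dynamic import keeps the polyfill out of the bundle
// for browsers that already support the feature.
if (typeof globalThis.structuredClone !== 'function') {
  await import('./polyfills/structured-clone.js'); // illustrative path, not a real package
}
```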
Keep Product Teams Shipping Faster with Successful Frontend Migrations
The company wants more features, faster. You go to your boss and suggest that reducing build times could increase the team’s shipping velocity. They tell you that they aren’t willing to invest in any system migrations to allow developers to ship high quality code at a faster speed. The company also won’t hire more developers. The only option left is for the existing developers to work harder and faster, piling more debt onto a poor foundation until work comes to a relative standstill and the company’s star engineers burn out and quit.
Sometimes a migration–it could be tooling, frameworks, or a variety of things–is inevitable. But how do you know when? And how do you go about it? Mae Capozzi considers the challenges.
From Chrome 135, web developers and designers can finally unite on an accessible, standardized and CSS styleable element on the web. This has been many years in the making, many hours of engineering and collaborative specification work, and the result is an incredibly rich and powerful component that won’t break in older browsers.
One of the Web platform’s significant shortcomings for decades now has been its limited support for standard controls. Yes, we’ve had a lot of these for many years, right back into the 90s, but the ability to style them has been limited in many cases. So, developers have rolled their own controls (often missing accessibility corner cases), or relied on 3rd-party libraries that come and go. Now we might finally be seeing this fixed, with the release of the first public draft of CSS Form Control Styling Level 1 and (in Chrome for now, but progressively enhanceable) fully CSS-stylable select elements.
It’s only relatively recently we’ve begun thinking in earnest about the environmental impact of websites. Back in 2019, Asim Hussain gave a talk ‘Save the World one line at a time‘ at our Code Leaders conference, the first time I had ever really heard this spoken about. Here Chris Ferdinandi talks about the broader issue and what we can do about it.
With new tools and frameworks emerging weekly, it’s natural to focus on tangible things we can control – which vector database to use, which LLM provider to choose, which agent framework to adopt. But after helping 30+ companies build AI products, I’ve discovered the teams who succeed barely talk about tools at all. Instead, they obsess over measurement and iteration.
In this post, I’ll show you exactly how these successful teams operate. You’ll learn:
Right now, when it comes to building products with AI, we are at the ‘obsess about the tools’ phase, but as Hamel Husain observes, this is not what successful teams are doing–here he shares what they are.
Leading Effective Engineering Teams in the Age of GenAI
Using AI in software development is not about writing more code faster; it’s about building better software. It’s up to you as a leader to define what “better” means and help your team navigate how to achieve it. Treat AI as a junior team member that needs guidance. Train folks to not over-rely on AI; this can lead to skill erosion. Emphasize “trust but verify” as your mantra for AI-generated code. Leaders should upskill themselves and their teams to navigate this moment.
While AI offers unprecedented opportunities to enhance productivity and streamline workflows, it’s crucial to recognize its limitations and the evolving role of human expertise. The hard parts of software development – understanding requirements, designing maintainable systems, handling edge cases, ensuring security and performance – remain firmly in the realm of human judgment.
Discussion about user onboarding often focuses on teaching new users how to use a product’s interface. There are dozens of third-party plugins that offer various ways to point out product navigation, features, and affordances. However, this only scratches the surface of what onboarding can be about. The biggest opportunities for onboarding happen at higher levels. Similar to how The Society for Human Resource Management (SHRM) says that employee onboarding can happen at multiple levels, user onboarding can also be tiered. Here are the 4 levels of user onboarding that I currently think about, from lowest to highest level: interface orientation, process onboarding, new meanings onboarding, and systems understanding.
Krystal Higgins quite literally wrote the book on onboarding. Here she explores different kinds of onboarding (for example, new features versus an entirely new set of concepts) and key considerations for each.
Bonus Video: Better Onboarding by Krystal Higgins at Web Directions Summit ’22 (free, no signup required)
AI is fundamentally reshaping software development roles and activities. While the change is obvious, understanding the actual shifts taking place remains challenging. In this article, we introduce a first pass at four AI Native Dev patterns that are currently emerging. We’re asking for community feedback to evolve these patterns in the open. The aim is for patterns to help you grasp what’s on the horizon and how to position yourself effectively in this changing landscape.
We’re very interested here in the whole area of AI aided software engineering–what some folks are calling ‘AI Native Dev’ especially beyond the obvious hot takes and vibe coding chatter. So this caught our attention.
Web Performance is one of those topics that’s constantly discussed as crucial for the modern web. It’s often framed as a key factor for user experience (UX), conversions, SEO, and overall success on the Internet. Every year kicks off with an article titled “New Challenges and Why Fast Websites Will Win in 20xx.” Yet when it comes to implementation, performance tasks in the backlog often don’t get the priority they deserve. This happens for two main reasons, and two key questions arise:
Is it really that important? – Users aren’t complaining; everything loads quickly for me, too. Plus, our internal research hasn’t shown a clear correlation between performance metrics and search rankings. What even is a CLS of 0.4, and why should I care?
Is it worth the effort? – There are too many metrics; they’re confusing, and new ones keep popping up all the time. On top of that, the tools are complex and expensive. Wouldn’t it be better to invest resources in product features instead?
This guide-style article explores how to answer these questions in 2025 and how to build a balanced approach to web performance. As always, the balance is found between engineering decisions and business expectations.
Why doesn’t performance get the focus and resources it deserves, given the potential business impact of improving performance? Antony Belov considers why and how to change that.
Previewing Content Changes In Your Work With document.designMode
You probably already know that you can use developer tools in your browser to make on-the-spot changes to a webpage — simply click the node in the Inspector and make your edits. But have you tried document.designMode? Victor Ayomipo explains how it can be used to preview content changes and demonstrates several use cases where it comes in handy for everything from basic content editing to improving team collaboration.
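For the curious, the feature really is this small:

```js
document.designMode = 'on';   // click into any text on the page and edit it in place
// ...try out headlines, button labels, error messages, take a screenshot...
document.designMode = 'off';  // back to a normal, read-only page
```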
Every year I discuss the most important trends in the world of React. In this article, we will explore the React trends in 2025 that you should be aware of. Whether you are a beginner or an experienced developer, these trends will help you stay up-to-date with the latest developments in the React ecosystem.
Where’s React and its ecosystem headed in 2025? Robin Wieruch has been tracking React ecosystem trends for years, and here’s his list of things to keep track of this coming year.
It’s still very early days in the history of humans working with LLMs to produce software. But some patterns are emerging. Here’s a set of areas of weakness (I prefer not to use the term they have used) one person (can’t find details about whom!) has observed about coding models, particularly Claude Sonnet.
How to Get a Standing Ovation as a Conference Speaker
I keep on attending otherwise interesting talks and conference sessions being flushed down the toilet by appalling deliveries. This is my plea to you: please, please, PLEASE: if you have to speak in public, follow these rules to ensure a standing ovation at the end of your talk.
I’ve delivered a fair number of talks in my time–and have definitely improved over the years of doing so. I also watch every single talk at our events in detail each year as I edit the transcripts (and most also in person at the events).
I have many strongly held thoughts about what in particular speakers should avoid, views that aren’t entirely common–the top being: never, ever live code in a presentation–there’s nothing that well-selected code excerpts and possibly a screencast can’t demonstrate much more coherently and accessibly.
Here Adrian Kosmaczewski outlines his thoughts on what makes for excellent presentations. I agree with every one of these suggestions.
JavaScript runtimes are in many more places than you might think–of course in browsers, but also in Node, Bun, and Deno, and edge computing platforms like Fastly and Cloudflare Workers. So, just which JavaScript features are supported in different runtimes? Use this to find out.
Explore step-by-step how Server-Side Rendering (SSR), pre-rendering, hydration, and Static Site Generation (SSG) work in React, their costs, performance impact, benefits, and trade-offs.
I’ve seen a handful of recent posts talking about the utility of the :is() relational pseudo-selector. No need to delve into the details other than to say it can help make compound selectors a lot more readable.
I’ve been using CSS since before the initial specification was finalised. And even way back then, CSS was 99.5% pretty straightforward, with the devil in the details. Once upon a time a lot of that was browser support gotchas, but CSS has always had subtleties that can bite. That’s still the case now, as this quick look at the :where() and :is() selectors attests.
Since then, I’ve used border-image regularly. Yet, it remains one of the most underused CSS tools, and I can’t, for the life of me, figure out why. Is it possible that people steer clear of border-image because its syntax is awkward and unintuitive? Perhaps it’s because most explanations don’t solve the type of creative implementation problems that most people need to solve. Most likely, it’s both.
Rounded corners on objects as a design feature for the Web have been around a long, long time. The way we used to do this was to create a table, with each corner and edge of the object being a table cell, and the content of the object its own cell.
Yikes.
border-image was introduced to enable similar designs (and much more), though it’s not been widely supported for as long as I’d have guessed looking at caniuse.com–my instinct is it was supported well before the first browser versions that caniuse tracks.
border-radius probably replaced most of the common border-image use cases, and so that feature remains significantly under-used. Here Andy Clarke revisits border-image, and considers uses beyond simply rounded-corners.
Some topics are new, rapidly evolving, and exploring them doesn’t fit neatly into traditional conference structures. They can benefit instead from unconferences—participant-driven events emphasizing emerging conversations and collective exploration, rather than predefined, organizer-driven agendas.
So, Web Directions will host a new ad hoc series of these unconference-style gatherings, centered on intriguing and emerging topics. Each event aims to facilitate meaningful conversations, foster idea sharing, and connect passionate individuals eager to explore and contribute to these dynamic areas.
Our first takes place in Melbourne April 15th, from 2pm-5pm, and it’s free. We’ll be considering the question ‘what is software engineering in an era of Large Language Models and AI?’
It’s hosted by Deakin University on their Docklands campus–I hope you can make it!
Is there more great content being published? Are my antennae just getting better? Perhaps it’s an improved workflow? One thing’s certain—it’s definitely not more free time! Yet somehow, the weekly reading roundup keeps growing.
You might ask: What’s the point of these eclectic roundups? Why should you read them? And why do I invest time in compiling them?
In established professions, advances happen steadily but predictably, supported by structured channels like professional bodies and formal training. Communication paths are well-defined, and keeping pace is manageable.
But for those of us working with the web, digital products–design, or engineering–staying current is uniquely challenging. Our field moves quickly, innovation arrives daily, and there’s no single source to rely on.
That’s why I create these roundups. They’re partly for my benefit—when programming conferences, it’s important to keep up to date—but primarily they’re for professionals like you, who don’t have endless hours each week to sift through countless RSS feeds and social media channels.
And if you prefer bite-sized updates, consider subscribing to our “Elsewheres” RSS feed. Instead of one long list each week, you’ll get a steady trickle of insightful stories throughout your day.
I hope you find these roundups valuable. Enjoy this week’s selections!
Cache Control Headers are a powerful tool for controlling how browsers and caches store and serve your website’s content. By setting the right headers, you can improve your website’s performance, and in some cases, have your users experience near-instant page loads.
Front end developers often overlook the impact HTTP headers can have on security and performance. Learn how you can dramatically improve performance by setting them right.
Released by Anthropic last November, the Model Context Protocol is described as “a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments.” But even that description is a bit jargony. Here’s the simple version:
An MCP server exposes a set of endpoints, like any other API server, but it must include endpoints that list all the available functions in a standard way an MCP client can understand.
MCP clients (usually LLM-powered), like Anthropic’s Claude Desktop, can then be connected to MCP servers and immediately know what tools are available to them. LLMs connected via MCP can call those servers using the specs the servers provide.
That’s it! It’s incredibly simple: a standard to enable the Web 2.0 era for LLM applications, giving models plug-and-play access to tools, data, and prompt libraries.
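To make that concrete, here’s a rough sketch of the kind of JSON-RPC exchange involved. The tool name and fields are my own illustration; the real protocol adds capabilities, transports and more:

// 1. the client asks the server what tools it offers
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// 2. the server replies with a machine-readable list of functions
{ "jsonrpc": "2.0", "id": 1, "result": { "tools": [
  { "name": "search_docs",
    "description": "Full-text search over a content repository",
    "inputSchema": { "type": "object", "properties": { "query": { "type": "string" } } } }
] } }

// 3. the client (an LLM app) invokes one of the tools it discovered
{ "jsonrpc": "2.0", "id": 2, "method": "tools/call",
  "params": { "name": "search_docs", "arguments": { "query": "cache-control headers" } } }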
Currently there’s discussion of ‘vibe coding’ – that is, using LLMs to create code by iterating with prompts, quickly producing workable prototypes, then finessing them toward an end.
I’ve found myself ‘vibe designing’ in the last few months – thinking and outlining with pencil, pen and paper or (mostly physical) whiteboard as has been my habit for about 30 years, but with interludes of working with Claude (mainly) to create vignettes of interface, motion and interaction that I can pin onto the larger picture akin to a material sample on a mood board.
One of the premises of the new popover attribute is that it comes with general accessibility considerations “built in”. What does “built in accessibility” actually mean for browsers that support popover?
But I decided to finally get this post in a publishable state because I still see people asking questions about popover. Like, what is it? How does it differ from a dialog (modal or non)? Why doesn’t it do this one thing that I need it to? I’m not going to answer every question about popover. But I should get back into the habit of writing this stuff down once and pointing people to it, rather than providing repetitive one-off answers whenever I happen across someone asking about it. Here we (finally) go.
Popover is a relatively new API which we’ve covered a bit here at Conffab, including this recent talk we highly recommend. This is a great, comprehensive overview by Scott O’Hara.
Some critics question the agnostic nature of Web Components, with some even arguing that they are not real components. Gabriel Shoyomboa explores this topic in-depth, comparing Web Components and framework components, highlighting their strengths and trade-offs, and evaluating their performance.
Modern frameworks all have some sort of concept of components, which differs somewhat from Web Components. So how are they similar? How are they different?
There are many ways to create and style counters, which is why I wanted to write this guide and also how I plan to organize it: going from the most basic styling to the top-notch level of customization, sprinkling in between some sections about spacing and accessibility. It isn’t necessary to read the guide in order — each section should stand by itself, so feel free to jump to any part and start reading.
The visions of the open access movement have inspired countless people to contribute their work to the commons: a world where “every single human being can freely share in the sum of all knowledge” (Wikimedia), and where “education, culture, and science are equitably shared as a means to benefit humanity” (Creative Commons).
if many people enjoy unfettered access to a finite, valuable resource, such as a pasture, they will tend to overuse it and may end up destroying its value altogether
It is a self-serving neoliberal concept that is rarely subject to even the most basic critical enquiry. Here Molly White observes that the genuine tragedy of the commons with large technology companies is that, while relying heavily on the commons of collective human creation (be that Wikipedia, open source software, or more broadly), they degrade those sources and diminish their capacity to sustain themselves. Everyone should read this piece.
Lazy Load Background Images with the IntersectionObserver API
While we can defer offscreen images using the loading HTML attribute, lazy loading background images takes a bit more work. Since they are added by CSS rather than HTML, we need to use JavaScript to detect when offscreen background images are about to enter the user’s viewport.
It would be nice to have a native background-loading: lazy property in CSS as well, but unfortunately, it doesn’t currently exist. Luckily, the IntersectionObserver API provides a performance-friendly solution to lazy load background images without having to manually add JavaScript event listeners and perform viewport calculations, or use a third-party library.
In this article, we’ll look into how to lazy load background images using the background CSS property and the IntersectionObserver JavaScript API.
We can’t (yet) add lazy loading for background images in CSS, but with a little JavaScript and the IntersectionObserver API we can implement such a feature, as Anna Monus details here.
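Here’s a minimal sketch of the technique; the class name and data attribute are mine, purely for illustration:

// markup: <div class="lazy-bg" data-bg="/images/hero.jpg"></div>
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const el = entry.target;
    el.style.backgroundImage = `url("${el.dataset.bg}")`;
    obs.unobserve(el); // each background only needs loading once
  }
}, { rootMargin: '200px' }); // start loading just before it scrolls into view

document.querySelectorAll('.lazy-bg[data-bg]').forEach(el => observer.observe(el));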
We tend to focus on the generative aspect of large language models, the outputs they can create, the text, images and videos they produce. But the largest disruption from the current evolution of AI will come from bringing agency to computers.
2025 definitely feels like the year of Linux agents on the desktop–but to be honest I’m not convinced we’re there just yet. But it’s definitely something that many are focussing on (the just gone AI Engineering conference was all about agents). And in a world of agentic systems, then a significant percentage of your users will be, well, agents. So, what does their experience look like? Mathias Biilmann has some thoughts.
Less Effort, More Completion: The EAS Framework for Simplifying Forms
Summary: Use the EAS framework — Eliminate first, Automate where possible, and Simplify what remains — to minimize user effort and improve form completion rates.
Filling out a form is rarely anyone’s idea of fun. Users are goal-oriented — they want to accomplish their goals quickly and efficiently. The more effort a form demands, the more likely users are to abandon it midway. Yet, simplifying a form isn’t just about reducing the number of fields. Sometimes, longer forms are necessary to collect essential information. The key is to balance the organization’s information needs with users’ desire for simplicity and efficiency.
There are two ways to improve form usability: make users do less and make what they do easier. In this article, we focus on the first approach — minimizing user effort so they are more likely to complete your forms.
Recently someone ‘complained’ to me that registering for one of our conferences was ‘too easy’–they had reached the invoice page before they expected to. Turns out I’d been following the principles outlined here.
DeepSeek-R1 Uncensored, QwQ-32B Puts Reasoning in Smaller Model, and more…
Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the programming occupation will become extinct […] than that it will become all-powerful. More and more, computers will program themselves.” Statements discouraging people from learning to code are harmful!
Perhaps it’s ‘cope’, a whistling past the graveyard by someone who has invested a significant majority of his professional life into writing code, but I share Andrew Chen’s intuition around this. Perhaps software engineering and development won’t look like it does today–IDEs and C-like syntax–but as Chen observes (as did Bret Taylor recently), we don’t punch holes in cards anymore.
I haven’t written anything that is remotely close to assembly code, let alone machine language for 40 years, but the understanding I developed about how CPUs basically work by doing so has stood me in good stead in the decades since. As will learning to code now for an entirely new generation of software developers.
We’ve been successfully removing all friction from our apps — think about how effortless it is to scroll through a social feed. But is that what we want? Compare the feeling of doomscrolling to kneading dough, playing an instrument, sketching… these take effort, but they’re also deeply satisfying. When you strip away too much friction, meaning and satisfaction go with it.
Think about how you use physical tools. Drawing isn’t just moving your hand—it’s the feel of the pencil against paper, the tiny adjustments of pressure, the sound of graphite scratching. You shift your body to reach the other side of the canvas. You erase with your other hand. You step back to see the whole picture.
We made painting feel like typing, but we should have made typing feel like painting
Have you ever loaded a page with tons of content and noticed how slow it feels? The browser has to process everything at once – even the content you can’t see yet! That’s where the content-visibility property comes in. It’s a CSS feature that tells browsers to skip rendering off-screen content until it’s needed.
There are all kinds of CSS and HTML properties that can help improve performance in specific cases that a lot of developers may not be aware of. content-visibility is one of these. Michael Hladky had a great talk on this and much more at Hover ’22, which is now available with a free account.
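The basic usage is only a couple of lines of CSS. The selector is mine, and the size hint is an estimate you’d tune for your own content:

.long-page-section {
  /* skip rendering work for this section until it nears the viewport */
  content-visibility: auto;
  /* reserve an approximate size so the scrollbar doesn't jump as sections render */
  contain-intrinsic-size: auto 600px;
}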
Popovers and dialogs are similar in many ways. That’s particularly the case since HTML introduced the closedby attribute for the dialog element, enabling light-dismiss functionality. So how are they different? A dialog can be modal or non-modal, whereas a popover is never modal*. Let’s compare both kinds of dialog to the popover API.
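A minimal sketch of the two side by side (the IDs are mine, and closedby is new enough that you’ll want to check support before relying on it):

<!-- a popover is always non-modal, and light-dismisses by default -->
<button popovertarget="tip">Help</button>
<div id="tip" popover>Click outside to dismiss me.</div>

<!-- a dialog can be modal, and can now opt in to light dismiss via closedby -->
<dialog id="confirm" closedby="any">
  <p>Are you sure?</p>
</dialog>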
Three big budget movies were released in 1995 that had internet themes: the Keanu Reeves flick Johnny Mnemonic in May, The Net with Sandra Bullock in July, and Hackers in September. (Two other 1995 movies had a virtual reality premise: Virtuosity, starring Denzel Washington and Russell Crowe, and Kathryn Bigelow’s Strange Days.)
The cyberspace movies all reflected the growing importance of the internet in our culture, although reviewers tended to see the computer plots as a gimmick. Roger Ebert’s 3-star review of The Net noted that it “dresses up its plot with a trendy front end, by using the Internet as a hook.” But he praised Bullock’s performance, despite the flimsy story. Ebert also gave Hackers three stars, saying it was “smart and entertaining […] as long as you don’t take the computer stuff very seriously.”
How was the internet perceived culturally in the early days of the Web? Richard McManus looks at three internet themed movies of the mid 1990s, as the Web became part of mainstream consciousness.
Single-page applications (SPAs) have unique page speed optimization challenges. Let’s go through some common web performance issues with SPAs, and how to optimize them.
Depending on what you read, and who you believe, AI is either the ultimate solution or armageddon in motion, so in this talk, Léonie is going to cut through the clickbait, dodge the doomscrollers, and focus on the facts to bring you the good, the bad, and the bollocks of AI and accessibility.
I’m constantly surprised by the native HTML spec. New features are regularly added, and I often stumble on existing, handy elements. While often not as versatile as their JS counterparts, using them avoids bloating your app with extra Javascript libraries or CSS hacks.
…
Native HTML can handle plenty of features that people typically jump straight to JS for (or otherwise over-complicate). I cover some great HTML elements in this article — modals, accordions, live range previews, progress bars and more. You might already know some of these, but I bet there’s something new here for you too.
HTML gives us an increasing number of native elements (most recently dialog) so we don’t have to rely on 3rd party components or roll our own. Not only does this reduce effort on our part, it’s almost certain their accessibility will be better than anything you could do yourself. Here’s a roundup of some you will know about and others you may not.
This week we’ve got another roundup of great articles and more, across CSS, JavaScript, performance, accessibility, design, some history with an ancient website and even more ancient computer, and of course, more than a little about writing software with AI.
Last week I had the privilege to record a conversation with Jeremy Howard, now of Answer.AI, a genuine pioneer of AI and LLMs (and much more besides–recording coming soon).
One particular idea stood out–Jeremy observed that a lot of code has always been written not by ‘professional’ developers, but by all kinds of people solving problems for themselves, their team, or their organisations, in everything from spreadsheets, to Visual Basic and database products like FileMaker.
Using AI for writing such software will only supercharge such efforts.
Meanwhile software engineering, the writing of systems that last and evolve over long time frames, is a lot more than generating code: a huge part of it is maintaining code.
But for 15 years or more (VC-fuelled/hackathon/time-to-demo culture) we’ve entirely privileged “time to first demo”, and AI code generation is only going to fuel that (“a quarter of startups in YC’s current cohort have codebases that are 90% AI-generated”, per Benedict Evans).
The tech debt these code bases are about to accrue…
All that is to say, AI and LLMs are already transforming what it is to write software, as we have covered extensively in the weekly roundups for some time now. If you write software, or manage software projects, this is something I think you should be giving quite a bit of thought to.
Vibe coding, some thoughts and predictions – by Andrew Chen
We’ve all been surprised by LLMs being good at writing/brainstorming/generating text, but along the way, we also discovered it was surprisingly good at writing code. This was first harnessed by coding co-pilot features in IDEs like Cursor, but as many of you have followed, “vibe coding” is the new thing, coined by the great Andrej Karpathy:
There have always been ‘easier’ ways to write software. From Visual Basic, to Foxbase and FileMaker, even spreadsheets, a huge amount of useful, valuable software has been written to automate workflows, increase productivity, even run whole companies. Traditionally, a subset of software engineers have looked down on such approaches to writing software as “not real programming”.
But as Andrew Chen observes, “vibe coding is happening”. Whether you’re particularly enamoured with the term or not, using LLMs to generate code is already transforming how software is written. But as Chen also observes, ‘we are in the command line interface days of vibe coding…’. What it ends up looking like is purely speculation right now–but remember what Alan Kay once said ‘the best way to predict the future is to invent it’.
Will the future of software development run on vibes?
Instead of being about control and precision, vibe coding is all about surrendering to the flow. On February 2, Karpathy introduced the term in a post on X, writing, “There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He described the process in deliberately casual terms: “I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”
As Jeremy Howard observed to me in a recent conversation (recording coming soon), there’s always been a divide between people who write software to do tasks they need done–traditionally perhaps with Visual Basic, or consumer databases–and capital-S Software Engineering. So does a shift toward ‘vibe coding’ really represent a shift at all?
Welcome to the Jam: The looney story of the decades-old ‘Space Jam’ website
In 1996, that team had an especially big movie to market starring the Looney Tunes and NBA legend Michael Jordan: “Space Jam.” What differentiates this site from the rest is something so unlikely that it sparked a viral moment of collective glee among nostalgic “Space Jam” fans and web nerds alike: this 29-year-old website is still up. Not only that, but the site still looks and functions today exactly as it did in 1996.
WBUR, Boston’s NPR radio station, has a story on one of the Web’s first breakout sites, Space Jam, the site built for an animated/live action crossover Warner Bros film of the mid 90s featuring Bugs Bunny and Michael Jordan. It, unlike almost the entire web of the time, is still online and unchanged.
This is the story of Jack Tramiel, one of the most explosive and ruthless founders the computer industry has ever seen. It is the story of four machines—the PET, VIC-20, Commodore 64, and Atari ST—and of the man who ruled home computing for over 20 years.
If you think the days of Mac versus Windows, let alone Android versus iPhone, were tribal, then you should have been around in the early days of microcomputers (AKA home computers, and eventually Personal Computers, or ‘PCs’). I knew grown men much older (and more well off) than me, adherents of the Apricot computer, who would refer to Apple as ‘the other fruit flavoured computers’. Commodore ruled the roost, and this is the story of its founder.
I think there’s value in knowing the history of the machines and technologies that have so shaped the modern world. I hope you enjoy what was a walk down memory lane for me, though ancient history for most in our industry these days!
We love a pseudo-class at Conffab. Here Ollie Williams looks at the little-known :open, and how we can use it to style all manner of elements, like dialog, select and various inputs that open a picker, when that picker is showing.
Your accessibility tooling deserves the same first class treatment as the rest of your stack.
TLDR: Get your accessibility tooling off your developers’ machines and into CI. This is one of those pieces I would share with folks who are new to advocating for automated accessibility testing in their engineering orgs; especially if you feel stuck at that phase of “I know all the tools we should be using, but now what?”
The latest version of Chrome (134) comes with a new light-dismiss behavior for the dialog element, which enables a native click-outside feature. That’s fantastic! Reading the announcement, I wondered how many ways there are to close a dialog element.
The specification states that a user can send a close request to the user agent, indicating that the user wishes to close something currently being shown on the screen, such as a popover, menu, or dialog. There are also close watchers, which listen to other close or cancel actions.
My goal is to collect all close requests and watchers in this blog post. Please get in touch if anything is missing.
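For custom UI that isn’t a dialog or popover, the CloseWatcher API lets you respond to those same close requests (Esc, the back gesture on Android, and so on). A minimal sketch, with hideMyPicker standing in for your own code:

// CloseWatcher isn't supported everywhere yet, so feature-detect first
if ('CloseWatcher' in window) {
  const watcher = new CloseWatcher();
  // fires for Esc, the Android back gesture, and other platform close requests
  watcher.addEventListener('close', () => hideMyPicker());
  // call watcher.destroy() if the picker is dismissed some other way
}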
And by LLMS I mean: (L)ots of (L)ittle ht(M)l page(S).
I recently shipped some updates to my blog. Through the design/development process, I had some insights which made me question my knee-jerk reaction to building pieces of a page as JS-powered interactions on top of the existing document.
With cross-document view transitions getting broader and broader support, I’m realizing that building in-page, progressively-enhanced interactions is more work than simply building two HTML pages and linking them.
I’m calling this approach “lots of little HTML pages” in my head. As I find myself trying to build progressively-enhanced features with JavaScript — like a fly-out navigation menu, or an on-page search, or filtering content — I stop and ask myself: “Can I build this as a separate HTML page triggered by a link, rather than JavaScript-injected content built from a button?”
For years Web development has tended toward over-engineering, and Byzantine solutions to what could be simple problems. The SPA architecture is perhaps the most elaborate of these: in response to the slick interactions and navigation of native mobile apps, we built a complex stack of technologies.
But a lot of this over-engineering may no longer be needed, with new technologies like View Transitions. Here Jim Nielsen explains how he replaced complex JavaScript-driven navigation and interaction with simple page loads which, animated with View Transitions, give a modern experience.
At Conffab we use, for the most part, a pretty traditional MPA approach. When you visit a conference page, a presentation page, or a speaker page, they are all individual, standalone HTML pages. We have used the simplest possible View Transitions to animate the navigation between them for a long time now–though originally the effect was apparent only in nightly versions of Chrome, then in stable Chrome, then in Safari Technology Preview, and now in Safari itself, though still not in Firefox.
But that’s ok–it’s progressive enhancement at its finest. We sprinkle the tiniest amount of CSS
@view-transition {
navigation: auto;
}
on our pages, and now the animations when we move between them are smooth.
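By default the browser simply cross-fades between the old and new page. If you want to tweak that default, the generated pseudo-elements can be styled too; for example, to slow the fade down a touch:

::view-transition-old(root),
::view-transition-new(root) {
  animation-duration: 400ms;
}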
Simply shipping a product that works is no longer enough; everyone can do that, especially now with AI. It’s not the differentiator anymore, as people expect things to work. What makes a product stand out is the brand, design, how intuitive it is, the overall experience. Taste is what matters.
“In a world of scarcity, we treasure tools. In a world of abundance, we treasure taste.”
Anu Atluru, Taste is Eating Silicon Valley
But what is good taste? It’s commonly mistaken as personal preference, but it’s more than that — it’s a trained instinct. The ability to see beyond the obvious and recognize what elevates. It’s why some designs feel effortless while others feel contrived. So the real question is, how do you train that instinct?
Steve Jobs famously said of Microsoft that “they have no taste”. But what is it, and how might you develop taste? We quoted a similar piece from Elizabeth Goodspeed in April last year.
AI is reshaping UI — have you noticed the biggest change yet?
The way we interact with software is anything but static. Sometimes it’s a gentle evolution, other times a jarring leap. Today, a growing wave of design pioneers, including Vitaly Friedman, Emily Campbell and Greg Nudelman are dissecting emerging patterns within AI applications, mapping out the landscape that refuses to stand still. At first glance, this might seem like yet another hype cycle, the kind of breathless enthusiasm that surrounds every new tech trend. But take a step back, and a deeper transformation becomes apparent: our interactions with digital systems are not just changing; they are shifting in their very essence.
Developers call the style of programming where the code tells a computer how to do something ‘imperative’ programming, and where code tells a computer what needs to be done declarative programming.
Our UIs too have traditionally been imperative–from the command line, where literally step by step we tell the computer what to do, to WYSIWYG and WIMP interfaces where we select the object to be acted on, and apply the action (make this text bold, copy this file to this folder).
But Generative AI systems are much more declarative in nature–we tell the system the outcome we want, not the steps we take to get there. Prompts are a description of what we want done. Here Tetiana Sydorenko considers how AI is reshaping interactions as we know them, driving a new UI paradigm–more declarative than imperative.
Over the years, we have been used to using CSS pre-processors like Sass for a use case like applying opacity to a pre-defined color. Today, we have a new way to do that and more with CSS relative colors. In this article, I aim to shed light on that and introduce how it works, along with many practical examples.
Ahmad Shadeed is one of the very best communicators and educators on CSS, web design and development. Here he covers CSS relative colours, a powerful and valuable addition to CSS, now baseline.
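As a small taste of the syntax (the colour values are mine, not Ahmad’s), deriving a translucent tint and a darker hover shade from a single custom property looks like this:

:root {
  --brand: #0066cc;
}
.button {
  background: var(--brand);
  /* a translucent tint of the brand colour, no pre-processor required */
  border-color: rgb(from var(--brand) r g b / 40%);
}
.button:hover {
  /* a slightly darker variant derived from the same custom property */
  background: oklch(from var(--brand) calc(l - 0.15) c h);
}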
WebGPU is a modern graphics API designed to provide high-performance graphics and computation capabilities across different platforms, including web browsers, desktops, and mobile devices. It is intended to be a successor to the WebGL API, offering more advanced features, better performance, and greater flexibility for developers. It offers several advantages over the WebGL API:
Enforces the use of asynchronous calls for various operations, reducing bottlenecks on the main thread.
Introduces compute shaders (i.e., the ability to run computations on the GPU).
Allows rendering to multiple HTML canvas elements using only one WebGPU device.
Recently we mentioned Your first WebGPU app, a tutorial to learn WebGPU programming using JavaScript (implementing Conway’s game of life). If that took your fancy, here’s a more in-depth WebGPU tutorial to get stuck into.
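Getting a device and a canvas context is only a few lines; the canvas selector is my own, and error handling is kept to a minimum:

// inside an async function (or a module, where top-level await is allowed)
if (!navigator.gpu) throw new Error('WebGPU not supported in this browser');

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const canvas = document.querySelector('#gpu-canvas'); // a canvas of your own
const context = canvas.getContext('webgpu');
context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});
// from here you create buffers, WGSL shader modules and render/compute pipelines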
In recent months, I’ve returned to writing code daily. It’s been a lot of fun. While I enjoy Swift, Python, and Ruby, we’ve been building in TypeScript lately since it’s a good fit for our latest project.
After about a decade away from regularly writing JavaScript, it’s been fun to catch up on ten years of progress all at once. For example:
React has evolved from a little experiment thought to boost performance, into a sprawling ecosystem thought to hinder performance.
Platform features like ES Modules, fetch, view transitions, and async/await have made the web a nicer platform to build directly for
Serverless has gone from a wild new idea to well-understood
Cursor is especially good at working in TypeScript, which mostly eliminates boilerplate tedium
Modern build and packaging tools like vite, pnpm, and esbuild have made the tooling around JS nicer and much faster
All of the above has taken universal JS – sharing code between the client and the server – from barely-possible to well-supported
These changes have each boosted the ecosystem in its own way. And each has fuelled one dynamic that has not changed: choosing the right JavaScript framework is hard, man.
Allan Pike recently returned to full time coding after a decade away–and he has some lessons and thoughts. I can’t imagine what it would be like to come back after a few months, let alone a decade!
JavaScript | 2024 | The Web Almanac by HTTP Archive
JavaScript is essential for creating interactive web experiences, driving everything from basic animations to advanced functionalities. Its development has significantly enhanced the web’s dynamic capabilities.
However, this heavy dependence on JavaScript involves compromises. Each stage—from downloading and parsing to execution—demands substantial browser resources. Using too little can compromise user experience and business objectives while overusing it can lead to sluggish load times, unresponsive pages, and poor user engagement.
In this chapter, we will re-evaluate JavaScript’s role on the web and offer recommendations for designing smooth, efficient user experiences.
After a hiatus of a year, the Web Almanac returns with a whole new in-depth look at how we really use web technologies today–including, of course, JavaScript.
But today I want to figure out what’s really going on. How do you use AI? What tools have you tried? And will the robots uprising happen before we get GTA 6?
We appreciate your time and input, and we look forward to sharing the results with the community. Let’s explore how AI is shaping the future of web development together!
The devographics folks who bring you the ‘State of…’ surveys have a new one–the State of Web Dev AI. Whether you are a proponent or not, take it and let them know of your experience and thoughts.
In 2007, on my first trip to New York City, I grabbed a brand-new DSLR camera and photographed all the fonts I was supposed to love. I admired American Typewriter in all of the I <3 NYC logos, watched Akzidenz Grotesk and Helvetica fighting over the subway signs, and even caught an occasional appearance of the flawlessly-named Gotham, still a year before it skyrocketed in popularity via Barack Obama’s first campaign.
But there was one font I didn’t even notice, even though it was everywhere around me. Last year in New York, I walked over 100 miles and took thousands of photos of one and one font only.
This in depth essay on the font Gorton by Marcin Wichary (author of this wonderful book on the history of keyboards) got a lot of attention when it came out a couple of weeks back, and rightly so. I had the privilege of spending a few hours with Marcin a couple of years back when he visited Sydney. His passion for such things is palpable.
A page feels sluggish and unresponsive when long tasks keep the main thread busy, preventing it from doing other important work, like responding to user input. As a result, even built-in form controls can appear broken to users—as if the page were frozen—never mind more complex custom components.
scheduler.yield() is a way of yielding to the main thread—allowing the browser to run any pending high-priority work—then continuing execution where it left off.
This keeps a page more responsive and, in turn, helps improve Interaction to Next Paint (INP).
scheduler.yield offers an ergonomic API that does exactly what it says: execution of the function it’s called in pauses at the await scheduler.yield() expression and yields to the main thread, breaking up the task. The execution of the rest of the function—called the continuation of the function—will be scheduled to run in a new event-loop task.
A great talk by Nishu Goel a couple of years back covered yielding as a performance technique (JavaScript being single threaded means long tasks can make for sluggish, or frozen, interfaces). Here’s an in-depth look at scheduler.yield, a way to yield to the main thread. While it’s Chromium-only for now, it can be used in a progressively enhancing way, which is covered as well.
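In practice it looks something like this; processItem is a stand-in for your own per-item work, and the fallback keeps things progressively enhancing in browsers without the API:

async function processAll(items) {
  for (const item of items) {
    processItem(item); // your own potentially expensive work goes here
    if ('scheduler' in window && 'yield' in scheduler) {
      await scheduler.yield();               // hand control back to the browser (Chromium)
    } else {
      await new Promise(r => setTimeout(r)); // simple fallback elsewhere
    }
  }
}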
Over the past few decades, there has been a great deal of fascinating research into how human beings engage with technology. These studies – many of which have findings that have persisted over the years – demonstrate that we don’t just want our technology to be fast, but at a deep neurological level, we need it to be fast. Because our “need for speed” is deeply rooted in our neural wiring, it is unlikely to change, no matter how much we might wish it could.
The folks at SpeedCurve look at the psychology of speed, and why performance matters from the human perspective for websites. Selling investment in performance improvements, especially to decision makers who use fast networks and devices is not always easy–this might help.
We’ve just released the first round of speakers for this year’s Code conference, taking place in Melbourne (and online) June 12th and 13th. So, what’s Code focussed on in 2025 and why should you and your team attend?
The front end grows up
For too long, front-end development has been viewed as mere decoration—the attractive surface laid over the “real” engineering happening behind the scenes. But the web is now mission-critical for every organisation, demanding robust solutions in performance, security, testing, and architecture. Today’s front end is sophisticated, complex, and critical to you and your organisation’s success.
At Code, we’ll explore the technologies and practices vital to modern front-end engineering in 2025. Join us in Melbourne on June 12th and 13th to learn, connect, and shape the direction of your career and the sites you build.
Super Special Offers (Until March 31st!)
To kick off 2025, here are some great offers available only until March 31st.
$1000 OFF an All-Access ticket to all 4 Web Directions conferences (Code, Code Leaders, Dev Summit and Next, excluding UX Australia) – for just $1995. Use the code 1000off2025allaccess or register here.
$100 OFF Conffab Premium – all our conferences streamed live, live partners conferences + a huge on-demand catalog for just $595. Use code conffab25proearlybird or subscribe here.
Register for Code, or any 2025 conference (in person or streaming), before March 31st and get a 12-month Conffab Pro membership (worth $195) FREE! Just register for an event and it’ll all be set up.
The sessions
Here’s our first round of sessions, with more to be announced soon.
‘Was it a noisy neighbour on the shared database? Something malicious? Solar flares?!’ When systems crash despite normal logs and metrics, it’s time to explore the third pillar of observability: tracing.
David Bell will walk us through how OpenTelemetry tracing uncovers those mysterious production issues that logs and metrics miss.
Bring your debugging nightmares and leave with practical tools to illuminate your system’s black boxes.
Ryan Seddon demonstrates how advances in browser-based LLMs are transforming web applications. Discover the critical tradeoffs between model size, performance, and capability as on-device AI moves from experimental to production-ready.
If you’re building AI-enhanced products but concerned about server costs, latency, or privacy implications, this session reveals how the browser itself is becoming an AI powerhouse.
Remember when websites worked without JavaScript? Siobhan Willoughby makes a provocative case for moving beyond React and rethinking our dependency on client-side rendering.
Drawing on her extensive career spanning pre-SPA, SPA-dominated, and now potentially post-SPA eras, she offers a historical perspective few current developers have experienced firsthand.
If you’re questioning whether our current development patterns have veered too far from the web’s fundamental strengths, this talk articulates the growing pushback against JavaScript-dependent architectures.
Adobe’s Stephanie Eckles explores the strange and often unexpected behavior of CSS across shadow boundaries. Learn why global styles that work perfectly elsewhere suddenly break within the shadow DOM, and discover techniques to leverage rather than fight against this architectural difference.
For anyone building or using web components, this session provides crucial knowledge for maintaining visual consistency and developer sanity.
Security isn’t a feature—it’s a core business requirement. Janna Malikova reveals how the Secure by Design initiative is transforming development practices worldwide.
Learn how to move beyond reactive security patches to proactively building systems that anticipate and prevent vulnerabilities and walk away with a toolkit of resources, best practices, and implementation strategies that prepare your applications for the realities of today’s threat landscape.
JavaScript Maps and Sets are getting a functional programming upgrade. Zach Jensz demonstrates how the latest JavaScript additions bring the expressiveness and clarity of Array methods to these collection types.
See how these new methods transform verbose, error-prone iteration patterns into concise, readable operations. If you work with complex data structures or care about writing maintainable JavaScript, these additions will immediately improve your everyday coding workflows.
Server-side rendering and cloud-dependent architectures have become our default, but at what cost? Kritiketan Sharma examines how our reliance on centralized servers creates single points of failure, latency issues, and frustrated users.
Discover how local-first architecture flips the model: using servers as lightweight mediators rather than authorities, enabling truly resilient, collaborative applications. If you’re building apps that need to work everywhere, regardless of connectivity, this talk challenges everything you thought you knew about modern web architecture.
What if reactive programming became a native JavaScript feature instead of a framework-specific implementation? Julian Burr examines the TC39 Signals proposal and its potential to transform how we build user interfaces.
This isn’t just about a new API—it’s about the JavaScript language itself evolving to address the fine-grained reactivity problem frameworks have been solving separately for years. For developers navigating today’s framework landscape, understanding Signals provides crucial context for what’s coming next.
AI-generated tests that actually cover edge cases? Security vulnerability detection before deployment? Yas Adel Mehraban reveals advanced AI techniques that extend far beyond simple code completion.
Discover how leading development teams are using GenAI for automated test case design, continuous security monitoring, and proactive defense mechanisms. If you’re currently using GitHub Copilot or CursorAI merely for coding suggestions, this session will completely transform how you integrate AI into your development workflow.
The sustainable web isn’t just about efficient code—it’s about energy awareness. Fershad Irani will explore how our applications could dynamically respond to power grid conditions. When clean energy is abundant, rich features flourish; when it’s limited, sites automatically optimize. This isn’t just theoretical—it’s the next frontier for environmentally conscious development.
Learn how you can start building applications that harmonize with our planet’s energy rhythms.
This week we released the videos from the ‘front of the front end’ track at Web Directions Developer Summit, on the heels of the recent keynote videos.
All are available to Conffab Pro members (that’s just $19.95 a month or $195 a year), alongside nearly 1,000 presentations from past Web Directions (and other) conferences.
But for our wonderful readers, we’ve got two of these to watch right now, no signup required!
CSS:has(.everything)
Anton Ball
Anton Ball explores the latest and most interesting CSS advancements coming to browsers, including CSS Layers, new selectors like the parent selector :has(), new color functions, layout modes and more.
Elements that ‘popover’ the contents of a page–dialog boxes, tooltips, modals, menus, and notifications–have always been a lot of work to get right. But with the new HTML popover attribute it’s got a whole lot easier.
Writing high-quality developer documentation is a challenging task. One reader visiting your site may have a tricky bug to solve, another may be looking for guidance, while a third just wants to get started. Plus, everyone has their own unique blend of experience and learning preferences—how do you cover all this in your docs?
This is my personal approach to crafting holistic, comprehensive documentation that prioritizes developer experience and meets the needs of all.
I’ve written my fair share of documentation in my time, and read orders of magnitude more. It’s vital, and not usually considered glamorous. Here Chris Nicholas from Liveblocks provides some thoughts on how to do it well.
From Design doc to code: the Groundhog AI coding assistant (and new Cursor meta)
Today, alongside teaching you the technique, I’m announcing the start of a new open-source (yes, I’m doing this as pure OSS and not my usual proprietary licensing) AI headless agentic coding agent called “groundhog”.
Groundhog’s primary purpose is to teach people how Cursor and all these other coding agents work under the hood. If you understand how these coding assistants work from first principles, then you can drive these tools harder (or perhaps make your own!).
We’ve cited Geoff Huntley a couple of times in recent weeks, and he’s just announced a new AI headless agentic coding agent called “groundhog”. I’m still a little skeptical (this seems to put me in a minority right now) about how ‘agentic’ agents might be in the coming months (my experience with all manner of tools for code generation is they tend to drift over time and need to be corrected and nudged back toward the right focus, but the entire field of AI Engineering seems to think otherwise so…), but I’ll certainly be giving this a look.
Custom functions allow authors the same power as custom properties, but parameterized.
They are used in the same places you would use a custom property, but functions return different things depending on the arguments we pass. The syntax for the most basic function is the @function at-rule, followed by the name of the function (a dashed ident, like a custom property name) plus parentheses.
As we’ve observed before, we generally focus on things that are broadly possible now at Conffab, but some things are potentially so impactful that we like to get out in front of them even if they’re not quite ready for primetime. CSS custom functions are one such thing–and they are even available behind a flag in Chrome to experiment with. Learn more with Juan Diego Rodríguez at CSS Tricks.
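As a heavily hedged sketch, the experimental syntax (behind a flag in Chrome, and still subject to change) looks roughly like this:

@function --negate(--value) {
  /* the function's return value */
  result: calc(-1 * var(--value));
}

.card {
  /* the same calculation, reused anywhere, with a parameter */
  margin-block-start: --negate(var(--spacing));
}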
The Modern CDN Means Complex Decisions for Developers
Whether you were building a web site or an application, hosting choices used to be about bandwidth, latency, security and availability (as well as cost), with content delivery networks (CDNs) handling static assets and application delivery networks relying on load balancing for scale.
All those things still matter, but there are also many more choices to take into account — from the backend implications of frontend decisions, to where your development environment lives. CDNs have become complex, multilayer distributed computing systems that might include distributed databases, serverless functions and edge computing. What they deliver is less about static (or even dynamic and personalized) assets and more about global reach, efficiency and user experience.
Eleven years ago, comedy sketch The Expert had software engineers (and other misunderstood specialists) laughing to tears at the relatability of Anderson’s (Orion Lee) situation: asked to do the literally-impossible by people who don’t understand why their requests can’t be fulfilled.
Dan Q approaches numerous LLMs with an impossible programming challenge, first found in a satirical Youtube video from a decade ago. How do the models respond? A fun idea but also instructive.
Roughly, TypeScript is JavaScript plus type information. The latter is removed before TypeScript code is executed by JavaScript engines. Therefore, writing and deploying TypeScript is more work. Is that added work worth it? In this blog post, I’m going to argue that yes, it is. Read it if you are skeptical about TypeScript but interested in giving it a chance.
Hot on the heels of his recent ‘What is TypeScript? An overview for JavaScript programmers‘, Axel Rauschmayer has a sales pitch for TypeScript. I’ll confess that at the scale I use JavaScript (not full-scale apps, but relatively small pieces), and with an aversion to build steps and the like, TypeScript is likely overkill for my purposes. But for many it’s become essential. There are proposals at TC39, the standards body for JavaScript, to introduce type annotations into JavaScript itself, but these are still a long way off at this stage.
Hallucinations in code are the least dangerous form of LLM mistakes
A surprisingly common complaint I see from developers who have tried using LLMs for code is that they encountered a hallucination—usually the LLM inventing a method or even a full software library that doesn’t exist—and it crashed their confidence in LLMs as a tool for writing code. How could anyone productively use these things if they invent methods that don’t exist?
Hallucinations in code are the least harmful hallucinations you can encounter from a model. The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!
I definitely find the LLM tools I use hallucinate APIs, and solutions to the problems I’m working with them to solve. It feels like this is happening less over time. Here Simon Willison observes that these are among the least problematic situations for hallucination.
Notes from my Accessibility and Gen AI podcast appearance
I actually use Large Language Models for most of my alt text these days. Whenever I tweet an image or whatever, I’ve got a Claude project called Alt text writer. It’s got a prompt and an example. I dump an image in and it gives me the alt text. I very rarely just use it because that’s rude, right? You should never dump text onto people that you haven’t reviewed yourself. But it’s always a good starting point.
We chat with Rachel Andrew of Google — a leading voice in web standards and innovative design and a member of the CSS Working Group of the W3C – about the current and future state of web standards and browser compatibility. Some key topics will be the Baseline initiative to clarify browser support for web platform features and Interop 2025, an initiative focused on enhancing interoperability across browser engines.
Source: Youtube
One of our favourite past speakers, Rachel Andrew, talks about cross browser efforts to improve browser compatibility.
One of the most common questions I’m asked is, “is information architecture still relevant now that we have AI?”
Of course, not everyone puts it like that. Instead, they’ll say things like “we won’t need navigation if we have chat” or “AI will organize the website” or “in a world with smart agents, we won’t need UI” or something like that. The gist is the same: Do we need structured information in a world with AI?
One of Generative AI’s strengths is helping humans sift through and navigate information–summarisation and semantic search are two things it does very very well. So what place does Information Architecture have in this world? Jorge Arango argues AI needs Information Architects–read more about why.
Dear Student: Yes, AI is here, you’re screwed unless you take action…
It’s just facts, I’m a straight shooter. I’d rather tell it to you straight and provide actionable advice than placate feelings.
The steps that you take now will determine your success rate with obtaining a SWE role going forward. If you are a high autonomy person then you’re not fucked, as long as you take action.
Geoff Huntley has been a software engineer a long time. He’s seen booms and busts in the market for software engineers come and go. He’s been writing extensively about the impact of LLMs on the profession of software engineering recently, and his latest is a strong wake-up call.
What is TypeScript? An overview for JavaScript programmers
Read this blog post if you are a JavaScript programmer and want to get a rough idea of what using TypeScript is like (think first step before learning more details):
How TypeScript code is different from JavaScript code.
How to run TypeScript code.
How to edit TypeScript code in an IDE.
Etc.
Note: This blog post does not explain why TypeScript is useful. If you want to know more about that, you can read my TypeScript sales pitch.
The always valuable Axel Rauschmayer has a detailed introduction to TypeScript for JavaScript developers. It follows on from his sales pitch for TypeScript we linked to earlier in the post.
Web-AI Client-Side AI for Developers: Jason Mayes-Google’s Web AI Lead
In this episode of ‘Ventures with David,’ host David converses with Jason Mayes, Google’s web AI lead, about the innovations and implications of web AI and product management. They discuss how web AI shifts model deployment from server to client-side, enhancing user testing and reducing costs, latency, and carbon footprint. Jason shares his career journey at Google, focusing on exploring emerging technologies like TensorFlow.js.
They delve into practical applications of web AI, such as 2 billion parameter models running in browsers, enabling cost-effective, real-time functions. Jason emphasizes the significance of JavaScript in production outside academia and urges junior developers to create practical, problem-solving applications. The conversation aims to inspire learning in rapid AI tool deployment and the fundamentals of technical product management.
Source: Youtube
Web AI, the ability to run language models and machine learning in the browser, is something we’re excited about at Conffab. Here, Jason Mayes, who coined the term Web AI, discusses the technology’s applications.
I propose that the advent and integration of AI models into the workflows of developers has stifled the adoption of new and potentially superior technologies due to training data cutoffs and system prompt influence.
I have noticed a bias towards specific technologies in multiple popular models and have noted anecdotally in conversation and online discussion people choosing technology based on how well AI tooling can assist with its usage or implementation.
While it has long been the case that developers have considered documentation and support availability when choosing software, AI’s influence dramatically amplifies this factor in decision-making, often in ways that aren’t immediately apparent and with undisclosed influence.
I’ve definitely found LLMs will default to common, even if outmoded, practices–such as using floats for page layout with CSS. But prompting them to use the particular techniques and technologies you prefer is usually enough to get what you want.
Recently I’ve been thinking a lot about the question ‘what is Software Engineering in an age of LLMs?’. So much so that I’ve started a LinkedIn group for those similarly interested–so if that’s you, please come join.
So this week we return to a grab bag of interesting things I came across the last week or so. Something for everyone I hope!
Comparing local large language models for alt-text generation
Trusting AI to describe my photos wasn’t easy. But after 9,000 images, I had to admit: it often did the job better than me, and at a fraction of the cost.
The use of LLMs to generate alt text is somewhat contentious, with concerns about the accuracy of the descriptions generated. But automating a process that it has proved challenging to get humans to do, for whatever reason, may ensure far more alt text than we get at present, particularly on social media.
At Conffab we use LLMs to provide text descriptions for images in slides in presentations for our accessible slides feature. This is something we did originally by hand, but when a conference might have many thousands of slides, and a reasonable percentage contain images of some sort, this is a very costly and time consuming exercise.
An LLM-based approach very, very significantly sped this up. We do check the output to ensure these are good descriptions, but we have found they are often better descriptions than those done by humans–especially when they are of complex diagrams and charts. Here Dries Buytaert, the founder of Drupal, tested 10 LLMs on this task, and concluded in a follow-up post:
Trusting AI to describe my photos wasn’t easy. But after 9,000 images, I had to admit: it often did the job better than me, and at a fraction of the cost.
It can be surprising for new clients to see just how much of our design process happens in HTML, CSS and (light) JavaScript. While we do plenty of ideation exercises, sketching, wireframes, mockups and more, we like to get our hands dirty in the browser as soon as we can.
There are business and process benefits to this approach, which we’ve written about before. In this article, I hope to answer a much smaller question:
What do I, a designer of 20+ years with many static mockups to his name, personally enjoy about designing in-browser with web standards in 2025?
The earliest web didn’t need design tools–all we had were a few heading levels, paragraphs, lists, and a few inline elements. No colors or fonts or even images. For years these things were added to HTML piecemeal, while designers discovered techniques (better described as ‘hacks’) for creating page layouts and designs with these rudimentary tools.
These approaches could be complex and error prone–using tables with images to create space was a technique that remained for years. Around the same time, though it took years to mature and longer to get widely adopted, CSS introduced more sophisticated layouts with absolute positioning (we discovered the use of float for creating layouts some years later).
And alongside these new technologies and techniques came hybrid web design/development tools like HotPage (I think it was called? From Adobe?), Microsoft FrontPage, and then Dreamweaver, the killer app for Web Design. These WYSIWYG tools ushered in a new era of web design.
In the decades since, we’ve oscillated between coding designs directly, and using design tools, either to prototype or to implement these designs. Douglas Engelbart, a giant of human-computer interfaces, called WYSIWYG ‘what you see is all you get’, observing that when we use such tools we are constrained to only what such tools enable.
When it comes to the web, many designers have continued the practice of designing in the browser. Perhaps first widely articulated by designer extraordinaire Andy Clarke, here Tyler Sticka talks about the benefits of designing in the browser in the age of Figma. A good companion read to the recent What do developers want in a design handoff? (see below).
You can find the <details> element all over the web these days. We were excited about it when it first dropped and toyed with using it as a menu back in 2019 (but probably don’t) among many other experiments. John Rhea made an entire game that combines <details> with the Popover API! Now that we’re 5+ years into <details>, we know more about it than ever before. I thought I’d round that information up so it’s in one place I can reference in the future without having to search the site — and other sites — to find it.
HTML is full of useful elements, some of long standing, that are much less used than they ought to be. Here Geoff Graham goes into detail about <details>, its use and styling.
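One newer trick worth knowing: give several <details> elements the same name attribute and the browser treats them as an exclusive accordion, no JavaScript required. The content here is my own placeholder:

<details name="faq" open>
  <summary>Do I need JavaScript for this?</summary>
  <p>No, the browser handles the open and close behaviour natively.</p>
</details>
<details name="faq">
  <summary>What happens when I open another item?</summary>
  <p>Any other open item in the same named group closes automatically.</p>
</details>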
Before WebGPU, there was WebGL, which offered a subset of the features of WebGPU. It enabled a new class of rich web content, and developers have built amazing things with it. However, it was based on the OpenGL ES 2.0 API, released in 2007, which was based on the even older OpenGL API. GPUs have evolved significantly in that time, and the native APIs that are used to interface with them have evolved as well with Direct3D 12, Metal, and Vulkan.
WebGPU brings the advancements of these modern APIs to the web platform. It focuses on enabling GPU features in a cross-platform way, while presenting an API that feels natural on the web and is less verbose than some of the native APIs it’s built on top of.
Build Conway’s ‘Game of Life’ using WebGPU and vanilla JavaScript and web technologies. Being a bit obsessed with the Game of Life, and interested in learning more about WebGPU, and not being terrible at JavaScript, I have to make time to do this!
The UX Researcher’s Guide to Getting Started with Accessibility Research
Many UX researchers want to conduct more meaningful accessibility testing with their products – and that means including people with disabilities in the process. While organizational commitment to introducing accessible user research methods may be high, progress can easily get bogged down by endless discussions and brainstorming sessions. Despite best intentions, over-analysis can block the path forward.
The truth is that achieving your inclusive product design goals doesn’t have to be a massive undertaking from day one. You can start small and build at a pace that makes sense for your organization. But there are some foundational steps to get right before you jump into recruiting and screening research participants. This article is a great place to start.
Keep reading to access practical advice on how to build on your existing research skills, scope your accessibility research smarter, and achieve the early wins that keep enthusiasm high and momentum strong.
For those rooting for the web, 2025 now has exciting news! Both Chrome and Safari shipped a new and fairly straightforward way to add animations and transitions to your sites — say “Hello” to the View Transitions API. Let me show you how view transitions work by recreating this animation effect with a few freckles of modern web technology. A web technology that is probably the most significant web platform update in years.
An excellent introduction to View Transitions that goes beyond the basics. I cannot get enough of View Transitions, so sorry not sorry, you’ll be reading a lot more about them here.
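As a taster, the heart of the API is a single call that wraps your DOM update; here’s a minimal sketch, with a graceful fallback for browsers that don’t yet support it (the function and durations are just illustrative):

<script>
  // Wrap a DOM change in a view transition, falling back gracefully
  function addItem(list, text) {
    const update = () => {
      const li = document.createElement('li');
      li.textContent = text;
      list.append(li);
    };
    if (document.startViewTransition) {
      document.startViewTransition(update); // browser snapshots old and new state
    } else {
      update(); // no support: just update the DOM with no transition
    }
  }
</script>

<style>
  /* Tweak the default cross-fade between the old and new snapshots */
  ::view-transition-old(root),
  ::view-transition-new(root) {
    animation-duration: 300ms;
  }
</style>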
Implementing a Product Model in a non-product enterprise
Product-centric companies like Facebook, Instagram, and Canva aren’t merely companies that happen to have a product; the product is virtually the whole company. There is no distinction between the company, its brand, and the SaaS product they offer to customers.
However, this model doesn’t translate neatly into the enterprise context. Digital products, particularly those delivered through a SaaS model, are essentially services. Furthermore, the approach needed for success varies greatly between contexts, which are many and varied in an enterprise setting vs the simplicity of a startup.
Enterprises like banks, media companies, or healthcare providers cannot simply adopt the product models that work for tech companies. Their challenges are more complex, requiring tailored strategies that provide additional methods and nuanced application in the spaces above and in-between these ‘products’.
Digital sovereignty is a real problem that matters to real people and real businesses in the real world, it can be explained in concrete terms, and we can devise pragmatic strategies to improve it. To do this, I will go through the following steps:
First, I will briefly define sovereignty to make sure that we are on the same page, and explain how digital sovereignty is built from digital infrastructure.
Second, I will explain why it comes into conflict with democratic sovereignty, and how both the politics of tech companies (that have been authoritarian for a long time) and the current geopolitics make the situation particularly challenging.
Finally, I will offer a series of high-level strategies that can be deployed to improve digital sovereignty.
Robin Berjon argues that digital sovereignty is important because control over digital infrastructure translates to power—both politically and economically. He sees digital sovereignty as a means of reinforcing democratic values, protecting autonomy, and ensuring that digital infrastructure serves the public good rather than the interests of a few dominant corporations or geopolitical powers.
Here he highlights the conflict between corporate control by tech giants and democratic sovereignty, noting the authoritarian tendencies of these corporations and the geopolitical challenges they present. Incredibly timely.
Everything you need to know about Invoker Commands | London Web Standards
command and commandfor attributes are a brand new web platform feature coming in 2025. Here’s everything you need to know about them, plus some things you really don’t.
Invokers are something a lot of people have wanted in the Web platform for a long time–a way of declaratively triggering built-in behaviors without JavaScript.
For example, a <button popovertarget="myPopover"> opens a <div popover id="myPopover">, enabling native pop-ups without any scripting.
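The new commandfor and command attributes generalise this pattern to other built-in behaviours. Here’s a minimal sketch, assuming the syntax as currently specified (the ids and labels are purely illustrative, and browser support is still rolling out):

<!-- Declaratively open and close a dialog: no JavaScript needed -->
<button commandfor="signup-dialog" command="show-modal">Sign up</button>

<dialog id="signup-dialog">
  <p>Thanks for your interest!</p>
  <button commandfor="signup-dialog" command="close">Close</button>
</dialog>

<!-- The same pattern drives popovers -->
<button commandfor="hint" command="toggle-popover">Toggle hint</button>
<div popover id="hint">A popover, toggled declaratively.</div>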
Here Keith Cirkle goes into them in detail.
And keep an eye out for a fantastic talk on popovers coming up soon on Conffab.
What do developers want in a design handoff? We asked them
When designers finish designing an app and deliver the design specs to the development team to build, mutual frustration frequently ensues. The design team is worried the development team won’t realize their vision and the development team is worried about all the back-and-forth that will be necessary to make sure they’re actually building what needs to be built.
I have been building so many small products using LLMs. It has been fun, and useful. However, there are pitfalls that can waste so much time. A while back a friend asked me how I was using LLMs to write software. I thought “oh boy. how much time do you have!” and thus this post.
(p.s. if you are an AI hater – scroll to the end)
I talk to many dev friends about this, and we all have a similar approach with various tweaks in either direction.
Here is my workflow. It is built upon my own work, conversations with friends (thx Nikete, Kanno, Obra, Kris, and Erik), and following many best practices shared on the various terrible internet badplaces.
This is working well NOW, it will probably not work in 2 weeks, or it will work twice as well. ¯\_(ツ)_/¯
It is such early days of working with LLMs for code generation that we’re still just beginning to work out workflows, which is why it is fascinating to see people share theirs, as Harper Reed does here. I’ve tried some of his suggestions and suggest you give them a try too.
Thoughts On A Month With Devin
As a team at Answer.AI that routinely experiments with AI developer tools, something about Devin felt different. If it could deliver even half of what it promised, it could transform how we work. But while Twitter was full of enthusiasm, we couldn’t find many detailed accounts of people actually using it. So we decided to put it through its paces, testing it against a wide range of real-world tasks. This is our story – a thorough, real-world attempt to work with one of the most hyped AI products of 2024.
One of the areas where LLMs seem to do particularly well is software engineering. In my experience they can make me far more productive, and as Simon Willison has observed, make developers more adventurous.
I’ve been starting many more projects I knew I could do but didn’t know how long they might take, projects I would otherwise not even have commenced. I’ve built numerous internal tools for streamlining the production of Conffab content using these technologies, and a vector search engine for all of Taylor Swift’s lyrics using Python, a language I had only passing familiarity with. But these are quite constrained, relatively low-stakes problems. The ambition of the likes of Replit and Devin is far greater: to replace the writing of code almost entirely, not simply to augment existing experienced developers.
The folks at Answer.AI put Devin to the test here, with less than stellar results, at least for now.
I’ll be honest and say that the View Transition API intimidates me more than a smidge. There are plenty of tutorials with the most impressive demos showing how we can animate the transition between two pages, and they usually start with the simplest of all examples.
I said there was more View Transitions content coming! I do agree with Geoff Graham that with this new web technology, the step from simple examples to anything more can feel far from trivial. Here he looks at taking the next step beyond the basics.
Natural language UIs are touted by many to be the future of user interfaces. But right now, they still look a lot like the past: a text-based prompt, with text-based answers. (If you squint, it could be MS-DOS.) While there’s growing support to bring richer, more graphical interfaces to these systems, they mostly rely on proprietary, centralized integrations. Will the future of computing look like a handful of privately controlled platforms, or an open thriving ecosystem like the web?
Web Applets are a new open standard for allowing language models to use rich, graphical software, built upon the web.
They are small, local-first, interoperable bits of software that can be used and read by both a human and a machine. This talk introduces the problem that Web Applets aims to solve, shows how they work, and reveals a few examples in action. Attendees will be encouraged to integrate Web Applets into their own applications and collaborate with us in shaping the technology’s future, to help keep software open!
Source: YouTube
Rupert Manfredi thinks a lot about the future of the human experience of the Web and technology. He’s worked at Mozilla, Google and elsewhere on these ideas, and is now developing an open, Web-based platform for AI agents to interoperate. Here he talks about Unternet, that platform.
This is almost certainly the first in a series of Not A Tech Bro posts that could share the same title. Today, I want to talk about diversity, equity and inclusion (DEI) programs, and why I worked so hard to establish them at Cloudera. MAGA hates DEI, and stole the term “woke” to insult folks who embrace it. I’m stealing it back to explain why they’re wrong.
I am, proudly, pretty damn woke.
If you’re the CEO of a tech company doing innovative work, you are in a constant struggle with every other tech company CEO to recruit and retain top-tier talent. Especially in its early days, the major threat to a start-up isn’t competition. It’s mediocrity.

Received wisdom among the MAGA crowd is that DEI programs drive mediocrity. Of course if you elevated skin tone or gender above everything else in hiring, that’d be true. But of course that’s a dumb mistake and easy to avoid. It for sure isn’t how I used DEI at my company.

There are three things that concentrating on DEI can do for you.
Right now a lot of incredibly powerful and wealthy folks in technology are distancing themselves from “DEI” initiatives (those terrible, terrible things: diversity, equity and inclusion). But not everyone. Mike Olson, founder of Cloudera, here strongly articulates the value of these initiatives. Initiatives we strongly support and actively embrace.
OK, that’s a fair bit to keep you occupied until next week!