
Software Adoption Revolutions Come from Architectural Shifts, Not Performance Bumps
Tue 30 September 2025
Michael Labbe
#code
Choosing what to work on is one of the most interesting parts of building software. As developers, we often see tooling with suboptimal efficiency and aspire to rewrite it. In many cases, the performance gains can be substantial — but historically it has been user productivity gains that drive adoption at scale, not execution performance alone.
Looking at the history of software adoption revolutions, user productivity was the dominant factor behind the significant, sweeping changes to how we develop. Execution time, exclusive of other improvements, is rarely sufficient to increase productivity enough to produce a large-scale software adoption revolution.
The more impactful thing we can do in designing software is to rethink the solution to the problem from first principles.
Key examples in the history of software that shaped current usage at large:
- Interactive time-sharing terminals replaced batch-processed punch cards, giving a 10x increase in the number of feedback cycles in a day. Faster punch card processing would not have competed with this shift.
- Software purchased and updated online: more retail stores and more efficient disc storage could not win over the convenience of automated bit synchronization afforded by platforms like Steam.
- Owning server racks versus on-demand cloud: In many cases, developers traded provisioning time and cost for execution time and cost. This enabled a generation of developers to explore a business problem space without making a capital-intensive commitment to purchase server hardware upfront.
- Virtualization: Isolated kernel driver testing, sandboxed bug reproduction, and hardware simulation greatly reduced the provisioning time necessary to reproduce system bugs. This underpins the explosion in kernel fuzzing and systems continuous integration (CI). Faster computers alone would not have delivered the same productivity multiplier.
- Live linked game assets: Artists get real-time feedback when they make geometry changes to assets in modelling tools. While optimized exporters and importers are still important, they alone could not produce the productivity multiplier that comes from real-time feedback.
- Declarative, immutable runtimes: Docker vastly reduced environment setup time. Consequently, this massively reduced guesswork as to whether a bug was isolated to a mutable execution environment. Highly optimized test environment provisioning would not have produced these productivity gains at the same scale.
Some of these productivity improvements, unfortunately, came with performance regressions. And yet, the software adoption revolutions occurred nonetheless.
Much of the software we use could benefit from being significantly faster. Reimplementing that software for better performance is like running farther in the same direction. Rearchitecting the software is like running in an entirely different direction. You should go as far as you can in the best direction.
We should write performant software because computing should be enjoyable, because slow software inefficiencies add up, and because high performance software is a competitive edge. The first step, however, is to consider how best to solve the problem. Performance improvements alone may not be deeply persuasive to large groups of users seeking productivity.

Human-computer Experiences are Thresholded, Not Linear
Thu 29 May 2025
Michael Labbe
#product
Many human-computer experiences are based on qualities that are challenging to achieve: cool, light, fast to boot, responsive. It is important to know when to invest in optimizations that deliver these results in meaningful quantities, and when to choose other priorities.
Common knowledge presumes that these qualities improve experiences linearly, but to put it simply: they do not. Human-computer interactions have soft thresholds that map to limits within human physiology. In some human-computer interactions, there are laws such as Fitts’s Law, derived from experimental studies that back an important intuition: thresholds exist in human perception and motor performance.
This has numerous implications for where optimizing engineers should prioritize time on a computer product to achieve the best human experiences and define the product’s position in the marketplace.
I am going to suggest some time durations as I lay out these implications. These are based on my own intuition and experience.
- Compile times that exceed roughly 20 seconds prompt the programmer to do something else, like read Reddit. This evicts your working memory and results in an absolute productivity cliff.
  - Baddeley & Hitch (1974): Working Memory as an Active Model
- Framerate and input responsiveness in FPS aiming: related to Fitts’s Law for pointing in two dimensions — aiming benefits from higher framerates. Each multiple of 30 up to 120 has benefits, with 60 being a greatly impactful improvement over 30, but 120 not much better than 90 fps.
  - NVIDIA: Latency of 30 ms Benefits First Person Targeting Tasks More Than Refresh Rate Above 60 Hz: “We [also] show small but statistically significant improvement in performance of some tasks at higher [>60fps] refresh rates.”
- Battery life: “all day” battery life (a guaranteed 8+ hours) means leaving the charger at home and not even needing to think about battery management during the day. An all-day battery may be 200g heavier, but it saves a laptop user from carrying a 400g charger. For many users, hitting only seven hours means carrying a charger, whereas eight would not. This crosses a threshold.
- Game content creation tool response times affect how much effort the user makes to achieve an intended result. For instance, perfecting environment lighting is a highly iterative process. How many iterations an author is willing to make appears to be bounded by response times (a small sketch of this relationship follows the list):
  - 10ms: 1,000 changes
  - 100ms: 100 changes
  - 1,000ms: 10 changes
  - 10,000ms: 1 change
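One way to read these numbers (an interpretation drawn from the list above, not a measured constant) is that an author tolerates a roughly fixed patience budget of about 10 seconds of total waiting per pass, so the iteration count falls directly out of the response time. A minimal C sketch, assuming that budget:

#include <stdio.h>

/* Assumed for illustration: a fixed "patience budget" of roughly 10
   seconds of total waiting, read off the list above. Iterations an
   author will tolerate ~= budget / response time. */
int main(void)
{
    const double patience_budget_ms = 10000.0; /* assumed, not measured */
    const double response_ms[] = { 10.0, 100.0, 1000.0, 10000.0 };

    for (int i = 0; i < 4; ++i)
        printf("%8.0f ms response -> ~%.0f iterations\n",
               response_ms[i], patience_budget_ms / response_ms[i]);

    return 0;
}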
There are numerous examples with varying rigour, such as the 400 millisecond Doherty threshold, but I would encourage you to experience as many of these directly as you can. As this is a qualitative matter, lean on your own intuition and judgment to draw your own conclusions where possible.
Because human-computer experiences are thresholded, not all optimizations are valued the same. Ask yourself:
- What experiences am I close to surpassing a threshold on?
- What is the cost of the optimization that gets me past that threshold?
- What experiences am I far from surpassing a threshold on?
- Can I trade against the distant threshold to optimize in favour of the closer threshold?
You can cross a human-computer experience threshold by choosing which experience to optimize for, thereby building meaningful gains into your product. This can produce superlinear productivity gains, or even just plain enjoyment.

What is Grain DDL?
Fri 09 May 2025
Michael Labbe
#code
Every modern game is powered by structured data: characters, abilities, items, systems, and live events. But in most studios, this data is fractured—spread across engine code, backend services, databases, and tools, with no canonical definition. Your investment in long-lived game data is tied to code that rots.
Grain DDL fixes that. Define your data once and generate exactly what you need across all systems. No drift. No duplication. No boilerplate. Just one source of truth.
Game data is what gives your game its trademark identity — its feel, the subtlety of balance, inventory interactions, motion, lighting and timing. Game data is where your creative investment lives, and you need to plan for it to outlive the code.
Grain DDL is a new data definition language — a central place to define and manage your game’s data models. Protect your investment in your game data by representing it in a central format that you process, annotate, and control.
Typically, games represent data by conflating a number of things:
- What types does the data use?
- How and where are those types encoded in memory?
- How do those types constrain potential values?
Game data has a lifecycle: represented by a database, sent across a network wire, manipulated by a schema-specific UI and serialized to disk. In each step, a developer binds their data schema to their implementation.
- The types can be converted to and from database types
- The data might be serialized and deserialized from JSON in a web service language like Go
- The data needs to be securely validated and read from the wire into a C++ data structure
- The types likely need a UI implementation for author manipulation
In each of these steps, there is imperative code that binds the schema to an implementation. More importantly, there is no longer a canonical representation of a central data model. There are only partial representations of it strewn across subsystems, usually maintained by separate members of a game team.
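To make the duplication concrete, here is a hedged C sketch of the kind of hand-written binding code the wire-reading step tends to accumulate. The struct, field names, and byte layout are hypothetical; the point is that this same shape is re-declared again, by hand, in the database, the web service, and the tool UI.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, hand-maintained copy of a schema that also exists as a
   SQL table, a Go struct in the web service, and a UI form. Every copy
   must be edited by hand when the design changes. */
typedef struct {
    uint32_t item_id;
    uint16_t durability;
} item_t;

/* Hand-written wire deserialization: yet another place the schema lives.
   A minimal sketch; real code would also handle endianness and versioning. */
static bool item_read(const uint8_t *buf, size_t len, item_t *out)
{
    if (len < 6)
        return false; /* validate the length before reading */

    memcpy(&out->item_id, buf, 4);
    memcpy(&out->durability, buf + 4, 2);
    return true;
}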
Grain DDL protects your data investment by hoisting the data and its schema outside of implementation-specific code. You write your code and data in Grain DDL, and then generate exactly the representation you need.
You Determine What Is Generated
Grain DDL comes in two parts: a data definition language, and Code Generation Templates. Consider this simple Grain definition:
struct Person {
    str name,
    u16 age,
}
From there, you can generate a C struct with the same name using code generation templates:
/* for each struct */
range .structs
    `typedef struct {`.

    /* for each struct field */
    range .fields
        tab(1) c_typename(.type) ` ` .name `;`.
    end

    `} ` camel(.name) `_t;`;
end
You get:
typedef struct {
    char *name;
    uint16_t age;
} person_t;
Code Generation Templates are a new templating language designed to make it ergonomic to generate source code in C++-like languages. Included with Grain DDL, they are the standard way to maintain code generation backends. They include a type system, a standard library of functions, a const expression evaluator, and the ability to define your own functions inside the templates.
However, if Code Generation Templates do not suit your needs, Grain DDL offers a C API, enabling you to walk your data models and types to perform any analysis, presorting, or code generation you require. All of this happens at compile time, avoiding the need for runtime complexity.
Having a specialized code generation templating system makes code generation maintainable. This is crucial to retaining the investment in your data.
Native Code First
Grain DDL’s types are largely isomorphic to C plain old data types. Grain is designed to produce data that ends up in a typed game programming language like C, C++, Rust, or C#. Philosophically, Grain DDL is much closer to bit and byte manipulation environments than it is to a high-level web technology with dictionary-like objects and ill-defined Number types.
Even so, you can use Grain DDL to convert your game data to and from JSON and write serializers for web tech where necessary. It can also be used to specify RESTful interfaces and their models, bypassing a need to rely on OpenAPI tooling.
Crucially, Grain DDL never compromises on the precision required to specify data that runs inside a game engine.
Data Inheritance
Grain DDL lets you define base data structures, and then derive from them, inheriting defaults where derived fields are not specified. Consider:
blueprint struct BaseArmor {
    f32 Fire = 1.0,
    f32 Ice = 1.0,
    f32 Crush = 1.0,
}

struct WoodArmor copies BaseArmor {
    f32 Fire = BaseArmor * 1.5
}
In this case, WoodArmor inherits Ice and Crush at 1.0, but Fire has a +50% increase. This flexible spreadsheet-like approach to building out combat tables for a game gives you a ripple effect on data that immediately permeates all codebases in a project.
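As a hedged illustration of that ripple effect, here is one plausible piece of generated C output for this hierarchy, assuming a backend along the lines of the Person example above; the exact type names, field casing, and how defaults are emitted are determined by your own templates, not fixed by Grain DDL.

/* Illustrative only: generated names and layout depend on your templates. */
typedef struct {
    float Fire;
    float Ice;
    float Crush;
} wood_armor_t;

/* Defaults resolved at generation time: Ice and Crush inherit 1.0 from
   BaseArmor, while Fire is BaseArmor.Fire * 1.5 = 1.5. */
static const wood_armor_t wood_armor_defaults = { 1.5f, 1.0f, 1.0f };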
Unlimited Custom Field Attributes
Grain DDL is the central definition for your types and data. In some locations, you need data in addition to its name and type to fully specify your type. Consider how health can be limited to a range smaller than the type specifies:
struct player {
    i8 health = 100 [range min = 0, max = 125]
}
Attributes in Grain DDL are a way of expressing additional data about a field, queryable during code generation. However, it is possible to define a new, strongly typed attribute that represents your bespoke needs:
attr UISlider {
    f32 min = -100.0,
    f32 max = +100.0,
    f32 step = 1.0,
}

struct damage {
    f32 amount [UISlider min = 0.0]
}
This defines a new attribute UISlider which hints that any UI that manipulates damage.amount should use a slider, setting these parameters. Using data inheritance (described above), the slider’s max and step do not change from their defaults, but the minimum is raised to 0.0.
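Because attributes are queryable during code generation, a custom template can surface them however your tools need. As a hedged sketch, a UI layer might consume a table like the following; the identifiers here are hypothetical, not Grain DDL’s actual output.

/* Illustrative only: a metadata table a custom template could emit from
   the UISlider attribute. */
typedef struct {
    const char *field_name;
    float       min;
    float       max;
    float       step;
} ui_slider_binding_t;

/* damage.amount: min overridden to 0.0; max and step keep the attribute
   defaults of +100.0 and 1.0. */
static const ui_slider_binding_t damage_ui_sliders[] = {
    { "amount", 0.0f, 100.0f, 1.0f },
};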
Zero Runtime Overhead
Grain DDL is intended to run as a precompilation step on the developer’s machine, inserting code that is compiled in the target languages of your choice, and with the compiler of your choice. There is no need to link it into shipping executables like a config parsing library. Remove your dependence on runtime reflection.
Grain DDL runs quickly and can generate hundreds of thousands of lines of code in under a second. The software is simply a standalone executable that is under a megabyte. It is fully supported on Windows, macOS and Linux.
Game developers do not like to slow their compiles down or impose unnecessarily heavy dependencies on their teams. Grain DDL is designed to be as lightweight as possible. In practice, Grain replaces multiple code generators, simplifying the build.
Salvage Your Game Data
This article only begins to cover the full featureset of Grain DDL.
As game projects scale, the same data gets defined, processed, and duplicated across a growing number of systems. But game data is more than runtime glue — game data is a long-term asset that outlives game code. It powers ports, benefits from analysis, and extends a title’s shelf life. Grain DDL puts that data under your control, in one place, with one definition — so you can protect and maximize your investment.
Grain DDL is under active development. Email grain@frogtoss.com to get early access, explore example projects, and add it to your pipeline.