Developer experiences from the trenches
Thu 29 May 2025 by Michael Labbe
tags product
Many human-computer experiences are based on qualities that are challenging to achieve: cool, light, fast to boot, responsive. It is important to know when to invest in optimizations that deliver these results in meaningful quantities, and when to choose other priorities.
Common knowledge presumes that these qualities improve experiences linearly, but to put it simply: they do not. Human-computer interactions have soft thresholds that map to limits within human physiology. In some human-computer interactions, there are laws such as Fitts’s Law, derived from experimental studies that back an important intuition: thresholds exist in human perception and motor performance.
This has numerous implications for where optimizing engineers should invest their time on a computer product to achieve the best human experiences and define the product’s position in the marketplace.
I am going to suggest some time durations as I lay out these implications. These are based on my own intuition and experience.
Compile times that exceed roughly 20 seconds prompt the programmer to do something else, like read Reddit. This evicts their working memory and results in an absolute productivity cliff.
Framerate and input responsiveness in FPS aiming: related to Fitts’s Law for pointing in two dimensions — aiming benefits from higher framerates. Each multiple of 30 up to 120 has benefits, with 60 being a greatly impactful improvement over 30, but 120 being not much better than 90 fps.
Battery life: “all day” battery life (a guaranteed 8+ hours) means leaving the charger at home and not even needing to think about battery management during the day. An all-day battery may be 200g heavier, but would save a laptop user from carrying a 400g charger. For many users, only hitting seven hours means carrying a charger, whereas eight would not. This crosses a threshold.
Game content creation tool response times affect how much effort the user makes to achieve an intended result. For instance, perfecting environment lighting is a highly iterative process, and how many iterations an author is willing to make appears to be bounded by response times.
There are numerous examples with varying rigour, such as the 400 millisecond Doherty threshold, but I would encourage you to experience as many of these directly as you can. As this is a qualitative matter, lean on your own intuition and judgment to draw your own conclusions where possible.
Because human-computer experiences are thresholded, not all optimizations are valued the same. Ask yourself which thresholds matter to your users, and which ones your product can realistically cross.
You can cross a human-computer experience threshold by choosing which experience to optimize for, thereby building meaningful gains into your product. This can produce superlinear productivity gains, or even just plain enjoyment.
Fri 09 May 2025 by Michael Labbe
tags code
Every modern game is powered by structured data: characters, abilities, items, systems, and live events. But in most studios, this data is fractured—spread across engine code, backend services, databases, and tools, with no canonical definition. Your investment in long-lived game data is tied to code that rots.
Grain DDL fixes that. Define your data once and generate exactly what you need across all systems. No drift. No duplication. No boilerplate. Just one source of truth.
Game data is what gives your game its trademark identity — its feel, the subtlety of balance, inventory interactions, motion, lighting and timing. Game data is where your creative investment lives, and you need to plan for it to outlive the code.
Grain DDL is a new data definition language — a central place to define and manage your game’s data models. Protect your investment in your game data by representing it in a central format that you process, annotate, and control.
Typically, games represent data by conflating a number of things:
Game data has a lifecycle: represented in a database, sent across the network, manipulated by a schema-specific UI, and serialized to disk. In each of these steps, there is imperative code that binds the schema to an implementation. More importantly, there is no longer a canonical representation of a central data model. There are only partial representations of it strewn across subsystems, usually maintained by separate members of a game team.
Grain DDL protects your data investment by hoisting the data and its schema outside of implementation-specific code. You write your code and data in Grain DDL, and then generate exactly the representation you need.
Grain DDL comes in two parts: a data definition language, and Code Generation Templates. Consider this simple Grain definition:
struct Person {
    str name,
    u16 age,
}
From there, you can generate a C struct with the same name using code generation templates:
/* for each struct */
range .structs
`typedef struct {`.
/* for each struct field */
range .fields
tab(1) c_typename(.type) ` ` .name `;`.
end
`} ` camel(.name) `_t;`;
end
You get:
typedef struct {
    char *name;
    uint16_t age;
} person_t;
Code Generation Templates are a new templating language designed to make it ergonomic to generate source code in C++-like languages. Included with Grain DDL, they are the standard way to maintain code generation backends. Code Generation Templates include a type system, a standard library of functions, a const-expression evaluator, and the ability to define your own functions inside the templates.
However, if Code Generation Templates do not suit your needs, Grain DDL offers a C API, enabling you to walk your data models and types to perform any analysis, presorting or code generation you require. All of this happens at compile time, preventing the need for runtime complexity.
Having a specialized code generation templating system makes code generation maintainable. This is crucial to retaining the investment in your data.
Grain DDL’s types are largely isomorphic to C plain-old-data types. Grain is designed to produce data that ends up in a typed game programming language like C, C++, Rust, or C#. Philosophically, Grain DDL is much closer to bit- and byte-manipulation environments than to high-level web technology with its dictionary-like objects and ill-defined Number types.
Even so, you can use Grain DDL to convert your game data to and from JSON and write serializers for web tech where necessary. It can also be used to specify RESTful interfaces and their models, bypassing a need to rely on OpenAPI tooling.
Crucially, Grain DDL never compromises on the precision required to specify data that runs inside a game engine.
Grain DDL lets you define base data structures, and then derive from them, inheriting defaults where derived fields are not specified. Consider:
blueprint struct BaseArmor {
    f32 Fire = 1.0,
    f32 Ice = 1.0,
    f32 Crush = 1.0,
}

struct WoodArmor copies BaseArmor {
    f32 Fire = BaseArmor * 1.5
}
In this case, WoodArmor inherits Ice and Crush at 1.0, but Fire has a +50% increase. This flexible spreadsheet-like approach to building out combat tables for a game gives you a ripple effect on data that immediately permeates all codebases in a project.
Grain DDL is the central definition for your types and data. In some cases, a field needs more than its name and type to be fully specified. Consider how health can be limited to a range smaller than its type allows:
struct player {
    i8 health = 100
        [range min = 0, max = 125]
}
Attributes in Grain DDL are a way of expressing additional data about a field, queryable during code generation. It is also possible to define a new, strongly typed attribute that represents your bespoke needs:
attr UISlider {
    f32 min = -100.0,
    f32 max = +100.0,
    f32 step = 1.0,
}

struct damage {
    f32 amount
        [UISlider min = 0.0]
}
This defines a new attribute UISlider which hints that any UI that manipulates damage.amount should use a slider with these parameters. Using data inheritance (described above), the slider’s max and step do not change from their defaults, but the minimum is raised to 0.0.
Grain DDL is intended to run as a precompilation step on the developer’s machine, inserting code that is compiled in the target languages of your choice, and with the compiler of your choice. There is no need to link it into shipping executables like a config parsing library. Remove your dependence on runtime reflection.
Grain DDL runs quickly and can generate hundreds of thousands of lines of code in under a second. The software is simply a standalone executable that is under a megabyte. It is fully supported on Windows, macOS and Linux.
Game developers do not like to slow their compiles down or impose unnecessarily heavy dependencies on their teams. Grain DDL is designed to be as lightweight as possible. In practice, Grain replaces multiple code generators, simplifying the build.
This article only begins to cover the full featureset of Grain DDL.
As game projects scale, the same data gets defined, processed, and duplicated across a growing number of systems. But game data is more than runtime glue — game data is a long-term asset that outlives game code. It powers ports, benefits from analysis, and extends a title’s shelf life. Grain DDL puts that data under your control, in one place, with one definition — so you can protect and maximize your investment.
Grain DDL is under active development. Email grain@frogtoss.com to get early access, explore example projects, and add it to your pipeline.
Fri 09 August 2024 by Michael Labbe
tags code
Sometimes you want to parse a fragment from a string and all you have is C. Parsers for things like rfc3339 timestamps are handy, reusable pieces of code. This post suggests a convention for writing stack-based fragment parsers that can be easily reused or composed into a larger parser.
It’s opinionated, but tends to work for most things so adopt or adapt to your needs.
The idea is pretty simple.
// can be any type
typedef struct {
    // fields go here
} type_t;

int parse_type(char **stream, size_t len, type_t *out);
Pass in a **stream pointer to a null-terminated string. On return, *stream points to the location of an error, or past the end of the parse on success. This means it can point to the null terminator.
Pass in the length of the string to parse to avoid needing to call strlen, or to indicate if the end of a successful parse occurs before the null terminator.
Return can be an int as depicted, or an enum of parse-failure reasons if you need them. The key thing is that zero means success. This allows multiple parses to OR their results together and test for error once, keeping the calling code trivial.
That’s the whole interface. You can compose a larger parser out of smaller versions of these. So, if you want to parse a float (a deceptively hard thing to do) in a document, or key value pairs with quotes or something, you can build, test and reuse them by following this convention.
When you implement a fragment parser you end up needing the same few support functions. This suggests a convention.
Testing whether the stream was fully parsed works well with a macro containing a single expression:
#define did_fully_parse_stream \
    (*stream - start == (ptrdiff_t)len)

int parse_type(char **stream, size_t len, type_t *out) {
    char *start = *stream;

    /* ... parse here, advancing *stream ... */

    if (!did_fully_parse_stream)
        return 1; /* error: input was not fully consumed */

    return 0;
}
Test the next token for a match:
static int is_token(const char **stream, char ch) {
    return **stream == ch;
}
Test the next token and bypass it if it matches. By convention, use this if a token failing to match is not an error.
static int was_token(const char **stream, char ch) {
    if (is_token(stream, ch)) {
        (*stream)++;
        return 1;
    }
    return 0;
}
Expect the next token to be ch, returning zero if it matches. While this functionally does the same thing as was_token with the result inverted, it is semantically useful: a failed match here means an error has occurred.
static int expect_token(const char **stream, char ch) {
    return !was_token(stream, ch);
}
Token classification is very easy to implement using C99’s designated initializers. A zero-filled lookup table can be used to test token class and to convert tokens to values. Note that the stored value is offset by one so that '0' still tests as a digit:

static char digits[256] = {
    /* store value + 1 so '0' is distinguishable from a zero entry */
    ['0'] = 1, ['1'] = 2, ['2'] = 3, ['3'] = 4, ['4'] = 5,
    ['5'] = 6, ['6'] = 7, ['7'] = 8, ['8'] = 9, ['9'] = 10,
};

void func(const char **stream)
{
    /* is it a digit? */
    if (digits[(unsigned char)**stream]) {
        /* yes, convert token to stored integral value */
        int value = digits[(unsigned char)**stream] - 1;
        (void)value;
    }

    /* skip token stream ahead to first non-digit */
    while (digits[(unsigned char)**stream]) (*stream)++;
}
Fri 02 August 2024 by Michael Labbe
tags rant
Recently I had a conversation with a composer who was planning on buying a $5,499 Mac Studio to record music. “It’s the only computer I’ll need to run all of my VSTs and play back all of my tracks”, he remarked. With 24 cores and 64GB of RAM, it sure seemed likely to me. “Are you sure you couldn’t do that on a MacBook Air?” I prompted, genuinely curious about where the resources were going. He seemed taken aback that it might even be a possibility.
Whether or not he needed the extra headroom — and you can make the argument that you would weigh down a lighter recording computer with VSTs and track layering — it was a good reminder that marketing like Apple’s makes people equate professional significance with higher-end devices. Today, most base-model CPUs are good enough for most people. Most professionals do not quantify their computing needs before making a purchase, and so many computers being sold are unnecessarily overpowered. Device marketing encourages this. Over the past couple of decades we benefited from those gains, but I don’t believe that holds anymore for many tasks if you choose the right software.
A hobby of mine is to achieve my intended computing result with the least amount of computing power and dollars I reasonably can. For example, this blog post is being written on a refurbished $250 Thinkpad humming along on a Linux Mint MATE desktop running only Emacs.
It is refreshing to not be precious about an expensive laptop, and to be able to just toss it in a bag. Unlike modern buggy gaming laptops that lack ports, it also wakes from sleep with 100% consistency. I still own a high-end workstation, but I am finding many computing needs can be covered by devices that shipped 5+ years ago: browsing, editing, messaging, some music composing and even coding smaller-scale (read: not Unreal) projects.
Microsoft has announced the end of life of Windows 10 on October 14, 2025. Many highly capable computers, including Threadrippers, Dell XPS laptops and other older high-end configurations, will be unable to run Windows 11 in a secure, supported way. That said, the Steam Hardware Survey, as of this writing, counts Windows 10 as more popular than 11.
This situation has created a rising tension, and one of two things is likely to happen: users will buy new hardware they did not otherwise need, or they will move perfectly good machines to an operating system that still supports them.
Next year will be a great opportunity to pick up refurbished hardware that can do most computing tasks after doing an install of FreeBSD or a Linux distribution. Linux Mint is my preference — it feels supportive of the user like Windows 2000 did, has no obvious subversive agenda, is Ubuntu package compatible, and is entirely snappy on lowend hardware that is slated for deprecation by Microsoft.
Turn Microsoft’s e-waste into your next workhorse computer.
On-device AI is being shoehorned where it has no business going because it is perceived as being able to push tech company valuations. It is being foisted on consumers whether they understand it or not. Meanwhile, we are being told we have to upgrade to new processors and operating systems to receive these fun new experiences.
There is a lot to be said about AI, but as far as my computing device goes — I’m totally fine with staying on the beach while the corporate agendafied first wave hits everybody who jumps in the water. Consumer on-device AI is not going to be a part of my professional workflows until the waters have settled and the hype has passed.
The refurbished device market stands to become very saturated if AI features motivate users to abandon their existing computers. Buy the dip!
By using a refurbished device on Linux you are virtually guaranteed to avoid the first generation of consumer on-device AI which is likely to involve annoying or even dangerous missteps.
Users coming from Apple to other operating systems seem to demonstrate a sensibility — they want to love their new PC, tablet or phone. This is because the device is the nexus of the experience in the Apple world. The hardware is second to none, and you can end up experiencing entirely bug-free workdays if you stay on a well-manicured path. Jumping from a MacBook Pro to an unconfigured Thinkpad on Ubuntu would be like going from an Americano with cream to gritty camping coffee prepared with a hangover.
A shift in mindset about what matters is helpful. I have found it productive to not focus on the device so much as I focus on getting the result I am looking for in my work. Loving the hardware is not the point, and it can be freeing to find the workflow that gets you the result you need outside of loving a device. Imagine how much you can achieve outside the binds of device love!
October 2025 is looking like a great time to pick up a dirt cheap first generation 16-core Threadripper, install Linux on it and have it perform phenomenally for a decade or longer. Now, if Microsoft could just deprecate some of those previous-gen GPUs…