
Software Adoption Revolutions Come from Architectural Shifts, Not Performance Bumps

Tue 30 September 2025
Michael Labbe
#code 

Choosing what to work on is one of the most interesting parts of building software. As developers, we often see tooling with suboptimal efficiency and aspire to rewrite it. In many cases, the performance gains can be substantial — but historically it has been user productivity gains that drive adoption in the large, not simply execution performance.

Throughout the history of software adoption revolutions, user productivity has been the dominating factor in significant, sweeping changes to how we develop. Execution time, exclusive of other improvements, is rarely sufficient to increase productivity enough to produce a large-scale software adoption revolution.

The more impactful thing we can do in designing software is to rethink the solution to the problem from first principles.

Key examples in the history of software that shaped current usage at large:

  1. Interactive time-sharing terminals replaced batch-processed punch cards, giving a 10x increase in the number of feedback cycles in a day. Faster punch card processing would not have competed with this shift.

  2. Software purchased and updated online: more retail stores and more efficient disc storage could not compete with the convenience of automated bit synchronization afforded by platforms like Steam.

  3. Owning server racks versus on-demand cloud: In many cases, developers traded provisioning time and cost for execution time and cost. This enabled a generation of developers to explore a business problem space without making a capital-intensive commitment to purchase server hardware upfront.

  4. Virtualization: Isolated kernel driver testing, sandboxed bug reproduction, and hardware simulation greatly reduced the provisioning time necessary to reproduce system bugs. This underpins the explosion in kernel fuzzing and systems continuous integration (CI). Faster computers alone would not have delivered the same productivity multiplier.

  5. Live linked game assets: Artists get real-time feedback when they make geometry changes to assets in modelling tools. While optimized exporters and importers are still important, this alone could not result in the productivity multiplier that comes from real-time feedback.

  6. Declarative, immutable runtimes: Docker vastly reduced environment setup time. Consequently, this massively reduced guesswork as to whether a bug was isolated to a mutable execution environment. Highly optimized test environment provisioning would not have produced these productivity gains at the same scale.

Some of these productivity improvements, unfortunately, came with performance regressions. And yet, the software adoption revolutions occurred nonetheless.

Much of the software we use could benefit from being significantly faster. Reimplementing that software for better performance is like running farther in the same direction. Rearchitecting the software is like running in an entirely different direction. You should go as far as you can in the best direction.

We should write performant software because computing should be enjoyable, because slow software inefficiencies add up, and because high performance software is a competitive edge. The first step, however, is to consider how best to solve the problem. Performance improvements alone may not be deeply persuasive to large groups of users seeking productivity.

*

What is Grain DDL?

Fri 09 May 2025
Michael Labbe
#code 

Every modern game is powered by structured data: characters, abilities, items, systems, and live events. But in most studios, this data is fractured, spread across engine code, backend services, databases, and tools, with no canonical definition. Your investment in long-lived game data is tied to code that rots.

Grain DDL fixes that. Define your data once and generate exactly what you need across all systems. No drift. No duplication. No boilerplate. Just one source of truth.

Game data is what gives your game its trademark identity — its feel, the subtlety of balance, inventory interactions, motion, lighting and timing. Game data is where your creative investment lives, and you need to plan for it to outlive the code.

Grain DDL is a new data definition language — a central place to define and manage your game’s data models. Protect your investment in your game data by representing it in a central format that you process, annotate, and control.

Typically, games represent data by conflating a number of things:

  1. What types does the data use?
  2. How and where are those types encoded in memory?
  3. How do those types constrain potential values?

Game data has a lifecycle: represented by a database, sent across a network wire, manipulated by a schema-specific UI and serialized to disk. In each step, a developer binds their data schema to their implementation.

  • The types can be converted to and from database types
  • The data might be serialized and deserialized from JSON in a web service language like Go
  • The data needs to be securely validated and read from the wire into a C++ data structure
  • The types likely need a UI implementation for author manipulation

In each of these steps, there is imperative code that binds the schema to an implementation. More importantly, there is no longer a canonical representation of a central data model. There are only partial representations of it strewn across subsystems, usually maintained by separate members of a game team.

Grain DDL protects your data investment by hoisting the data and its schema out of implementation-specific code. You define your types and data in Grain DDL, and then generate exactly the representation you need.

You Determine What Is Generated

Grain DDL comes in two parts: a data definition language, and Code Generation Templates. Consider this simple Grain definition:

struct Person {
  str name,
  u16 age,
}

From there, you can generate a C struct with the same name using code generation templates:

/* for each struct */
range .structs
  `typedef struct {`.
    /* for each struct field */
    range .fields
      tab(1) c_typename(.type) ` ` .name `;`.
    end
  `} ` camel(.name) `_t;`.
end

You get:

typedef struct {
    char *name;
    uint16_t age;
} person_t;

Code Generation Templates are a new templating language designed to make it ergonomic to generate source code in C++-like languages. Included with Grain DDL, they are the standard way to maintain code generation backends. Code Generation Templates include a type system, a standard library of functions, a const expression evaluator, and the ability to define your own functions inside the templates.

However, if Code Generation Templates do not suit your needs, Grain DDL offers a C API, enabling you to walk your data models and types to perform any analysis, presorting or code generation you require. All of this happens at compile time, avoiding runtime complexity.

Having a specialized code generation templating system makes code generation maintainable. This is crucial to retaining the investment in your data.

Native Code First

Grain DDL’s types are largely isomorphic to C plain old data types. Grain is designed to produce data that ends up in a typed game programming language like C, C++, Rust, or C#. Philosophically, Grain DDL is much closer to bit and byte manipulation environments than it is a high-level web technology with dictionary-like objects and ill-defined Number types.

Even so, you can use Grain DDL to convert your game data to and from JSON and write serializers for web tech where necessary. It can also be used to specify RESTful interfaces and their models, bypassing a need to rely on OpenAPI tooling.

Crucially, Grain DDL never compromises on the precision required to specify data that runs inside a game engine.

Data Inheritance

Grain DDL lets you define base data structures, and then derive from them, inheriting defaults where derived fields are not specified. Consider:

blueprint struct BaseArmor {
  f32 Fire = 1.0,
  f32 Ice = 1.0,
  f32 Crush = 1.0,
}

struct WoodArmor copies BaseArmor {
  f32 Fire = BaseArmor * 1.5
}

In this case, WoodArmor inherits Ice and Crush at 1.0, but Fire has a +50% increase. This flexible spreadsheet-like approach to building out combat tables for a game gives you a ripple effect on data that immediately permeates all codebases in a project.
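As a purely hypothetical illustration, a C code generation backend might flatten those tables into constants like the following; the names and layout here are invented for the example, not Grain DDL's actual output:

/* illustrative output: the armor data above, derived values resolved */
typedef struct {
    float fire;
    float ice;
    float crush;
} armor_t;

static const armor_t base_armor = { 1.0f, 1.0f, 1.0f };
static const armor_t wood_armor = { 1.5f, 1.0f, 1.0f };  /* Fire * 1.5 */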

Unlimited Custom Field Attributes

Grain DDL is the central definition for your types and data. In some places, a field needs more than its name and type to be fully specified. Consider how health can be limited to a range smaller than its type allows:

struct player {
  i8 health = 100
  [range min = 0, max = 125]
}
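For illustration, a code generation backend could turn that range attribute into generated validation. This sketch of possible C output is hypothetical, not Grain DDL's own:

#include <stdint.h>

/* illustrative generated check for player.health's range attribute */
static inline int player_health_valid(int8_t health) {
    return health >= 0 && health <= 125;
}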

Attributes in Grain DDL are a way of expressing additional data about a field, queryable during code generation. It is also possible to define a new, strongly typed attribute that represents your bespoke needs:

attr UISlider {
  f32 min = -100.0,
  f32 max = +100.0,
  f32 step = 1.0,
}

struct damage {
  f32 amount
  [UISlider min = 0.0]
}

This defines a new attribute UISlider which hints that any UI that manipulates damage.amount should use a slider, setting these parameters. Using data inheritance (described above), the slider’s max and step do not change from their defaults, but the minimum is raised to 0.0.
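One hypothetical use of such an attribute: emit it as C metadata that a UI layer can consume. The names below are invented for the example, not Grain DDL output:

/* illustrative generated metadata for the UISlider attribute */
typedef struct {
    float min, max, step;
} ui_slider_t;

/* damage.amount: min raised to 0.0; max and step keep their defaults */
static const ui_slider_t damage_amount_slider = { 0.0f, 100.0f, 1.0f };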

Zero Runtime Overhead

Grain DDL is intended to run as a precompilation step on the developer’s machine, emitting code that is compiled in the target languages of your choice, with the compiler of your choice. There is no need to link it into shipping executables the way you would a config parsing library. Remove your dependence on runtime reflection.

Grain DDL runs quickly and can generate hundreds of thousands of lines of code in under a second. The software is a standalone executable under a megabyte, and it is fully supported on Windows, macOS and Linux.

Game developers do not like slowing down their compiles or imposing unnecessarily heavy dependencies on their teams. Grain DDL is designed to be as lightweight as possible. In practice, Grain replaces multiple code generators, simplifying the build.

Salvage Your Game Data

This article only begins to cover the full featureset of Grain DDL.

As game projects scale, the same data gets defined, processed, and duplicated across a growing number of systems. But game data is more than runtime glue — game data is a long-term asset that outlives game code. It powers ports, benefits from analysis, and extends a title’s shelf life. Grain DDL puts that data under your control, in one place, with one definition — so you can protect and maximize your investment.

Grain DDL is under active development. Email grain@frogtoss.com to get early access, explore example projects, and add it to your pipeline.

*

A Convention For Fragment Parsers in C

Fri 09 August 2024
Michael Labbe
#code 

Sometimes you want to parse a fragment from a string and all you have is C. Parsers for things like rfc3339 timestamps are handy, reusable pieces of code. This post suggests a convention for writing stack-based fragment parsers that can be easily reused or composed into a larger parser.

It’s opinionated, but it tends to work for most things, so adopt it or adapt it to your needs.

The Interface

The idea is pretty simple.

// can be any type
typedef struct {
  // fields go here
} type_t;

int parse_type(char **stream, size_t len, type_t *out);

Pass in a **stream pointer to a null-terminated string. On return, *stream points to the location of an error, or just past the end of the parse on success. This means it can point at the null terminator.

Pass in the length of the string to parse to avoid needing to call strlen, or to indicate if the end of a successful parse occurs before the null terminator.
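As a usage sketch, assuming the type_t and parse_type declarations above (the buffer handling and error message are illustrative):

#include <stdio.h>
#include <string.h>

void usage_example(char *buffer) {
    type_t result;
    char *cursor = buffer;

    if (parse_type(&cursor, strlen(buffer), &result) != 0) {
        /* on failure, cursor points at the offending character */
        fprintf(stderr, "parse error at offset %td\n", cursor - buffer);
    }
}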

The return value can be an int as depicted, or an enum of parse failure reasons. The key thing is that zero means success. This allows a caller to OR the results of multiple parses together and test for error once, keeping the code trivial.

That’s the whole interface. You can compose a larger parser out of smaller versions of these. So, if you want to parse a float (a deceptively hard thing to do) in a document, or key value pairs with quotes or something, you can build, test and reuse them by following this convention.
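To make composition concrete, here is a minimal sketch that builds an "r,g,b" color parser out of smaller fragment parsers. parse_u8, expect and rgb_t are names invented for this example, and expect is a simplified stand-in for the expect_token helper shown later:

#include <stddef.h>
#include <stdint.h>

/* a minimal u8 fragment parser written to the convention */
int parse_u8(char **stream, size_t len, uint8_t *out) {
    char *start = *stream;
    unsigned v = 0;

    while ((size_t)(*stream - start) < len &&
           **stream >= '0' && **stream <= '9') {
        v = v * 10 + (unsigned)(**stream - '0');
        if (v > 255) return 1;    /* overflow: point at offending digit */
        (*stream)++;
    }

    if (*stream == start) return 1;   /* no digits consumed */
    *out = (uint8_t)v;
    return 0;
}

/* consume ch if it is next; nonzero means it was missing */
static int expect(char **stream, char ch) {
    if (**stream != ch) return 1;
    (*stream)++;
    return 0;
}

typedef struct { uint8_t r, g, b; } rgb_t;

/* compose the pieces to parse "r,g,b": zero means success, so each
   result can be OR'd in and tested once at the end */
int parse_rgb(char **stream, size_t len, rgb_t *out) {
    char *start = *stream;
    int err = 0;

    err |= parse_u8(stream, len, &out->r);
    err |= expect(stream, ',');
    err |= parse_u8(stream, len - (size_t)(*stream - start), &out->g);
    err |= expect(stream, ',');
    err |= parse_u8(stream, len - (size_t)(*stream - start), &out->b);

    return err;
}

On malformed input, the OR’d results are tested once at the end, and *stream is left at the point of failure.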

Helping with Implementation

When you implement a fragment parser you end up needing the same few support functions. This suggests a convention.

Testing whether the stream was fully parsed works well with a macro containing a single expression:

#include <stddef.h>  // ptrdiff_t

#define did_fully_parse_stream \
    (*stream - start == (ptrdiff_t)len)

int parse_type(char **stream, size_t len, type_t *out) {
    char *start = *stream;

    // ... parsing advances *stream here ...

    if (!did_fully_parse_stream)
        return 1;

    return 0;
}

Token Walking

Test the next token for a match:

static int is_token(const char **stream, char ch) {
    return **stream == ch;
}

Test the next token and bypass it if it matches. By convention, use this if a token failing to match is not an error.

static int was_token(const char **stream, char ch) {
    if (is_token(stream, ch)) {
        (*stream)++;
        return 1;
    }
    return 0;
}

Test that the next token is ch, consuming it on a match. It returns zero on success, following the zero-is-success convention. While it is functionally just was_token inverted, it is semantically useful: use it to mean an error has occurred if the token does not match.

static int expect_token(const char **stream, char ch) {
    return !was_token(stream, ch);
}

Token Classification

Token classification is easy to implement using C99’s designated initializers. A zero-filled lookup table can be used to test token class and to convert tokens to values. Note that storing each digit’s value plus one keeps '0' from being confused with the zero fill:

static char digits[256] = {
    // values are stored +1 so '0' is distinguishable from the zero fill
    ['0'] = 1,  ['1'] = 2,  ['2'] = 3,  ['3'] = 4,  ['4'] = 5,
    ['5'] = 6,  ['6'] = 7,  ['7'] = 8,  ['8'] = 9,  ['9'] = 10,
};

void func(const char **stream)
{
    // is it a digit?
    if (digits[(unsigned char)**stream]) {
        // yes, convert token to its stored integral value
        int value = digits[(unsigned char)**stream] - 1;
    }

    // skip token stream ahead to first non-digit
    while (digits[(unsigned char)**stream]) (*stream)++;
}
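Putting the table to work, here is a sketch of a u16 fragment parser built on digits, assuming the plus-one encoding above:

#include <stddef.h>
#include <stdint.h>

int parse_u16(char **stream, size_t len, uint16_t *out) {
    char *start = *stream;
    unsigned v = 0;

    while ((size_t)(*stream - start) < len &&
           digits[(unsigned char)**stream]) {
        v = v * 10 + (unsigned)(digits[(unsigned char)**stream] - 1);
        if (v > UINT16_MAX) return 1;   /* overflow: point at offending digit */
        (*stream)++;
    }

    if (*stream == start) return 1;   /* no digits consumed */
    *out = (uint16_t)v;
    return 0;
}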
