License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

1. Introduction

1.1. Abstract

Writeup about how writing and reading structured data is mostly done manually.

1.2. Motivation

Writeup about how boring it is to write similar code each time you need to read or write structured data, and how easy it is to make mistakes or end up with unportable and unoptimized code. Write about how Filespec can help with reverse engineering and figuring out data structures, and how it can be used to generate both packers and unpackers, giving you powerful tools for working with structured data.

1.3. Overview

The goal of Filespec is to document structured data and the relationships within it, so the data can be understood and accessed completely.

1.4.1. Kaitai

Kaitai is probably not a very well-known utility; it has a similar goal to Filespec.

Explain cons:

  • Depends on a runtime

  • Can only model data that the runtime supports (for example, only certain compression/decompression is available, while Filespec filters can express anything)

  • Mainly designed for generating code, not as a general-purpose utility

  • Uses YAML for modelling structured data, which is quite wordy and awkward

2. Modelling Structured Data

2.1. Filespec Specifications

A brief overview of Filespec specifications and syntax.

Modelling ELF header
enum foo {
   foo: 0x1;
   bar: 0x2;
   eaf: 0x3;
   eaf: 0xDEADBEEF;
   bar;
};

struct elf64 {
   e_entry: u64 hex;
   e_phoff: u64;
   e_shoff: u64;
};

struct elf {
   ei_magic: u8[4] | matches('\x7fELF') str;
   ei_class: u8 hex; // word size
   ei_data: u8 hex; // endianness
   ei_version: u8;
   ei_osabi: u8;
   ei_abi_version: u8;
   padding: u8[7] nul;
   e_type: u16 hex;
   e_machine: u16 hex;
   e_version: u32;
   elf64: struct elf64; // fspec needs union to parse ei_class != 2 type
   e_flags: u32 hex;
   e_ehsz: u16;
   e_phentsize: u16;
   e_phnum: u16;
   e_shentsize: u16;
   e_shnum: u16;
   e_shstrndx: u16;
};

2.2. Keywords

struct name { … }

Declares structured data

enum name { … }

Declares an enumeration

union name (var) { … }

Declares a union, which can be used to model variants; see the example at the end of this section.

Struct member declaration syntax

Parentheses indicate optional parts:

member_name: member_type (array ...) (| filter ...) (visual hint);
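
The ELF example in section 2.1 notes that a union is needed to handle ei_class values other than 2. As a hedged sketch only (the document does not yet specify the union body syntax, so the member layout and selection rules below are assumptions), such a variant might be modelled like this:

Modelling a header variant with union
struct elf32_addrs {
   e_entry: u32 hex;
   e_phoff: u32;
   e_shoff: u32;
};

union elf_addrs (ei_class) {
   // hypothetical: how (ei_class) selects a member is not specified here
   elf32: struct elf32_addrs; // e.g. chosen when ei_class == 1
   elf64: struct elf64;       // e.g. chosen when ei_class == 2
};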

2.3. Types

Basic types to express binary data.

struct name

Named structured data (Struct member only)

enum name

Value range is limited to the named enumeration (see the example at the end of this section)

u8, s8

Unsigned, signed 8-bit integer

u16, s16

Unsigned, signed 16-bit integer

u32, s32

Unsigned, signed 32-bit integer

u64, s64

Unsigned, signed 64-bit integer
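
The enum type above is not demonstrated in the earlier examples. A minimal hedged sketch follows (the enumeration and member names are illustrative, and the storage width of an enum-typed member is an assumption, since the document does not state how it is determined):

Using an enumeration to constrain a member
enum compression_method {
   none: 0x0;
   deflate: 0x1;
};

struct chunk {
   method: enum compression_method; // value must be one of the enumeration
   size: u32;
   data: u8[size] hex;
};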

2.4. Arrays

Valid values that can be used inside the array subscript operation.

expr

Uses the result of the expression as the array size (see the expression-sized example below)

'str'

Grow the array until an occurrence of str

$

Grow the array until the end of data is reached

Reading length-prefixed data
num_items: u16 dec;
items: struct item[num_items];
Reading a null-terminated string
cstr: u8['\0'] str;
Reading a repeating pattern
pattern: struct pattern[$];
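
The examples above reference a single member directly. Since section 4.3 notes that mathematical expressions are preserved in the bytecode, a hedged sketch of an arithmetic expression as an array size might look like the following (the member names are illustrative and the exact expression syntax is an assumption):

Reading data sized by an expression
width: u16 dec;
height: u16 dec;
pixels: u8[width * height] hex;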

2.5. Filters

Filters can be used to sanity-check data and transform it into a more sensible format, while still maintaining a compatible data layout for both packing and unpacking. They also act as documentation for the data, e.g. documenting the possible encoding, compression, and valid data range of a member.

Filters are merely an idea: the generated packer/unpacker emits a call to the filter, but leaves the implementation to you. Thus the use of filters does not imply a runtime dependency, nor does it force you to actually implement the filter. For example, you may not want to run compression filters implicitly, as that would use too much memory, and would rather run them only when the data is being accessed.

It is useful for a Filespec interpreter to implement a common set of filters so it can pack/unpack a wide variety of formats. When modelling new formats, consider contributing your filter to the interpreter. Filters for the official interpreter are implemented as command pairs (thus filters are merely an optional dependency of the interpreter).

matches(str)

Data matches str

encoding(str, …)

Data is encoded with algorithm str

compression(str, …)

Data is compressed with algorithm str

encryption(str, key, …)

Data is encrypted with algorithm str (see the example below)

Validating file headers
header: u8[4] | matches('\x7fELF') str;
Decoding strings
name: u8[32] | encoding('sjis') str;
Decompressing data
data_sz: u32;
data: u8[$] | compression('deflate', data_sz) hex;
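
The encryption filter is not demonstrated above. As a hedged sketch (the algorithm name, key size, and member names are illustrative assumptions, not filters the official interpreter is known to ship):

Decrypting data
key: u8[16] hex;
data_sz: u32;
data: u8[data_sz] | encryption('aes-128-cbc', key) hex;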

2.6. Visual hints

Visual hints can be used to advise tools on how data should be presented to a human, as well as to provide light documentation of what kind of data to expect; see the combined example at the end of this section.

nul

Do not visualize data

dec

Visualize data as decimal

hex

Visualize data as hexadecimal

str

Visualize data as string

mime/type

Associate data with media type
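
As a hedged sketch showing the hints side by side (the member names are illustrative, and writing a concrete media type such as image/png directly as the hint is an assumption based on the mime/type entry above):

Visual hints in use
reserved: u8[8] nul;         // not visualized
num_entries: u16 dec;        // shown as decimal
flags: u32 hex;              // shown as hexadecimal
title: u8[32] str;           // shown as a string
icon_sz: u32;
icon: u8[icon_sz] image/png; // associated with a media type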

3. Relationships

To keep Filespec specifications two-way, that is, so that a structure can be both packed and unpacked, a specification has to make sure it forms the required relationships between its members.

The compiler has enough information to deduce whether a specification forms all the needed relationships, so it can emit a warning or error when the specification does not fulfill the two-way criteria.

3.1. Implicit Relationships

Implicit relationships are formed when the result of a member is referenced, for example when the result of a member is used as an array size or as a filter parameter.

Array relationship

In the packing case, even if len has not been filled, we can deduce the correct value of len from the length of str, if str has been filled. We can also use this information to verify that the length of str matches the value of len, if both have been filled.

len: u16;
str: u8[len] str;
Parameter relationship

In the packing case, the same rules apply as in the array relationship. An implicit relationship is formed between the decompressed_sz member and the compression filter.

decompressed_sz: u32 dec;
data: u8[$] | compression('zlib', decompressed_sz);

3.2. Explicit Relationships

Sometimes we need to form explicit relationships when the structure is more complicated.

TODO: When we can actually model FFXI string tables correctly, it will be a good example.

4. Implementation

4.1. Compiler

The compiler is implemented with Ragel. It parses the source and emits bytecode in a single pass. The compiler is very simple, and possible future steps such as optimizations would be done at the bytecode level instead of the source level.

4.2. Validator

The validator takes the output of the compiler and checks that the bytecode follows a standard pattern and is not invalid. Having a validator pass simplifies the code of translators, as they can assume their input is valid and do not need to do constant error checking. It also helps catch compiler bugs early on.

4.3. Bytecode

The bytecode is a low-level representation of a Filespec specification. It is merely a stack machine with values and operations. To still be able to understand the low-level representation and generate high-level code, the bytecode is guaranteed (by the validator) to follow a predictable pattern.

To make sure all source-level attributes, such as mathematical expressions, can be translated losslessly to the target language, the bytecode may contain special attributes.

TODO: Document bytecode operations and the predictable pattern here

4.4. Translators

Translators take in the Filespec bytecode and output a packer/unpacker in a target language. Translators are probably the best place to implement domain-specific and language-specific optimizations and options.

4.5. Interpreters

Interpreters can be used to run the compiled bytecode and use the information to understand and transform structured data as an external utility. For example, an interpreter could give the shell the ability to understand and parse binary formats, or make it very easy to pack and unpack files, create game translation tools, and so on.

Interpreters can also act as debugging tools, for example by visualizing the model on top of a hexadecimal view of the data to aid modelling and reverse engineering.