ByteBuf
A high-performance, schema-based binary serialization library for Roblox TypeScript (roblox-ts). ByteBuf provides efficient serialization and deserialization of structured data with support for schema evolution, optional/nullable fields, and partial deserialization.
Features
- Schema-based serialization - Define your data structure once and serialize/deserialize efficiently
- Schema evolution - Add new fields without breaking compatibility with older data
- Easily extensible - Modular type system allows custom types and seamless schema expansion
- Type safety - Full TypeScript support with type-safe field definitions
- Optional and nullable fields - Flexible field definitions for evolving schemas
- Partial deserialization - Deserialize only specific fields for performance
- Compact binary format - Uses varint encoding for efficient space usage
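The compact-format bullet refers to varint encoding. As a rough sketch of the general technique (unsigned LEB128 here; ByteBuf's exact wire layout may differ), small integers take a single byte instead of a fixed four:

```typescript
// Sketch of unsigned LEB128 varint encoding. Each byte stores 7 bits of
// the value; the high bit marks "more bytes follow".
function encodeVarint(value: number): number[] {
	const bytes: number[] = [];
	do {
		let byte = value & 0x7f;
		value = Math.floor(value / 128); // shift right 7 bits without 32-bit overflow
		if (value > 0) byte |= 0x80; // set continuation bit
		bytes.push(byte);
	} while (value > 0);
	return bytes;
}

function decodeVarint(bytes: number[], offset = 0): [number, number] {
	let result = 0;
	let multiplier = 1;
	for (;;) {
		const byte = bytes[offset++];
		result += (byte & 0x7f) * multiplier;
		if ((byte & 0x80) === 0) break; // no continuation bit: done
		multiplier *= 128;
	}
	return [result, offset]; // decoded value and the offset past it
}
```

With this scheme a value like 42 occupies one byte and 300 occupies two, whereas a fixed u32 always occupies four.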
Quick Start
import { createSerializer, FieldTypes } from "@rbxts/bytebuf";
// Define your schema
const playerSerializer = createSerializer([
FieldTypes.string(1), // playerId
FieldTypes.string(2), // playerName
FieldTypes.u32(3), // level
FieldTypes.f32(4), // health
FieldTypes.boolean(5, { optional: true }) // isOnline
], {
1: "playerId",
2: "playerName",
3: "level",
4: "health",
5: "isOnline"
});
// Serialize data
const playerData = {
playerId: "player123",
playerName: "John",
level: 42,
health: 85.5,
isOnline: true
};
const buffer = playerSerializer.serialize(playerData);
// Deserialize data
const deserializedData = playerSerializer.deserialize(buffer);
print(deserializedData.playerName); // "John"

Supported Types
Numeric Types
- u8, u16, u32, u64 - Unsigned integers
- i8, i16, i32, i64 - Signed integers
- f32, f64 - Floating-point numbers
Other Types
- string - Variable-length strings
- boolean - Boolean values
- array<T> - Arrays of any supported type
- map<K, V> - Key-value maps
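For intuition, variable-length values such as strings are typically stored with a length prefix so a reader knows where the value ends. A minimal plain-TypeScript sketch of that idea (illustrative only, not ByteBuf's internals):

```typescript
// Length-prefixed string encoding: one length byte followed by UTF-8 bytes.
// Assumes the encoded length fits in one byte; real formats use a varint prefix.
function encodeString(value: string): number[] {
	const bytes = Array.from(new TextEncoder().encode(value));
	return [bytes.length, ...bytes];
}

function decodeString(data: number[], offset = 0): [string, number] {
	const length = data[offset];
	const bytes = data.slice(offset + 1, offset + 1 + length);
	// Return the decoded string and the offset just past it
	return [new TextDecoder().decode(new Uint8Array(bytes)), offset + 1 + length];
}
```

The same pattern generalizes to array<T> (element count, then elements) and map<K, V> (entry count, then key/value pairs).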
Field Options
- optional - Field can be omitted from serialization if undefined
- nullable - Field can be explicitly set to null/undefined
import { FieldTypes } from "@rbxts/bytebuf";
// Optional field (omitted if undefined)
FieldTypes.string(1, { optional: true })
// Nullable field (serialized as null if undefined)
FieldTypes.u32(2, { nullable: true })
// Both optional and nullable
FieldTypes.array(3, FieldTypes.string(0), { optional: true, nullable: true })

Advanced Usage
Complex Data Structures
import { createSerializer, FieldTypes } from "@rbxts/bytebuf";
// Nested structures using arrays and maps
const gameStateSerializer = createSerializer([
FieldTypes.string(1), // gameId
FieldTypes.array(2, FieldTypes.string(0)), // playerIds
FieldTypes.map(3, FieldTypes.string(0), FieldTypes.u32(0)), // playerScores
FieldTypes.f64(4), // timestamp
FieldTypes.boolean(5, { optional: true }) // isActive
], {
1: "gameId",
2: "playerIds",
3: "playerScores",
4: "timestamp",
5: "isActive"
});
const gameState = {
gameId: "game_456",
playerIds: ["player1", "player2", "player3"],
playerScores: new Map([
["player1", 1500],
["player2", 2100],
["player3", 800]
]),
timestamp: tick(),
isActive: true
};
const serialized = gameStateSerializer.serialize(gameState);
const deserialized = gameStateSerializer.deserialize(serialized);

Extensibility
ByteBuf is designed to be easily extensible, allowing you to grow your data structures over time without breaking existing code or data.
Adding New Fields
Simply add new fields to your serializer definition with unique field IDs. Existing serialized data will continue to work:
// Original schema
const userSerializer = createSerializer([
FieldTypes.string(1), // username
FieldTypes.u32(2) // coins
], { 1: "username", 2: "coins" });
// Extended schema - just add new fields!
const extendedUserSerializer = createSerializer([
FieldTypes.string(1), // username (existing)
FieldTypes.u32(2), // coins (existing)
FieldTypes.array(3, FieldTypes.string(0), { optional: true }), // friends (new)
FieldTypes.f64(4, { optional: true }), // lastLogin (new)
FieldTypes.map(5, FieldTypes.string(0), FieldTypes.u32(0), { optional: true }) // achievements (new)
], {
1: "username",
2: "coins",
3: "friends",
4: "lastLogin",
5: "achievements"
});
// Old data works with new serializer
const oldData = userSerializer.serialize({ username: "player1", coins: 1000 });
const newFormat = extendedUserSerializer.deserialize(oldData);
// Result: { username: "player1", coins: 1000, friends: undefined, lastLogin: undefined, achievements: undefined }

Creating Custom Types (Up to 64 types)
Extend the type system by subclassing the Type<T> base class:
import { Type, FieldOptions, TypeCode } from "@rbxts/bytebuf";
// Custom Vector3 type
class Vector3Type extends Type<Vector3> {
constructor(fieldId: number, options: FieldOptions = {}) {
super(fieldId, options);
}
serialize(value: Vector3 | undefined, buff: buffer, offset: number): number {
if (value === undefined) {
error("Cannot serialize undefined Vector3");
}
buffer.writef32(buff, offset, value.X);
buffer.writef32(buff, offset + 4, value.Y);
buffer.writef32(buff, offset + 8, value.Z);
return offset + 12;
}
deserialize(buff: buffer, offset: number): [Vector3, number] {
const x = buffer.readf32(buff, offset);
const y = buffer.readf32(buff, offset + 4);
const z = buffer.readf32(buff, offset + 8);
return [new Vector3(x, y, z), offset + 12];
}
getPayloadSize(value: Vector3 | undefined): number {
return value === undefined ? 0 : 12; // 3 * 4 bytes for f32
}
getTypeCode(): TypeCode {
return TypeCode.F32; // Reuse existing type code or define custom ones
}
}
// Use your custom type
const positionSerializer = createSerializer([
FieldTypes.string(1), // objectId
new Vector3Type(2), // position
new Vector3Type(3, { optional: true }) // velocity
], { 1: "objectId", 2: "position", 3: "velocity" });

Modular Schema Composition
Build complex schemas by composing smaller, reusable serializers:
// Base player info
const basePlayerFields = [
FieldTypes.string(1), // playerId
FieldTypes.string(2), // playerName
FieldTypes.u32(3) // level
];
// Combat stats extension
const combatFields = [
FieldTypes.f32(10), // health
FieldTypes.f32(11), // mana
FieldTypes.u32(12) // experience
];
// Social features extension
const socialFields = [
FieldTypes.array(20, FieldTypes.string(0), { optional: true }), // friends
FieldTypes.string(21, { optional: true }), // guildId
FieldTypes.u32(22, { optional: true }) // reputation
];
// Compose different player serializers for different contexts
const basicPlayerSerializer = createSerializer([...basePlayerFields], {
1: "playerId", 2: "playerName", 3: "level"
});
const combatPlayerSerializer = createSerializer([...basePlayerFields, ...combatFields], {
1: "playerId", 2: "playerName", 3: "level",
10: "health", 11: "mana", 12: "experience"
});
const fullPlayerSerializer = createSerializer([...basePlayerFields, ...combatFields, ...socialFields], {
1: "playerId", 2: "playerName", 3: "level",
10: "health", 11: "mana", 12: "experience",
20: "friends", 21: "guildId", 22: "reputation"
});

Version-Safe Evolution
Use field IDs strategically to maintain compatibility across versions:
// Reserve field ID ranges for different feature areas
// 1-99: Core player data
// 100-199: Combat system
// 200-299: Social features
// 300-399: Economy system
// 400+: Future expansion
const playerV1 = createSerializer([
FieldTypes.string(1), // playerId (core)
FieldTypes.string(2), // playerName (core)
FieldTypes.u32(100), // health (combat)
FieldTypes.u32(200) // friendCount (social)
]);
// Later versions can safely add fields in their designated ranges
const playerV2 = createSerializer([
FieldTypes.string(1), // playerId (core)
FieldTypes.string(2), // playerName (core)
FieldTypes.u32(3, { optional: true }), // level (core - new)
FieldTypes.u32(100), // health (combat)
FieldTypes.f32(101, { optional: true }), // mana (combat - new)
FieldTypes.u32(200), // friendCount (social)
FieldTypes.u32(300, { optional: true }) // coins (economy - new)
]);

This approach ensures your serialization format can grow organically with your application while maintaining full backward and forward compatibility.
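One way to keep the reserved ranges honest is a small guard helper that rejects out-of-range IDs at schema-definition time. The following is a hypothetical sketch (assertFieldId and FIELD_ID_RANGES are not part of ByteBuf):

```typescript
// Hypothetical helper enforcing the field-ID ranges reserved above.
const FIELD_ID_RANGES: Record<string, [number, number]> = {
	core: [1, 99],
	combat: [100, 199],
	social: [200, 299],
	economy: [300, 399],
};

function assertFieldId(area: string, id: number): number {
	const range = FIELD_ID_RANGES[area];
	if (!range || id < range[0] || id > range[1]) {
		throw new Error(`Field ID ${id} is outside the "${area}" range`);
	}
	return id; // pass the validated ID through to the field definition
}

// Usage sketch: FieldTypes.u32(assertFieldId("combat", 101), { optional: true })
```

Because the helper returns the ID unchanged, it can wrap every field definition without altering the resulting schema.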
Partial Deserialization
For performance-critical applications, you can deserialize only specific fields:
// Deserialize only specific fields by ID
const partialData = playerSerializer.deserializeFields(buffer, [1, 2]); // Only playerId and playerName
// Deserialize a single field by name
const [playerName, offset] = playerSerializer.deserializeFieldByName(buffer, "playerName");
// Get all field IDs present in the buffer
const fieldIds = playerSerializer.getFieldIds(buffer);

Schema Evolution
ByteBuf supports forward and backward compatibility:
// Original schema
const v1Serializer = createSerializer([
FieldTypes.string(1), // name
FieldTypes.u32(2) // age
], { 1: "name", 2: "age" });
// Evolved schema - added new optional field
const v2Serializer = createSerializer([
FieldTypes.string(1), // name
FieldTypes.u32(2), // age
FieldTypes.string(3, { optional: true }) // email (new field)
], { 1: "name", 2: "age", 3: "email" });
// v2 can read v1 data (email will be undefined)
const v1Data = v1Serializer.serialize({ name: "Alice", age: 30 });
const v2Data = v2Serializer.deserialize(v1Data); // { name: "Alice", age: 30, email: undefined }
// v1 can read v2 data (email field is ignored)
const v2SerializedData = v2Serializer.serialize({ name: "Bob", age: 25, email: "[email protected]" });
const v1Data2 = v1Serializer.deserialize(v2SerializedData); // { name: "Bob", age: 25 }

Debugging and Utilities
Print Raw Buffer Data
import { printRawBytes } from "@rbxts/bytebuf";
const buffer = playerSerializer.serialize(playerData);
// Print as decimal bytes
printRawBytes(buffer);
// Print as hexadecimal
printRawBytes(buffer, true);

Serializer Inspection
// Print serializer schema
print(playerSerializer.toString());
// Get field information
const fieldNames = playerSerializer.getFieldNames();
const playerId = playerSerializer.getFieldId("playerName"); // Returns field ID
const fieldName = playerSerializer.getFieldName(2); // Returns field name

Performance
ByteBuf is designed for high-performance scenarios and can outperform some other serialization libraries in both speed and memory usage when handling large payloads.
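As a rough illustration of where binary formats save space (plain TypeScript with DataView, not ByteBuf's actual wire format), compare the same record encoded as JSON text versus fixed-width binary fields:

```typescript
// The same record as JSON text versus a fixed-width binary layout.
const record = { level: 42, health: 85.5 };

// JSON size: field names and number literals are stored as text.
const jsonBytes = new TextEncoder().encode(JSON.stringify(record)).length;

// Binary size: u32 level + f32 health = 8 bytes (a schema-based format
// adds only small per-field tags on top of this).
const buf = new ArrayBuffer(8);
const view = new DataView(buf);
view.setUint32(0, record.level, true);
view.setFloat32(4, record.health, true);
const binaryBytes = buf.byteLength;
```

The binary layout here is 8 bytes against 26 bytes of JSON, and the gap widens as records grow, since field names are never repeated in the payload.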
