
The Problem with Traditional Feature Flags

Most feature flag systems evaluate flags by making API calls to a remote server. This creates:
  • Latency: 50-200ms per flag check
  • Dependencies: Can’t work offline
  • Costs: API call for every flag evaluation
  • Scale issues: More users = more API calls = higher costs

FlagKit’s Approach

FlagKit evaluates flags locally using a decision tree compiled at build time:
  • Zero API calls during flag evaluation
  • < 0.1ms evaluation time
  • Offline capable - works without network
  • Unlimited scale - no per-evaluation cost

How It Works

1. Decision Tree Generation

When you run flagkit generate, FlagKit compiles your flag rules into a decision tree:
.flagkit/generated/decision-tree.json
{
  "newCheckout": {
    "default": false,
    "rules": [
      {
        "condition": {
          "field": "rolloutPercent",
          "op": "lte",
          "value": 9
        },
        "value": true
      }
    ]
  }
}
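For reference, the generated file's shape can be written down as a TypeScript interface. This is a hypothetical sketch inferred from the sample above (the names `FlagRule` and `DecisionTree` are illustrative; the real generated schema may include more operators and fields):

```typescript
// Hypothetical shape of .flagkit/generated/decision-tree.json,
// inferred from the sample above.
interface FlagRule {
  condition: {
    field: string;             // context key to test, e.g. "rolloutPercent"
    op: "lte" | "eq";          // comparison operator (assumed set)
    value: string | number;
  };
  value: boolean;              // flag value when the condition matches
}

interface DecisionTree {
  [flagName: string]: {
    default: boolean;          // returned when no rule matches
    rules: FlagRule[];         // checked in priority order
  };
}

// The sample tree from above, typed:
const tree: DecisionTree = {
  newCheckout: {
    default: false,
    rules: [
      { condition: { field: "rolloutPercent", op: "lte", value: 9 }, value: true },
    ],
  },
};
```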

2. Local Evaluation

At runtime, the client evaluates this tree in memory:
import { flags } from "./.flagkit/generated/client";

// This happens in < 0.1ms, no network!
const isEnabled = flags.get("newCheckout", {
  rolloutPercent: 7, // User in bucket 7
});

// isEnabled = true (7 <= 9)

Evaluation Algorithm

1. Load Context - Gather user context (userId, email, plan, etc.)
2. Check Rules - Evaluate targeting rules in priority order
3. Return Value - Return the first matching rule's value, or the default

Example Evaluation

// User context
const context = {
  userId: "user_123",
  rolloutPercent: 15,
  plan: "pro",
  country: "US",
};

// Decision tree
const rules = [
  // Rule 1: 10% rollout
  {
    condition: { field: "rolloutPercent", op: "lte", value: 9 },
    value: true,
  },
  // Rule 2: All pro users
  {
    condition: { field: "plan", op: "eq", value: "pro" },
    value: true,
  },
];

// Evaluation:
// Rule 1: rolloutPercent (15) <= 9 ? NO ❌
// Rule 2: plan ('pro') == 'pro' ? YES ✅
// Result: true
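The three-step algorithm and the worked example above can be sketched as a small self-contained function. This is a simplified illustration, not FlagKit's actual API (`evaluateRules` is a hypothetical name):

```typescript
type Context = Record<string, string | number>;

interface Rule {
  condition: { field: string; op: "lte" | "eq"; value: string | number };
  value: boolean;
}

// Step 2 and 3: check rules in priority order, return the first
// matching rule's value, or the default when nothing matches.
function evaluateRules(rules: Rule[], context: Context, defaultValue: boolean): boolean {
  for (const rule of rules) {
    const actual = context[rule.condition.field];
    const matches =
      rule.condition.op === "lte"
        ? Number(actual) <= Number(rule.condition.value)
        : actual === rule.condition.value;
    if (matches) return rule.value; // first match wins
  }
  return defaultValue;
}

// Step 1: the user context from the worked example.
const context = { userId: "user_123", rolloutPercent: 15, plan: "pro", country: "US" };
const rules: Rule[] = [
  { condition: { field: "rolloutPercent", op: "lte", value: 9 }, value: true },
  { condition: { field: "plan", op: "eq", value: "pro" }, value: true },
];

// Rule 1 misses (15 > 9), rule 2 matches (plan === "pro").
console.log(evaluateRules(rules, context, false)); // → true
```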

Performance Benchmarks

All benchmarks were run on a MacBook Pro M1, single-threaded.

| Operation                | Time     | Comparison   |
| ------------------------ | -------- | ------------ |
| FlagKit local evaluation | < 0.1ms  | -            |
| LaunchDarkly API call    | ~100ms   | 1000x slower |
| Split.io API call        | ~150ms   | 1500x slower |
| Reading local variable   | ~0.001ms | Similar      |

Scale Test

// Evaluate 1 million flags
const users = Array.from({ length: 1_000_000 }, (_, i) => `user_${i}`);

console.time("1M evaluations");
for (const userId of users) {
  const rolloutPercent = calculateRolloutPercent(userId);
  flags.get("newCheckout", { userId, rolloutPercent });
}
console.timeEnd("1M evaluations");

// Result: ~250ms total
// = 0.00025ms per evaluation
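The scale test above calls `calculateRolloutPercent`, which maps a user ID to a stable 0-99 bucket. How FlagKit actually does this is covered under Deterministic Rollouts; a hypothetical hash-based sketch of the idea:

```typescript
// Hypothetical sketch of deterministic bucketing: hash the user ID into
// a stable bucket in [0, 100) so the same user always lands in the same
// bucket. FlagKit's real algorithm may differ.
function calculateRolloutPercent(userId: string): number {
  let hash = 0;
  for (let i = 0; i < userId.length; i++) {
    // Simple 31-based rolling hash, kept in 32-bit unsigned range
    hash = (hash * 31 + userId.charCodeAt(i)) >>> 0;
  }
  return hash % 100;
}

// Deterministic: the same ID always yields the same bucket.
console.log(calculateRolloutPercent("user_123") === calculateRolloutPercent("user_123")); // → true
```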

Trade-offs

Advantages ✅

  • Zero latency - No network calls
  • Offline capable - Works without connectivity
  • Unlimited scale - No per-request costs
  • Simple architecture - No backend dependency for evaluation
  • Privacy-friendly - No user data leaves the client

Limitations ⚠️

  • Rule changes require deploy - Can’t toggle instantly from dashboard
  • Bundle size - Decision tree included in app bundle (~5-50KB)
  • Client-side visibility - Users can inspect decision tree

When to Use Local vs. Remote Evaluation

Local evaluation is the right fit for:
  • High-traffic applications - Reduce API costs
  • Performance-critical paths - Zero latency matters
  • Offline-capable apps - Mobile, PWAs, edge functions
  • Privacy-sensitive apps - Keep user data local
  • Predictable rollouts - Changes via deploys are acceptable

Hybrid Approach

FlagKit supports a hybrid model for emergency situations:
// Emergency override via environment variable
const EMERGENCY_KILL_SWITCHES = {
  newCheckout: process.env.KILL_NEW_CHECKOUT === "true",
};

export function getFlag(name, context) {
  // An engaged kill switch forces the flag off
  if (EMERGENCY_KILL_SWITCHES[name]) {
    return false;
  }

  // Otherwise use local evaluation
  return flags.get(name, context);
}
Deploy this kill switch mechanism once, then toggle via environment variables without code changes.
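One way to exercise this pattern end to end is sketched below. It is self-contained: `flags.get` is a stub standing in for the generated client, the environment is passed in explicitly so it can be tested, and the kill-to-false semantics (an engaged switch disables the flag) is an assumption of this sketch:

```typescript
// Stub standing in for FlagKit's generated local client.
const flags = {
  get(name: string, context: { rolloutPercent: number }): boolean {
    return name === "newCheckout" && context.rolloutPercent <= 9;
  },
};

// Build the kill-switch map from an env-like record
// (pass process.env in real use).
function killSwitches(env: Record<string, string | undefined>): Record<string, boolean> {
  return { newCheckout: env.KILL_NEW_CHECKOUT === "true" };
}

function getFlag(
  name: string,
  context: { rolloutPercent: number },
  env: Record<string, string | undefined> = {},
): boolean {
  // An engaged kill switch forces the flag off; otherwise evaluate locally.
  if (killSwitches(env)[name]) return false;
  return flags.get(name, context);
}

console.log(getFlag("newCheckout", { rolloutPercent: 7 }, {})); // → true (rule matches)
console.log(getFlag("newCheckout", { rolloutPercent: 7 }, { KILL_NEW_CHECKOUT: "true" })); // → false (killed)
```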

Next Steps

Deterministic Rollouts

Learn how FlagKit implements percentage-based rollouts