Meet Her First
Before anything else: she's a fully-rendered Live2D Cubism 4 character. Green hair in twin buns, green eyes, white kimono with a green-and-orange floral pattern, red obi, traditional geta sandals. She breathes. She blinks. Her eyes and head follow my cursor. She sways gently when idle and nods when my agent completes a task.
The window is transparent and always-on-top — that's the technical trick. She is not. She's a 2048×2048 textured character with 5 named expressions (Smile, Blush, Angry, Scared, Surprised) and a full motion set driven by a Rust state engine.
Best For
- Independent developers who want a non-trivial Tauri 2 + Rust project up and running without spending a week reading docs first.
- Folks exploring Live2D / pixi-live2d-display on the desktop — the trickiest parts (transparent window on modern macOS, PIXI ↔ Cubism4 integration, hot-swappable models) are all solved here.
- Anyone running long AI agent tasks who wants a tangible, ambient signal of "is the agent still working?" without switching windows.
Skills / Features I Leaned On
Two-Sentence Background
I run a lot of long-running AI coding tasks and wanted something that gave me a physical-feeling signal of "your agent is still alive and working" — not another toast notification, not another log file. A Live2D companion felt right: visible at all times, animated, emotionally expressive, and most importantly, *reactive to whatever my agent is doing*.
- What I actually did: Pair-programmed the whole thing over ~7 days of evenings. Verdent did the cross-file scaffolding, wrote the Objective-C bridge, and handled the bulk of the state-inference engine. I did the product calls and the Live2D tuning.
- Final timeline: 7 days.
- Result (real numbers from the repo):
  - 1,236 lines in `src-tauri/src/main.rs` (Tauri app, WebSocket server, NSWindow config, gallery, 20+ commands)
  - 1,954 lines across the Rust `state_inference/` module (engine, poller, config, visuals)
  - 635 lines in the Rust TTS module (system + AI backends)
  - 1,838 lines across `src/` in React/TypeScript (components, hooks, types)
  - WebSocket API with 9 message types (`state_change`, `dialog`, `trigger_expression`, `trigger_motion`, `move_window`, `drag_window`, `get_position`, `simulate_click`, plus TTS controls)
  - Hot-swappable Live2D models from `~/.verdent/model/` with zero code changes
  - Cross-platform manager discovery (macOS / Windows) in a single refactor pass
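The nine message types all funnel into one dispatcher. Here is a minimal sketch of that shape — the event names on the right are hypothetical (the repo's actual Tauri event names aren't shown above), and `tts_speak` is a stand-in for the unspecified TTS controls:

```rust
/// Map an incoming WebSocket message type to an internal event name.
/// Message types are the nine from the API above; event names are illustrative.
fn route_message(msg_type: &str) -> Option<&'static str> {
    match msg_type {
        "state_change" => Some("pet/state-change"),
        "dialog" => Some("pet/dialog"),
        "trigger_expression" => Some("pet/expression"),
        "trigger_motion" => Some("pet/motion"),
        "move_window" => Some("pet/move"),
        "drag_window" => Some("pet/drag"),
        "get_position" => Some("pet/position"),
        "simulate_click" => Some("pet/click"),
        "tts_speak" => Some("pet/tts"), // stand-in for the TTS controls
        _ => None, // unknown types are ignored instead of killing the connection
    }
}

fn main() {
    assert_eq!(route_message("trigger_expression"), Some("pet/expression"));
    assert_eq!(route_message("bogus"), None);
    println!("dispatch table is total over known types");
}
```

Returning `Option` keeps the server tolerant: a malformed or future message type degrades to a no-op rather than an error.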
Step-by-Step Walkthrough
Step 1 — Scaffold the Tauri 2 + React + Live2D project
- What I did: Asked Verdent to bootstrap a Tauri 2 app with React + TypeScript + Vite + `pixi.js@6` + `pixi-live2d-display@0.4`, pre-wired for a chrome-free window that would later host the Live2D character.
- Why: Version pinning matters here — `pixi-live2d-display@0.4` requires PIXI v6 (not v7+), and Tauri 2 changed a lot of APIs (`@tauri-apps/api/core` vs. v1's `@tauri-apps/api/tauri`). I wanted Verdent to pick compatible versions up front so I wouldn't eat a day on dep hell.
- Result: A working `npm run tauri dev` with an empty transparent window in ~5 minutes of Verdent-time. `package.json` ended up with the right versions on the first try.
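For reference, a `package.json` dependency block consistent with those pins might look like this — exact versions are illustrative, not copied from the repo:

```json
{
  "dependencies": {
    "@tauri-apps/api": "^2.0.0",
    "pixi.js": "^6.5.0",
    "pixi-live2d-display": "^0.4.0",
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  }
}
```

The two pins that actually matter are `pixi.js` staying on major 6 and `pixi-live2d-display` staying on 0.4; everything else has more wiggle room.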
Step 2 — Make the window transparent so only the character is visible
- What I did: Asked Verdent to configure `NSWindow` via the `cocoa` + `objc` crates so the window is transparent, shadowless, always on top, joins all Spaces, and survives fullscreen apps: no window chrome, no rectangle, just the character floating on the desktop.
- Why: Tauri's JS-level transparency config is not enough — on modern macOS you have to set `setOpaque: NO`, `setBackgroundColor: NSColor.clearColor`, `setHasShadow: NO`, and the `collectionBehavior` bitflags. I did not want to learn Objective-C FFI at 11pm on a Tuesday.
- Result: Verdent produced the `configure_ns_window` + `apply_ns_window_config` pair in `src-tauri/src/main.rs` (lines 735-789) — including the four `collectionBehavior` bits (CanJoinAllSpaces | Stationary | IgnoresCycle | FullScreenAuxiliary) and `setLevel: 3`. First run: the character appeared sitting directly on the desktop wallpaper, with no window frame anywhere. No Objective-C learned.
Step 3 — Get the Live2D character rendering and alive
- What I did: Had Verdent wire up `pixi-live2d-display/cubism4`, register it with the PIXI ticker, load `model3.json` from Tauri's bundle, and plumb procedural animations — breathing (`ParamBreath`), blinking (`ParamEyeLOpen/ROpen`), head/eye tracking to the cursor (`ParamAngleX/Y/Z`, `ParamEyeBallX/Y`), and a subtle idle sway (`ParamBodyAngleX`).
- Why: The PIXI v6 ↔ Cubism4 integration has sharp edges — forgetting `registerTicker(PIXI.Ticker)` results in a frozen model with no error. Exposing `window.PIXI` globally is required by `pixi-live2d-display` for internal lookups. These are the kinds of footguns that cost hours.
- Result: `src/hooks/useLive2DModel.ts` — 584 lines, handling bundled models, external models from `~/.verdent/model/`, `asset://` URL conversion, texture cache-busting, and graceful fallback when motions/expressions are missing. The character started breathing and tracking my cursor on the first build.
Step 4 — Build the WebSocket control API
- What I did: Added a Rust WebSocket server on
ws://127.0.0.1:8765so any external tool can push state / dialog / expression / motion / window commands into the pet. - Why: I wanted the renderer to be dumb. Anything — the state inference engine, a shell script, another agent — should be able to drive her over a single well-known port.
- Result:
process_ws_message()inmain.rsroutes 9 message types to Tauri events. Testing from the shell takes one line:
echo '{"type":"trigger_expression","name":"Smile"}' | websocat -1 ws://127.0.0.1:8765Step 5 — Wire up the state inference engine (the meta-magic)
- What I did: This is the part I'm proudest of. I asked Verdent to build a Rust engine that polls the Verdent app's own task/message list — via Unix socket IPC if available, CLI subprocess if not — and infers a character state from it:
  - User just sent a message → `attention` (she looks up, alert)
  - Agent is replying → `speaking`
  - Task running < 10s → `thinking`
  - Task running > 10s → `working`
  - Task just completed → `happy` + celebration dialog
  - Task just failed → `error` + error dialog
  - No activity for 30s → `idle` (dozing motion)
- Why: The meta-angle — using Verdent to build a companion that watches Verdent work — is the actual killer feature. When I kick off a long refactor, I don't need to alt-tab. I glance at her: if she's in her `working` expression, the agent is working. If she smiles and a bubble pops up saying "task complete!", I know it's safe to review.
- Result: `src-tauri/src/state_inference/` — 4 files, 1,954 lines:
  - `engine.rs` — pure rule-based inference, no network calls, fully unit-testable
  - `poller.rs` — Unix socket IPC + CLI fallback, with platform-specific path discovery
  - `config.rs` — JSON-configurable thresholds
  - `visuals.rs` — state → (motion, expression) mapping for the frontend
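The rule table above is simple enough to sketch as a pure function. This is a minimal reconstruction under my own type names, not the repo's `engine.rs`; the 10 s and 30 s thresholds come from the prose (and are JSON-configurable in the real engine):

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum PetState { Attention, Speaking, Thinking, Working, Happy, Error, Idle }

/// Observed activity, as the poller would report it (names are illustrative).
enum Activity {
    UserMessage,
    AgentReplying,
    TaskRunning { elapsed: Duration },
    TaskCompleted,
    TaskFailed,
    Quiet { since: Duration },
}

/// Pure rule-based mapping from activity to state, mirroring the table above.
/// `None` means "no transition": stay in whatever state we're already in.
fn infer_state(activity: Activity) -> Option<PetState> {
    match activity {
        Activity::UserMessage => Some(PetState::Attention),
        Activity::AgentReplying => Some(PetState::Speaking),
        Activity::TaskRunning { elapsed } if elapsed < Duration::from_secs(10) => {
            Some(PetState::Thinking)
        }
        Activity::TaskRunning { .. } => Some(PetState::Working),
        Activity::TaskCompleted => Some(PetState::Happy),
        Activity::TaskFailed => Some(PetState::Error),
        Activity::Quiet { since } if since >= Duration::from_secs(30) => Some(PetState::Idle),
        Activity::Quiet { .. } => None, // below the idle threshold: hold current state
    }
}

fn main() {
    assert_eq!(
        infer_state(Activity::TaskRunning { elapsed: Duration::from_secs(3) }),
        Some(PetState::Thinking)
    );
    assert_eq!(infer_state(Activity::TaskFailed), Some(PetState::Error));
    println!("inference rules hold");
}
```

Because the function takes data in and returns data out — no network, no clock reads — every rule is trivially unit-testable, which is exactly the property claimed for `engine.rs`.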
Step 6 — Hot-swappable Live2D models
- What I did: Added support for dropping any Live2D Cubism 4 model into `~/.verdent/model/` and having the app pick it up on restart — or hot-swap it via a WebSocket event without even restarting.
- Why: I wanted the project to be a platform, not a one-character toy. `MODEL_GUIDE.md` walks non-coders through it in under 5 minutes. Any model that uses standard parameter names (`ParamAngleX/Y/Z`, `ParamBreath`, `ParamEyeBallX/Y`, etc.) works out of the box.
- Result: A validation layer that gracefully handles missing motions / expressions, texture cache-busting via `?v=` query strings, and a full remount cycle triggered by React's `key` prop. My kimono character as the default, any Cubism 4 model as a drop-in replacement.
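The cache-busting trick is small but load-bearing: without it, a hot-swapped model can come up with the previous model's cached textures. A sketch of the idea (function name and version scheme are my own, not the repo's):

```rust
/// Append a cache-busting version parameter to a texture URL so a
/// hot-swapped model's textures bypass the renderer's cache. The version
/// could be a swap counter or a file mtime; a plain u64 here.
fn cache_busted(url: &str, version: u64) -> String {
    // Use '&' if the URL already carries a query string, '?' otherwise.
    let sep = if url.contains('?') { '&' } else { '?' };
    format!("{url}{sep}v={version}")
}

fn main() {
    assert_eq!(
        cache_busted("asset://model/texture_00.png", 3),
        "asset://model/texture_00.png?v=3"
    );
    println!("cache-busting ok");
}
```

Bumping the version on every swap makes each load a distinct URL, which pairs naturally with the React `key`-prop remount mentioned above.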
Step 7 — Cross-platform path resolution (the final polish)
- What I did: A single refactor pass to make the verdent-manager discovery platform-aware (macOS / Windows / Linux), in both config defaults and runtime fallbacks. Socket IPC gracefully skips on non-Unix platforms and falls back to the CLI.
- Why: I started with hardcoded macOS paths (`/Applications/Verdent Alpha.app/...`). Opening the repo meant fixing that before anyone on Windows could even build it.
- Result: Commit `ec178ce`: cross-platform discovery, compile-time `#[cfg]` guards, zero hardcoded macOS paths left in runtime code.
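The shape of that discovery logic can be sketched like this. The repo uses compile-time `#[cfg]` attributes; I'm using the `cfg!` macro here so the sketch compiles everywhere, and every path below is illustrative, not the repo's:

```rust
use std::path::PathBuf;

/// Candidate locations for the Verdent manager, per platform.
/// Paths are placeholders for illustration only.
fn manager_candidates() -> Vec<PathBuf> {
    if cfg!(target_os = "macos") {
        vec![PathBuf::from("/Applications/Verdent.app/Contents/MacOS/verdent")]
    } else if cfg!(target_os = "windows") {
        vec![PathBuf::from(r"C:\Program Files\Verdent\verdent.exe")]
    } else {
        vec![PathBuf::from("/usr/local/bin/verdent")]
    }
}

/// Unix domain sockets don't exist on Windows: skip straight to the CLI fallback there.
fn use_socket_ipc() -> bool {
    cfg!(unix)
}

fn main() {
    assert!(!manager_candidates().is_empty());
    println!("socket IPC available on this platform: {}", use_socket_ipc());
}
```

Keeping the platform branching in one function means the poller only ever sees "a list of candidates plus a boolean," which is what made the refactor a single pass.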
Final Result
Next steps:
I plan to open-source the Verdent Pet project after Verdent's next major update. I hope it brings others the same quiet emotional value it brings me, and that everyone enjoys playing with it.