Pipeline

How it works

Recommended material

Want the in-depth process?

This page explains the pipeline. The separate deep-dive page explains how the model family was tuned, why the current checkpoint won, and how the dated updates connect to the final deployment-facing bundle.


Input

4-channel Muse 2 EEG

Live EEG arrives through LSL and is recorded with structured labels the system can learn from.
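One way to picture the "structured labels" part: each sample carries an LSL-style timestamp, and the cue log can be joined to the stream by looking up the most recent cue at or before each sample. A minimal sketch of that lookup, using the stdlib only; the function name and the default `"REST"` fallback are illustrative, not AlphaHand's actual API.

```python
from bisect import bisect_right

def label_for(ts, event_times, event_labels):
    """Return the most recent cue label at or before timestamp ts."""
    i = bisect_right(event_times, ts)
    return event_labels[i - 1] if i > 0 else "REST"

# Cue log: timestamps and labels, already sorted by time.
event_times = [0.0, 2.0, 4.0]
event_labels = ["REST", "CLOSE", "OPEN"]

print(label_for(1.5, event_times, event_labels))  # REST
print(label_for(3.0, event_times, event_labels))  # CLOSE
```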

Core unit

0.25 s windows

The model learns from short, overlapping windows stepped every 50 ms for responsive decoding.
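Concretely, the windowing step slices the stream into fixed-length, heavily overlapping chunks. A minimal sketch, assuming a 256 Hz sampling rate (Muse 2's nominal EEG rate, which makes a 0.25 s window exactly 64 samples and rounds the 50 ms step to 13 samples); this is an illustration of the slicing, not AlphaHand's actual code.

```python
def windows(samples, sample_rate=256, win_s=0.25, step_s=0.05):
    """Slice a sample sequence into overlapping fixed-length windows."""
    win = int(win_s * sample_rate)              # 64 samples at 256 Hz
    step = max(1, round(step_s * sample_rate))  # ~13 samples at 256 Hz
    return [samples[i:i + win] for i in range(0, len(samples) - win + 1, step)]

one_second = list(range(256))  # stand-in for 1 s of single-channel EEG
ws = windows(one_second)
print(len(ws), len(ws[0]))  # 15 64
```

One second of signal yields fifteen training windows here, which is why short windows with a small step keep the decoder both data-rich and responsive.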

Output

Action + finger predictions

The live system predicts REST, OPEN, or CLOSE, assigns an active finger to movement states, and separately estimates whether a finger label is meaningful enough for downstream control.
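The three outputs can be combined into one structured prediction: pick the action, and only attach a finger when the state is a movement and the applicability head says the finger label is trustworthy. A minimal sketch under those assumptions; the function and threshold are illustrative, not AlphaHand's actual interface.

```python
MOVEMENT_STATES = {"OPEN", "CLOSE"}

def decode(action_probs, finger_probs, applicability, app_threshold=0.5):
    """Combine the three head outputs into one structured prediction.

    action_probs / finger_probs are dicts of class probabilities;
    applicability is a scalar in [0, 1] saying whether the finger
    label is meaningful for this window.
    """
    action = max(action_probs, key=action_probs.get)
    finger = None
    if action in MOVEMENT_STATES and applicability >= app_threshold:
        finger = max(finger_probs, key=finger_probs.get)
    return {"action": action, "finger": finger}

pred = decode(
    {"REST": 0.1, "OPEN": 0.7, "CLOSE": 0.2},
    {"thumb": 0.6, "index": 0.4},
    applicability=0.9,
)
print(pred)  # {'action': 'OPEN', 'finger': 'thumb'}
```

Keeping applicability as its own head means a confident OPEN with an uncertain finger can still actuate a whole-hand motion instead of a wrong finger.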

AlphaHand pipeline diagram

End-to-end breakdown

Click any step to expand the higher-level story and the technical handoff behind it.

Instead of jumping straight into prediction, AlphaHand starts with a reproducible capture layer. Each session stores lossless raw EEG shards, event logs, and metadata so every figure, score, and demo remains tied to the original signal.

During recording, the operator marks finger and action cues in real time. That gives the system clean examples of thumb-through-pinky movement plus REST, OPEN, and CLOSE, creating the labeled foundation needed for reliable decoding.

Verified handoff

Technical note: Step 1 is built around a 4-channel recording setup and writes raw shards plus an authoritative `events.jsonl` log.

What comes out: A reproducible session directory containing raw EEG, event labels, metadata, and timebase information.
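An append-only JSON Lines log is a natural fit for this kind of event record: one timestamped object per line, written as cues happen. A minimal sketch of writing and reading such a log; the record fields (`t`, `kind`, `label`) are assumptions for illustration, not the actual `events.jsonl` schema.

```python
import json
import pathlib
import tempfile
import time

def append_event(path, label, kind):
    """Append one timestamped cue record to an events.jsonl-style log."""
    record = {"t": time.time(), "kind": kind, "label": label}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

session = pathlib.Path(tempfile.mkdtemp())  # stand-in session directory
log = session / "events.jsonl"
append_event(log, "CLOSE", "action")
append_event(log, "index", "finger")

events = [json.loads(line) for line in log.read_text().splitlines()]
print([e["label"] for e in events])  # ['CLOSE', 'index']
```

Because each line is independent, a crash mid-session loses at most the final record, and any later tool can replay the log against the raw shards.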

What makes the pipeline strong

AlphaHand is not just a model. It is a full pipeline built so collection, training, evaluation, and deployment connect cleanly enough to support a real product vision.

Built to be auditable

Session directories, raw shards, event logs, and saved run artifacts make AlphaHand easier to audit, explain, and trust.

Fast enough to feel live

The default learning unit is a 0.25-second EEG window stepped every 50 milliseconds, balancing responsiveness with temporal detail.

Designed for usable control

A CNN plus LSTM predicts action state, active-finger identity, and finger applicability separately, which makes the output more practical for assistive robotics.

Engineered beyond the demo

Live inference adds smoothing, confidence thresholds, and applicability-aware actuation gates before commands ever reach the robotic hand.
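The smoothing-and-gating idea can be sketched as a small wrapper around the per-window prediction: low-confidence frames are demoted to REST, and a command only fires when a majority of recent frames agree. This is a minimal illustration of the concept, assuming a majority-vote smoother; the `ActuationGate` name and default parameters are hypothetical, not AlphaHand's actual implementation.

```python
from collections import Counter, deque

class ActuationGate:
    """Majority-vote smoother with a confidence floor before commands fire."""

    def __init__(self, history=5, min_conf=0.6):
        self.recent = deque(maxlen=history)
        self.min_conf = min_conf

    def update(self, action, confidence):
        if confidence < self.min_conf:
            action = "REST"  # low-confidence frames never actuate
        self.recent.append(action)
        winner, count = Counter(self.recent).most_common(1)[0]
        # Only emit once the smoothed history agrees on a winner.
        return winner if count > len(self.recent) // 2 else "REST"

gate = ActuationGate()
out = [gate.update(a, c) for a, c in
       [("CLOSE", 0.9), ("CLOSE", 0.8), ("OPEN", 0.4), ("CLOSE", 0.9)]]
print(out)  # ['CLOSE', 'CLOSE', 'CLOSE', 'CLOSE']
```

Note how the third frame, a low-confidence OPEN, is absorbed by the vote instead of twitching the hand, which is the practical payoff of gating before actuation.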

Next read

Deep dive into tuning and selection

If you want the larger iceberg behind the pipeline, the deep-dive page collects the sweep scale, ablation work, replay criteria, and historical checkpoints that explain how the featured run was chosen.
