If you open the Insights tab in Refit today, you will see something most health apps don't want to commit to: a Patterns card that says things like "Your sleep is trending down," "You hit your water goal on 5 of 7 days," "More sleep tends to lift your mood." You'll see a Weekly Digest summarizing the last seven days with week-over-week deltas. You'll see chart points pulsing when a night is an unusual outlier.

None of that was computed by a server. None of it was sent anywhere. It is all statistics, run in your browser, on data that never leaves your device.

This post explains how, and why we think this approach will extend cleanly into AI coaching without breaking the privacy promise.

The current engine, plainly

Under the hood, the Insights tab is powered by a small analysis engine that runs entirely in your browser. It is boring, on purpose. Boring is good. Boring is auditable.

The primitives are the kind of thing any stats textbook lists:

  • Moving averages to smooth noisy daily values, surfacing trends with less lag than a simple weekly mean.
  • Linear regression (ordinary least squares) to decide whether a metric is genuinely trending up or down.
  • Pearson correlation for relationships like sleep and mood, hydration and mood, activity and sleep, meal-consistency and weight.
  • Z-score anomalies to flag nights, blood pressure readings, or body measurements that fall outside a rolling normal range.
  • Adherence rate for medications and goal hit rates.
  • Conditional probability for questions like "given a poor night, is the next day's mood below baseline?"

On top of those primitives sit per-tracker extractors (one per category), plus food-meta signals and a forward hook for future custom trackers. They turn raw entries into numeric sequences the primitives can consume.
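As a rough sketch of the shape involved, two of those primitives written as pure functions over the numeric sequences the extractors produce. The names and signatures are my illustration, not Refit's actual source:

```typescript
/** Trailing moving average over a fixed window (shorter at the start). */
function movingAverage(values: number[], window: number): number[] {
  return values.map((_, i) => {
    const slice = values.slice(Math.max(0, i - window + 1), i + 1);
    return slice.reduce((a, b) => a + b, 0) / slice.length;
  });
}

/** Pearson correlation between two equal-length series, in [-1, 1]. */
function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx;
    const dy = ys[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  const denom = Math.sqrt(dx2 * dy2);
  return denom === 0 ? 0 : num / denom; // constant series: no correlation
}
```

Everything downstream (Patterns bullets, digest deltas, anomaly markers) is built from functions of roughly this size.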

The output is three user-visible surfaces:

  1. The Patterns card on Insights, up to five short bullets, ranked by signal strength.
  2. The Weekly Digest on Home, six quick stats plus week-over-week deltas.
  3. Chart anomaly markers on the sleep, blood pressure, and measurement charts, pulsing when a point is an unusual outlier.

Everything runs on the same localStorage read that renders your daily log. Total cost: a few milliseconds on open. No request, no auth, no spinner.
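A minimal sketch of that read path. The storage key and entry shape are illustrative assumptions; the function accepts anything with `getItem`, so an in-memory stand-in behaves the same as `window.localStorage`:

```typescript
// Illustrative entry shape; Refit's real schema may differ.
interface SleepEntry {
  date: string;  // ISO date of the night
  hours: number; // hours slept
}

// One synchronous read, parsed once, handed to pure analysis functions.
// No request, no auth, no spinner.
function loadSleep(storage: { getItem(key: string): string | null }): SleepEntry[] {
  const raw = storage.getItem("refit:sleep"); // key name is an assumption
  return raw ? (JSON.parse(raw) as SleepEntry[]) : [];
}
```

In the browser this is `loadSleep(window.localStorage)`; in a test it is the same call with a plain object.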

Why on-device is not a compromise

There is a common assumption that doing analytics on-device means you get a worse version. For a typical health tracker, the opposite is true. The signals that actually matter for daily wellness are simple: trends over weeks, outliers against your own baseline, how one of your habits relates to another. You do not need a trillion-parameter model to notice that your sleep goes to hell on nights you eat after 9pm.

You need:

  • Your own data, clean and complete.
  • Stable statistical primitives with documented thresholds.
  • Honest copy that says what the math found and nothing more.

All three are easier, not harder, when the compute stays on your phone. The data is already there. Your baseline is your baseline, not a population average that doesn't match you.

The Patterns bullets in Refit are tuned with conservative thresholds, on purpose. A trend has to persist. A correlation has to clear a significance bar. An anomaly has to be at least 1.5 standard deviations from your rolling mean with enough samples to matter. We would rather say nothing than say something confident and wrong. That bar is much easier to enforce when the same team writes both the statistics and the product copy.
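That gating can be sketched in a few lines. The 1.5-standard-deviation bar comes from the paragraph above; the minimum sample count is an assumed placeholder, since the post only says "enough samples to matter":

```typescript
const Z_THRESHOLD = 1.5;  // stated above
const MIN_SAMPLES = 7;    // assumption: "enough samples" is not specified

// Flag a value only when the rolling history is large enough AND the
// value sits at least Z_THRESHOLD standard deviations from its mean.
function isAnomaly(history: number[], value: number): boolean {
  if (history.length < MIN_SAMPLES) return false; // prefer saying nothing
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const sd = Math.sqrt(variance);
  if (sd === 0) return false; // flat history: no meaningful baseline
  return Math.abs(value - mean) / sd >= Z_THRESHOLD;
}
```

Both failure modes (too little data, too flat a baseline) resolve to silence rather than a false alarm, which is the design choice the copy describes.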

What the current engine does not do

We want to be precise about limits.

  • It does not predict your future blood pressure. It tells you your last 14 readings are trending in a direction.
  • It does not diagnose anything. It surfaces outliers; interpretation is yours and your clinician's.
  • It does not claim "significance" where it has none. A correlation on 10 days of sparse data will not appear; thresholds exist for a reason.
  • It does not run sentiment analysis on your mood notes. Your notes are private even from the analyzer.

Everything the engine outputs can be traced back to a specific, inspectable function, with inputs you can read in your own devtools. There is no black box, because we do not have one to offer you.

Where AI fits in the Refit story

Now the part people ask about: "Are you building an AI coach?"

Yes, and not yet, and not the way most companies mean.

Let's take the question apart. "AI coach" as the industry sells it today usually means:

  • A server-side LLM reads your health data, often continuously.
  • The provider logs prompts and completions, often for training or "quality."
  • Your data becomes a row in a vector database for personalization, also server-side.
  • The company's retention policy is a promise, not a property.

We will not ship that. It directly contradicts every architecture decision we've made. If we did, "private daily wellness tracker" stops being true, and we lose the one thing we are actually good at.

What we will ship, over time, is coaching that respects the same rule as the rest of the product: your data doesn't leave. Three paths, not mutually exclusive, could get us there. None of them has a shipping commitment yet; they are directional, not a roadmap:

1. Smarter on-device analysis (shipping now, getting deeper)

The analysis engine already does correlations, trends, and anomaly detection. The next layers, still pure functions, still local:

  • Conditional-probability narratives. "On nights you exercise before 6pm, you fall asleep 34 minutes faster on average."
  • Multi-signal stacks. "Three of your last five low-mood days shared late dinners and short sleep. Not causation, but worth noticing."
  • Streak-and-goal coaching. "You're close to your water goal on average, but consistently missing on weekends."

None of this requires a neural network. It requires good statistics and honest copy. The Patterns card is the foundation; "coaching" is just the same engine with a friendlier voice.
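As an illustration of how little machinery the first bullet takes, a conditional-probability check might look like this. Field names and the minimum-evidence count are assumptions, not Refit's schema:

```typescript
// One logged day, reduced to the two booleans this question needs.
interface Day {
  lateDinner: boolean;       // the condition
  moodBelowBaseline: boolean; // the outcome
}

// P(mood below baseline | late dinner), or null when there is too
// little evidence to say anything honestly.
function conditionalRate(days: Day[]): number | null {
  const given = days.filter((d) => d.lateDinner);
  if (given.length < 5) return null; // assumed evidence floor
  const hits = given.filter((d) => d.moodBelowBaseline).length;
  return hits / given.length;
}
```

The "narrative" layer is then just copy templated over the returned rate, with the null case rendering nothing at all.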

2. On-device inference (exploratory)

On-device language models are getting genuinely useful. Browser-embeddable runtimes can load a small quantized model entirely client-side, with no network call after the initial download. Such a model could receive a structured summary of your own data (the same output the Patterns card already computes) and produce conversational coaching text.

Your health data never leaves. The model runs in the same process that renders the UI. The "AI coach" becomes a view on top of the same local analysis, not a separate surveillance apparatus.
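Concretely, the handoff could be as thin as serializing the Patterns output into a prompt for a local model. The `Pattern` shape and function name are illustrative, and the model runtime itself is out of scope here:

```typescript
// Illustrative shape of what the Patterns card already computes.
interface Pattern {
  text: string;     // bullet copy, e.g. "Your sleep is trending down"
  strength: number; // ranking signal; higher = stronger
}

// Serialize the top findings into a prompt. A local, in-browser model
// would receive this string; no raw entries, no network call.
function buildCoachPrompt(patterns: Pattern[]): string {
  const bullets = [...patterns]
    .sort((a, b) => b.strength - a.strength)
    .slice(0, 5)
    .map((p) => `- ${p.text}`)
    .join("\n");
  return `Rephrase these findings as brief, friendly coaching:\n${bullets}`;
}
```

Note what the model never sees: the raw log, the notes, anything the statistical layer did not already decide was worth surfacing.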

3. Bring your own key (exploratory)

Some users will want the full power of a frontier model. One option we are considering is an explicit, opt-in flow: you paste your own API key for a provider of your choice; a button on a specific insight sends a structured, scoped prompt to that provider from your browser, directly. No Refit server in the middle, no background calls. The data that leaves would be only what you chose to send, to a provider you pay, under your own terms with that provider.

This is the same pattern that mature privacy-respecting tools use. It puts the user in charge and the company out of the data path.
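A sketch of what that one explicit call might look like from the browser. The endpoint shape follows the public OpenAI chat-completions API as one example provider; the model name and prompt wording are illustrative, and nothing is sent until the user presses the button:

```typescript
// The request the browser would build on an explicit button press.
interface OutboundRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// One scoped insight, the user's own key, direct to the provider.
function buildByoKeyRequest(apiKey: string, insight: string): OutboundRequest {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // user's key, user's terms
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model choice
      messages: [
        { role: "user", content: `Coach me on this insight only: ${insight}` },
      ],
    }),
  };
}
// The caller would then fetch(req.url, req) from the browser itself:
// no Refit server in the path, and only the chosen insight leaves.
```

Because the request object is built before it is sent, it is also inspectable: the UI can show the user exactly what would leave the device, byte for byte.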

Why this is consistent with everything we've said

Our previous posts have made a few promises:

  • "Your health data belongs on your device." Still true.
  • "No AI-generated insights designed to trigger re-engagement." Still true; re-engagement bait is the opposite of what we're building. The engine is tuned to say less, not more.
  • "No AI coach analyzing your patterns in the cloud." Specifically still true. A future on-device or BYO-key coach is not that.

The line we draw is not "no AI." The line is: no component of Refit, ever, reads your health data on a server we operate. Everything we've shipped, and everything in the roadmap, sits on the correct side of that line.

What it looks like a year from now

We do not want to over-promise timelines, so treat this as direction, not a ship date. None of the items below are committed features:

  • The Patterns card could grow into a narrative coach, still pure-function, still on-device.
  • The Weekly Digest could gain a "what to try next" section backed by the same correlation and anomaly engine.
  • A small on-device language model could offer optional natural-language summaries of what the engine found, with no network access in its hot path.
  • A bring-your-own-key flow could let power users get full-model coaching with a clear, inspectable request.
  • The engine itself could be published as readable source with docs, so advanced users can verify, extend, and audit.

Underneath all of that, the same unchanged architecture. Your data, on your device, merged across your devices through zero-knowledge sync, analyzed in-place, summarized in-place, and shipped in an export format you could read in a text editor.

An AI coach that works this way is a feature. An AI coach that requires your data to leave is a liability we refuse to inherit. Those two things are not the same app, and Refit is very deliberately only going to be one of them.