
Rules — Review queue

Candidate rules proposed by the weekly miner. Single-reviewer governance (Baptiste).

How this works

What is a rule?

A rule is an additive score adjustment triggered by a condition on a signal. Example: "if a trial has 10+ days without login, subtract 20 pts". Rules extend the 5-pillar formula (activation, engagement, depth, team, momentum) without rewriting it — they capture patterns the pillars miss.
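As a minimal sketch of the mechanic (names and the 0–100 clamp are illustrative assumptions, not the real scoring engine), an additive rule is just a predicate plus a signed delta applied on top of the pillar score:

```python
# Minimal sketch of additive rules on top of a pillar score.
# Assumes scores are clamped to 0-100; names are illustrative.
def apply_rules(pillar_score, signals, rules):
    """Each rule is (predicate, delta); deltas are additive adjustments."""
    score = pillar_score
    for predicate, delta in rules:
        if predicate(signals):
            score += delta
    return max(0, min(100, score))

# Example rule: "trial with 10+ days without login -> subtract 20 pts"
rules = [(lambda s: s["is_trial"] and s["days_since_login"] >= 10, -20)]
```

The pillar formula itself is untouched; rules only adjust its output when their condition fires.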

Where do candidates come from?

Every Monday at 05:30 Paris, the rule miner brute-forces thousands of combinations of (signal × threshold × cohort × outcome) across 30 days of training data. For each one, it asks: "does the outcome actually happen at a different rate when this condition fires, or could it just be luck?" (the technical test is a two-proportion Z-test). Because we try thousands of patterns at once, a few will look "significant" by pure chance — so we apply a correction that caps the expected false-positive rate among kept patterns at ~10% (Benjamini-Hochberg FDR, q=0.10). The 20 strongest survivors (ranked by effect size × sample strength) get surfaced here. Tautological patterns (e.g. "logged in today → less churn" — already in the engagement pillar) are filtered upstream.
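The statistical core can be sketched in a few lines — this is an illustrative stdlib-only reimplementation of the two tests named above, not the miner's actual code:

```python
# Sketch of the miner's statistics: a two-proportion Z-test per
# candidate, then Benjamini-Hochberg to control the false-discovery
# rate across thousands of candidates. Stdlib only; illustrative.
from math import sqrt
from statistics import NormalDist

def two_prop_z(k_in, n_in, k_out, n_out):
    """Is the outcome rate inside matching rows different from outside?
    Returns (lift, two-sided p-value)."""
    p_in, p_out = k_in / n_in, k_out / n_out
    p_pool = (k_in + k_out) / (n_in + n_out)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_in + 1 / n_out))
    z = (p_in - p_out) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_in - p_out, p_value

def bh_keep(pvalues, q=0.10):
    """Benjamini-Hochberg: keep the largest prefix of sorted p-values
    with p[i] <= q * rank / m. Returns the set of kept indices."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    m = len(pvalues)
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= q * rank / m:
            cutoff = rank
    return set(order[:cutoff])
```

With q=0.10, at most ~10% of the surviving candidates are expected to be flukes, which is why a low per-candidate p-value alone is not enough to reach the queue.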

What can you do with a candidate?

  • Accept — pattern looks genuine and actionable. Moves the rule to the Accepted pool (still NOT active — no live score impact yet).
  • Attach to formula — pick a candidate (shadow) formula and copy the rule into its definition.rules[]. The rule then runs in shadow for 14 days; if the candidate beats the active formula, promotion to live is a manual migration.
  • Reject with a note — pattern is tautological, confounded, already covered by a pillar, or domain-implausible. The reject note is kept to help future mining runs learn.
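For orientation, an attached rule might look roughly like this inside definition.rules[] — the field names below are a hypothetical shape, not the real schema (check the runbook linked further down for the authoritative one):

```python
# Hypothetical shape of a rule after "Attach to formula".
# All field names are illustrative assumptions.
rule = {
    "id": "R_candidate_2026-04-24_021",
    "condition": {"signal": "days_since_trial_converted", "op": "<=", "value": 99},
    "cohort": "no_sandbox",
    "delta": -12,          # points added to the score when the condition fires
    "source": "miner",     # provenance, kept for auditability
    "status": "shadow",    # runs in shadow for 14 days before any promotion
}
```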

Column cheatsheet

  • Support — how many (location, day) rows matched the condition during the training window. Think of it as the sample size. We require ≥ 30: with fewer observations, any difference we see could easily be random noise rather than a real pattern.
  • Lift — the gap between the outcome rate inside matching rows and outside (rate_in − rate_out), expressed in percentage points. Lift of +12 pp means the outcome happens 12 pp more often when the condition is true than when it's false. Whether that's good or bad depends on the outcome — a positive lift on "churned" is bad news; on "converted" it's good news.
  • p-value — how likely it is that this pattern showed up by pure coincidence, if the condition actually had no real effect. 0.01 means "if there were truly no signal, we'd see something this strong about 1 time in 100". We only surface patterns under 0.05 (max 1 in 20 could be a fluke) and additionally pass a multiple-testing correction (BH-FDR q=0.10) so that ≤ 10% of what we show you is expected to be noise. Lower p = less likely to be coincidence.
  • Δ — the score adjustment this rule would add (or subtract) if attached, in points (−30 to +30). Signed so that a "bad" outcome lift becomes a negative delta (penalty), and a "good" outcome lift becomes a positive delta (bonus). It's a starting suggestion — you can tune it before attaching.
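The four columns can be derived from the raw training rows along these lines — a sketch assuming each row is a (condition_matched, outcome_happened) pair; the delta scaling shown is a made-up heuristic, since the real suggestion logic is not specified here:

```python
# Illustrative computation of Support, Lift and a suggested delta from
# raw (condition_matched, outcome_happened) rows. The delta heuristic
# (lift * 50, clamped to +/-30) is an assumption for illustration only.
MIN_SUPPORT = 30
MAX_DELTA = 30

def candidate_stats(rows, outcome_is_bad):
    in_rows = [o for m, o in rows if m]
    out_rows = [o for m, o in rows if not m]
    support = len(in_rows)
    if support < MIN_SUPPORT or not out_rows:
        return None  # too little evidence on one side of the split
    rate_in = sum(in_rows) / len(in_rows)
    rate_out = sum(out_rows) / len(out_rows)
    lift = rate_in - rate_out  # as a fraction; x100 for percentage points
    # Sign the delta: positive lift on a bad outcome becomes a penalty.
    sign = -1 if outcome_is_bad else 1
    delta = max(-MAX_DELTA, min(MAX_DELTA, round(sign * lift * 50)))
    return {"support": support, "lift": lift, "delta": delta}
```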

Full runbook: docs/product/health-lab-rules-governance.md

Worked example — see a real candidate explained

Imagine the miner surfaces the two candidates below. Can we tell genuine insight from structural noise?

Candidate A — tautological (reject)

Condition: is_trial = true
Cohort: All
Outcome: Trial converted 60d
Support: 1337
Lift: +0.364
p-value: ~0
Δ: +15 pts

During the training window, 1337 (location, day) rows matched is_trial = true. Among them, the 60-day conversion rate was 36 percentage points higher than in non-matching rows. p ≈ 0, so the pattern is statistically unmistakable.

But a trial conversion can only happen from a trial. This rule is essentially restating 'only trials convert' — a structural tautology, not a novel signal. The +15 delta would just shift every trial score by a constant. Reject with a note.
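One cheap structural check catches this class of candidate — a sketch, not the miner's actual upstream filter: if the outcome never occurs outside the matching rows, the "lift" is an artifact of the definitions rather than a signal.

```python
# Structural-tautology check (sketch): if the outcome can only ever
# occur on rows where the condition holds, rate_out is structurally
# zero and the lift is definitional, not informative.
def looks_tautological(rows):
    """rows: iterable of (condition_matched, outcome_happened)."""
    outcome_outside = sum(1 for m, o in rows if o and not m)
    return outcome_outside == 0
```

Candidate A trips this check: every 60-day trial conversion happens on an is_trial row, so the rule restates the outcome's own precondition.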

Candidate B — genuine insight (attach)

Condition: days_since_trial_converted ≤ 99
Cohort: No sandbox
Outcome: Churn 60d
Support: 801
Lift: +0.290
p-value: 8.3e-141
Δ: −12 pts

The miner also finds rules like this one: 801 matching (location, day) rows for recently converted customers (within their first ~3 months post-trial) show a churn rate 29 percentage points higher than longer-tenured ones.

This 'honeymoon gap' isn't captured by any existing pillar. Attaching the rule penalizes new conversions so CSMs focus on them earlier. Worth attaching to a candidate formula for a 14-day shadow test.

Review queue (29) · Active rules (0) · Retired (10)

Sorted by lift, descending.

| Condition | Rule ID | Cohort | Outcome | Support | Lift | p-value | Δ | AI triage | Status | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Trial status = true AND Days since CRM touch ≤ 17 | R_candidate_2026-04-27_012 | Trial | Trial converted 60d | 142 | 0.630 | 8.6e-158 | +25 | 62 promising | Proposed | 2026-04-27 |
| Trial status = true AND Days since CRM touch ≤ 25 | R_candidate_2026-04-27_010 | Trial | Trial converted 60d | 170 | 0.591 | 1.5e-162 | +24 | 74 promising | Proposed | 2026-04-27 |
| Days since trial converted ≤ 99 | R_candidate_2026-04-24_016 | No sandbox | Contraction 60d | 801 | 0.314 | 9.0e-153 | −13 | 85 high signal | Proposed | 2026-04-24 |
| Days since trial converted ≤ 99 | R_candidate_2026-04-24_021 | No sandbox | Churn 60d | 801 | 0.290 | 8.3e-141 | −12 | 83 high signal | Proposed | 2026-04-24 |
| Trial status = false AND Days since trial converted ≤ 101 | R_candidate_2026-04-27_003 | Subscription | Contraction 60d | 991 | 0.266 | 6.0e-281 | −11 | 64 promising | Proposed | 2026-04-27 |
| Days since trial converted ≤ 102.5 | R_candidate_2026-04-27_002 | All | Contraction 60d | 1002 | 0.265 | 0.0e+0 | −11 | 83 high signal | Proposed | 2026-04-27 |
| Days since trial converted ≤ 102 | R_candidate_2026-04-24_019 | All | Contraction 60d | 996 | 0.265 | 0.0e+0 | −11 | 52 redundant | Proposed | 2026-04-24 |
| Trial status = false AND Days since trial converted ≤ 101 | R_candidate_2026-04-27_009 | Subscription | Churn 60d | 991 | 0.246 | 1.3e-264 | −10 | 62 promising | Proposed | 2026-04-27 |
| Days since trial converted ≤ 102.5 | R_candidate_2026-04-27_007 | All | Churn 60d | 1002 | 0.245 | 7.6e-321 | −10 | 77 promising | Proposed | 2026-04-27 |
| Days since trial converted ≤ 102 | R_candidate_2026-04-24_026 | All | Churn 60d | 996 | 0.245 | 7.5e-320 | −10 | 50 redundant | Proposed | 2026-04-24 |
| Days since trial converted ≤ 180 | R_candidate_2026-04-24_015 | No sandbox | Contraction 60d | 1437 | 0.237 | 1.1e-120 | −9 | 68 promising | Proposed | 2026-04-24 |
| Days since trial converted ≤ 144 | R_candidate_2026-04-24_024 | No sandbox | Contraction 60d | 1197 | 0.231 | 6.9e-107 | −9 | 62 promising | Proposed | 2026-04-24 |
| Days since trial converted ≤ 182 | R_candidate_2026-04-27_001 | No sandbox | Contraction 60d | 1456 | 0.230 | 1.5e-112 | −9 | 70 promising | Proposed | 2026-04-27 |
| Days since trial converted ≤ 146 | R_candidate_2026-04-27_008 | No sandbox | Contraction 60d | 1217 | 0.222 | 2.9e-98 | −9 | 66 promising | Proposed | 2026-04-27 |
| Days since trial converted ≤ 180 | R_candidate_2026-04-24_020 | No sandbox | Churn 60d | 1437 | 0.220 | 5.7e-112 | −9 | 65 promising | Proposed | 2026-04-24 |

Rows 1–15 of 29 — page 1 of 2.

Rule of thumb: if the condition restates the cohort/outcome filter, it's tautological. If it introduces a new signal (time-since-event, usage pattern, revenue threshold…), it's a discovery worth attaching.