Contextual Intelligence in Adaptive Wellness


When the right nudge arrives at the right moment: a conceptual framework and study protocol for understanding why and when SMS-based interventions change behavior in emerging adults.
Lorenzo Scardicchio · Anjala Krishen · Journey Research Group
Working Paper — April 2026
“That first set of data would be interesting to analyze — what are the correlations? What kind of relationships are there? What do you see at an initial level in terms of a model?”
— Anjala Krishen, research planning session

I. The Problem

Most nudge research answers the wrong question

The dominant question in the digital wellbeing literature is: do nudges work? That question is too blunt. It treats nudges as a monolithic treatment, treats students as a monolithic population, and treats engagement as the outcome that matters. The result is research that can tell you average effects exist — usually small ones — while telling you almost nothing about why they exist, for whom, or under what conditions.

Journey’s nine-month university pilot produced a directional signal strong enough to suggest something more precise is happening. Students did not respond uniformly. Timing appeared to matter more than content. Personalization functioned as a prerequisite, not a feature. Awareness increased before behavior changed. And the system’s scarcity constraint — only three nudges per week — forced every delivery decision to be a selection decision, not a scheduling decision.

That pattern raises a different question: what makes a nudge land? Not whether nudges work on average, but what specific combination of timing, context, personalization, and state-awareness turns a text message into a moment of genuine reflective interruption.

What most research asks

  • Did the intervention group improve more than control?
  • Is there a main effect of “nudging” on wellbeing?
  • Do students who receive messages report less screen time?
  • Is digital wellness messaging effective?

What this research asks

  • Why did some nudges land and others get ignored?
  • Does timing explain more variance than content?
  • Which students benefit and which disengage — and why?
  • Does the adaptive loop create lasting change or only short-term reaction?
“The interesting story to me here is how your initial state is affected. Do you actually create a permanent change? Or is it that we’re just working short term?”
— Anjala Krishen, research planning session

II. The Central Argument

Context is not a feature. Context is the mechanism.

Anind Dey defined context as “any information that can be used to characterize the situation of an entity.” In computing, context-awareness means using that information to provide relevant services. Journey extends that definition into behavioral intervention: context-awareness means using the student’s actual situation — their schedule, cognitive load, energy state, time of day, and likely emotional condition — to determine not just when to send a nudge, but what kind of nudge to send.

This is a stronger claim than “personalization helps.” It is a claim about mechanism. The hypothesis is that timing and context do more work than content in determining whether a nudge produces reflective interruption or gets dismissed. A perfectly worded nudge delivered at the wrong moment is noise. A simple question delivered at the moment of actual decision is an intervention.

Core Thesis
The Scarcity Constraint as Architectural Decision
Because Journey sends only three nudges per week, every delivery is a high-stakes selection decision. The question stops being “what should we say?” and becomes “when is the single best moment this week to say something this student can actually act on?”

Scarcity converts the timing engine from a scheduler into a selection engine. It forces the system to look for moments when an intervention is both possible (the student has a real break) and needed (the student’s likely state would benefit from a particular kind of support). The aim is not productivity. It is helping the student manage life holistically — across academics, stress, health, and connection.

Status: Observed in pilot

III. The Context Engine

Six dimensions evaluated before every nudge

Journey’s context engine draws on Dey’s four primary context categories — identity, location, activity, and time — and extends them with relational context and inferred internal state. The system does not send nudges and then hope they are relevant. It evaluates the student’s likely situation and selects the nudge type accordingly.

Fig. 1 — Context-Aware Nudge Selection
[Diagram: six context inputs (Schedule, Time of Day, Cognitive Load, Student Profile, Relational Context, Internal State) feed a Selection Engine constrained to 3 nudges/week, highest-leverage only. Outputs: nudge type (grounding / recovery / focus / social), tone (hype / gentle / funny / deep), timing (pre-class / post-class / gap / evening), and content frame (question / prompt / mirror).]

The Timing Logic

The system does not distribute nudges evenly across the week. It identifies the specific windows where an intervention is both available and needed. Four such windows recur often enough to have their own logic.

Pre-Class
Grounding & Intention

Before a class begins, a brief orienting prompt helps a student arrive mentally, not just physically. The nudge sets focus before the demand hits.

Post-Class
Capture & Decompress

After class, the student either consolidates or dissipates. A reflection prompt catches learning while it is still warm.

After Stacked Classes
Recovery & Regulation

After two or three back-to-back classes, the right nudge is not academic. It is physiological: stretch, walk, water, breathe. The system should know the difference.

Long Gaps
Focus Sprint or Social Prompt

A three-hour gap is an opportunity. Depending on the student’s goals and state, it could be a focus window or a connection window.
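The window logic above can be sketched as a simple classifier over one day's schedule. This is a minimal illustration, not Journey's implementation; the `Block` type, the 30-minute back-to-back threshold, and the 120-minute long-gap threshold are assumptions chosen to match the examples in the text.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Block:
    start: time
    end: time

def minutes_between(a: time, b: time) -> int:
    return (b.hour * 60 + b.minute) - (a.hour * 60 + a.minute)

def classify_windows(classes: list[Block]) -> list[tuple[str, str]]:
    """Label each usable break in one day's schedule with a nudge purpose."""
    windows = [("pre-class", "grounding & intention")]
    stacked = 1  # consecutive classes with no usable break between them
    for prev, cur in zip(classes, classes[1:]):
        gap = minutes_between(prev.end, cur.start)
        if gap < 30:              # back-to-back: no room to intervene
            stacked += 1
            continue
        if stacked >= 2:          # first real break after stacked classes
            windows.append(("after-stacked", "recovery & regulation"))
        elif gap >= 120:
            windows.append(("long-gap", "focus sprint or social prompt"))
        else:
            windows.append(("post-class", "capture & decompress"))
        stacked = 1
    return windows

# The Tuesday from Fig. 2: three stacked morning classes, then a long gap.
tuesday = [
    Block(time(9, 0), time(10, 15)),    # Bio Lab
    Block(time(10, 30), time(11, 45)),  # Chem 102
    Block(time(12, 0), time(13, 30)),   # Stats 200
    Block(time(16, 0), time(17, 15)),   # Seminar
]
# classify_windows(tuesday) labels the 1:30-4:00 gap "after-stacked":
# recovery, not more academic effort, even though the gap is long.
```

Note the ordering of the checks: a long gap that follows stacked classes is still a recovery window, which is exactly the Tuesday Problem discussed in Section IV.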

IV. The Tuesday Problem

Why Tuesday afternoon matters more than Monday morning

Consider a student with three consecutive classes on Tuesday, ending around 1:30 PM, followed by a gap before a 4 PM class. A generic system might send a focus reminder Monday morning because Monday feels like the “start” of the week. A contextually intelligent system recognizes that Tuesday afternoon — after cognitive depletion, before another demand — is the highest-leverage moment.

The nudge that lands there should not say “remember to study.” It should recognize that after three back-to-back classes, the student is likely drained. The most helpful intervention may be a recovery prompt: a walk, a stretch, a glass of water, a few minutes outside. That is not a concession. That is the system working correctly. It is selecting the intervention that matches the student’s likely nervous-system state, not their generic goal list.

Fig. 2 — A Student’s Tuesday: Where Context Selects the Nudge
Monday: 9:00 Psych 101 · 2:00 English 201
Tuesday: 9:00 Bio Lab · 10:30 Chem 102 · 12:00 Stats 200 · ← NUDGE HERE · 4:00 Seminar
Wednesday: 9:00 Psych 101 · 2:00 English 201
Thursday: 10:30 Chem 102 · 12:00 Stats 200
Friday: 9:00 Bio Lab
“After stacked classes (two to three back-to-back): purpose is recovery, reset nervous system. The best intervention after cognitive overload may be decompression, not more academic effort.”
— Journey Empathy Algorithm, design principle

Because the system sends only three nudges per week, it must be highly selective. Beyond the schedule windows above, it should watch for the moments when stress is likely peaking, when energy is dipping, or when a small action could meaningfully improve focus, recovery, connection, or wellbeing. The aim is not just productivity, but helping the student manage life holistically.
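Under that constraint, the weekly selection step reduces to ranking candidate moments and keeping the top three. A minimal sketch, assuming each candidate window carries a `possible` flag (the student has a real break) and a scalar `need` score (both names are illustrative, not system fields):

```python
def select_top_nudges(candidates: list[dict], k: int = 3) -> list[dict]:
    """Scarcity-constrained selection: keep the k highest-leverage moments."""
    viable = [c for c in candidates if c["possible"]]   # intervention must be possible
    viable.sort(key=lambda c: c["need"], reverse=True)  # and most needed first
    return viable[:k]

# One hypothetical week of candidate windows.
week = [
    {"id": "mon-am",  "possible": True,  "need": 0.2},
    {"id": "tue-gap", "possible": True,  "need": 0.9},  # after stacked classes
    {"id": "wed-pm",  "possible": False, "need": 0.8},  # high need, no usable break
    {"id": "thu-gap", "possible": True,  "need": 0.6},
    {"id": "fri-am",  "possible": True,  "need": 0.4},
]

ids = [c["id"] for c in select_top_nudges(week)]
# ids == ["tue-gap", "thu-gap", "fri-am"]: the Tuesday gap wins, and the
# high-need Wednesday moment is skipped because there is no usable break.
```

The point of the sketch is the conjunction: a moment must clear the "possible" gate before "need" is even consulted, which is what turns a scheduler into a selection engine.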

V. The Adaptive Wellness Model

A feedback loop, not a broadcast system

Journey is not a messaging tool that happens to be personalized. It is an adaptive feedback system. The student enters an initial state. The system assesses that state. Nudges are delivered based on context. The student’s responses — and non-responses — feed back into the system. Over time, the system should learn and the student should grow. That adaptive loop is the novel research story.

Fig. 3 — The Adaptive Wellness Feedback Loop
Step 01 Initial Assessment → Step 02 Context Engine → Step 03 SMS Delivery → Step 04 Response & Behavior → Step 05 Pulse Survey & Advisor. Feedback: steps 04–05 update steps 01–02.
“How does adaptive wellness work — to me, that would be interesting. The paper would be really mapped to the software you’ve created.”
— Anjala Krishen

That adaptive loop maps directly onto Anjala’s framing of the research as feedback-control analysis. The initial survey establishes the baseline state. The context engine produces the intervention. The response data and follow-up surveys measure the output. The novel claim is that the system’s intelligence lies not in any single nudge, but in the loop itself — the ability to start from a student’s declared state, intervene at contextually selected moments, observe the response, and adjust.
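The loop can be sketched as a small stateful object whose weekly run wires steps 01 through 05 together. All names here are illustrative assumptions; the real update rules are precisely what the study is designed to measure, not something this sketch claims to know.

```python
class AdaptiveLoop:
    """Toy model of the Fig. 3 feedback loop, not Journey's implementation."""

    def __init__(self, baseline: dict):
        self.state = dict(baseline)            # Step 01: initial assessment

    def run_week(self, context_engine, deliver, observe) -> dict:
        nudges = context_engine(self.state)    # Step 02: select this week's nudges
        for nudge in nudges:
            deliver(nudge)                     # Step 03: SMS delivery
        responses = observe()                  # Steps 04-05: behavior + pulse survey
        self.state.update(responses)           # feedback: outputs update the model
        return responses
```

The analytic claim in the text maps onto this structure directly: the research object is not any single `deliver` call but the trajectory of `self.state` across repeated `run_week` cycles.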

VI. The Illustrative Case

How a single coaching relationship revealed the mechanism

Before Journey was tested at scale, sixteen coaching transcripts from a single beta user produced 123 nudges and revealed three patterns that became foundational to the system’s design. These patterns illustrate, at the individual level, the mechanisms the larger study is designed to test at the population level.

Pattern 01
Mirror Material

The highest-performing nudges were not expert advice. They were the user’s own words — metaphors, self-descriptions, moments of clarity — reflected back during moments of decision. This suggests that what makes a nudge land is not its cleverness but its recognition: the student sees themselves in it.

Pattern 02
Inflection Points

Certain exchanges produced visible shifts: changes in wording, energy, or orientation. These inflection points occurred when the coaching surfaced a tension the person had not yet articulated. The system learned to create the conditions for insight rather than to deliver insight directly.

Pattern 03
Obligation vs. Agency Language

A recurring distinction emerged between “should” language (obligation, external pressure) and “pull” language (attraction, intrinsic motivation). When nudges were reframed from the obligation register to the agency register, engagement changed. This became a design rule: orient toward positive agency, not deficiency correction.

These patterns matter for the larger research because they suggest testable mechanisms. Mirror material tests whether self-referential content outperforms generic content. Inflection points test whether productive instability outperforms resolution. Obligation vs. agency language tests whether framing moderates engagement. Each pattern derived from n=1 becomes a hypothesis testable at n=74 and beyond.

VII. Research Questions

Five questions that move the field

These questions are organized hierarchically. The first is the central question. The remaining four collectively build the explanatory model: not “prove the product works,” but “build a serious model of what is happening, for whom, why, and over what time horizon.”

RQ1
Lasting Change or Short-Term Reaction?
Does an adaptive, context-aware SMS intervention produce lasting changes in awareness, wellbeing, and self-directed behavior among emerging adults — or only short-term reactions?

This is the longitudinal question. It requires before-and-after measurement using comparable instruments. It is the question Anjala most clearly wants answered: does the initial state actually change, or does the intervention produce momentary compliance that fades when nudges stop?

Status: Under test
RQ2
Timing as Primary Mechanism
Does contextual timing — matching nudge delivery to the student’s schedule, cognitive load, and likely energy state — explain more variance in engagement and outcome than nudge content alone?

This tests the core claim of the contextual intelligence model: that when you intervene matters as much as, or more than, what you say. It requires comparing timing-matched vs. randomly timed nudges, or modeling timing features as predictors of response.

Status: Observed in pilot
RQ3
Engagement Segmentation
How do student engagement profiles — responders, avoiders, late responders, high-risk students — differ in baseline characteristics, response patterns, and longitudinal outcomes?

This tests whether the intervention works differently for different people, and whether those differences are predictable from intake data. It also addresses the critical sub-question: why do some students disengage, and what does that reveal about the intervention’s limits?

Status: Under test
RQ4
Proximal Mechanisms
Through what proximal mechanisms does the intervention operate — values alignment, mirror material, personalized tone, schedule-awareness, question-based framing, or scarcity — and can those mechanisms be distinguished from one another?

This is the question that prevents the paper from becoming a black-box evaluation. It tests whether observed effects can be attributed to specific ingredients.

Status: Theoretical
RQ5
Institutional Relevance
Do changes in awareness and values-behavior alignment predict downstream institutional outcomes — persistence, help-seeking, belonging, and academic self-efficacy?

This connects the awareness-first model to outcomes universities care about. It also tests the theoretical claim that awareness is upstream of behavior: if you increase a student’s capacity to notice and choose, downstream improvements follow.

Status: Theoretical

VIII. Study Design

Two studies, one explanatory model
Study 1
Baseline Landscape
What
Map the incoming student landscape before any intervention. Analyze initial survey data to identify correlations among goals, pain points, wellbeing, digital behavior, and demographic segments.
Protocol
Correlational analysis across intake variables. Cluster analysis to identify natural student segments. Path diagram showing baseline relationships among variables.
Measurement
Self-selected goals, stated challenges, wellbeing (ACIP / Healthy Minds Index), adapted QoL instrument, digital awareness items, demographic variables.
Output
A baseline model: what the world looks like before Journey does anything. No causal claims. The status quo, clearly mapped.
Study 2
Longitudinal + Mechanism Analysis
What
After a semester of adaptive nudging, test what changes, for whom, and through what mechanisms.
Protocol
Comparable follow-up survey administered before finals. Engagement segmentation from SMS data. Before-after comparison with paired change scores. Mechanism modeling: timing-match, personalization, and agency as mediators. Context features as predictors of individual nudge response.
Measurement
All Study 1 instruments repeated, plus: engagement profile (read rate, action rate, latency), timing-match score (system-logged), perceived personalization, perceived agency, self-compassion (abbreviated SCS), advisor notes (structured form).
Output
The core research contribution: does the adaptive loop produce durable change? For whom? Through what mechanisms?

The Correlation vs. Causation Problem

Anjala’s most persistent concern is the directionality question: are students with better wellbeing simply more likely to respond to nudges, or do nudges actually improve wellbeing? This is not a minor methodological footnote. It is the central threat to the study’s credibility. A paper that accidentally implies causation from correlational data will fail with skeptical readers.

The design addresses this at multiple levels. The before-and-after structure allows person-level change scores. Engagement segmentation allows trajectory comparison across groups. Mechanism variables test whether specific features predict change. Future iterations will incorporate micro-randomized components enabling causal claims about specific features.
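The person-level change-score step is straightforward to make concrete. Below is a minimal stdlib-only sketch, with illustrative variable names, that computes paired differences for students present at both waves and compares mean change across engagement segments:

```python
def change_scores(baseline: dict, followup: dict) -> dict:
    """Paired differences, restricted to students measured at both waves."""
    return {sid: followup[sid] - baseline[sid]
            for sid in baseline if sid in followup}

def mean_change_by_segment(deltas: dict, segments: dict) -> dict:
    """Average change score within each engagement segment."""
    sums, counts = {}, {}
    for sid, delta in deltas.items():
        seg = segments.get(sid, "unknown")
        sums[seg] = sums.get(seg, 0.0) + delta
        counts[seg] = counts.get(seg, 0) + 1
    return {seg: sums[seg] / counts[seg] for seg in sums}

# Hypothetical wellbeing scores for three students.
baseline = {"s01": 3.0, "s02": 2.0, "s03": 4.0}
followup = {"s01": 4.0, "s02": 2.5}            # s03 lost to attrition
segments = {"s01": "responder", "s02": "late responder"}

deltas = change_scores(baseline, followup)     # {"s01": 1.0, "s02": 0.5}
```

Note what the sketch makes explicit: attrition drops a student from the paired analysis entirely, which is itself a quantity worth reporting per segment.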

Epistemic Scope
What This Design Can and Cannot Prove
This design can establish what the baseline looks like, whether changes occur, whether changes differ by engagement profile, and whether mechanism variables predict change. It cannot definitively establish that Journey caused those changes.

The paper should state this clearly. The strength is in the explanatory model, not the causal claim. That is what makes it publishable: honest about limits, rich in mechanism.

Status: Design acknowledged

IX. Variables & Measurement

What gets measured, and how
Variable · Type · Instrument / Source · When
Wellbeing (ACIP) · Outcome · Healthy Minds Index (17 items) · Baseline + End
Quality of Life · Outcome · Adapted college-student QoL (Krishen instrument, 5-pt, 5–7 items) · Baseline + End
Digital Awareness · Outcome · Custom: noticing compulsive use, interrupting patterns, values alignment · Baseline + End + Pulse
Doom Scrolling · Outcome · Self-report frequency + optional iOS Screen Time donation · Baseline + End
Engagement Profile · Moderator · SMS data: read rate, action rate, latency, reply patterns · Continuous (logged)
Timing Match · Mediator · System-generated: was the nudge in a high-leverage window? · Per-nudge (logged)
Perceived Personalization · Mediator · Survey: “The nudges understood my situation” · End + Pulse
Perceived Agency · Mediator · Survey: “I feel more able to direct my digital behavior” · Baseline + End
Schedule Density · Context · Derived: consecutive classes, seated minutes, gap length · Semester-level
Self-Compassion · Mediator · Self-Compassion Scale (abbreviated) · Baseline + End
Goals / Pain Points · Baseline · Journey initial assessment · Intake
Advisor Notes · Qualitative · Structured follow-up form (coded + open-ended) · Per check-in
“Would it be possible to do the full survey again? Beginning and end — and we can look at a lot of cool things at that point.”
— Anjala Krishen

X. Engagement Segmentation

Not one population — many

One of Anjala’s clearest analytical instincts is that the research gets much stronger when it stops treating students as one undifferentiated group. Different students respond to nudges differently for intelligible reasons. The study should identify those segments and test whether outcomes differ across them.

Segment A
Nudge Responders

High read rate, high action rate, consistent engagement. The system works as designed. Were they already doing well, or did the intervention move them?

Segment B
Late Responders

Low initial engagement, increasing over time. The most interesting mechanism story: what changed? Did trust develop gradually? Did circumstances shift?

Segment C
Nudge Avoiders

Enrolled but disengaged. Not hostile. Notification fatigue? Anti-AI sentiment? Wrong tone? Wrong timing? This group is as important as the responders.

Segment D
High-Risk Students

Flagged by initial assessment or advisor observation. Does the intervention help those who need it most, or primarily serve the already-capable?

The avoider segment is not a data loss. It is a research finding. Understanding why some students do not engage — and whether their reasons are addressable — prevents the study from reading as one-sided or promotional. It also generates the most interesting questions for intervention refinement.

“It would be interesting to map out types of people too — the nudge responders, the nudge avoiders, whatever.”
— Anjala Krishen

XI. Hypotheses

What we expect to find — and what would falsify it
H1
Context-Matched Nudges Outperform Random Timing
Nudges delivered in identified high-leverage windows (post-stacked-classes, pre-class, long-gap) will produce higher response rates and stronger proximal outcomes than nudges delivered at arbitrary times.

Falsifiable if: timing-match scores do not predict engagement or outcomes.

Status: Observed
H2
Awareness Precedes Behavior Change
Students will report increased awareness of compulsive patterns before showing measurable reductions in problematic use. Awareness is upstream of behavior in the causal chain.

Falsifiable if: behavior change occurs without awareness change, or awareness change does not predict subsequent behavior change.

Status: Observed
H3
Engagement Profiles Predict Differential Outcomes
Responders, late responders, and avoiders will show different trajectories on wellbeing and awareness measures — and those trajectories will be partially predicted by baseline characteristics.

Falsifiable if: segments do not differ on outcomes, or baseline variables do not predict segment membership.

Status: Under test
H4
The Adaptive Loop Produces Durable Change
Students who engaged with the adaptive system across the full semester will show sustained improvement from baseline to follow-up, not merely spike-and-decay patterns.

Falsifiable if: improvements are concentrated in early weeks and decay, or post-intervention follow-up shows full regression to baseline.

Status: Under test

XII. Study Protocol

The concrete research plan

Phase 1: Baseline Analysis (Now)

Analyze existing initial survey data. Map correlations among goals, pain points, wellbeing indicators, and student characteristics. Identify natural clusters. Build the baseline model. Publish as Study 1.

Phase 2: End-of-Semester Survey (Before Exams)

Administer follow-up survey using comparable items: adapted QoL instrument, Healthy Minds Index, digital awareness items, perceived personalization, perceived agency. Deploy before students mentally check out.

Phase 3: Engagement Segmentation

Classify students into responder / late-responder / avoider / high-risk segments using SMS engagement data. Compare baseline characteristics across segments.
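A rule-based version of that classification might look like the sketch below. The thresholds are placeholders, not study parameters; in practice they would be set from the distribution of the logged SMS data, or the rule set would be replaced by a clustering step as in Study 1.

```python
def classify_segment(read_rate: float, action_rate: float,
                     late_half_share: float, high_risk_flag: bool = False) -> str:
    """Assign one of the four engagement segments from logged SMS features.

    late_half_share: fraction of a student's total actions that occurred
    in the second half of the semester (high values suggest late response).
    All cutoffs below are illustrative assumptions.
    """
    if high_risk_flag:                               # intake or advisor flag wins
        return "high-risk"
    if read_rate < 0.25 and action_rate < 0.10:      # enrolled but disengaged
        return "avoider"
    if late_half_share >= 0.70:                      # engagement grew over time
        return "late responder"
    return "responder"
```

A design note: the high-risk flag is checked first because that segment is defined by need, not by engagement pattern, so a highly engaged high-risk student should not be absorbed into the responder group.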

Phase 4: Longitudinal Analysis

Compare baseline vs. follow-up on all outcome variables. Test whether changes differ by segment. Model timing, personalization, and context-match as predictors. Core of Study 2.

Phase 5: Qualitative Integration

Code advisor notes using structured categories. Integrate as supporting evidence for the mechanism story. Create follow-up form for future cohorts.

Future
Micro-Randomized Trial (MRT) Component
What
Establish causal effects of specific nudge features by randomizing components (type, timing, tone, question vs. directive) within-person across occasions.
Proximal Outcomes
Time-to-open, session length, reply rate, self-reported urge/goal conflict, next-day digital behavior.
Moderators
Schedule density, time of day, baseline engagement profile, prior response pattern.
Why It Matters
The MRT converts the research from descriptive to genuinely causal at the feature level. It tests whether timing-matched outperforms random, question outperforms directive, mirror-material outperforms generic, recovery outperforms productivity — all within-person.
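The defining property of an MRT is that treatment features are re-randomized at every delivery occasion within each person. A minimal sketch, with illustrative feature names and uniform assignment probabilities (a real MRT would typically use unequal, possibly state-dependent probabilities):

```python
import random

# Candidate feature levels, mirroring the contrasts named in the text.
FEATURES = {
    "timing":  ["context-matched", "random"],
    "frame":   ["question", "directive"],
    "content": ["mirror-material", "generic"],
    "purpose": ["recovery", "productivity"],
}

def randomize_occasion(rng: random.Random) -> dict:
    """One delivery occasion's treatment: an independent draw per feature."""
    return {feature: rng.choice(levels) for feature, levels in FEATURES.items()}

# One student's semester: e.g. 12 weeks x 3 nudges, each independently
# randomized, so every feature contrast is estimable within-person.
rng = random.Random(42)  # seeded for reproducibility of the assignment log
semester = [randomize_occasion(rng) for _ in range(36)]
```

Because every occasion is an independent draw, each student serves as their own control for each feature, which is what licenses the within-person causal claims described above.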

XIII. Preliminary Evidence

What the first pilot already shows

The UNH pilot was not designed as a formal study. But its descriptive findings establish the feasibility and directional signal that justify the protocol above.

74 · First-year students enrolled
85% · Active nudge responders
14,222 · Total actions recorded
3/wk · Nudges per student
89% · Survey subset stayed engaged
57% · Reported less doom scrolling
71% · Said nudges felt personalized
71% · Found it helpful for focus
Directional Pattern
Relevance-Driven Engagement Without Addictive Design

The system was not optimized for engagement, yet retention was high. It did not tell students to stop scrolling, yet scrolling decreased. It did not give answers, yet awareness increased. It did not maximize session length, yet students continued responding. That pattern — relevance-driven engagement without addictive design — is the signal that justifies testing the mechanism formally.

XIV. Theoretical Framework

Where this sits in the literature

The paper integrates five theoretical streams. None is new individually. The contribution is in showing how they interact within a single adaptive system and testing that interaction empirically.

Context-Aware Computing (Dey, 2001)

Any information that characterizes the situation of an entity — and its use to provide relevant services. Journey extends this from computing into behavioral intervention.

ACIP Framework (Davidson et al., 2020)

Awareness, Connection, Insight, and Purpose as trainable dimensions of wellbeing. The Healthy Minds Index operationalizes ACIP in 17 items.

Just-in-Time Adaptive Interventions (Nahum-Shani et al., 2018)

Delivering the right support at the right time by adapting to changing context and state. Journey’s context engine implements JITAI logic through schedule-awareness and scarcity-constrained selection.

Motivational Interviewing (Miller & Rollnick, 2013)

Resolving ambivalence by evoking the person’s own reasons for change. Journey’s nudge architecture draws on MI: question-first, reflective, permission-giving.

Digital Harm Reduction

An awareness-first frame replacing abstinence logic with agency-building. The target is not less screen time but more intentional technology use.

XV. The Contribution

Why this paper advances the field
Contribution 01
Context as Mechanism, Not Decoration

Most nudge studies treat timing as a delivery parameter. This paper treats it as the primary mechanism of action and tests that claim with schedule-derived features.

Contribution 02
Scarcity-Constrained Selection

The three-nudges-per-week constraint introduces a novel design logic: intervention value comes not from frequency but from precision.

Contribution 03
Adaptive Loop as Research Object

The paper does not treat the system as a black box. It maps the adaptive loop explicitly and tests whether the loop itself produces change.

Contribution 04
Honest About Causal Limits

Shows the baseline clearly, measures change, models mechanisms, accounts for segmentation, and states what the design can and cannot prove. Scientific honesty as contribution.

“The second study could be looking at how the nudges sort of work with what we find in the first study.”
— Anjala Krishen

A paper that says “nudges helped students” is common. A paper that says “here is the baseline model, here is the adaptive system, here is the observed change, here are the plausible mechanisms, here is who it works for and who it does not, and here are the causal limits” — that is a contribution. The discussion should maintain a healthy scientific tension: optimistic enough to show promise, sober enough to acknowledge what the design cannot prove.

The deepest question the discussion should address is whether adaptive, context-aware wellness interventions represent a genuinely different category of digital intervention — one built around awareness and agency rather than restriction and compliance — and whether that difference matters for how emerging adults develop the capacity to direct their own cognitive and emotional lives.

This document is a living research artifact. It is not yet submitted for publication. It presents the conceptual framework, study design, and preliminary evidence as an integrated signaling document for collaborative review and development.

Journey Research Group · Working Paper · April 2026
Correspondence: development@valerin-group.com