AST Tracker App
The AST Tracker App is the practical measurement tool of Affective Socialization Theory. It translates the theory’s variables into a working tracking system so users can log daily mood patterns, review weekly context conditions, monitor material strain, estimate effective socialization dose, track agency and behavioral control, and—if they choose—contribute anonymous aggregate data to the larger research program.
This page explains what the app currently does, how each part maps to the theory, and where the build is still experimental or in transition. The goal is accuracy, not hype. AST Tracker is already functional in important ways, but it is still a developing research instrument rather than a finished commercial platform.
What AST Tracker Is
AST Tracker is neither just a mood diary nor just a theory explainer; it is meant to do both jobs at once. On the personal side, it helps users observe how different environments affect mood, stability, agency, and follow-through. On the research side, it is designed to generate the individual and aggregate measurements that AST needs in order to be tested and refined.
The central idea: the app treats emotion, context, and behavior as a recursive system. Your daily and weekly inputs do not only describe “how you feel.” They describe how your environment may be shaping learning, stability, and agency over time.
Personal tracking
Daily and weekly logging helps you see how your contexts are affecting your nervous system, your sense of agency, and your ability to carry through on goals.
Theory in practice
The app operationalizes core AST variables such as MAT, MSI, SED, AE, BCI, and the emergent context scores HMC, CCC, and HV.
Research instrument
With opt-in anonymous aggregation, the app can support broader testing of the theory’s threshold, moderation, and recursive feedback claims.
How to Use the App
The app’s current structure revolves around two main rhythms: daily logging and weekly review. Together they create the time-series data that the model uses for both descriptive tracking and provisional prediction.
1. Daily Check-Ins
Daily entries supply the raw mood series that the app uses to estimate week-level stability. In practical terms, the app asks you to log often enough that your weekly pattern becomes visible, rather than relying on memory at the end of the week.
The current build uses repeated daily entries to generate MSI and confidence values for that week, with stronger confidence when there are more unique days logged.
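The app's exact MSI formula is not published on this page, so the following is only a minimal sketch of the idea: treat MSI as inverse variability of the week's daily valence logs, with confidence rising as more unique days are logged. The field names, the 0–10 scaling, and the variance-based mapping are all assumptions for illustration.

```typescript
// Hypothetical sketch: MSI as inverse-variability of daily valence logs.
// The scaling and the variance-based mapping are assumptions, not the
// app's actual formula.

interface DailyEntry {
  day: string;     // ISO date, e.g. "2024-05-13"
  valence: number; // 0-10 mood rating
}

function weeklyMSI(entries: DailyEntry[]): { msi: number; confidence: number } {
  // Keep one value per unique day (latest entry wins in this sketch).
  const byDay = new Map<string, number>();
  for (const e of entries) byDay.set(e.day, e.valence);
  const values = Array.from(byDay.values());
  // Fewer than two unique days: no stability estimate, zero confidence.
  if (values.length < 2) return { msi: 0, confidence: 0 };

  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  const sd = Math.sqrt(variance);

  // Low variability maps to high stability on an assumed 0-10 scale.
  const msi = Math.max(0, 10 - 2 * sd);
  // Confidence rises with unique days logged, capped at a full week.
  const confidence = Math.min(values.length, 7) / 7;
  return { msi, confidence };
}
```

A week of four identical logs would score maximum stability but only 4/7 confidence, which matches the page's point that more unique logged days strengthen the estimate.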
2. Weekly Review
The weekly layer is where the theory becomes more structural. You review your contexts, enter hours, clarity, consistency, agency, and quality values, score material strain, track behavior targets, and set or review Agency Expectancy for the next cycle.
This is also where context-level logic, recursive update logic, and research-quality checks become more visible.
Notifications: the current app code includes optional daily and weekly reminders; when notifications are enabled, the daily check-in reminder fires at 8:00 PM and the weekly review reminder on Monday at 8:00 PM.
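The timing logic behind those reminders can be sketched as follows. This is illustrative only: the real app presumably uses a platform notification scheduler, and these helper names are hypothetical.

```typescript
// Illustrative sketch of the reminder schedule described above:
// daily at 8:00 PM, weekly on Monday at 8:00 PM (local time).

function nextDailyReminder(now: Date): Date {
  const next = new Date(now);
  next.setHours(20, 0, 0, 0);                // 8:00 PM local time
  if (next <= now) next.setDate(next.getDate() + 1);
  return next;
}

function nextWeeklyReminder(now: Date): Date {
  const next = new Date(now);
  next.setHours(20, 0, 0, 0);
  // getDay(): 0 = Sunday, 1 = Monday.
  let daysAhead = (1 - next.getDay() + 7) % 7;
  // If it is already Monday but past 8:00 PM, schedule next week.
  if (daysAhead === 0 && next <= now) daysAhead = 7;
  next.setDate(next.getDate() + daysAhead);
  return next;
}
```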
What the App Measures
The current codebase already contains a fairly detailed AST instrument specification. The app measures both individual variables and emergent context variables, and it also includes tutorial text explaining how users should interpret them.
| Variable | What the App Tracks | How It Is Used |
|---|---|---|
| MAT | Objective material pressures plus subjective intensity | Material Strain is scored as MAT = (MAT-O × 2) + MAT-S. The current instrument includes seven objective pressure categories and a 0–10 subjective intensity scale. |
| MSI | Week-level mood stability derived from repeated daily entries | The current app computes MSI from daily valence-like logs and presents it as a stability score rather than a vague mood label. |
| SED′ Raw | Quality-adjusted exposure in each context | The app treats hours, clarity, consistency, and agency as separate inputs and calculates raw dose from them. |
| Q / Load Split | Supportive versus coercive direction of the same dose | Quality does not add dose; it splits the same exposure into supportive or coercive load. |
| AE | Weekly expectancy of whether action will work | The app distinguishes between context-level agency ratings used in SED and weekly Agency Expectancy as a separate state variable. |
| BCI | Behavioral Control based on adherence to targets | In the current build, BCI functions as a control/adherence score built from achieved versus targeted behavior counts. |
| HMC, CCC, HV | Emergent context-level scores derived from aggregated patterns | These scores are used to moderate effective dose and to give users a way of thinking about the broader character of their environments. |
MAT = (MAT-O × 2) + MAT-S
SED′ raw = Hours × Clarity × Consistency × Agency
SED′ effective = SED′ raw × HMC × CCC × (1 − HV)
Threshold gate: if MAT > 15, then SED′ effective = 0
The current logic and tutorial files present the MAT threshold of 15 as provisional and experimental, not as a finalized scientific constant.
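The provisional formulas above translate directly into code. The sketch below is a transcription of exactly those equations; the only assumption is that Clarity, Consistency, Agency, HMC, CCC, and HV act as 0–1 multipliers, since the page does not fix their scales.

```typescript
// Direct transcription of the provisional formulas shown above.
// Scale assumption: quality and context factors are 0-1 multipliers.

// MAT = (MAT-O x 2) + MAT-S
// objectiveCount: number of objective pressure categories present (0-7)
// subjectiveIntensity: 0-10 subjective strain rating
function matScore(objectiveCount: number, subjectiveIntensity: number): number {
  return objectiveCount * 2 + subjectiveIntensity;
}

// SED' raw = Hours x Clarity x Consistency x Agency
function sedRaw(
  hours: number,
  clarity: number,
  consistency: number,
  agency: number
): number {
  return hours * clarity * consistency * agency;
}

// SED' effective = SED' raw x HMC x CCC x (1 - HV),
// gated to zero when MAT exceeds the provisional threshold of 15.
function sedEffective(
  raw: number,
  hmc: number,
  ccc: number,
  hv: number,
  mat: number
): number {
  if (mat > 15) return 0; // threshold gate
  return raw * hmc * ccc * (1 - hv);
}
```

For example, three objective pressures and a subjective intensity of 4 give MAT = 10, below the gate, so effective dose is computed normally; MAT = 16 would zero it out regardless of exposure quality.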
Predictions, Dashboards, and Feedback
AST Tracker is not only descriptive. The app also attempts to compare observed and predicted change across repeated cycles. That makes it more than a journal: it becomes a running test of the model against lived data.
Observed change
The app looks at what actually happened from one period to the next, such as week-to-week changes in MSI and changes in behavioral control.
Modeled change
The theory layer presents recursive update formulas for MSI, AE, and predicted next-week BCI so users can compare model outputs with actual outcomes.
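The app's actual recursive update equations are not reproduced on this page. As a purely illustrative stand-in, the sketch below uses simple exponential smoothing, one common way to model a state that is nudged toward each new observation, alongside the observed-versus-predicted comparison the app performs across cycles.

```typescript
// NOT the app's formulas: an exponential-smoothing stand-in to illustrate
// the shape of a recursive weekly update for a state like MSI or AE.

function recursiveUpdate(
  previous: number,
  observed: number,
  learningRate = 0.3
): number {
  // Move the running state a fraction of the way toward the observation.
  return previous + learningRate * (observed - previous);
}

// The comparison the page describes: how far the modeled value missed
// what actually happened in the next cycle.
function predictionError(predicted: number, observed: number): number {
  return observed - predicted;
}
```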
Risk text and system feedback
The current logic includes text-level risk assessment that flags patterns such as high strain plus coercive context or high instability, framing them as structural friction rather than mere personal failure.
Trend and quality views
The app also includes charting, evidence-status logic, and quality thresholds that help distinguish a weak descriptive signal from stronger repeated-case evidence.
Important alignment note about BCI: the current build already uses BCI as a real control score based on target adherence, which matches the newer direction of the theory. At the same time, parts of the theory-facing app content still reflect an older predictive-next-BCI layer. Users should therefore understand that core BCI tracking is functional now, while the predictive BCI interpretation remains an active theory-update area.
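The control/adherence framing of BCI can be sketched as an achieved-versus-targeted score. The per-target capping and the averaging scheme below are assumptions; the page only states that BCI is built from achieved versus targeted behavior counts.

```typescript
// Sketch of BCI as target adherence (achieved vs. targeted counts).
// Capping and averaging choices are illustrative assumptions.

interface BehaviorTarget {
  target: number;   // planned count for the week
  achieved: number; // actual count logged
}

function bci(targets: BehaviorTarget[]): number {
  if (targets.length === 0) return 0;
  // Over-achieving a target does not inflate the score past 1 here.
  const perTarget = targets.map((t) =>
    t.target > 0 ? Math.min(t.achieved / t.target, 1) : 0
  );
  // Average adherence across all targets, on a 0-1 scale.
  return perTarget.reduce((a, b) => a + b, 0) / perTarget.length;
}
```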
Research Features and Evidence Quality
A major strength of the current app is that it already treats user data as potential repeated-case evidence rather than as generic wellness content. The build includes research hypotheses, quality gates, and stronger-week criteria.
Research hypotheses in the app
The current code includes explicit research tracks for hypotheses such as short-term MSI drift from SED, MAT strain and weaker BCI, and AE mix shifting with context structure.
Quality gates
The app checks sample size, completeness, and confidence thresholds instead of treating all logged weeks as equally strong evidence.
Strong week logic
A high-quality week currently requires at least four unique logging days plus a completed weekly review, and the app reflects that in its evidence and XP logic.
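The strong-week criterion stated above is simple enough to express directly: at least four unique logging days plus a completed weekly review. The record shape below is a hypothetical illustration, not the app's actual data model.

```typescript
// The page's strong-week rule as code. The WeekRecord shape is assumed.

interface WeekRecord {
  uniqueLoggingDays: number;
  weeklyReviewCompleted: boolean;
}

function isStrongWeek(week: WeekRecord): boolean {
  return week.uniqueLoggingDays >= 4 && week.weeklyReviewCompleted;
}
```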
Advanced Research Mode: The current build includes an advanced research mode that surfaces more explicit research and protocol fields. This should be understood as a beta feature rather than the default experience for most users.
Aggregate context data and multi-user evidence are one of the main reasons this app matters for AST as a scientific project. The theory requires recursive context scores, and the tracker is one of the tools built to help generate them.
Theory, Tutorials, and Group Tools Inside the App
The app is also more educational than the current website suggests. It includes a theory screen, structured tutorials for key variables, a diagnostic translator component, and group-audit features that connect the individual user back to the wider AST framework.
Theory screen
The app includes built-in theory content explaining the recursive model, the SED stack, MSI, BCI, HMC, CCC, AE, and wider AST concepts in user-facing language.
Tutorial system
The logic files include tutorials for Hours, Clarity, Consistency, Agency, Quality, MAT, HMC, CCC, HV, effective dose, and BCI so users can learn what each variable means while using it.
Diagnostic translator
The code references a dedicated diagnostic translator component, which fits the broader AST aim of reframing symptoms as signals about environments rather than only as isolated personal defects.
Groups and audits
The app includes group audit features and broader collective tools, allowing the project to move beyond private self-tracking toward shared environmental analysis.
Global and country aggregates
The build also includes global and country-level aggregate views, presented as descriptive aggregate features rather than finished causal proof.
Gamification layer
XP, levels, streak bonuses, and quality rewards are present in the code as participation and habit-support features rather than the core scientific content of the app.
Privacy, Consent, and Data Control
The app’s privacy model is built around pseudonymous local identifiers, explicit consent gates, and separate research and community consent states. The goal is to support meaningful research participation without turning the app into a surveillance system.
| Privacy Layer | What the Current Build Does |
|---|---|
| Pseudonymous identity | The app creates stable local research and community IDs without requiring ordinary account identity for core tracking. |
| Consent gating | Cloud writes are gated behind explicit opt-in consent. Research and community participation are handled separately. |
| Whitelisted community payloads | Anonymous community contribution data is sanitized and reduced to a limited allowed set of fields. |
| Deletion | The current code includes local reset and best-effort server deletion helpers. |
| Backup / restore | The current UI includes backup import; export features are not yet confirmed. |
Participation in aggregate research is optional, and consent can be managed inside the app.
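The whitelisting approach described in the table above can be sketched as a field filter: anything not on the allowed list is dropped before a community payload leaves the device. The specific field names below are placeholders, since the app's actual allowed set is not published on this page.

```typescript
// Sketch of a field whitelist for anonymous community payloads.
// The field names are placeholders, not the app's real allowed set.

const ALLOWED_FIELDS = ["msi", "matBand", "sedEffective", "weekIndex"] as const;

function sanitizePayload(
  raw: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  // Copy only whitelisted keys; everything else is silently dropped.
  for (const key of ALLOWED_FIELDS) {
    if (key in raw) out[key] = raw[key];
  }
  return out;
}
```

The design choice here is deny-by-default: new fields added to the local record never reach the server unless someone deliberately adds them to the whitelist.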
Current Status, Strengths, and Limitations
The app is already strong in several important areas, but it is still evolving. The clearest way to describe it is to separate what is already working well from what is still being refined.
What is already strong
- The basic AST variable system is already meaningfully implemented.
- Daily and weekly tracking are already structured around the theory rather than generic wellness language.
- Consent and pseudonymous cloud logic are more developed than the current web page implies.
- Theory tutorials and explanatory content are already built into the app experience.
- The app already treats evidence quality and repeated measurement seriously.
What still needs to be treated as in progress
- The equations are still experimental and openly described that way in the app itself.
- Some context aggregate features are descriptive and exploratory rather than fully validated.
- The BCI layer is partly in transition between older prediction framing and the newer control/adherence framing.
- Advanced research mode is currently a beta feature.
- The app is not yet a finished scientific instrument in the strongest sense; it is a serious working instrument still being refined.
The app is already useful and substantially aligned with AST, but some components are still being updated to match the latest version of the theory.
Get the App and Follow Development
AST Tracker is currently best understood as an active working build: usable, informative, and already valuable for structured self-observation and research participation, but still evolving in public alongside the theory itself.
Android access is currently limited to testers. The web app is the easiest public way to see the project in its current form, and the iOS build is still in development.
The most accurate invitation to users is this: use the app to audit your environments, learn the variables, watch the patterns, and—if you choose—participate in the larger attempt to test and refine AST with real repeated-case data.