Methodology
This page explains what we collect, how we protect anonymity, and how we compute and report the Wedding Regret Index. We publish aggregated patterns only—never individual responses.
The Wedding Regret Index is an aggregated view of what couples most often regret spending on, what they say was worth it, and which decisions tend to align with higher satisfaction.
This is not a review site. We do not score specific vendors, venues, or individuals—only broad categories and patterns.
How we protect anonymity
- No names requested. We do not ask for your name, email, venue name, or vendor names.
- Optional fields are optional. You can submit without wedding year, approximate guest count, or free text.
- Avoid identifying details. If you add comments, please do not include names, exact locations, or unique identifiers.
- Anti-spam controls. We use a hidden honeypot field and lightweight abuse prevention signals.
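A honeypot check can be sketched in a few lines. This is an illustrative example, not our production filter, and the hidden field name `website` is a hypothetical placeholder:

```python
def is_spam(form: dict) -> bool:
    """Reject any submission where the hidden honeypot field is filled.

    Real users never see the field (it is hidden with CSS), so a
    non-empty value almost always indicates an automated submission.
    The field name "website" is a hypothetical example.
    """
    return bool(form.get("website", "").strip())
```

A submission that fills the hidden field is dropped before it ever reaches the dataset.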
What we collect
- Time since wedding
- Wedding type
- Region (broad)
- Total budget range (and optional approximate number)
- Guest count range (and optional approximate number)
- Planning signals (optional): who paid most, engagement length, DIY level, went over budget
- Outcomes: stress score (1–5), satisfaction score (1–5), would do again (yes/no)
- Regrets: pick up to 3 categories and choose intensity (1–5)
- Worth-it: pick up to 3 categories and choose intensity (1–5)
- Top spend: pick up to 3 categories and rank them (1–3)
- Overrated / Underrated: optional single picks
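The fields above can be pictured as a simple record shape. This is a hedged sketch of one plausible schema, not our actual database model; all field names are illustrative:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SurveyResponse:
    # Required context (field names are illustrative, not the real schema)
    time_since_wedding: str            # e.g. "1-2 years"
    wedding_type: str
    region: str                        # broad region only
    budget_range: str
    guest_count_range: str
    # Optional approximations and planning signals
    budget_approx: Optional[int] = None
    guest_count_approx: Optional[int] = None
    # Outcomes (1-5 scales, plus a yes/no)
    stress_score: int = 3
    satisfaction_score: int = 3
    would_do_again: bool = True
    # Category picks: category -> intensity (1-5), up to 3 each
    regrets: dict = field(default_factory=dict)
    worth_it: dict = field(default_factory=dict)
    # Top spend: category -> rank (1-3)
    top_spend: dict = field(default_factory=dict)
    # Optional single picks
    overrated: Optional[str] = None
    underrated: Optional[str] = None
```

Note that no name, email, or vendor field appears anywhere in the record.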
How we compute and report
- Regret rate = (# responses that selected the category as a regret) ÷ (total # responses)
- Worth-it rate = (# responses that selected the category as worth-it) ÷ (total # responses)
- Average regret intensity = mean intensity (1–5) among respondents who selected it as regret
- Average worth-it intensity = mean intensity (1–5) among respondents who selected it as worth-it
For quick comparison, we may show a net sentiment indicator, such as worth-it rate minus regret rate, and/or an intensity-weighted variant. This helps distinguish polarizing categories from consistently positive or negative ones.
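The rate, intensity, and net-sentiment calculations above can be sketched as follows. The dict shape (`"regrets"` and `"worth_it"` mapping category to intensity) is an assumed illustration, not our actual storage format:

```python
from statistics import mean

def category_metrics(responses: list[dict], category: str) -> dict:
    """Headline metrics for one category.

    Each response is assumed to carry "regrets" and "worth_it" dicts
    mapping category -> intensity (1-5); this shape is illustrative.
    """
    n = len(responses)
    regret_hits = [r["regrets"][category] for r in responses if category in r["regrets"]]
    worth_hits = [r["worth_it"][category] for r in responses if category in r["worth_it"]]
    regret_rate = len(regret_hits) / n
    worth_rate = len(worth_hits) / n
    return {
        "regret_rate": regret_rate,
        "worth_it_rate": worth_rate,
        # Intensity averages cover only respondents who selected the category
        "avg_regret_intensity": mean(regret_hits) if regret_hits else None,
        "avg_worth_it_intensity": mean(worth_hits) if worth_hits else None,
        # Net sentiment: worth-it rate minus regret rate
        "net_sentiment": worth_rate - regret_rate,
    }
```

A category selected by few people at high intensity and one selected by many at low intensity can share a rate but differ sharply in average intensity, which is why both are reported.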
Top-spend selections include a rank order (1–3). For reporting, we may weight the ranks (#1 = 3 points, #2 = 2 points, #3 = 1 point), sum the points per category, and normalize by the total number of responses.
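The rank weighting can be sketched like this, again assuming an illustrative `"top_spend"` dict of category-to-rank:

```python
from collections import Counter

# 3/2/1 points for ranks 1/2/3, as described above
RANK_POINTS = {1: 3, 2: 2, 3: 1}

def top_spend_scores(responses: list[dict]) -> dict:
    """Weighted top-spend score per category, normalized by response count."""
    totals: Counter = Counter()
    for r in responses:
        for category, rank in r["top_spend"].items():
            totals[category] += RANK_POINTS[rank]
    n = len(responses)
    return {category: points / n for category, points in totals.items()}
```

Normalizing by total responses (rather than by the number of people who picked the category) keeps scores comparable as the dataset grows.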
When sample sizes allow, we may compare groups (e.g., average satisfaction among people who selected a category as worth-it versus those who did not). These are descriptive summaries and do not imply causation.
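A group comparison of this kind is a plain descriptive split. A minimal sketch, assuming the same illustrative response shape as above:

```python
from statistics import mean

def satisfaction_by_selection(responses: list[dict], category: str):
    """Average satisfaction among respondents who marked `category`
    worth-it versus those who did not.

    Purely descriptive: this summarizes the two groups and implies
    no causal relationship between the selection and satisfaction.
    """
    selected = [r["satisfaction_score"] for r in responses if category in r["worth_it"]]
    others = [r["satisfaction_score"] for r in responses if category not in r["worth_it"]]
    return (
        mean(selected) if selected else None,
        mean(others) if others else None,
    )
```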
Data quality and reporting rules
- Spam filtering: honeypot field + simple abuse detection patterns.
- Validation: intensity values are enforced 1–5; obvious malformed submissions are rejected.
- Reasonable bounds: date/year and optional numeric values are clamped to sensible ranges.
- Minimum sample sizes: we may suppress or label categories with low counts (e.g., “low data”).
- Rounding: percentages may be rounded to reduce false precision.
- No “winner” claims: we do not present results as universal advice—only observed patterns in the dataset.
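The validation, clamping, and minimum-sample rules above can be sketched as simple guards. The threshold of 20 is a hypothetical example, not our actual suppression cutoff:

```python
MIN_SAMPLE = 20  # illustrative threshold, not the production value

def valid_intensity(value) -> bool:
    """Intensity values must be integers in the 1-5 range."""
    return isinstance(value, int) and 1 <= value <= 5

def clamp(value: int, lo: int, hi: int) -> int:
    """Clamp optional numeric fields (e.g. year, guest count) to sensible bounds."""
    return max(lo, min(hi, value))

def reportable(count: int) -> bool:
    """Categories below the minimum sample are suppressed or labeled 'low data'."""
    return count >= MIN_SAMPLE
```

Submissions failing the intensity check are rejected outright; out-of-bounds numerics are clamped rather than discarded, since the range fields remain informative.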
Storage, retention, and removal
- Storage: responses are stored in a database for aggregation and trend reporting.
- Retention: we keep responses to improve trend stability over time (older responses remain valuable for context).
- Removal: because submissions are anonymous, we generally cannot locate and remove a single response unless you provide a highly specific timestamp and exact response details (which you should not do). If you have concerns, submit a generic note via the survey comment field without identifying details.
- Security: access is restricted; we do not sell individual response data.
Known limitations
- Selection bias: respondents opt in; the dataset may not represent all couples.
- Recall bias: hindsight changes with time and circumstances.
- Category ambiguity: people interpret categories differently (e.g., “decor” vs “florals”).
- Small samples: early results can swing dramatically as new responses arrive.
Changelog
Findings update as the dataset grows. If we change category definitions, scoring weights, thresholds, or reporting rules, we note it here.
- 2026-01-16: Expanded methodology disclosures (privacy, reporting thresholds, structured data).
Feedback
If something in the findings looks incorrect or confusing, the simplest path is to submit feedback through a new survey response (using the optional comment field) without identifying details.