Methodology

This page explains what we collect, how we protect anonymity, and how we compute and report the Wedding Regret Index. We publish aggregated patterns only—never individual responses.

Purpose

The Wedding Regret Index is an aggregated view of what couples most often regret spending on, what they say was worth it, and which decisions tend to align with higher satisfaction.

This is not a review site. We do not score specific vendors, venues, or individuals—only broad categories and patterns.

Anonymity and privacy
  • No names requested. We do not ask for your name, email, venue name, or vendor names.
  • Optional fields are optional. You can submit without wedding year, approximate guest count, or free text.
  • Avoid identifying details. If you add comments, please do not include names, exact locations, or unique identifiers.
  • Anti-spam controls. We use a hidden honeypot field and lightweight abuse prevention signals.
Technical note: for abuse prevention, we may store a short user-agent string and a one-way hash of the IP address. These fields are not displayed publicly and are not used to identify individuals.
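
As an illustration only (not our exact implementation), a keyed one-way hash like this can be derived so the raw IP address is never stored; the secret key name below is hypothetical.

    import hashlib
    import hmac

    # Hypothetical server-side secret, kept out of the response database.
    ANTI_ABUSE_SECRET = b"rotate-this-key-periodically"

    def hash_ip(ip_address: str) -> str:
        """Return a one-way, keyed hash of an IP address for abuse prevention.

        Only this digest is kept; it cannot be reversed into the original
        address without the secret key.
        """
        return hmac.new(ANTI_ABUSE_SECRET, ip_address.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    # Repeat submissions from the same address produce the same digest,
    # which is enough for rate limiting without identifying anyone.
    print(hash_ip("203.0.113.7") == hash_ip("203.0.113.7"))  # True
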
What we collect
Context (mostly multiple choice)
  • Time since wedding
  • Wedding type
  • Region (broad)
  • Total budget range (and optional approximate number)
  • Guest count range (and optional approximate number)
  • Planning signals (optional): who paid most, engagement length, DIY level, whether you went over budget
  • Outcomes: stress score (1–5), satisfaction score (1–5), would do again (yes/no)
Index inputs
  • Regrets: pick up to 3 categories and choose intensity (1–5)
  • Worth-it: pick up to 3 categories and choose intensity (1–5)
  • Top spend: pick up to 3 categories and rank them (1–3)
  • Overrated / Underrated: optional single picks
We intentionally avoid collecting: emails, phone numbers, vendor/venue names, exact addresses, or any “account” identifiers. This dataset is designed for trends, not tracking.
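
For illustration only, a single anonymized response might be structured roughly like this; the field names are hypothetical, not our exact schema.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class SurveyResponse:
        """Hypothetical shape of one anonymized submission (no identifiers)."""
        time_since_wedding: str                  # e.g. "1-2 years"
        wedding_type: str
        region: str                              # broad region only
        budget_range: str
        guest_count_range: str
        stress_score: int                        # 1-5
        satisfaction_score: int                  # 1-5
        would_do_again: bool
        regrets: dict[str, int] = field(default_factory=dict)   # category -> intensity (1-5), up to 3
        worth_it: dict[str, int] = field(default_factory=dict)  # category -> intensity (1-5), up to 3
        top_spend: list[str] = field(default_factory=list)      # up to 3 categories in rank order
        overrated: Optional[str] = None
        underrated: Optional[str] = None

The sketches in the next sections assume responses shaped like this record.
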
How we compute the index
Core metrics per category
  • Regret rate = (# responses that selected the category as a regret) ÷ (total # responses)
  • Worth-it rate = (# responses that selected the category as worth-it) ÷ (total # responses)
  • Average regret intensity = mean intensity (1–5) among respondents who selected it as regret
  • Average worth-it intensity = mean intensity (1–5) among respondents who selected it as worth-it
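
A minimal sketch of these per-category metrics, assuming a list of responses shaped like the hypothetical record above:

    from statistics import mean

    def category_metrics(responses: list, category: str) -> dict:
        """Rates and mean intensities for a single category.

        Rates use all responses as the denominator; mean intensities average
        only over respondents who actually selected the category.
        """
        total = len(responses)
        regret_hits = [r.regrets[category] for r in responses if category in r.regrets]
        worth_hits = [r.worth_it[category] for r in responses if category in r.worth_it]
        return {
            "regret_rate": len(regret_hits) / total if total else 0.0,
            "worth_it_rate": len(worth_hits) / total if total else 0.0,
            "avg_regret_intensity": mean(regret_hits) if regret_hits else None,
            "avg_worth_it_intensity": mean(worth_hits) if worth_hits else None,
        }
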
A practical “net” signal

For quick comparison, we may show a net sentiment indicator, such as worth-it rate minus regret rate and/or an intensity-weighted variant. This helps distinguish polarizing categories from consistently positive or negative ones.
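
One way to express such a signal, reusing the metrics sketch above (the intensity weighting shown is illustrative, not a fixed formula):

    def net_sentiment(metrics: dict) -> float:
        """Simple net signal: worth-it rate minus regret rate."""
        return metrics["worth_it_rate"] - metrics["regret_rate"]

    def weighted_net_sentiment(metrics: dict) -> float:
        """Intensity-weighted variant: each rate is scaled by its mean
        intensity (1-5) so strongly felt selections count for more."""
        worth = metrics["worth_it_rate"] * (metrics["avg_worth_it_intensity"] or 0)
        regret = metrics["regret_rate"] * (metrics["avg_regret_intensity"] or 0)
        return worth - regret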

Top-spend ranking

Top-spend selections include a rank order (1–3). For reporting, we may weight ranks (#1 = 3 points, #2 = 2 points, #3 = 1 point), then sum the points per category and normalize by the total number of responses.
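
A sketch of that weighting, assuming each response lists its top-spend picks in rank order:

    from collections import Counter

    RANK_POINTS = {0: 3, 1: 2, 2: 1}  # list position -> points (#1 = 3, #2 = 2, #3 = 1)

    def top_spend_scores(responses: list) -> dict:
        """Sum rank-weighted points per category, normalized by response count."""
        points = Counter()
        for r in responses:
            for position, category in enumerate(r.top_spend[:3]):
                points[category] += RANK_POINTS[position]
        total = len(responses) or 1
        return {category: score / total for category, score in points.items()}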

Satisfaction and stress context

When sample sizes allow, we may compare groups (e.g., average satisfaction among people who selected a category as worth-it versus those who did not). These are descriptive summaries and do not imply causation.
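
For illustration, a descriptive comparison of this kind might look as follows, again assuming the hypothetical record shape above:

    from statistics import mean

    def satisfaction_by_selection(responses: list, category: str) -> dict:
        """Average satisfaction for respondents who marked a category worth-it
        versus those who did not. Descriptive only; no causal claim."""
        selected = [r.satisfaction_score for r in responses if category in r.worth_it]
        others = [r.satisfaction_score for r in responses if category not in r.worth_it]
        return {
            "selected_avg": mean(selected) if selected else None,
            "others_avg": mean(others) if others else None,
            "selected_n": len(selected),
            "others_n": len(others),
        }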

Important: this is self-reported hindsight. The index reflects how people feel after the fact, not an objective score of event success.
Quality controls
  • Spam filtering: honeypot field + simple abuse detection patterns.
  • Validation: intensity values are restricted to 1–5; obviously malformed submissions are rejected.
  • Reasonable bounds: date/year and optional numeric values are clamped to sensible ranges.
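
A minimal sketch of these checks; the honeypot field name and the accepted ranges below are illustrative.

    def clamp(value: int, low: int, high: int) -> int:
        """Clamp an optional numeric field to a sensible range."""
        return max(low, min(high, value))

    def accept_submission(form: dict) -> bool:
        """Reject obviously malformed or automated submissions."""
        # Honeypot: the hidden field should be empty for real users.
        if form.get("website"):  # hypothetical honeypot field name
            return False
        # Outcome and intensity scores must be whole numbers from 1 to 5.
        for key in ("stress_score", "satisfaction_score"):
            score = form.get(key)
            if not isinstance(score, int) or not 1 <= score <= 5:
                return False
        return True

    # Example: an implausible approximate guest count is clamped, not stored as-is.
    approximate_guests = clamp(12000, 1, 2000)  # -> 2000
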
Reporting rules (to reduce noise)
  • Minimum sample sizes: we may suppress or label categories with low counts (e.g., “low data”).
  • Rounding: percentages may be rounded to reduce false precision.
  • No “winner” claims: we do not present results as universal advice—only observed patterns in the dataset.
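
A sketch of how a threshold and rounding might be applied when publishing a category rate; the minimum count of 20 is an example value, not a committed threshold.

    MIN_SAMPLE_SIZE = 20  # example value, not a committed threshold

    def report_rate(selected_count: int, total_count: int) -> str:
        """Format a category rate for publication, suppressing low-data cells."""
        if total_count < MIN_SAMPLE_SIZE:
            return "low data"
        rate = selected_count / total_count
        return f"{round(rate * 100)}%"  # whole-percent rounding avoids false precision

    print(report_rate(7, 12))    # "low data"
    print(report_rate(41, 138))  # "30%"
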
Early datasets are volatile. As more surveys are collected, rates and rankings can change meaningfully.
Data handling
  • Storage: responses are stored in a database for aggregation and trend reporting.
  • Retention: we keep responses to improve trend stability over time (older responses remain valuable for context).
  • Removal: because submissions are anonymous, we generally cannot locate and remove an individual response; doing so would require a highly specific timestamp and exact response details, which you should not share. If you have concerns, submit a generic note via the survey comment field without identifying details.
  • Security: access is restricted; we do not sell individual response data.
Limitations
  • Selection bias: respondents opt in; the dataset may not represent all couples.
  • Recall bias: hindsight changes with time and circumstances.
  • Category ambiguity: people interpret categories differently (e.g., “decor” vs “florals”).
  • Small samples: early results can swing dramatically as new responses arrive.
Updates and transparency

Findings update as the dataset grows. If we change category definitions, scoring weights, thresholds, or reporting rules, we note it here.

Changelog
  • 2026-01-16: Expanded methodology disclosures (privacy, reporting thresholds, structured data).
Questions or corrections

If something looks incorrect or confusing in the findings, the simplest path is to submit feedback through a new survey response (optional comment field) without identifying details.