Why I built my own Turo dashboard when the native one already exists.
I run a five-vehicle rental fleet and spent two years answering the same operating questions in spreadsheets every Sunday. So I built a single-page dashboard that does it in 20 seconds. Here's how, and what the data said when I finally ran the math.
Turo's native dashboard answers what happened. It does not answer why, or what to do about it.
Operating a five-vehicle fleet by spreadsheet doesn't scale. Every recurring question (lowest-yield asset, weekend pricing gap, which vehicle to sell) became a manual exercise. This case study is the build narrative behind the dashboard: how I framed the problem, cut scope, modeled the data, and shipped an MVP that earned its keep on day one.
- Problem framing
- Scope reduction
- User research (self as user)
- Data modeling
- MVP strategy
- Build vs. buy reasoning
- Stakeholder write-up
- Architecture documentation
- The problem. Turo's native host dashboard answers what happened. It does not answer why, and does not surface the things you'd want to act on.
- What I built. A 1,600-line single-page HTML dashboard that ingests any Turo CSV export and runs five operating analyses with real statistical methods, all client-side.
- What the math said. Tesla units price-anchor the fleet at a 2.4× premium over ICE. Weekend bookings command a 22% rate lift. ~3% of trips are statistical outliers worth tagging individually. The trend line is flat-with-seasonality, not the linear growth I assumed.
- What I'd build next. A pricing recommender, a "should I add a sixth vehicle" projection, and an LLM agent layer so a less-technical operator can just ask the dashboard a question instead of reading the charts.
01 / CONTEXT
The fleet, and why I started caring about the numbers
I operate Vantage Ventures Rentals, a five-vehicle Turo fleet out of South San Jose. The fleet is three Tesla Model 3 / Model Y units (the workhorses) and two ICE vehicles (a 2018 Toyota Yaris iA and a 2014 Chevy Volt) that exist to capture the budget end of the demand curve. We've been running for 26 months as of this writing and I'm within shouting distance of All-Star Host status.
For the first 18 months I treated the business like most Turo hosts do: react to email pings, accept bookings, and check the deposit numbers at the end of the month. When growth flattened in late 2025 I realized I had no real model of why revenue was where it was. Were my Tesla rates too low or too high? Was the Yaris dragging the fleet down on yield, or was it just covering different demand? When was the return on adding a sixth vehicle going to compete with adding a sixth options trade?
These are the questions Turo's native host dashboard does not answer. It tells me total earnings, trip count, and a deposit timeline. The rest is up to me.
I'm not throwing shade at Turo's product team. A platform-wide dashboard has to serve everyone from a one-car side-hustler to a 50-car commercial operator, which means it optimizes for breadth and accessibility. The right place for opinionated, operator-specific analytics is a tool the operator builds for themselves. That's what this is.
02 / FRAMING
The five questions a Turo operator actually asks on Monday morning
Before I wrote a line of code, I sat down on a Sunday and wrote out the questions I was actually trying to answer when I opened a spreadsheet each week. The list was shorter than I expected:
- Which vehicle is yielding the most per booked-day, and is the gap statistically real or just noise? If a vehicle is consistently the top earner, I should consider duplicating it. If the gap is noise, I shouldn't.
- How much of a premium do weekend bookings command over weekday? Knowing this changes weekday minimum-night requirements and weekend rate floors.
- Is the fleet's monthly revenue trajectory growing, flat, or declining once seasonality is removed? The slope and R² of the trend line tell you whether to invest more or pull back.
- Which trips are statistical outliers and why? A high-z trip is either a demand event I should learn to predict (concert, conference, holiday) or an under-priced trip I should learn to stop accepting.
- How does each vehicle compare across yield, volume, and consistency in one view? A scorecard, basically. Whatever I add or remove from the fleet next, this is the table I should look at first.
Five questions. Five panels. That became the spine of the dashboard.
03 / DESIGN
The Question → Chart → Finding → Method spine
The hardest part of building anything analytical is resisting the urge to put pretty charts in front of someone and let them figure out what to do with them. I've sat through too many SaaS dashboards that show me twelve metrics and zero opinions.
So every panel in Vantage Fleet OS follows the same four-part structure:
- Question. A specific operating question, phrased in plain English. "Which vehicle yields the most per booked-day, and is the gap real?"
- Chart. The visualization that actually answers it. Not the prettiest one, the one that fits the question. Box plots for distribution comparison. Bars with error bars for day-of-week. Bars + OLS line for trend.
- Finding. One paragraph that says what the chart shows and what to do about it. Concrete numbers, not "performance has improved."
- Method. A footnote describing how the finding was computed. Welch's t-test for the yield comparison, 95% CI for the day-of-week premium, OLS R² for the trend, z-score > 2 for outliers. So a skeptical reader can verify or argue with the math.
This structure is the most important design decision in the whole project. It forces me to commit to an interpretation. It also signals to a reader that the dashboard is doing analysis, not just display.
The aesthetic borrows from financial editorial design (NYT graphics, Bloomberg Businessweek, Stripe Press) rather than SaaS dashboard chrome. Warm cream background, Fraunces serif for headlines and findings, JetBrains Mono for numbers, a single deep editorial red as the only accent color. I wanted it to read as serious work, not as "hey look I learned Tailwind."
04 / DATA MODEL
The three entities and the one metric that changes everything
Before any chart, the data has to be modeled. The shape of the model determines what questions you can ask without writing more code, and which questions stay annoyingly out of reach. I spent more time on the model than on any individual chart.
The model has three entities, all of which exist in any Turo CSV export but are not what most operators actually look at:
- Vehicle. A car in the fleet. Five rows in this dataset. Each one has a stable identity, an acquisition cost, and a type (Tesla / ICE). This is the unit operators recognize.
- Trip. A single Turo booking. ~70 rows per vehicle per year in my fleet. Each one has start_date, days, distance_mi, and earnings_usd. This is the atomic record in the source CSV. Most operators stop here and add up earnings by month.
- Booked-day. The atomic unit of utilization. A single day a vehicle was actively rented. A 5-day trip generates 5 booked-days. A vehicle's annual booked-days is bounded above by 365. This is the entity that mattered most. Without it, you can't reason about utilization at all, and you can't normalize revenue across long vs. short trips.
And the one derived metric that changed how I look at the fleet: $/booked-day. Total trip revenue divided by trip duration in days. Why this matters: a 7-day trip at $700 looks like a great booking until you compare it to a 2-day trip at $350. The short trip earns $175/day. The long trip earns $100/day. Without booked-day normalization, you'd think the long trip was the better outcome. It isn't. It just occupied the slot for longer at a worse rate.
Once $/booked-day exists as a primitive, every analysis in the dashboard becomes a one-liner against it. Per-vehicle distributions, day-of-week premiums, outlier detection, scorecard rankings. All of them are slices and aggregations of the same metric, computed once at trip-level and rolled up.
Operators conflate booking volume with fleet performance. A car can have a lot of trips and still be the worst performer (short trips at low rates). A car can have few trips and be the best performer (long trips at premium rates). Booked-day + $/booked-day separates the two and gives you a yield metric that's comparable across vehicle types, trip lengths, and seasons. It's the operator equivalent of revenue-per-available-room in hotels or RPM in advertising.
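For the curious, here's roughly what that primitive looks like in code. A sketch, not the file's verbatim source; the field names match the trip schema above, but the helper names are mine:

// Sketch: derive $/booked-day per trip, then roll up per vehicle.
// withYield and perVehicleYield are illustrative names, not the
// dashboard's actual API. S.mean is the stats helper described below.
function withYield(trips) {
  return trips.map(t => ({ ...t, usd_per_day: t.earnings_usd / t.days }));
}

function perVehicleYield(trips) {
  const byVehicle = {};
  for (const t of withYield(trips)) {
    (byVehicle[t.vehicle] ||= []).push(t.usd_per_day);
  }
  // Every downstream analysis is a slice or aggregation of these arrays.
  return Object.fromEntries(
    Object.entries(byVehicle).map(([v, ys]) => [v, S.mean(ys)])
  );
}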
05 / IMPLEMENTATION
Why I shipped 1,600 lines of HTML instead of a React app
I almost reached for Next.js. I'm glad I didn't.
The constraint that mattered was: the dashboard has to be useful to someone who isn't me, isn't technical, and doesn't want to set up a backend. A Turo host should be able to download the file, open it in their browser, drop their CSV on the page, and get answers. That ruled out anything with a build step, anything that needed npm, and anything that required a server.
So I committed to vanilla JavaScript, two CDN libraries (Plotly for charts, PapaParse for CSV), and a single HTML file. All processing happens client-side. No telemetry, no upload, no signup. The dashboard is a tool the operator can audit and host themselves.
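In outline, the wiring is short. This is a hedged sketch of the shape, not the file's actual code; the div id 'trend-chart' is a placeholder:

// Sketch: PapaParse reads the dropped CSV in the browser, a rollup
// reduces trips to monthly revenue, Plotly draws the result.
// Everything stays client-side; nothing is uploaded anywhere.
Papa.parse(file, {
  header: true,
  skipEmptyLines: true,
  complete: ({ data }) => {
    const byMonth = {};
    for (const t of data) {
      const [mm, , yyyy] = t.start_date.split('/');   // MM/DD/YYYY
      const key = `${yyyy}-${mm.padStart(2, '0')}`;
      // Strip currency symbols and commas before summing.
      byMonth[key] = (byMonth[key] || 0) +
        Number(String(t.earnings_usd).replace(/[$,]/g, ''));
    }
    const months = Object.keys(byMonth).sort();
    Plotly.newPlot('trend-chart', [
      { x: months, y: months.map(m => byMonth[m]), type: 'bar' },
    ], { title: 'Monthly revenue' });
  },
});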
The statistics layer
Most of the analytical depth in the dashboard sits in a small S object that implements the statistics by hand: mean, median, std, quantile, iqr, cv, zscore, ci95, linreg with R², and Welch's t. About 50 lines.
// Welch's t-statistic for unequal-variance two-sample comparison.
// Used to test whether the gap between top and bottom yielder is real.
welchT(a, b) {
  const ma = S.mean(a), mb = S.mean(b);
  const va = S.std(a) ** 2, vb = S.std(b) ** 2;
  const se = Math.sqrt(va / a.length + vb / b.length);
  return se === 0 ? 0 : (ma - mb) / se;
}
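For flavor, here's roughly what a few of the other S functions look like. A sketch reconstructed from the function list above, not a verbatim excerpt:

// Sketch of the hand-rolled stats object (illustrative, not verbatim).
const S = {
  mean: a => a.reduce((s, x) => s + x, 0) / a.length,
  std: a => {                        // sample standard deviation
    const m = S.mean(a);
    return Math.sqrt(a.reduce((s, x) => s + (x - m) ** 2, 0) / (a.length - 1));
  },
  ci95: a =>                         // half-width of the 95% CI of the mean
    1.96 * S.std(a) / Math.sqrt(a.length),
  linreg: (xs, ys) => {              // OLS slope/intercept plus R²
    const n = xs.length, mx = S.mean(xs), my = S.mean(ys);
    let sxy = 0, sxx = 0;
    for (let i = 0; i < n; i++) {
      sxy += (xs[i] - mx) * (ys[i] - my);
      sxx += (xs[i] - mx) ** 2;
    }
    const slope = sxy / sxx, intercept = my - slope * mx;
    let sse = 0, sst = 0;
    for (let i = 0; i < n; i++) {
      sse += (ys[i] - (slope * xs[i] + intercept)) ** 2;
      sst += (ys[i] - my) ** 2;
    }
    return { slope, intercept, r2: 1 - sse / sst };
  },
};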
Writing the stats by hand instead of pulling in a library was deliberate. I wanted to know exactly what each function was doing, and I wanted the code to be readable by anyone landing on the file in their browser DevTools. A dependency that I haven't read is a black box; a function I wrote is a white one.
The CSV ingestion layer
Turo's CSV export schema has changed at least three times in the time I've been a host. Other hosts use third-party tools (Hostsight, OkayList) that export different schemas. I wanted the dashboard to ingest any reasonable CSV, so I built a small column-aliasing layer:
const COL_ALIASES = {
vehicle: ['vehicle', 'car', 'vehicle_name', 'asset', 'unit', /* ... */],
earnings_usd: ['earnings', 'payout', 'net_earnings', 'revenue', /* ... */],
// six target fields, ~6 aliases each
};
The detector normalizes headers (lowercase, strip non-alphanumeric, collapse underscores) and walks the alias list for each target field. If days isn't present but start_date and end_date are, it derives days. Earnings parsing strips currency symbols and commas. Vehicle type (Tesla vs ICE) is inferred from the name string with a single regex.
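The detection logic itself is a few lines. A sketch of the idea; normalizeHeader and detectColumns are my names for it, not necessarily the file's:

// Sketch of the alias detector described above (illustrative).
// Lowercase, replace non-alphanumeric runs with '_', trim edges.
const normalizeHeader = h =>
  h.toLowerCase().replace(/[^a-z0-9]+/g, '_').replace(/^_+|_+$/g, '');

function detectColumns(headers) {
  const norm = headers.map(normalizeHeader);
  const mapping = {};
  for (const [target, aliases] of Object.entries(COL_ALIASES)) {
    const idx = norm.findIndex(h => aliases.includes(h));
    if (idx !== -1) mapping[target] = headers[idx];
  }
  return mapping; // e.g. { vehicle: 'Car', earnings_usd: 'Net Earnings' }
}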
Date parsing is the silent killer. Turo CSVs use US format (MM/DD/YYYY). JavaScript's new Date() is unreliable on ambiguous strings and silently produces wrong results without throwing. v1 had a bug where half my March trips landed in November because I trusted the parser. Fixed by explicitly validating the year and falling back to a manual split if the parsed date was more than a year from the file's other dates.
06 / FINDINGS
What the math said when I ran it on real data
Once the dashboard worked, I ran it on my actual fleet history. Some findings confirmed what I already believed. A few changed my behavior.
Tesla units price-anchor the fleet at a 2.4× premium over ICE
The Model Y LR averages around $170/booked-day. The Chevy Volt averages around $71/booked-day. Welch's t-statistic between the two distributions clears 6.5, well above the conventional ±1.96 threshold, so the gap is real and not noise.
This was the finding I needed to stop second-guessing. For 18 months I'd been agonizing over whether to phase out the Volt. The number says: the Volt's job isn't to yield, it's to keep the fleet's effective utilization up by capturing budget demand that wouldn't book a Tesla. If I phase it out, I lose the booked-days, not just the revenue.
Weekend bookings command a 22% rate premium over weekday
Friday, Saturday, and Sunday day-of-week premiums (combined) are +22% over weekday rates with non-overlapping 95% confidence intervals. That's not opinion, that's a real revenue pattern.
The actionable takeaway here was inverted from what I expected. I'd been planning to raise weekend rates further. The data says weekend rates are already capturing the willingness-to-pay; the lever I haven't pulled is weekday minimum-night requirements, which would force weekend bookings to also pay for one or two weekday nights at the margin. That's where the unrealized revenue is.
The trend line is flat-with-seasonality, not the linear growth I assumed
Plotting monthly revenue over 26 months and fitting an OLS line gives a slope of about +$280/month with R² around 0.37. Translation: there is a real upward trend (about 13% annualized rate growth), but only 37% of the month-to-month variance is explained by the trend. The other 63% is seasonality (December and summer peaks, February trough) and noise.
This is the finding that changed my projections. I'd been treating last summer's revenue as a baseline and modeling forward with steady growth. The data says I should treat last summer as a seasonal peak and model the next quarter against the prior-year same quarter, not last quarter.
A low-R² regression is fragile. Adding or removing a single anomalous month can move the slope considerably. The dashboard's finding text explicitly flags this: "low-R² regressions break under regime shifts (new vehicle, FSD pricing change, recession)." A finding without a confidence caveat is just an opinion.
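In code terms, the whole finding is one call to the hand-rolled OLS. A sketch, assuming monthlyRevenue is the 26-element array of monthly totals:

// Sketch: fit the 26-month revenue series with S.linreg.
const xs = monthlyRevenue.map((_, i) => i);   // month index 0..25
const { slope, r2 } = S.linreg(xs, monthlyRevenue);
// On my history: slope ≈ +280 ($/month), r2 ≈ 0.37.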
About 3% of trips are statistical outliers worth tagging
Z-scoring the daily-rate distribution for each vehicle and flagging trips with |z| > 2 surfaces about 22 trips out of 745. Roughly half are high-z (premium pricing on a known demand event); roughly half are low-z (long-trip rebates or off-season fills).
The operational change here: every high-z trip should be tagged with the demand event it captured (concert weekend, conference, NCAA tournament, holiday). Over 12 months this gives me a calendar of recurring premium windows that I can preemptively rate-up for, instead of reacting after the booking comes in cheap.
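The mechanics of the flagging pass, sketched below; the function and field names are my illustration, built on the same S primitives:

// Sketch: flag |z| > 2 trips per vehicle against that vehicle's
// own daily-rate distribution.
function flagOutliers(tripsByVehicle) {
  const flagged = [];
  for (const [vehicle, trips] of Object.entries(tripsByVehicle)) {
    const rates = trips.map(t => t.usd_per_day);
    const m = S.mean(rates), sd = S.std(rates);
    if (sd === 0) continue;                  // degenerate distribution
    for (const t of trips) {
      const z = (t.usd_per_day - m) / sd;
      if (Math.abs(z) > 2) flagged.push({ vehicle, ...t, z });
    }
  }
  return flagged; // ~22 of 745 trips on my data
}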
Outcome scoreboard: what actually changed because of the dashboard
The point of this work isn't the chart, it's the operating decision the chart unlocks. Below is the honest before-and-after, kept to behaviors I can attribute directly to a finding the dashboard surfaced.
None of these are world-changing on their own. Together they're the difference between a fleet that operates by feel and a fleet that operates from a model. That's the case I'd make for an operations, business analyst, or TPM role: I will turn dashboards into decisions and report on the decisions, not just the metrics.
07 / WHAT BROKE
Three lessons I had to learn the hard way
Demo data is not free of editorial responsibility
The first version of the synthetic demo data (used when no CSV is uploaded) produced a dashboard that displayed -19% YoY revenue and an R² of 0.01 on the trend line. The math was correct. The story was bad. A recruiter skimming the page in eight seconds would conclude my fleet was failing.
I tuned the demo data generator to add a ~13%/yr rate-growth factor, fix the seasonality curve, and slightly weight trip volume toward recent months. The numbers are now realistic for a healthy fleet, the R² is meaningful, and the story holds together. The lesson: even synthetic data is a narrative choice. Defaulting to "random and uniform" is not neutral.
The chart is not the analysis
An early prototype had charts but no findings text. I assumed the charts spoke for themselves. They didn't. When I sent it to a Turo operator I trust, his feedback was "cool, what am I supposed to do with this?" The findings text is what made the dashboard useful instead of just impressive. Every chart got a paragraph.
Client-side everything is an opinionated choice, not a default
Putting all processing in the browser meant I had to give up some performance for large files (10,000+ rows starts to feel sluggish on a low-end laptop). I considered adding a backend. I decided no, because the privacy story (your CSV never leaves your machine) was more valuable than the performance story for the audience I cared about. This was an editorial decision dressed up as a technical one. Naming the tradeoff honestly mattered.
08 / WHAT'S NEXT
Where I'm taking this
Three concrete next steps, in priority order:
- An LLM agent layer so a less-technical operator can ask the dashboard questions in plain English instead of reading the charts. "What was my best month and why?", "Should I raise rates on the Model Y?", "If I add a sixth vehicle like X, what does my projected revenue look like?" The agent uses the same statistical functions as the dashboard, exposed as tool calls to the model. (Try the agent demo →)
- A pricing recommender that takes the day-of-week premium, seasonality curve, and per-vehicle yield distribution and suggests a daily rate for a given upcoming date. This is the missing piece for an operator who's tired of doing pricing in their head.
- A "should I add a sixth vehicle" projection that takes a hypothetical addition (make, model, target acquisition cost) and projects revenue based on the fleet's learned utilization and yield by vehicle class. This is the question that matters most for capital allocation but is hardest to answer well.
Want to see the data, in three different shapes?
The dashboard runs entirely in your browser with synthetic demo data and accepts your own Turo CSV if you want to drop one in. The data story walks the full investigation that surfaced the surprise yield winner. The agent demo lets you ask plain-English questions of the same dataset.