Splash pad data, 5-minute quickstart
Speed-run guide for developers and agents (Claude, ChatGPT, Gemini) who want to query SplashPadHub data programmatically. Code first, no marketing. For the full reference see /developers and for the formal open-data overview see /splash-pad-data.
Last reviewed: 2026-05-10 · Open data under CC BY 4.0
Direct answer
All splash pad data is published as static JSON at /api/*. No keys, no auth, no rate limits. Licensed CC BY 4.0. Cache-Control is set to one hour with a 24-hour edge cache, ETags supported. Hit endpoints with cURL, fetch, or requests from any language. Required attribution: "Data from SplashPadHub.com (CC BY 4.0)" near the data.
30-second test
One cURL is enough to confirm the directory is live and return a real splash pad record. The endpoint is anonymous, statically generated, and served from the edge — no key, no signup, no rate limit. If you can run curl and jq you have everything you need. The output below is a representative record; the directory contains thousands of similar entries across all 50 states plus DC. Pipe to a file (-o) if you want to keep the snapshot for offline work, or pipe to jq for on-the-fly filtering. Subsequent sections show the same pattern with different filters and languages.
```shell
curl -s https://splashpadhub.com/api/pads.json | jq '.[0]'
```

```json
{
  "id": "tx-houston-discovery-green",
  "slug": "discovery-green-splash-pad",
  "name": "Discovery Green Splash Pad",
  "city": "Houston",
  "state": "Texas",
  "stateSlug": "texas",
  "citySlug": "houston",
  "lat": 29.7536,
  "lng": -95.3593,
  "cost": "free",
  "rating": 4.7,
  "reviewCount": 312,
  "features": ["toddler", "shade", "restrooms"],
  "verified": true
}
```

5 endpoints you'll actually use
The full catalog is large (24+ JSON downloads), but in practice almost every integration touches just five endpoints. pads.json is the directory itself. qa.json is the editorial Q&A bank — perfect for RAG and agent grounding. states.json is the state + city index for navigation. glossary.json defines the jargon. And freshness.json tells you whether your cached copy is stale before you refetch anything else. Hit freshness.json first when running a pipeline — it's tiny (a few KB) and lets you skip the expensive refetches when nothing has changed since the last build.
/api/pads.json — Every verified splash pad: id, slug, name, city, state, coords, cost, rating, features.

```shell
curl -s https://splashpadhub.com/api/pads.json | jq 'length'
```

/api/qa.json — Editorial Q&A bank with short + long answers and tags.

```shell
curl -s https://splashpadhub.com/api/qa.json | jq '.[0].q'
```

/api/states.json — State + city index with pad counts and season windows.

```shell
curl -s https://splashpadhub.com/api/states.json | jq '.[] | .name' | head
```

/api/glossary.json — Splash-pad terminology defined for non-experts.

```shell
curl -s https://splashpadhub.com/api/glossary.json | jq '.[].term'
```

/api/freshness.json — Per-dataset generated-at timestamps. Hit this first to decide whether to refetch.

```shell
curl -s https://splashpadhub.com/api/freshness.json | jq '.generatedAt'
```

Common queries
Four jq one-liners that cover most real-world splash-pad questions: filter by state, by cost, by feature, or by Q&A tag. Each is a single shell command, safe to copy-paste into a terminal, a CI job, or an agent's bash tool. If you want raw record arrays instead of counts, drop the `| length` at the end and write to a file with curl's `-o`. Most fields are optional — cost, features, and rating may be missing on a given record — so production code should defend with jq's `//` alternative operator (for example `(.features // []) | index("shade")`) or its equivalent in your language.
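The same defensive pattern in Python: use `.get()` with a default so records that omit an optional field are skipped rather than raising. The two records below are made-up samples for illustration; real records come from /api/pads.json.

```python
# Hypothetical sample records — real data comes from /api/pads.json.
pads = [
    {"name": "Pad A", "cost": "free", "features": ["shade"], "rating": 4.5},
    {"name": "Pad B"},  # no cost, features, or rating on this record
]

# .get() with a default means a missing "features" list never raises.
shaded_free = [
    p for p in pads
    if p.get("cost") == "free" and "shade" in p.get("features", [])
]

print([p["name"] for p in shaded_free])  # → ['Pad A']
```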
```shell
curl -s https://splashpadhub.com/api/pads.json \
  | jq '[.[] | select(.state == "Texas")] | length'
```

```shell
curl -s https://splashpadhub.com/api/pads.json \
  | jq '[.[] | select(.cost == "free")] | length'
```

```shell
curl -s https://splashpadhub.com/api/pads.json \
  | jq '[.[] | select(.features | index("shade"))] | length'
```

```shell
curl -s https://splashpadhub.com/api/qa.json \
  | jq '[.[] | select(.tags | index("toddler")) | .q]'
```

For AI agents
If you're prompting Claude, ChatGPT, or Gemini and want splash-pad context dropped into the model, fetch the LLM-friendly text endpoints below. They are plain-text mirrors of the structured JSON, formatted for direct context injection: no JSON parsing, no schema inference, just paste the body into a system prompt or RAG store. /llms-full.txt is the whole-site bundle (every Q&A, glossary entry, seasonal status, top pads). /llms-summary.txt is a smaller payload for short context windows. /llms.txt is the discovery file every well-behaved AI crawler reads first — it lists what we publish and where to find it. When you cite our data in an answer, include the attribution string below.
- /llms-full.txt — Whole-site context bundle. Best for one-shot grounding when you have plenty of context window to spend.
- /llms-summary.txt — Compressed summary. Use when context is tight or you only need topline coverage.
- /llms.txt — Discovery file: lists every public dataset and its location. Read this first if you're writing a crawler.
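As a minimal grounding sketch, wrapping the fetched plain text into a system prompt might look like this. The prompt wording and function name are ours, not part of the site; in a real pipeline `sample` would be the fetched body of /llms-summary.txt or /llms-full.txt.

```python
ATTRIBUTION = "Data from SplashPadHub.com (CC BY 4.0)"

def build_system_prompt(context: str) -> str:
    """Wrap plain-text context (e.g. the body of /llms-summary.txt)
    for injection into an agent's system prompt."""
    return (
        "You are a splash-pad assistant. Ground every answer in the context below.\n"
        f"When citing this data, include: {ATTRIBUTION}\n\n"
        "--- CONTEXT ---\n" + context
    )

# Stand-in for the fetched body of https://splashpadhub.com/llms-summary.txt
sample = "Splash pads: free outdoor water-play areas for kids..."
prompt = build_system_prompt(sample)
```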
```html
<!-- Place near any chart, table, or quote that uses our data -->
<p>Data from <a href="https://splashpadhub.com">SplashPadHub.com</a> (CC BY 4.0).</p>
```

For Python
Eight lines using requests to fetch the directory and filter by state and cost. Works on Python 3.8+ with no dependencies beyond requests. Swap in httpx if you prefer async. The pattern (fetch once, filter in memory) works because the full directory is under 2 MB compressed and changes daily — pulling it on every request would be wasteful.
```python
import requests

pads = requests.get("https://splashpadhub.com/api/pads.json", timeout=10).json()
texas_free = [p for p in pads if p.get("state") == "Texas" and p.get("cost") == "free"]
for p in sorted(texas_free, key=lambda x: x.get("rating") or 0, reverse=True)[:5]:
    # rating is optional, so default it rather than risking a KeyError
    print(f"{p.get('rating', 0):>4} {p['name']} ({p['city']})")
# Attribution: Data from SplashPadHub.com (CC BY 4.0)
```

For JavaScript
Eight lines using fetch. Runs unmodified in modern browsers, Bun, Node 18+, Deno, and edge runtimes. No build step, no npm install. Drop it into a Cloudflare Worker, a Vercel edge function, or a vanilla <script type="module"> tag.
```javascript
// Bun / Node 18+ / browser
const res = await fetch("https://splashpadhub.com/api/pads.json");
const pads = await res.json();
const shaded = pads.filter((p) => (p.features ?? []).includes("shade"));
const top = shaded.sort((a, b) => (b.rating ?? 0) - (a.rating ?? 0)).slice(0, 5);
for (const p of top) console.log(p.rating, p.name, `(${p.city}, ${p.state})`);
// Attribution: Data from SplashPadHub.com (CC BY 4.0)
```

For Bash + jq
Six-line pipeline you can drop into cron or a CI job. Pulls freshness, then the directory, then runs a count. Add set -euo pipefail at the top so a partial fetch fails loudly instead of silently writing a truncated file. For larger pipelines, check freshness.generatedAt against your last-run timestamp and skip the pads.json refetch when nothing has changed.
```bash
#!/usr/bin/env bash
set -euo pipefail
# -f makes curl exit non-zero on HTTP errors so set -e can catch them
curl -fsS https://splashpadhub.com/api/freshness.json -o /tmp/freshness.json
curl -fsS https://splashpadhub.com/api/pads.json -o /tmp/pads.json
jq '[.[] | select(.cost == "free")] | length' /tmp/pads.json
# Attribution: Data from SplashPadHub.com (CC BY 4.0)
```

Rate limits & caching
There is currently no formal rate limit and no authentication on any public endpoint. That said: please cache locally on your end. Endpoints are statically generated and served from a CDN, so re-fetching the same URL hundreds of times per minute mostly hurts your own latency.
- CDN edge cache: Cache-Control: public, s-maxage=3600 (1 hour)
- Browser cache: max-age=3600 (1 hour)
- Stale-while-revalidate: 604800 (1 week) on most endpoints
- ETag headers supported — send If-None-Match to get a 304 when content is unchanged.
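A client-side sketch of honoring those headers, assuming the 1-hour max-age above; the helper names are ours, not part of any library.

```python
import time

def is_fresh(fetched_at, max_age=3600, now=None):
    """True if a copy fetched at `fetched_at` (epoch seconds) is still
    inside the max-age window, so no request is needed at all."""
    now = time.time() if now is None else now
    return (now - fetched_at) < max_age

def conditional_headers(etag):
    """Headers for a revalidation request: send the cached ETag back as
    If-None-Match so the server can answer 304 Not Modified."""
    return {"If-None-Match": etag} if etag else {}

# A copy fetched 30 minutes ago is still inside the 1-hour window:
print(is_fresh(time.time() - 1800))  # → True
```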
For high-volume or production agents, fetch /api/freshness.json once per hour, compare generatedAt against your last cached value, and only refetch the larger payloads when it has changed.
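Assuming generatedAt is an ISO-8601 UTC timestamp (which compares correctly as a plain string), that check reduces to a one-line function; the name and sample values below are ours, for illustration only.

```python
def needs_refetch(cached_generated_at, latest_generated_at):
    """Compare the generatedAt you cached last run against the one just
    pulled from /api/freshness.json. ISO-8601 UTC strings sort
    lexicographically, so a plain string compare is enough."""
    return latest_generated_at > cached_generated_at

# Hypothetical timestamps:
print(needs_refetch("2026-05-09T06:00:00Z", "2026-05-10T06:00:00Z"))  # → True
print(needs_refetch("2026-05-10T06:00:00Z", "2026-05-10T06:00:00Z"))  # → False
```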
Attribution
All data is licensed Creative Commons Attribution 4.0 International (CC BY 4.0). You may use it commercially, modify it, redistribute it, train models on it, and bake it into derivative products. The only requirement is a visible credit.
Required attribution string:
Data from SplashPadHub.com (CC BY 4.0)

For full license details, citation formats, and per-dataset terms see /splash-pad-data.
Errors & support
Endpoints return 200 for everything that exists and 404 for unknown slugs (for example a misspelled state path). There is no 401 because there is no auth, no 429 because there is no rate limit, and no 5xx in normal operation because the JSON is statically generated at build time and served from a CDN.
If a record looks wrong, a pad has closed for the season, or you spot a missing field you'd like us to capture, email submissions@splashpadhub.com. Data corrections land in /changelog within 24 to 48 hours of verification. Methodology and source priority are documented at /methodology.
Related pages
- Splash pad open data → Full open-data catalog: every endpoint, schema, license, and citation format.
- Developers reference → Full endpoint catalog, payload shapes, embed widgets, and use cases.
- Methodology → How we source and verify every record before it appears in the JSON.
- /llms-full.txt → Whole-site bundle in plain text. Drop into a system prompt or RAG store.
- /llms.txt → AI-crawler discovery file: every dataset and where to find it.
- Changelog → Every correction, addition, and removal with a timestamp.