Challenge 1: Expedition Safety Brief
What You're Starting With
From the lesson, you've got:
- The Three Pillars (Scope, Intent, Structure): use them every time you prompt
- User stories with acceptance criteria: your plan for each piece of the safety brief
- The Explore → Plan → Implement → Verify workflow: your process for building iteratively
- Right-sizing prompts: small asks in plain English, medium asks as user stories, big asks broken into smaller pieces
Some of you already started building during the lesson. In the mob session, your team created a small component (a conditions card, an alert status display, or a river crossing indicator). You can choose to build from there or start fresh.
Your AI chat tool already knows a lot about backcountry hiking, wilderness safety, and the Greater Yellowstone Ecosystem. Ask it. Explore. Let it help you understand what should go into a safety brief. That's the Explore step of your workflow.
The Challenge
Build a digital Expedition Safety Brief: a polished, interactive web page that a backcountry hiker could use to prepare for a multi-day route in the Greater Yellowstone Ecosystem.
You're building this entirely by talking to an AI chat tool. No code editor, no terminal. Just you, your team, and a conversation with AI. This is where you put everything you learned in the lesson (the Three Pillars, user stories, the Explore → Plan → Implement → Verify workflow) to work.
Who You're Building For
A backcountry hiker planning a multi-day route in the Greater Yellowstone Ecosystem. They're checking conditions before heading out. They want to know: Are there active closures or alerts on their route? What's the weather doing? Are the river crossings passable?
They're not a wilderness ranger. They're someone who has planned this trip and wants to make smart, informed decisions before leaving the trailhead. Your safety brief should make the information clear, visual, and actionable.
What to Build
Items are listed in priority order. If time is tight, focus on the items near the top first.
For this challenge, you're building with hardcoded mock data (no live API calls yet). Your job is to get the structure, layout, and logic right so that when real data comes in later, the brief just works.
- A conditions card showing a mock NPS alert: one alert (Danger, Closure, Caution, or Information) with a title and a plain-language description of what it means for a hiker on this route
- A weather summary: mock forecast conditions for the route area (temperature range, precipitation chance, wind speed), laid out so a hiker can scan it at a glance
- A river crossing indicator: a mock streamflow reading for one Greater Yellowstone crossing with a clear safe/caution/do-not-cross status
- Visual design that feels like an actual field tool, not raw text; something with layout, color, and structure you'd trust before heading into the backcountry
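Since everything in this challenge runs on hardcoded mock data, it can help to agree as a team on what that data looks like before you start prompting. Here is one possible sketch in JavaScript; the field names, values, and structure are illustrative placeholders, not a real NPS or USGS schema:

```javascript
// Illustrative mock data for the safety brief. Every name and value here is
// a placeholder your team can change; nothing is tied to a real API schema.
const mockBrief = {
  alert: {
    category: "Caution", // one of: Danger, Closure, Caution, Information
    title: "Bear activity reported near the trail corridor",
    description: "Carry bear spray and travel in groups; expect possible reroutes.",
  },
  weather: {
    tempLowF: 38,        // overnight low, Fahrenheit
    tempHighF: 64,       // daytime high, Fahrenheit
    precipChancePct: 40, // chance of precipitation
    windMph: 15,         // sustained wind speed
  },
  crossing: {
    name: "Mock river crossing",
    flowCfs: 310,        // mock streamflow reading, cubic feet per second
    status: "caution",   // "safe" | "caution" | "do-not-cross"
  },
};

console.log(mockBrief.crossing.status); // "caution"
```

Pinning down a shape like this early means that when real data arrives later, you only swap out the values, not the layout built on top of them.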
The following are stretch options for teams that finish the baseline capabilities. Your team can also define its own stretch goals based on what interests you.
- Add expandable or collapsible sections so the brief doesn't overwhelm the reader on first load
- Create a gear checklist that responds to the mock conditions shown: what to carry given the alert status and weather summary
- Add an emergency response section: what a hiker should do if injured, overdue, or caught in rapidly deteriorating conditions
- Add multiple "pages" or a navigation menu that lets the user move between conditions, gear, and emergency sections
- Include route-specific notes for the Greater Yellowstone Ecosystem: known hazards, common crossings, what conditions to watch for
Tips
- Start with one section. Don't try to build the whole brief in your first prompt. Pick the conditions card or the river crossing indicator and get that right first. Then add the next piece.
- Use the workflow. Explore → Plan → Implement → Verify. Write a user story for each section before you ask AI to build it. Check the result against your acceptance criteria before moving on.
- Quick user story template: "As a backcountry hiker, I want [what you're building] so that [why it helps]. Given [a specific condition], when [something happens], then [what the user should see]." For example: "As a backcountry hiker, I want a river crossing indicator so that I can decide whether a crossing is safe. Given the mock streamflow is above the caution threshold, when I view the indicator, then it shows a yellow 'Caution' label with the CFS value."
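The acceptance criterion in that example maps cleanly onto a tiny piece of logic you could ask AI to produce. Here is a minimal sketch; the threshold values are hypothetical (real cutoffs depend on the specific crossing), and the function name is just for illustration:

```javascript
// Hypothetical streamflow thresholds in CFS -- real safe/caution cutoffs
// vary by crossing and are placeholders here.
const CAUTION_CFS = 250;
const DANGER_CFS = 500;

// Map a mock streamflow reading to a crossing status label.
function crossingStatus(flowCfs) {
  if (flowCfs >= DANGER_CFS) return "Do Not Cross";
  if (flowCfs >= CAUTION_CFS) return "Caution";
  return "Safe";
}

console.log(crossingStatus(310)); // "Caution" -- above the caution threshold
```

Notice how the Given/When/Then criterion translates directly into something you can verify: given a flow above the caution threshold, the indicator should show "Caution".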
- Verify criterion by criterion, then be specific. Don't eyeball the whole page and say "looks good" or "make it better." Go through each acceptance criterion. Pass or fail. When one fails, say exactly what's wrong: "the alert status should be color-coded: red for Danger, orange for Closure, yellow for Caution, blue for Information." That specific feedback is what gets you a fix, not a re-roll.
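That kind of specific feedback also translates directly into logic AI can build. A minimal sketch of the color mapping from the example above (the fallback color is an assumption, not part of the criterion):

```javascript
// Color coding taken from the example acceptance criterion: one color
// per NPS alert category.
const ALERT_COLORS = {
  Danger: "red",
  Closure: "orange",
  Caution: "yellow",
  Information: "blue",
};

// Look up the display color for an alert category; "gray" is an assumed
// fallback for categories the mapping doesn't cover.
function alertColor(category) {
  return ALERT_COLORS[category] ?? "gray";
}

console.log(alertColor("Closure")); // "orange"
```

Because the mapping is this explicit, verifying it is easy: each of the four categories is a one-line pass/fail check.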
- If your conversation gets long, start fresh. Remember the oxygen tank. Context windows fill up. If AI's responses start feeling off after many exchanges, open a new conversation and paste in what you want to keep building from. You'll be surprised how much better it works.