Movie Recommender

The problem
I'm the kind of person who spends 45 minutes picking a movie and ends up rewatching The Grand Budapest Hotel. Again. The problem isn't a lack of options — it's that browsing a catalogue of 50,000 titles with no real context is overwhelming. TMDb is a great API but it only lets you filter by genre, rating, and year. There's no way to say "I want something tense but not horror, dialogue-heavy, maybe foreign language" and actually get back good results. That's the gap I wanted to close.
What I built
A movie discovery app with two modes: browse and ask. Browse mode is a standard infinite-scroll catalogue — filter by genre, year range, rating, and language. Ask mode is where it gets interesting: describe what you're in the mood for in plain English and the AI layer returns curated picks with reasoning for each one. Not just titles — explanations. "You might like this because..." so you can actually make a decision.
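A picks-with-reasoning response can be modeled as a small typed shape plus a guard. The field names here are my assumption, not the app's actual schema; the point is that streamed LLM JSON is untrusted and should be validated before rendering:

```typescript
// Hypothetical shape for one AI-curated pick: a title the user can look up
// plus the "you might like this because..." explanation shown in the UI.
interface Pick {
  title: string;
  year: number;
  reasoning: string; // one or two sentences of plain-English justification
}

// Guard against malformed model output before rendering: an LLM can emit
// partial or oddly shaped JSON, so validate rather than cast.
function parsePicks(raw: unknown): Pick[] {
  if (!Array.isArray(raw)) return [];
  return raw.filter(
    (p): p is Pick =>
      typeof p === "object" && p !== null &&
      typeof (p as Pick).title === "string" &&
      typeof (p as Pick).year === "number" &&
      typeof (p as Pick).reasoning === "string"
  );
}
```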
Technical decisions
Vercel AI SDK for streaming
The AI response streams in progressively rather than making you wait for the full answer. This was the most important UX call on the AI side — a 3-second blank screen before recommendations appear feels broken. Streaming makes it feel alive and responsive even on slower connections.
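The consumption side of that streaming loop looks like this, independent of any SDK. The Vercel AI SDK exposes model output as an async-iterable text stream; the generator below just fakes one so the progressive-render pattern is visible:

```typescript
// Fake token stream standing in for the SDK's async-iterable text stream.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const chunk of ["You might ", "like this ", "because..."]) {
    yield chunk; // a real stream yields tokens as the model produces them
  }
}

// Render progressively: update the UI on every chunk instead of waiting
// for the complete answer. This is the whole anti-blank-screen trick.
async function renderStream(
  stream: AsyncIterable<string>,
  onUpdate: (textSoFar: string) => void
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
    onUpdate(text); // e.g. a setState call in React
  }
  return text;
}
```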
React Query for TMDb data fetching
Caching, pagination, and loading states — handled. Without it I'd be managing a lot of useState + useEffect logic for something React Query gives you in one hook. It also kept the infinite scroll implementation clean: just a new page param when the sentinel enters the viewport.
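The "one hook" claim mostly comes down to `useInfiniteQuery`'s `getNextPageParam` option, which against TMDb's paged responses is a one-liner. The `page` and `total_pages` fields are TMDb's real response fields; the function and interface names are mine:

```typescript
// Subset of a TMDb paged response relevant to pagination.
interface TmdbPage {
  page: number;
  total_pages: number;
  results: { id: number; title: string }[];
}

// What you'd pass as getNextPageParam to useInfiniteQuery: return the next
// page number, or undefined to tell React Query there is nothing left.
function getNextPageParam(lastPage: TmdbPage): number | undefined {
  return lastPage.page < lastPage.total_pages ? lastPage.page + 1 : undefined;
}
```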
IntersectionObserver for infinite scroll
Scroll event listeners fire constantly and need throttling or debouncing to stay cheap. IntersectionObserver fires only when a sentinel element enters the viewport: no debouncing needed, no layout thrashing from reading scroll offsets. It's the right tool for this, and the code is meaningfully simpler.
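The wiring is a few lines. Injecting the observer constructor (rather than referencing the global `IntersectionObserver` directly) keeps the logic testable outside a DOM; the function name and `rootMargin` value are illustrative:

```typescript
// Minimal shape of the observer API this sketch relies on; the member
// names match the real DOM IntersectionObserver.
interface ObserverEntry { isIntersecting: boolean }
interface ObserverLike { observe(el: unknown): void; disconnect(): void }
type ObserverCtor = new (
  cb: (entries: ObserverEntry[]) => void,
  opts?: { rootMargin?: string }
) => ObserverLike;

// Wire a sentinel element to a "load more" callback. In the browser you'd
// pass the real IntersectionObserver as `Observer`.
function watchSentinel(
  sentinel: unknown,
  loadMore: () => void,
  Observer: ObserverCtor,
  rootMargin = "200px" // start fetching shortly before the sentinel is visible
): () => void {
  const observer = new Observer((entries) => {
    // Fires only on visibility transitions, so no scroll-event throttling.
    if (entries.some((e) => e.isIntersecting)) loadMore();
  }, { rootMargin });
  observer.observe(sentinel);
  return () => observer.disconnect(); // cleanup for unmount
}
```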
Stateless AI requests
Each recommendation request is independent — no conversation history passed between calls. I considered building a multi-turn chat experience but the added complexity wasn't worth it for this use case. Stateless kept the UX simple, the implementation lean, and the prompt easier to reason about.
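Statelessness shows up in the request builder: every call carries the full context it needs and nothing from prior turns. The prompt wording, field names, and function name below are illustrative, not the app's actual prompt:

```typescript
// Each request is self-contained: the mood description plus any active
// filters. No history array, no session id.
interface AskRequest {
  prompt: string;
}

function buildAskRequest(
  mood: string,
  filters: { language?: string; maxYear?: number } = {}
): AskRequest {
  const constraints: string[] = [];
  if (filters.language) constraints.push(`in ${filters.language}`);
  if (filters.maxYear) constraints.push(`released before ${filters.maxYear}`);
  return {
    // One flat prompt per call; the model never sees previous requests.
    prompt:
      `Recommend movies for this mood: ${mood}` +
      (constraints.length ? ` (${constraints.join(", ")})` : ""),
  };
}
```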
What I'd do differently
With more time, a watchlist would've been the first feature I added: localStorage to start, a real backend if the project grew. I'd also handle AI rate limiting more gracefully; right now there's no friendly feedback if you hit API limits mid-session. On mobile, the filter panel could use a proper drawer pattern instead of collapsing inline — it gets cramped on small screens and it's the part of the UX I'm least happy with.
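The localStorage version of that watchlist is simple enough to sketch. Injecting a Storage-like object keeps it testable outside a browser; the key name and function name are hypothetical:

```typescript
// Minimal slice of the DOM Storage interface this sketch needs.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const KEY = "watchlist"; // hypothetical localStorage key

// Toggle a movie id in the persisted watchlist and return the new list.
// In the browser, pass window.localStorage as `store`.
function toggleWatchlist(store: StorageLike, movieId: number): number[] {
  const current: number[] = JSON.parse(store.getItem(KEY) ?? "[]");
  const next = current.includes(movieId)
    ? current.filter((id) => id !== movieId) // already saved: remove it
    : [...current, movieId]; // not saved yet: append it
  store.setItem(KEY, JSON.stringify(next));
  return next;
}
```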