Including People, Describing Things: The Accessibility Business
Skip this post if long stories aren’t your thing—but if you’re up for a journey, I’ve got one to share.
I first found Supernatural in 2007. As someone who’s totally blind, I was used to missing out on some things when it came to TV shows. Audio description was just getting started, and The CW (being a smaller network) wasn’t offering it yet. So after each episode, I’d head over to Wikipedia to read the plot summaries—piecing together what I couldn’t see so I could enjoy the story in full. That’s how I kept up with Sam, Dean, and everything that went bump in the night.
I joined Twitter in 2009 and stumbled into the Supernatural fandom. And wow—what a visual space it was. Memes. Fan art. On-set photos. Red carpet moments. It was all amazing… but hard to experience without image descriptions. Social media wasn’t really built with accessibility in mind back then. Podcasts helped a little, but it still felt like watching the fandom from the sidelines.
Fast forward to 2025, and things have changed—a lot. Thanks to large language models (yep, AI), I can now get descriptions of images that were once totally out of reach. It’s a bit like the old Wikipedia days—takes some effort—but it’s worth it. For the first time, I can really engage in a fandom I’ve loved for years.
I’m just so grateful this community is still alive and kicking. I might be a latecomer to some parts of it, but I’m here now—and it feels like coming home.