A few days ago, I was thinking about how pricing works for software products, especially for tools like Devin that depend heavily on AI. It’s not the same as something like Figma, where the cost of serving an incremental user is close to negligible. Once you’ve built the product and infrastructure, every new user is just another account on the system, with no real extra cost. That’s why I’ve been hoping Figma introduces regional pricing at some point. Right now, a lot of people in regions with lower purchasing power are likely to churn to cheaper alternatives like Sketch or Penpot. If Figma doesn’t adapt, that churn feels inevitable.

But Devin is a whole different beast. You can’t do regional pricing easily when every new user burns through expensive GPU time. It’s not like a regular software company, which can rely on high gross margins to scale. AI-based tools, especially ones that serve developers or professionals, have costs that scale almost linearly with usage.

Here’s why: if Devin is running its own model (and I think they are), then serving costs stay high no matter what you do. Batching requests to maximize GPU utilization helps, but it only amortizes fixed overhead; every token the model processes still consumes GPU time, so inference cost scales with usage and the cost per customer doesn’t collapse the way it does for traditional software. Compare that to something like Figma, where an extra user is mostly free to serve once the infrastructure is in place.
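To make the contrast concrete, here’s a back-of-envelope sketch of the two cost curves. Every number in it is a made-up assumption for illustration, not Figma’s or Devin’s real figures: the point is only the shape, where a per-user GPU term keeps the AI curve from approaching zero.

```python
# Back-of-envelope marginal-cost sketch. All numbers are
# hypothetical assumptions, not real Figma/Devin figures.

def saas_cost_per_user(fixed_infra: float, users: int) -> float:
    """Traditional SaaS: cost is mostly fixed, so the per-user
    cost falls roughly as 1/users as you scale."""
    return fixed_infra / users

def ai_cost_per_user(fixed_infra: float, users: int,
                     gpu_hours_per_user: float,
                     gpu_hour_price: float) -> float:
    """AI tool: a per-user GPU term sits on top of the fixed
    cost, so the per-user cost flattens instead of vanishing."""
    return fixed_infra / users + gpu_hours_per_user * gpu_hour_price

# Assumed: $50k/month fixed infra, 20 GPU-hours per user per
# month, $2 per GPU-hour.
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} users: SaaS ${saas_cost_per_user(50_000, n):.2f}"
          f" vs AI ${ai_cost_per_user(50_000, n, 20, 2.0):.2f} per user")
```

At 100k users the SaaS per-user cost is down to cents, while the AI tool is still paying roughly the full GPU bill per user.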

So, the big question is whether Devin is actually better than hiring a ₹45k/month fresher from a tier 3 college and training them. Sure, Devin is fast, efficient, and doesn’t take sick days, but at that price it’s a real decision companies have to make. At $500/month, Devin has to deliver enough value to justify its cost, and at current exchange rates a ₹45k salary works out to roughly the same monthly spend. So the choice isn’t about the sticker price at all; it’s about which one actually delivers more.
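Here’s the quick conversion behind that claim. The exchange rate is an assumption (roughly ₹83 per US dollar, and it fluctuates), and both figures ignore real-world overheads like benefits, training time, and seat licenses:

```python
# Rough comparison; exchange rate assumed at ~83 INR/USD (it
# fluctuates), and overheads on both sides are ignored.

INR_PER_USD = 83.0

fresher_inr_per_month = 45_000
devin_usd_per_month = 500

fresher_usd = fresher_inr_per_month / INR_PER_USD
print(f"Fresher: ~${fresher_usd:.0f}/month vs Devin: ${devin_usd_per_month}/month")
```

Under these assumptions the fresher comes out around $542/month, so the two options cost about the same on paper.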

There’s also the idea of cheaper models. Once Devin collects enough user data, they could distill their larger, more expensive models into smaller, cheaper ones. It’s something we’ve seen with GPT models over time: GPT-4 -> GPT-4 Turbo, GPT-4o -> GPT-4o Mini. Google’s Gemini models are doing the same thing, like Gemini Pro -> Gemini Flash. These smaller models are more affordable to run, and they can serve users at lower price points.
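The core idea behind that distillation step can be sketched in a few lines: train a small “student” model to match a large “teacher” model’s output distribution, using softened (temperature-scaled) probabilities as targets. This is a toy NumPy illustration of the loss, not anyone’s actual training pipeline:

```python
# Toy sketch of distillation: nudge a "student" toward a
# "teacher"'s softened output distribution. Real distillation
# runs this idea over full datasets and neural networks.

import numpy as np

def softmax(z, T=1.0):
    z = z / T                      # temperature-scaled logits
    e = np.exp(z - z.max())        # subtract max for stability
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q), the distillation loss term."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=10)   # stand-in for the big model
student_logits = np.zeros(10)          # untrained student

# Temperature > 1 softens the targets, exposing the teacher's
# relative preferences between classes, not just its top pick.
T = 2.0
target = softmax(teacher_logits, T)

# Gradient descent on KL(target || student): for softmax + KL the
# gradient w.r.t. the student logits is proportional to
# (student_probs - target).
for _ in range(2000):
    probs = softmax(student_logits, T)
    student_logits -= 0.5 * (probs - target)

print(kl(target, softmax(student_logits, T)))  # should be near zero
```

After training, the student reproduces the teacher’s distribution on this toy example, which is the mechanism that lets a smaller, cheaper model inherit most of a bigger model’s behavior.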

But here’s the catch: cheaper models are usually for free or low-paying users. If I’m paying for GPT, I’m sticking with the full-powered version. I’m not settling for a smaller, distilled model that cuts corners, even if it’s cheaper to serve. And Devin is targeting pro users—developers who need performance and reliability. Are these users really going to settle for anything less than the best? I doubt it.

At the end of the day, tools like Devin face a tricky balancing act. Their costs are high because they’re solving complex problems, and their user base is sophisticated enough to demand top-tier performance. But as AI infrastructure matures and costs (hopefully) start to drop, maybe we’ll see these tools become more accessible. For now, though, regional pricing and cost efficiencies are easier said than done, and until then, companies and users will have to make some tough calls.