The camera was always off — What a stint working with offshore developers taught me about large language models

If you're still unsure how to get started with large language models, or if prompting feels awkward, this analogy might help.

The camera was always off

There was a time when I worked in-house at one of the UK's high-street banks. I had access to a seemingly infinite pool of developers in mainland China. However, all communication had to go through a single point of contact, someone who would join a call every couple of days to discuss the work. They were part of the engineering team, but clearly weren't doing all the work on their own. It was a strange dynamic. They never turned on their cameras, and I had almost no insight into who they were, what their working conditions were like, or how invested they were in the work. Were they relying on the job for stability? Or were they disengaged and ready to move on? These were things you'd typically pick up from face-to-face interactions, even just by seeing someone's facial expressions on a video call. But in this case, the camera was always off.

Our goal was to translate interface designs into working software. I would prepare the documentation, offer a walkthrough, and answer any questions, and then I'd hand it over. After that, I'd have no input until the software came back. That meant I had to give really clear briefs, because any confusion or missing context would show up in the finished product. I could provide feedback when the work came back, but by then, revising things would just push more work back onto my plate, slow down the process, and lead to frustrating back-and-forths. In truth, that's a taste of what happened in pretty much every round of the process.

This experience, along with other management roles, built a skill set I now find entirely transferable to working with LLMs. It trained me to anticipate misunderstandings from a lack of context. I learned to frontload the right information, shape instructions clearly, and guide things so that less rework was needed. Over time, and through hard-learned lessons, these tactics became second nature.

Two approaches, in particular, made the difference.

The first was using my limited time with them to understand their world. When we had direct contact, I asked about their setup, what they knew, what they didn't, and what was available to them. These were often throwaway questions, prompted by the flow of the conversation or the type of work we were tackling. But they served a crucial purpose: helping me understand their worldview. By doing this, I could better align my expectations with their reality, what tools they had, what assumptions they might be making, and where gaps in understanding might arise.

The second was the one-shot handover. My aim was always to get the most accurate and complete output possible in a single pass, something that needed minimal revision. It wasn't about avoiding continued work or iteration altogether; of course, things evolve. But I didn't want to spend days or weeks just trying to reach a usable baseline because of missing context or unclear instructions. That sort of back-and-forth is costly.

So, I put in the effort on my side. I prepared full packs of information: context, rationale, supporting materials, and sometimes broader business details that might come into play. Anything they could draw on when they hit a block. Because I knew I wouldn't be there. It was a black box: I wouldn't see them, I couldn't stand over their shoulder, and they wouldn't be able to ask me a quick question. Most likely, I'd be asleep when they were working. That mindset, doing the upfront work to prevent downstream friction, translates directly to working with large language models.
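
To make that concrete, here's a rough sketch of the same "handover pack" idea applied to an LLM prompt. The section names and the example content are my own illustration, not a standard template; the point is simply that the model gets the task, context, rationale, and constraints in one structured pass.

```python
# A rough sketch of the "handover pack" idea applied to an LLM prompt.
# The section names and example content are my own illustration,
# not a standard template.

def build_handover_prompt(task, context, rationale, materials, constraints):
    """Assemble one front-loaded brief so the model never has to guess."""
    sections = [
        ("Task", task),
        ("Business context", context),
        ("Why this matters", rationale),
        ("Reference material", materials),
        ("Constraints and non-goals", constraints),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_handover_prompt(
    task="Convert the attached interface design into a working React component.",
    context="Retail banking dashboard; the users are branch staff, not customers.",
    rationale="Staff currently re-key data between two systems; this removes that step.",
    materials="Design spec and existing component library are pasted below.",
    constraints="No new dependencies. Must meet WCAG 2.1 AA. Don't restyle the header.",
)
print(prompt)
```

Sending the whole pack in one message mirrors the one-shot handover: like the offshore team, the model can't ping you a quick question halfway through.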

One habit I developed was ending most calls with a simple question: "Do you have any questions?" Almost every time, the answer was no. And it made sense: I'd just delivered a lot of information, and they were still processing it. They were trying to understand what I'd said, work out their response, and figure out what came next. In that moment, their confidence typically remained unshaken, because the real pressure wouldn't begin until the work did.

So I stopped asking that open question. Instead, I started guiding the conversation toward specific areas: Did they understand the business context? Were they clear on the technical approach? Where were the current gaps? Who might they need to speak with? Was everything in place for the work to actually begin? By working through these specific areas, I created space for more focused and valuable questions to emerge. And in doing so, I made those crucial handover moments far smoother and more effective.

This works with an LLM, too. Rather than dumping information and hoping for the best, you can ask it to reflect back what it understands. Ask it what's unclear. Ask it what assumptions it's making. You're not testing it, you're creating the same kind of space for misalignment to surface before it becomes a problem.
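
As a sketch, those guided questions translate almost directly into a pre-flight check you can send right after the brief. The wording below is my own and purely illustrative:

```python
# The same guided questions, phrased as a pre-flight check for the model.
# The wording is illustrative, not a canonical prompt.

PREFLIGHT_CHECK = """Before you start, answer these in your own words:
1. What do you understand the business context to be?
2. What technical approach will you take, and why?
3. What gaps or ambiguities do you see in the brief?
4. What assumptions are you making to fill them?
Do not begin until I've confirmed your answers."""

print(PREFLIGHT_CHECK)
```

If the answers reveal a wrong assumption, you've caught it at the handover, not in the finished work.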

The most significant reflection I had through all of this was realising that, yes, this person was an expert and, from my perspective, had access to an almost infinite pool of other experts behind them, capable of delivering work to varying degrees. The fundamental limitation wasn't on their side. It was on mine: specifically, my ability to package the work in a way they could absorb, understand, and act on without issues emerging later. That part was primarily mechanical, focusing on building clarity, anticipating gaps, and structuring handovers well. But the human side was far more difficult.

I didn't know who these people were. I didn't know what kind of pressures they were under, what their working day looked like, how long their commute was, or whether they'd had any sleep the night before. All of that was hidden, made worse by the camera-off nature of our Teams calls. It turned the interaction into something transactional and faceless. They became a kind of machine-entity, and building any empathy with them, trying to truly connect, was incredibly difficult. That meant I had to over-emphasise my own humanity. I paid closer attention to how I joined calls, how patient I was, and how hard I worked to show that I was listening and trying to understand. It was an intentional effort to bridge a gap that shouldn't have existed in the first place.

Now, not all of these tactics fit working with a large language model. The empathy piece, for instance: you don't need to wonder whether the model had a bad commute. But something adjacent still applies. It's easy to fall into the trap of seeing an LLM as an all-knowing oracle, something intelligent, responsive, and capable of incredible things. And that's true, to a point. But the other truth is that it operates inside a black box, with a narrow worldview shaped entirely by its training data. It's designed to be helpful. It's tuned to impress. And that can make it seem like it knows you, your intentions, your goals, your context, far more than it actually does.

That false sense of shared understanding is where things go wrong. The model might give you confident answers that are subtly off, or fill in blanks with plausible but incorrect guesses. Just like with those camera-off calls, you're working with something that can do great work, but only if you do the work upfront to make the handover clear, structured, and context-rich.

And perhaps there's a version of the humanity piece that still matters. Not empathy toward the model, but a kind of care in how you approach the interaction. Slowing down. Being deliberate. Recognising that clarity isn't just about efficiency; it's about creating the conditions for good work to happen. The developers I worked with deserved that care. And strangely, the quality of your work with an LLM improves when you bring that same intentionality to the conversation.