Conceptualizing the Relationship Between Humans and Technology in AI Assisted Development

There is tension in the discourse about AI. Organizations race to invest, often ahead of clear utility analysis, risk assessment, or legal grounding. Meanwhile, customers are frustrated by incapable customer-service bots, while workers fear they are becoming replaceable. It all seems very novel, but viewed through a historical lens, it is actually about how work and responsibility shift when new tools are introduced.

AI is not taking over the world any more than typewriters or computers did. AI is being implemented in ways that are more or less coherent, functional, or sustainable.

What AI actually does is shift thinking upstream and responsibility downstream. Not unlike the dawn of manufacturing (or the dawn of CMSes, for that matter), less effort goes into production, while more effort is required to frame problems correctly and verify the results.

AI will not take your job. Those who know how to wrangle AI will take your job. And what customers are struggling with is not AI, but poor implementations of AI.

So let's take a concrete example. Let's say you are going to use AI in software development. What would a sensible strategy look like? As it turns out, we do not need to reinvent the wheel. Let's do a quick review of how existing frameworks can contribute.

NIST AI Risk Management Framework – Setting the Intent
Within the NIST AI Risk Management Framework, Govern anchors responsibility in human decision-making. Who owns the code? Who must verify it? Who is accountable when it fails in production? What are the wider consequences? It also emphasizes that a safety culture is built on trust. This directly connects AI governance to modern leadership research on psychological safety, which shows that learning collapses when people fear blame.
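Those ownership questions can be made concrete in tooling. Below is a minimal sketch, not anything prescribed by the framework itself: a hypothetical accountability record for one AI-assisted change, where a change only counts as governed when a named human owns it and a different human has verified it. All names and fields are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRecord:
    """Accountability record for one AI-assisted change (illustrative)."""
    change_id: str
    owner: str              # the human accountable for the code
    verifier: str           # the human who reviewed and verified it
    ai_tool: Optional[str]  # which assistant produced the draft, if any

    def is_governed(self) -> bool:
        # Govern in practice: a named human owns the change, and a
        # *different* human has verified it. The AI tool is recorded
        # but is never the accountable party.
        return bool(self.owner) and bool(self.verifier) and self.owner != self.verifier

record = ChangeRecord("PR-123", owner="alice", verifier="bob", ai_tool="assistant-x")
print(record.is_governed())  # True
```

The design choice worth noting: the AI tool appears only as metadata. Accountability stays with people, which is exactly the anchoring the Govern function asks for.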

NIST Secure Software Development Framework – Mastering the Output
NIST SSDF addresses what happens once AI is part of the development workflow. It emphasizes that code is code, whether it's written by a human or not. This is where downstream responsibility comes in. In practice, SSDF ensures that AI-assisted development:

  • Is constrained to properly risk-assessed tools
  • Produces code that is governable
  • Anticipates AI-generated vulnerabilities
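The "code is code" principle means the same review gate runs on every change, whoever or whatever wrote it. As a hedged sketch (the function name, pattern list, and finding format are all hypothetical, and real gates would use proper scanners), a gate might flag hardcoded secrets in added lines like this:

```python
import re

# Common hardcoded-secret shapes (illustrative, deliberately not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"""(password|api_key|secret)\s*=\s*["'][^"']+["']""", re.IGNORECASE),
]

def review_gate(added_lines):
    """Return findings for the changed lines. The same gate applies
    whether a human or an AI assistant wrote the code."""
    findings = []
    for n, line in enumerate(added_lines, start=1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                findings.append(f"line {n}: possible hardcoded secret")
    return findings

print(review_gate(['api_key = "sk-123"', 'x = 1']))
# ['line 1: possible hardcoded secret']
```

The point is not the pattern list; it is that the pipeline makes no distinction based on the code's origin, which is how "anticipating AI-generated vulnerabilities" becomes routine rather than exceptional.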

ISO/IEC 27001 – Securing the Substance
ISO/IEC 27001 is an information security standard that defines how data is classified, accessed, logged, and protected. It adds an information control layer. What information is processed? Which people and systems have access to the data? How do we ensure data is not destroyed or shared incorrectly? This is where we build the defenses that reduce the likelihood of massive data leaks.
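One place this layer bites in AI-assisted work is the prompt itself: data should be classified and masked before it leaves the organization for an external assistant. A minimal sketch, assuming a simple email-masking rule (real classification rules are policy-driven and far broader):

```python
import re

# One illustrative rule: mask email addresses before a prompt is sent out.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(prompt: str) -> str:
    """Mask personal data before a prompt leaves the organization.
    Illustrative control only; a real deployment would also handle IDs,
    credentials, and anything the classification policy marks sensitive."""
    return EMAIL.sub("[EMAIL]", prompt)

print(redact("Contact anna@example.com about the outage"))
# Contact [EMAIL] about the outage
```

The control is boring on purpose: it sits between the developer and the AI tool, so compliance does not depend on every individual remembering the policy.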

The frameworks discussed here are not a complete blueprint. They are intended to demonstrate that we are not entering new territory without maps. AI is a tool*, not magic. It is a tool that comes with new challenges. But the upstream shift in thinking and the downstream weight of responsibility show that the human is more important than ever. As with earlier technological shifts, reduced effort in production increases the demand for expert judgment in framing and verification.

AI slop is the plastic of IT: quick and cheap, but polluting when no one takes responsibility.

* It’s actually many different tools, but that’s out of the scope of this text.