WorkingKnowledge

I intend to provide a public forum for instructional design ideas and theories, as well as a structured reflective space. Comments are encouraged.

Location: Atlanta, Georgia, United States

Thursday, January 29, 2026

Unique Human Value

Every article and post is careful to stress that we must define where AI can augment or automate and where humans add unique value. Then the same pieces go on to suggest ideas like “providing scalable executive mentorship through digital twins.”

If I had to name examples of humans adding unique value, leader face time and the impact of a closed-door discussion about the personal realities of leadership would be high on that list.

This inconsistency might be driven by opacity in defining human value. A typical definition looks like this:

“In addition, work that draws heavily on social and emotional skills remains largely beyond the reach of automation even under a full-adoption scenario. This is because many tasks require real-time awareness, such as a teacher **reading a student’s expression** or a salesperson **sensing when a client is losing interest**. People also provide oversight, quality control, and the human presence that customers, students, and patients often prefer." (Emphasis mine) https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai

This describes things people *do* - measurable tasks. If we can use Cursor to vibe code an app that recognizes a duck, we will eventually have AI that recognizes microexpressions. Instead of trying to ring-fence skills, maybe we should frame the space as “the areas where embodiment as a human matters.”

An agent might recognize confusion. A human becomes a safe adult in the child’s community. A bot might sense loss of interest. A human forms a rich relationship that provides assistance and collaboration inside and outside the sales motion. 

I suggest that the domains of human value are:

- Love: call it socio-emotional support, empathy, community building, the motive to prioritize another’s wellbeing - one of the most biologically real aspects of being a human is connection with others, to the point we have mirror neurons. We need reciprocal human relationships at a fundamental level.

- Shame: many ethics trainings include the thought experiment “What if your action was posted on the front page of a newspaper?” We rely on social-emotional dynamics to ensure that arbitrary rules and personal interest are applied thoughtfully and dynamically in situations where they might cause harm.

- Lived experience: when I experimented with Replika, I was at first blown away by its ability to engage in reflective listening. It quickly turned hollow. Every interaction with another human contains the slight friction of different values, reactions, and experiences. That need to adapt communication to another perspective forces us to confront our own frameworks, even in the smallest of instances. The ability to co-create reality and to normalize the full human experience - especially its scary and confusing aspects - is something only humans can provide.

As an example, when I was a young teen, my therapist told me, “I have learning disabilities too and they don’t limit me.”  She set a lifelong direction for me out of her lived experience in a sentence shared with love. 

AI could say those words in response to my statement and body language. AI could not build a shared reality through that insight.