Paper: Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI


Yesterday I was reading a working paper from Harvard Business School titled Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics. The study was conducted with Boston Consulting Group, a global management consulting firm. In July-August 2023 the authors interviewed 78 junior consultants who had recently participated in a field experiment that gave them access to generative AI (GPT-4) for a business problem-solving task.

The paper makes the point that with earlier technologies, it was junior professionals who helped senior professionals upskill. It cites multiple reasons why junior professionals are better able to learn and use new technology than their senior counterparts:

  • First, junior professionals are often closest to the work itself, because they are the ones engaging in concrete and less complex tasks
  • Second, junior professionals may be more able to engage in real-time experimentation with new technologies, because they do not risk losing their mandate to lead if those around them, including clients, as well as those more junior to them, recognize that they lack the practical expertise to support their hierarchical position
  • Third, junior professionals may be more willing to learn new methods that conflict with existing identities, practices, and frames

A few more reasons that I can think of:

  • Juniors have more free time on their hands. They don’t have the social obligations that a senior professional might have
  • When you are junior, it is more about breadth, so you learn and try multiple things, and you get excited about new things faster, so you quickly try and learn them. Seniors prefer depth, so they like to go deep in what they already know.

The paper cited examples of how seniors in fields like radiology learnt to effectively use CT scanners from juniors. They also gave a couple more examples related to help-desk queuing technology and patient referral and tracking technology.

I was expecting examples where software engineers were the target audience, but the examples cited were practitioners, managers, and consultants using technology to assist them. In my 20 years of software engineering I have seen multiple emerging technologies, like Cloud Computing, Containers and the Cloud Native stack, and NoSQL databases, and in each of them senior engineers and architects led the way. So, I will assume this paper considers professionals who do not build software themselves but who use technology to do their jobs. Generative AI is one such technology: it impacts all fields, not just software engineering. It impacts everyone.

They cited that with earlier technologies the main challenge was status threat:

“Status threat is often the key obstacle to junior professionals successfully coaching more senior professionals in the effective use of new technology.”

The paper then makes its core point: for an emerging technology like Generative AI, junior professionals may fail to be a source of expertise for senior professionals. Generative AI is different because:

  • GenAI can be accessed and customized by novice users without coding and without owning infrastructure. Prior studies of juniors teaching seniors analyzed technologies in which interacting with the technology required infrastructure and raised barriers for ordinary users. For example, the CT scanning technology that Barley (1986) studied required a large investment in hardware by hospital administrators, and integration with existing hospital medical equipment and IT infrastructure by IT professionals. In contrast, GenAI lets novice users access the technology directly from their computers and collaborate with it nearly instantaneously.
  • GenAI has uncertain and wide-ranging capabilities and is changing at an exponential rate
  • GenAI carries the possibility of outperforming humans in a wide variety of skilled and cognitive tasks

The paper describes three categories of novice risk mitigation tactics that junior professionals proposed to address the risks associated with Generative AI. These tactics often fall short of addressing GenAI’s unique challenges because they are grounded in a shallow understanding of the technology or rely on traditional methods. The three categories are:

1. Tactics Grounded in a Lack of Deep Understanding

Novices, often unfamiliar with GenAI’s capabilities and limitations, propose risk mitigation strategies that reflect incomplete or incorrect assumptions about the technology. Examples include:

  • Suggesting “standardized question formats” to improve accuracy without realizing that hallucinations and inherent model limitations cannot always be mitigated this way.
  • Some consultants did not understand that explainability of GenAI outputs was not feasible at the time of the study. These juniors therefore recommended tactics for “explaining its rationale” to managers, and “understanding the source of the recommendation or the result, being able to explain it,” so that managers could better understand it. In contrast, experts recommended avoiding GenAI for tasks requiring high explainability and instead offering global logic explanations and guidance on improving inputs.
  • Some consultants assumed GenAI lacked contextualization capabilities and recommended using it only in cases where contextualization was unnecessary. In contrast, experts highlighted that GenAI excels at contextualization when guided by effective prompts and supported by techniques like Retrieval-Augmented Generation (RAG); a minimal sketch of that idea follows this list.
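To make the contextualization point concrete, here is a minimal, illustrative sketch of the RAG idea: retrieve the snippets most relevant to a question and pack them into the prompt so the model answers with project-specific context. The toy knowledge base, the keyword-overlap scoring, and the omission of the actual model call are my simplifications; they are not taken from the paper or from any particular GenAI product.

```python
# Minimal RAG-style sketch: pick the most relevant snippets for a question,
# then pack them into the prompt so the model answers with grounded context.

def score(query: str, doc: str) -> int:
    # Naive keyword-overlap score; real systems use embedding similarity.
    query_terms = set(query.lower().split())
    return sum(1 for term in doc.lower().split() if term in query_terms)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k documents that overlap most with the query.
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query, docs))
    return (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Toy, invented knowledge base standing in for project documents.
knowledge_base = [
    "Client X operates 40 retail stores across three regions.",
    "Q3 revenue fell 8% due to supply chain delays.",
    "The engagement scope covers pricing strategy only.",
]

# The resulting prompt would then be sent to a GenAI model via whatever API the team uses.
print(build_prompt("Why did revenue drop for client X?", knowledge_base))
```

The point is not the retrieval mechanics but the design: the relevant context arrives with the prompt, so the model does not need to “know” the project in advance.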

2. Focus on Changes to Human Routines Rather Than System Design

Novices tend to emphasize adjusting human behavior instead of redesigning systems to address risks effectively. Examples:

  • Proposing that users “validate results manually” or that managers “review prompts and responses” without recognizing that systemic solutions, like improved model fine-tuning or automated validation tools, are often more effective.
  • Encouraging behavioral changes, such as warning against over-reliance, rather than implementing system features like uncertainty visualization or self-reflective prompts (a sketch of such a self-check step follows this list).
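As an illustration of what a system-side feature could look like, as opposed to a rule asking humans to be careful, here is a small sketch of a self-reflective check: every answer is passed back through the model with a critique prompt and flagged for human review if the critique finds unsupported claims. The prompt wording, the injected call_model function, and the stubbed model are all my assumptions, not the paper’s implementation.

```python
from typing import Callable

# A critique prompt asking the model to review its own earlier answer.
SELF_CHECK_TEMPLATE = (
    "You previously answered:\n{answer}\n\n"
    "List any claims in that answer that are not supported by the given sources. "
    "Reply exactly 'NONE' if every claim is supported."
)

def answer_with_self_check(prompt: str, call_model: Callable[[str], str]) -> dict:
    # First pass produces the draft answer; second pass critiques it.
    draft = call_model(prompt)
    critique = call_model(SELF_CHECK_TEMPLATE.format(answer=draft))
    return {
        "answer": draft,
        "needs_review": critique.strip().upper() != "NONE",
        "critique": critique,
    }

def fake_model(prompt: str) -> str:
    # Stub so the sketch runs without an API key; a real system injects its GenAI client here.
    return "NONE" if "previously answered" in prompt else "Revenue fell 8% in Q3."

print(answer_with_self_check("Why did revenue drop?", fake_model))
```

The review flag travels with the answer, so checking becomes a property of the system rather than a habit each user has to remember.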

3. Project-Level Interventions Over System Deployer- or Ecosystem-Level Changes

Novices focus on localized, project-specific solutions instead of broader interventions at the organizational or ecosystem level. Examples:

  • Suggesting that project teams “agree on conditions for GenAI use” or “review GenAI-generated outputs,” while neglecting system-wide measures like centralized monitoring, robust evaluation metrics, or co-audit tool integration (a sketch of a centralized monitoring gateway follows this list).
  • Overlooking the need for ecosystem-level changes, such as improving data sources or dynamically adjusting model behaviors based on evolving real-world contexts.
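To show what a deployer-level measure might look like, here is a small sketch of a gateway through which every GenAI call in the organization is routed, logging prompts, responses, and a crude quality signal to a central store so monitoring and evaluation happen system-wide rather than project by project. The log path, the field names, and the refusal heuristic are invented for illustration; a real deployment would feed a proper monitoring and evaluation pipeline.

```python
import json
import time

# Hypothetical central log; a real system would write to a monitoring service instead of a local file.
LOG_PATH = "genai_gateway_log.jsonl"

def log_call(team: str, prompt: str, response: str) -> None:
    record = {
        "ts": time.time(),
        "team": team,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        # Crude signal for a dashboard; real deployments track richer evaluation metrics.
        "looks_like_refusal": "i cannot" in response.lower(),
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def gateway(team: str, prompt: str, call_model) -> str:
    # Single entry point for all GenAI calls, so usage is observable organization-wide.
    response = call_model(prompt)
    log_call(team, prompt, response)
    return response

# Example with a stubbed model so the sketch runs without any credentials.
print(gateway("pricing-team", "Summarize Q3 revenue drivers.", lambda p: "Revenue fell 8% in Q3."))
```

Individual projects still decide how they use the outputs, but the organization gets one place to watch for drift, over-use, and failure patterns.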

Conclusion

The study underscores that while junior professionals often embrace the potential of emerging technologies like Generative AI, their lack of deep technical understanding and reliance on novice tactics limit their ability to guide senior professionals effectively. Unlike traditional technologies, GenAI’s unique challenges—such as its opacity, emergent behaviors, and rapidly evolving capabilities—demand a shift from project-level or human-centric interventions to system- and ecosystem-level strategies. This highlights the need for structured training, robust system design, and comprehensive risk mitigation approaches that align with the technology’s complex nature. Organizations must recognize that learning and adapting to such transformative technologies is not just a bottom-up endeavor but requires collaborative, multi-level engagement to address risks and unlock GenAI’s full potential responsibly.

