Why Agents Need to Learn to Believe



The agentic AI systems that dazzle us today with their ability to sense, understand, and reason are approaching a fundamental bottleneck—not one of computational power or data availability but something far more elusive: the ability to navigate the messy, context-dependent world of human beliefs, desires, and intentions.

The problem becomes clear when you watch these systems in action. Give an AI agent a structured task, like processing invoices or managing inventory, and it performs beautifully. But ask it to interpret the true priority behind a cryptic executive email or navigate the unspoken social dynamics of a highway merge, and you’ll see the limitations emerge. Research suggests that many enterprises’ AI failures stem not from technical glitches but from misaligned belief modeling. These systems treat human values as static parameters, completely missing the dynamic, context-sensitive nature of real-world decision making.

This gap becomes a chasm when AI moves from routine automation into domains requiring judgment, negotiation, and trust. Human decision making is layered, contextual, and deeply social. We don’t just process facts; we construct beliefs, desires, and intentions in ourselves and others. This “theory of mind” enables us to negotiate, improvise, and adapt in ways that current AI simply cannot match. Even the most sensor-rich autonomous vehicles struggle to infer intent from a glance or gesture, highlighting just how far we have to go.

The answer may lie in an approach that’s been quietly developing in AI research circles: the Belief-Desire-Intention (BDI) framework. Rooted in the philosophy of practical reasoning, BDI systems operate on three interconnected levels. Rather than hardcoding every possible scenario, this framework gives agents the cognitive architecture to reason about what they know, what they want, and what they’re committed to doing—much like humans do. Crucially, it also equips agents to handle sequences of belief changes over time, revising their intentions when new information undermines the assumptions those intentions were built on.

Beliefs represent what the agent understands about the world, including itself and others—information that may be incomplete or even incorrect but gets updated as new data arrives. Desires capture the agent’s motivational state, its objectives and goals, though not all can be pursued simultaneously. Intentions are where the rubber meets the road: the specific plans or strategies the agent commits to executing, representing the subset of desires it actively pursues.
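
To make the three components concrete, here is a minimal sketch of how a BDI agent might be structured in code. The class and method names (BDIAgent, revise_beliefs, deliberate) are purely illustrative rather than drawn from an existing BDI toolkit, and the deliberation step is deliberately simplified: beliefs live in a dictionary, and intentions are recomputed every cycle so that a belief change can immediately trigger a change of plan.

```python
# Minimal, illustrative sketch of a BDI agent. All names are hypothetical,
# not taken from an existing BDI framework.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Desire:
    name: str
    priority: float                      # how strongly the agent wants this
    achievable: Callable[[dict], bool]   # feasible given current beliefs?


@dataclass
class Intention:
    desire: Desire
    plan: list[str]                      # committed sequence of actions


@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)        # possibly incomplete or wrong
    desires: list[Desire] = field(default_factory=list)
    intentions: list[Intention] = field(default_factory=list)

    def revise_beliefs(self, percept: dict) -> None:
        """New observations overwrite or extend what the agent believes."""
        self.beliefs.update(percept)

    def deliberate(self, plan_for: Callable[[Desire, dict], list[str]]) -> None:
        """Commit to the highest-priority desires that are achievable right now."""
        options = [d for d in self.desires if d.achievable(self.beliefs)]
        options.sort(key=lambda d: d.priority, reverse=True)
        self.intentions = [Intention(d, plan_for(d, self.beliefs)) for d in options]

    def step(self, percept: dict, plan_for: Callable[[Desire, dict], list[str]]) -> None:
        """One cycle: update beliefs, then reconsider intentions in light of them."""
        self.revise_beliefs(percept)
        self.deliberate(plan_for)
```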

Here’s how this might play out in practice. A self-driving car’s beliefs might include real-time traffic data and learned patterns about commuter behavior during rush hour. Its desires encompass reaching the destination safely and efficiently while ensuring passenger comfort. Based on these beliefs and desires, it forms intentions such as rerouting through side streets to avoid a predicted traffic jam, even if this means a slightly longer route, because it anticipates a smoother overall journey. Learned patterns also vary across the regions where self-driving cars are deployed: the “hook turn” in Melbourne, Australia, requires an update to learned patterns that a car trained anywhere else would never have encountered.
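
Continuing the hypothetical BDIAgent sketch above, the rush-hour scenario might look something like this; the planner logic, field names, and numbers are invented for illustration.

```python
# Continues the hypothetical BDIAgent sketch above with the rush-hour scenario.
def plan_for(desire: Desire, beliefs: dict) -> list[str]:
    # Toy planner: reroute if a jam is predicted on the main route.
    if desire.name == "reach_destination" and beliefs.get("jam_predicted"):
        return ["reroute_via_side_streets", "proceed_to_destination"]
    return ["follow_main_route"]


car = BDIAgent(
    desires=[
        Desire("reach_destination", priority=1.0, achievable=lambda b: True),
        Desire("maximize_comfort", priority=0.6,
               achievable=lambda b: not b.get("jam_predicted", False)),
    ]
)

# Belief update: live traffic plus learned commuter patterns predict a jam,
# so the agent's committed plan changes in the same cycle.
car.step({"traffic_speed_kmh": 12, "jam_predicted": True}, plan_for)
print([i.plan for i in car.intentions])
# [['reroute_via_side_streets', 'proceed_to_destination']]
```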

The real challenge lies in building and maintaining accurate beliefs. Much of what matters in human contexts—priorities, constraints, and intentions—is rarely stated outright or captured in enterprise data. Instead, these are embedded in patterns of behavior that evolve across time and situations. This is where observational learning becomes crucial. Rather than relying solely on explicit instructions or enterprise data sources, agentic AI must learn to infer priorities and constraints by watching and interpreting behavioral patterns in its environment.
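
As a rough illustration of that kind of observational inference, an agent might treat how quickly someone acts on different topics as a proxy for their unstated priorities. The data and scoring rule below are invented for the sketch; real systems would draw on far richer behavioral signals.

```python
# Illustrative only: inferring unstated priorities from observed behavior by
# measuring how quickly a person responds to each topic.
from collections import defaultdict
from statistics import mean

# Each observation: (topic, hours until the person responded)
observations = [
    ("security_review", 0.5), ("security_review", 1.0),
    ("quarterly_report", 20.0), ("security_review", 0.8),
    ("quarterly_report", 30.0),
]

response_times = defaultdict(list)
for topic, hours in observations:
    response_times[topic].append(hours)

# Shorter average response time is read as higher implicit priority.
inferred_priority = {
    topic: 1.0 / (1.0 + mean(hours)) for topic, hours in response_times.items()
}
print(sorted(inferred_priority.items(), key=lambda kv: kv[1], reverse=True))
# security_review scores far higher than quarterly_report, even though
# no one ever stated that priority explicitly.
```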

Modern belief-aware systems employ sophisticated techniques to decode these unspoken dynamics. Behavioral telemetry tracks subtle user interactions like cursor hovers or voice stress patterns to surface hidden priorities. Probabilistic belief networks use Bayesian models to predict intentions from observed behaviors—frequent after-hours logins might signal an impending system upgrade, while sudden spikes in database queries could indicate an urgent data migration project. In multi-agent environments, reinforcement learning enables systems to refine strategies by observing human responses and adapting accordingly.

At Infosys, we reimagined a forecasting solution to help a large bank optimize IT funding allocation. Rather than relying on static budget models, the system could build behavioral telemetry from past successful projects, categorized by type, duration, and resource mix. This would create a dynamic belief system about “what good looks like” in project delivery. The system’s intention could become recommending optimal fund allocations while maintaining flexibility to reassign resources when it infers shifts in regulatory priorities or unforeseen project risks—essentially emulating the judgment of a seasoned program director.
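
The Bayesian intent inference mentioned above can be sketched with a toy example: a handful of hypothesized intentions, a prior over them, and a likelihood for each observed signal. Every number here is made up to show the mechanics, not drawn from the Infosys engagement or any real deployment.

```python
# Toy Bayesian update over hidden intentions. All probabilities are invented
# purely for illustration.
priors = {"system_upgrade": 0.2, "data_migration": 0.1, "routine_work": 0.7}

# Likelihood of each observed signal under each hypothesized intention.
likelihoods = {
    "after_hours_logins": {"system_upgrade": 0.8, "data_migration": 0.4, "routine_work": 0.1},
    "db_query_spike":     {"system_upgrade": 0.3, "data_migration": 0.9, "routine_work": 0.2},
}

def update(beliefs: dict, signal: str) -> dict:
    """One Bayes step: posterior is proportional to likelihood times prior, then normalized."""
    unnormalized = {h: likelihoods[signal][h] * p for h, p in beliefs.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

beliefs = priors
for signal in ["after_hours_logins", "db_query_spike"]:
    beliefs = update(beliefs, signal)

print(beliefs)   # probability mass shifts away from "routine_work"
```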

The technical architecture supporting these capabilities represents a significant evolution from traditional AI systems. Modern belief-aware systems rely on layered architectures where sensor fusion integrates diverse inputs—IoT data, user interface telemetry, biometric signals—into coherent streams that inform the agent’s environmental beliefs. Context engines maintain dynamic knowledge graphs linking organizational goals to observed behavioral patterns, while ethical override modules encode regulatory guidelines as flexible constraints, allowing adaptation without sacrificing compliance.

We can reimagine customer service, where belief-driven agents infer urgency from subtle cues like typing speed or emoji use, leading to more responsive support experiences. The technology analyzes speech patterns, tone of voice, and language choices to understand customer emotions in real time, enabling more personalized and effective responses. This represents a fundamental shift from reactive customer service to proactive emotional intelligence.

Building management systems can also be reimagined as a domain for belief-driven AI. Instead of simply detecting occupancy, modern systems could form beliefs about space usage patterns and user preferences. A belief-aware HVAC system might observe that employees in the northeast corner consistently adjust thermostats down in the afternoon, forming a belief that this area runs warmer due to sun exposure. It could then proactively adjust temperature controls based on weather forecasts and time of day rather than waiting for complaints. These systems could achieve measurable efficiency gains by understanding not just when spaces are occupied but how people actually prefer to use them.
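
The HVAC example can be sketched in a few lines: repeated afternoon cool-down adjustments in one zone become a belief that the zone runs warm, which then drives proactive pre-cooling. The thresholds, field names, and setpoint logic below are all assumptions made for illustration.

```python
# Sketch of the belief-aware HVAC example. Thresholds and field names are illustrative.
from collections import Counter

# (zone, hour_of_day, delta_degrees) recorded whenever someone touches a thermostat
adjustments = [
    ("northeast", 14, -2), ("northeast", 15, -1), ("northeast", 14, -2),
    ("lobby", 9, +1), ("northeast", 15, -2),
]

afternoon_cooling = Counter(
    zone for zone, hour, delta in adjustments if 12 <= hour <= 17 and delta < 0
)

# Belief: any zone with repeated afternoon cool-downs probably runs warm.
runs_warm = {zone for zone, n in afternoon_cooling.items() if n >= 3}

def setpoint(zone: str, hour: int, sunny_forecast: bool, base: float = 22.0) -> float:
    """Pre-cool zones believed to run warm on sunny afternoons, before anyone complains."""
    if zone in runs_warm and sunny_forecast and 12 <= hour <= 17:
        return base - 1.5
    return base

print(setpoint("northeast", 13, sunny_forecast=True))   # 20.5
print(setpoint("lobby", 13, sunny_forecast=True))       # 22.0
```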

As these systems grow more sophisticated, the challenges of transparency and explainability become paramount. Auditing the reasoning behind an agent’s intentions—especially when they emerge from complex probabilistic belief state models—requires new approaches to AI accountability. The EU’s AI Act now mandates fundamental rights impact assessments for high-risk systems, arguably requiring organizations to document how belief states influence decisions. This regulatory framework recognizes that as AI systems become more autonomous and belief-driven, we need robust mechanisms to understand and validate their decision-making processes.
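
One simple way to document how belief states influence decisions, building on the hypothetical BDIAgent sketch earlier, is to snapshot the beliefs and the chosen intention at every decision point. This is only an illustration of the general idea, not a format prescribed by the EU AI Act or any auditing standard.

```python
# Illustrative audit trail for belief-driven decisions, reusing the earlier
# hypothetical BDIAgent, Intention, and car objects.
import json
import time
from dataclasses import asdict

audit_log: list[dict] = []

def record_decision(agent: "BDIAgent", chosen: "Intention", rationale: str) -> None:
    audit_log.append({
        "timestamp": time.time(),
        "beliefs_snapshot": dict(agent.beliefs),   # what the agent believed
        "chosen_intention": asdict(chosen),        # what it committed to
        "rationale": rationale,                    # human-readable trace
    })

if car.intentions:
    record_decision(car, car.intentions[0], "Jam predicted on main route; rerouting.")

# An auditor can later replay exactly which beliefs drove which intention.
print(json.dumps(audit_log, indent=2, default=str))
```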

The organizational implications of adopting belief-aware AI extend far beyond technology implementation. Success requires mapping belief-sensitive decisions within existing workflows, establishing cross-functional teams to review and stress-test AI intentions, and introducing these systems in low-risk domains before scaling to mission-critical applications. Organizations that rethink their approach may report not only operational improvements but also greater alignment between AI-driven recommendations and human judgment—a crucial factor in building trust and adoption.

Looking ahead, the next frontier lies in belief modeling: developing metrics for social signal strength, ethical drift, and cognitive load balance. We can imagine early adopters leveraging these capabilities in smart city management and adaptive patient monitoring, where systems adjust their actions in real time based on evolving context. As these models mature, belief-driven agents will become increasingly adept at supporting complex, high-stakes decision making, anticipating needs, adapting to change, and collaborating seamlessly with human partners.

The evolution toward belief-driven, BDI-based architectures marks a profound shift in AI’s role. Moving beyond sense-understand-reason pipelines, the future demands systems that can internalize and act upon the implicit beliefs, desires, and intentions that define human behavior. This isn’t just about making AI more sophisticated; it’s about making AI more human compatible, capable of operating in the ambiguous, socially complex environments where most important decisions are made.

The organizations that embrace this challenge will shape not only the next generation of AI but also the future of adaptive, collaborative, and genuinely intelligent digital partners. As we stand at this inflection point, the question isn’t whether AI will develop these capabilities but how quickly we can reimagine and build the technical foundations, organizational structures, and ethical frameworks necessary to realize their potential responsibly.
