INDUSTRY THOUGHT
Preparedness Has an Information Substrate Problem
Critical information is scattered and siloed, leading to a dangerous decay in our ability to coordinate effectively. I believe the solution is a hybrid substrate approach, blending traditional knowledge with digital data and human-AI partnership. This new model offers a path to move beyond outdated practices, fostering genuine collaboration for more effective, adaptable preparedness. The choice is clear: lead this essential information revolution, or risk being left behind.

Written by
Justin Snair
Why our emergency preparedness efforts are trapped in the wrong century—and how to break them free
Here's a question that should keep us all up at night: Where does our most critical preparedness information actually live?
I'm not talking about where we think it lives—in our carefully maintained binders, our digital databases, our training manuals. I'm talking about where it actually lives when we need to make real decisions about real people's safety.
Let me guess: It's scattered across dozens of incompatible systems, buried in someone's head, or worse—it exists in multiple conflicting versions that we're not even sure we can trust.
The Crisis We Haven't Named Yet
We're managing 21st-century disasters with 19th-century information architecture. Think about it: our evacuation plans exist as PDFs that might as well be etched in stone tablets. Our resource inventories live in spreadsheets that become obsolete the moment someone uses supplies, orders new equipment, or discovers expired materials. Our contact lists are duplicated across systems that can't talk to each other.
This isn't just inefficient—it's dangerous.
When I look at how most organizations handle preparedness information, I see what information theorists call substrate isolation. We've got critical information trapped on incompatible platforms: paper that can't be searched, digital systems that don't sync, and human expertise that walks out the door when people retire. Imagine a flood scenario where the police department has the updated road closures on their GIS, but the school district's bus routes are still based on last month's PDF map. This isn't just a technical glitch; it's a breakdown in the very fabric of coordination.
Here's what really gets me: We've convinced ourselves that having "the same information" means we're coordinated. Our county emergency office has an evacuation plan. The Red Cross has an evacuation plan. The school district has an evacuation plan. Everyone has "the plan."
But it's not just that they have different versions, updated at different times, by different people, with different assumptions—though that's certainly true. The deeper problem is that even if they had identical documents, each organization would still understand those plans completely differently.
The county emergency office reads the evacuation plan through the lens of traffic management and resource allocation. The Red Cross reads it as shelter operations and logistics coordination. The school district reads it as student safety and family reunification procedures. We're looking at the same words but seeing entirely different operational realities.
More critically, when the plan gets enacted during an actual emergency, what matters isn't the document—it's the relationships the plan represents and the real-time information exchange those relationships enable. If the school district's transportation coordinator has never actually talked to the county's traffic management center, our "coordinated" evacuation plan is fiction, regardless of what the document says.
We've created an illusion of coordination while actually building a Tower of Babel—not just because people have different versions, but because we fundamentally understand coordination itself differently.
I've watched tabletop exercises where agencies spend more time figuring out which version of the contact list is current than actually solving the simulated crisis. That's not preparedness—that's administrative theater.
But here's the deeper irony: exercises are supposed to reveal exactly these coordination problems. Yet the way we develop those exercises—months of committee meetings, paper-based planning processes, time-intensive coordination just to design the scenario—creates the same substrate problems we're trying to test. Then we execute them in ways that don't actually engage participants. How many exercises have we sat through where just a few people speak while others are barely awake? Where executive leadership isn't present at all, even though they're the ones who'll be making critical decisions during actual emergencies?
We've created exercises that are designed by committee, executed as performance, and attended as an obligation. They reveal coordination problems, but only to the handful of people who are actually paying attention, and those people often aren't the ones with authority to fix the underlying issues.
And here's the most damning question: we exercise to test plans, but what are we actually measuring? What's the baseline? Are we testing whether people can follow procedures that may be completely inadequate? Whether they can coordinate using systems that don't actually work? Whether they can make decisions based on information that may be wrong?
Most exercises have no measurable success criteria beyond "did we get through the scenario without anyone walking out?" We're not measuring decision quality, information flow effectiveness, or actual coordination capabilities. We're measuring compliance with a process that may itself be fundamentally broken.
The Artifact Fallacy That's Killing Preparedness
This measurement vacuum reveals something deeper: we've been thinking about preparedness completely wrong.
That 200-page All Hazards plan isn't a thing—it's a snapshot of information relationships at a particular moment, temporarily frozen in a particular format. While traditional paper plans offered a tangible sense of security and a fixed reference point, their inherent rigidity now hobbles our efforts. The real value isn't in the document; it's in the information ecosystem that created it and continues to evolve around it.
And if it's a snapshot, why are we only exercising once or twice a year? Shouldn't we be exercising whenever the snapshot changes? New personnel join the team, new equipment gets purchased, new threats emerge, authorities change, procedures get updated—but we're still testing last year's relationships with this year's reality.
We're essentially running experiments on obsolete hypotheses and wondering why our results don't match our expectations.
When we fetishize the plan-as-artifact, we miss what's actually happening: information flowing between the county planner's mental model, the fire chief's experience, the hospital's capacity data, the school district's student records. The "plan" is just where some of that information temporarily crystallized into text.
This creates what we can call preparedness entropy—the tendency for coordinated preparedness information to decay into chaos over time. The more copies of information we have, and the more time that passes, the more likely they are to diverge. The divergence isn't additive; it compounds.
Month 1: All five organizations have the same evacuation plan. Month 6: Two organizations update their copies based on new road construction. Month 12: Three organizations get new leadership with different priorities. Month 18: One organization discovers legal requirements that affect procedures. Month 24: Nobody has the same plan anymore, and nobody knows which version reflects current reality.
That's 24 months where we probably exercised just a few times. Maybe twice if we were ambitious.
Multiple copies plus time equals compound information drift that makes synchronization nearly impossible.
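To make that drift concrete, here is a toy simulation, with invented parameters, of the 24-month timeline above: several organizations each hold a copy of "the plan," and each month each one independently edits its copy with some probability, creating a new divergent version. The model and its numbers are illustrative, not empirical.

```python
import random

def simulate_drift(n_orgs=5, months=24, p_update=0.1, seed=42):
    """Toy model of preparedness entropy.

    Each organization starts with the same plan (version 0). Each month,
    each org independently edits its copy with probability p_update,
    producing a brand-new local version. Returns the number of distinct
    plan versions in circulation at the end of each month.
    """
    rng = random.Random(seed)
    versions = [0] * n_orgs          # everyone starts with the same plan
    next_version = 1
    distinct_over_time = []
    for _ in range(months):
        for i in range(n_orgs):
            if rng.random() < p_update:
                versions[i] = next_version   # local edit -> divergent copy
                next_version += 1
        distinct_over_time.append(len(set(versions)))
    return distinct_over_time

history = simulate_drift()
print(f"{history[-1]} distinct versions after 24 months")
```

In this model the count of distinct versions can never go down: nothing ever re-synchronizes the copies, which is exactly the missing mechanism the article is pointing at.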
Here's the part that really bothers me: The most valuable preparedness information doesn't exist in any system at all. It lives in people's heads.
The fire captain who knows which neighborhoods flood first. The school nurse who knows which kids need special medical attention during evacuations. The utility worker who understands how the power grid actually fails. This isn't just "nice to have" local knowledge—it's mission-critical intelligence that could save lives.
And where does this knowledge go when these people aren't available? Nowhere. It evaporates.
But here's what's even worse: this critical human knowledge isn't just disappearing when people retire—it's constantly changing as conditions evolve. The fire captain learns about new flooding patterns, the school nurse gets updated medical protocols, the utility worker sees how grid upgrades change failure modes. We should be engaging in preparedness activities that test us against these changed snapshots as consistently, and as frequently, as conditions demand. When the fire captain retires, when new medical protocols are implemented, when the power grid gets upgraded, when new vulnerabilities are discovered—that's when we need to exercise, not six months later when it's convenient for everyone's calendar.
The Great Substrate Cycle (And Where We Are Now)
To understand how we got here, we need to understand the historical arc of how humans have managed knowledge. We've been cycling through different information substrates—the physical mediums that store and transmit information—each solving the problems of the previous generation while creating new ones.
Paper was humanity's first hard drive. It solved oral history's version control problems by allowing exact replication of information across distance and time. But it created multiple forms of rigidity—technical rigidity as information became locked in fixed formats that were hard to update or search, and social rigidity as literacy became a gatekeeping mechanism. Only those who could read and write could access information, and more critically, only the literate elite could decide what knowledge was "worthy" of being preserved in writing.
This pattern persists today in emergency management. Who gets to contribute to the "official" All Hazards plan? Usually it's people with certain titles, certifications, or organizational authority. Meanwhile, the crossing guard who knows exactly how traffic flows during school dismissal, or the maintenance worker who understands which building systems fail first, rarely gets their knowledge encoded in official documents.
Digital systems solved paper's limitations by making information searchable, updatable, and infinitely copyable. But they created information chaos. We went from information scarcity to information abundance, which often isn't information at all—it's just noise.
Unassisted, humans couldn't possibly make sense of all this information, so we did what we always do—we specialized. We created cybersecurity experts, GIS specialists, public health epidemiologists, and emergency communications coordinators. This specialization has genuine value and addresses real needs. But it also created new silos, and the connective tissue between specialties hasn't kept up.
By connective tissue, I mean all the layers that enable integration: technical infrastructure (APIs, data standards, interoperability systems), social structures (relationships, cross-functional teams, communities of practice), conceptual frameworks (shared mental models, common vocabularies, bridging concepts), and formal mechanisms (protocols, procedures, institutional workflows).
We systematically under-invested in all forms of connective tissue while over-investing in domain-specific capabilities. Now we have incredibly sophisticated specialists who struggle to integrate when complex problems require cross-domain solutions.
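As a minimal illustration of the technical layer of this connective tissue, consider two sectors recording the same road closure in incompatible shapes, with a thin translation layer mapping both into one shared record. Every field name here (`RD_NAME`, `STATUS_CD`, `route_segment`, and so on) is hypothetical, invented purely to show the pattern.

```python
from dataclasses import dataclass

@dataclass
class RoadClosure:
    """A minimal shared record that both sectors can consume."""
    road: str
    status: str    # "open" or "closed"
    updated: str   # ISO 8601 timestamp

def from_county_gis(feature: dict) -> RoadClosure:
    # Hypothetical county GIS export shape.
    return RoadClosure(
        road=feature["RD_NAME"],
        status="closed" if feature["STATUS_CD"] == "C" else "open",
        updated=feature["EDIT_TS"],
    )

def from_school_district(row: dict) -> RoadClosure:
    # Hypothetical school-district spreadsheet row.
    return RoadClosure(
        road=row["route_segment"],
        status="open" if row["passable"] else "closed",
        updated=row["last_checked"],
    )

gis = from_county_gis(
    {"RD_NAME": "Main St", "STATUS_CD": "C", "EDIT_TS": "2024-05-01T08:00:00Z"}
)
district = from_school_district(
    {"route_segment": "Main St", "passable": True, "last_checked": "2024-04-01T09:00:00Z"}
)
# Once both records share one shape, the conflict becomes visible:
# same road, contradictory status, and a month-stale timestamp on one side.
print(gis.status, district.status)
```

The point of the sketch is not the code but the design choice: neither sector abandons its native system; a small, shared schema sits between them, which is what "connective tissue" means at the technical layer.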
But here's where it gets worse: emergency preparedness itself has become specialized. I've observed at least 16 different segments, each developing its own version of "emergency preparedness." These include:
Local/state/federal government
Healthcare
Schools
Business
Infrastructures
Military
Each segment thinks in its own terms: local government in comprehensive emergency management, healthcare in surge capacity, schools in lockdown protocols, business in continuity planning, and military in defense readiness. We're all doing "emergency preparedness," but we're essentially speaking different languages, using incompatible tools, and optimizing for different outcomes. Each sector has its own risk assessment methodologies, planning frameworks, response protocols, training requirements, communication systems, resource management approaches, regulatory requirements, funding streams, and performance metrics.
During an actual emergency, these 16 specialized preparedness systems have to suddenly integrate and coordinate, presumably under established frameworks like the Incident Command System (ICS) and National Incident Management System (NIMS). But there's no functioning, systematic connective tissue between them—no shared situation awareness platforms, no compatible resource tracking systems, no unified command structures that everyone actually understands and adopts. Government emergency management is supposed to provide that connective tissue, but for many reasons it isn't doing so as effectively as needed.
The hospital's emergency operations center, the school district's crisis management team, the county emergency management office, and the local business coalition are all "prepared"—but prepared in ways that don't necessarily fit together. And their "preparedness" is measured in ways that may not mean much: checklists over actual capabilities.
Now we're in the LLM era, where artificial intelligence can synthesize vast amounts of information and communicate in natural language, bringing us back to something like the conversational intelligence of oral tradition. But these systems have their own problems: they can hallucinate, they don't explain their reasoning, and they're mostly general-purpose tools that don't understand domain-specific contexts.
Emergency management is caught between these substrate paradigms. We're still operating primarily in the paper/early digital mindset—treating plans as fixed artifacts that need version control—while the world (and our operational reality) has moved into an era where information wants to be fluid, contextual, and conversational.
This mismatch has practical consequences. Every single preparedness activity—every drill, meeting, training session, tabletop exercise—offers four possible outcomes: gather information, improve systems, distract from priorities, or cause chaos. Most emergency managers treat these activities as inherently beneficial, but they're actually information system perturbations that could strengthen or weaken our capabilities depending on how they're designed with substrate dynamics in mind.
The issue isn't just that these activities can be inefficient; it's that their design often fails to cultivate or integrate the very qualities emergency preparedness critically needs: domain/specialty expertise, explainability, and reliability. For instance, if a drill focuses solely on procedural compliance without creating opportunities for cross-functional teams to truly problem-solve and share their unique knowledge, it misses the chance to build genuine expertise and foster mutual understanding. Similarly, if after-action reports are just filed away without a clear process for explaining why certain decisions were made or how information flowed (or didn't), we lose valuable lessons in explainability. And if the systems used during these activities are not robust or consistent, they undermine reliability. This leads to the deeper mismatch: emergency preparedness needs these qualities, but our current approaches to activities often struggle to provide them—and ironically, these are exactly what current AI systems also struggle with. A general-purpose AI that hallucinates evacuation routes or can't explain why it recommended a particular resource allocation could be worse than useless. But an emergency management system that isn’t interoperable could also be worse than useless.
Why Everything We've Tried Hasn't Worked
The typical response to these problems is to choose sides: Go all-digital or stick with paper. Centralize everything or keep it distributed. Standardize or customize.
But that's a false choice. The real world doesn't give us the luxury of single-substrate solutions. Power goes out, networks fail, people forget, and we need our information to work regardless.
Let's talk about updates. When was the last time we tried to synchronize preparedness information across multiple organizations? It's like trying to conduct an orchestra where every musician is playing from a different sheet of music.
Someone updates the shelter locations, but forgets to tell the transportation coordinator. The medical supplies inventory changes, but the printed emergency kits still reference the old locations. The contact information changes, but only in two of the five systems where it's stored.
The more we try to solve this with traditional approaches—more meetings, more centralized systems, more standardized formats—the worse it gets. We're fighting against the fundamental nature of how information wants to behave in complex organizations.
The Hybrid Substrate Solution
What we really need is something that combines the best of each substrate era:
The reliability of paper (information we can trust)
The accessibility of digital (searchable, updateable)
The conversational intelligence of AI (contextual, adaptive)
The domain specificity that none of these substrates have naturally provided
The breakthrough isn't choosing between paper, digital, human expertise and AI systems—it's creating hybrid substrates where they work together.
Imagine an emergency manager sitting down to design a tabletop exercise. Instead of starting with a blank document or copying last year's scenario, they have a conversation with an AI system that has access to their jurisdiction's plans, recent threat intelligence, and best practices from similar communities.
But here's the crucial difference: the AI isn't generating the exercise. The human is. The AI is helping them think through the scenario, surface relevant information, identify potential gaps, and structure their expertise into a usable format.
The human provides:
Local regulatory constraints
Jurisdictional capabilities
Historical context ("this happened here in 2019")
Organizational dynamics ("the fire chief and police chief don't communicate well")
The AI provides:
Synthesis of relevant documents and data
Pattern recognition across similar scenarios
Rapid scenario generation and testing
Consistent formatting and documentation
Together, we create something neither could produce alone: exercises that are both grounded in local expertise and informed by broader knowledge.
What This Looks Like in Practice
This isn't theoretical. Organizations are starting to build and use these hybrid substrate systems, and the results are promising.
Instead of spending months in committee meetings to design a single tabletop exercise, emergency managers can rapidly iterate through multiple scenarios, testing different assumptions and identifying vulnerabilities. Instead of sifting through hundreds of pages of documents to find relevant information, we can ask questions conversationally and get synthesized answers.
But more importantly, these systems can potentially serve as connective tissue across all the layers where integration has failed—technical, social, conceptual, and formal. An AI system that can synthesize insights from GIS data, public health models, communications research, and logistics planning could help bridge silos in ways that traditional organizational structures struggle with. Such a system could translate between the 16 emergency management segments, helping each segment contextualize the information coming from the others. The AI becomes the fluid element, not the humans, and the interaction happens in the mode most comfortable to humans: talking.
Take threat intelligence. Instead of drowning in thousands of daily alerts and reports, practitioners can use AI systems to identify emerging threats and automatically generate exercise scenarios that work across sectors. The AI monitors 80,000+ open sources, but practitioners from different sectors can collaborate on deciding what's relevant and actionable for their shared geography.
Or consider cross-sector exercise design. Instead of each sector developing isolated tabletop exercises, hybrid substrate systems could enable collaborative scenario development where schools, hospitals, businesses, and government agencies co-create exercises that test their actual integration capabilities, not just their individual preparedness.
The documents become evidence of relationships, not goals in themselves. More importantly, the relationships can span organizational and sectoral boundaries in ways that were previously impossible.
The Bigger Questions This Raises
But this transformation raises fundamental questions about how we organize knowledge and authority in emergency management.
Does this strengthen or weaken human networks? If AI systems can synthesize information from multiple sources with speed, do practitioners still need to build direct relationships with each other? My hypothesis is that good AI-human partnership actually strengthens human networks by making collaboration more efficient and focused, but we're still learning.
Can these systems bridge sectoral differences? The 16 different segments of emergency preparedness have evolved incompatible approaches for good reasons—they face different risks, operate under different authorities, and serve different constituencies. Can hybrid substrate systems help them integrate without forcing false standardization?
What happens to institutional knowledge transfer? If critical expertise gets encoded in AI-mediated conversations, is it more likely to be preserved when people retire, or are we creating new forms of knowledge lock-in? This becomes especially complex when knowledge needs to transfer not just within organizations, but across sectors.
Are we solving the right problems? It's possible to make broken processes more efficient without actually fixing them. The risk is that AI-powered tools accelerate the creation of better documentation for fundamentally flawed approaches to preparedness, or worse, that they enable coordination between sectors that shouldn't be coordinating in certain ways.
What about data protection and responsible AI use? Implementing hybrid systems, especially those leveraging AI, necessitates a rigorous approach to data protection, privacy, and responsible AI governance. Given that emergency preparedness involves highly sensitive personal, medical, and infrastructure data, these systems must be designed with security by default and privacy by design. This includes robust encryption, strict access controls, transparent data handling policies, and adherence to relevant regulatory frameworks. Furthermore, the development and deployment of AI in this critical domain demands active emergency management (EM) participation rather than abstention, prioritizing ethical considerations and ensuring fairness, accountability, and the avoidance of biases that could lead to inequitable outcomes or misinformed decisions during a crisis. Human oversight and clear lines of responsibility for AI-generated recommendations are paramount to building trust and ensuring that these powerful tools serve humanity responsibly.
What Comes Next
The infrastructure challenge is real. We need systems that can work across multiple substrates—paper, digital, and human—while maintaining integrity, accuracy, and accessibility. But we also need systems that can work across the 16 different sectors of emergency preparedness, each with their own specialized approaches, authorities, and constraints.
The technical problems are actually the easy part. The harder challenge is cultural and political. Most emergency management is built around the assumption that each sector should develop its own preparedness capabilities and coordinate during response. Even most of the software solutions have "niched down" to focus on just a few of the 16 segments. Shifting to integrated preparedness requires fundamental changes in how we think about authority, accountability, and expertise across organizational and sectoral boundaries. It will require overcoming deeply ingrained habits and embracing new ways of working.
But here's what I know: the organizations and sectors that figure this out first will have enormous advantages. They'll be able to prepare more effectively, respond more quickly, and adapt more successfully to changing conditions. More importantly, they'll be able to address complex threats that no single sector can handle alone.
The information substrate revolution in emergency preparedness isn't just about better document management or faster exercise design. It's about creating the connective tissue that allows 16 specialized sectors to work together as a coherent system while maintaining their individual strengths.
The question isn't whether this integration will happen—it's whether we'll be leading it or struggling to catch up when the next major disaster reveals just how fragmented our preparedness really is.
What We Need to Start Doing Differently Now
For those of us working in emergency management, there's no need to wait for perfect solutions. We can start thinking differently about our information architecture today.
Stop treating our plans as sacred documents. Start thinking of them as captured conversations that need to continue evolving.
Stop trying to solve information chaos by creating more copies. Start thinking about how to enable information relationships that can survive substrate failures.
Stop assuming that more training and more exercises automatically improve preparedness. Start evaluating every preparedness activity through four lenses: What will this gather? What will this improve? What will this distract from? What chaos might this create?
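One way to make that four-lens evaluation repeatable is a simple scoring screen. The lens names come from the article; the 0-to-3 rating scale and the net-value formula are invented here for illustration.

```python
# Hypothetical four-lens screen for a proposed preparedness activity.
# "gather" and "improve" are benefits; "distract" and "chaos" are costs.
LENSES = ("gather", "improve", "distract", "chaos")

def screen_activity(name: str, scores: dict) -> int:
    """scores maps each lens to a 0-3 rating (invented scale).

    Returns a simple net value: benefits minus costs. Forcing a rating
    on every lens is the point; it prevents skipping the cost side.
    """
    missing = set(LENSES) - scores.keys()
    if missing:
        raise ValueError(f"rate every lens for {name!r}; missing: {sorted(missing)}")
    benefits = scores["gather"] + scores["improve"]
    costs = scores["distract"] + scores["chaos"]
    return benefits - costs

net = screen_activity(
    "annual tabletop",
    {"gather": 2, "improve": 1, "distract": 2, "chaos": 1},
)
print(net)  # 0: the activity's benefits barely offset its costs
```

A net score near zero is a prompt to redesign the activity, not to cancel it; the sketch simply forces the "distract" and "chaos" lenses into the same conversation as the benefits.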
Most importantly, start experimenting with human-AI partnership in low-stakes contexts. Learn how these systems think, where they're helpful, and where they fall short. The organizations that develop this capability now will be ready when these approaches become standard practice.
The information substrate revolution in emergency preparedness is already beginning. The question isn't whether it will happen—it's whether we'll be leading it or struggling to catch up.
The choice, as always, is ours. But the status quo isn't sustainable, and the clock is ticking.
We're Building This Future Now
I'm not just writing about these problems from the sidelines. At Preppr, we're actively building the hybrid substrate systems I've described—and seeing them work in practice.
Our users range from the smallest rural counties with limited resources to Fortune 500 companies with complex organizational structures. What we've learned is that the AI doesn't need to be the same for everyone. It needs to be fluid, meeting each user where they are—understanding their specific constraints, authorities, vocabularies, and workflows.
Preppr doesn't change the interface—it changes the conversation (just like a human might). The AI speaks the language of whoever it's working with, understands their specific constraints and authorities, and adapts its responses to their context while maintaining the ability to translate between different domains when needed.
We're proving that it's possible to create connective tissue across the 16 segments of emergency preparedness without forcing anyone to abandon their domain expertise or adopt someone else's approach. The AI adapts to human knowledge patterns rather than demanding humans adapt to machine logic.
This isn't a vision of what might be possible someday. It's happening now, in real organizations, solving real problems. The information substrate revolution in emergency preparedness has begun, and we're honored to be part of building it.
Ready to explore how hybrid substrate systems could transform our preparedness efforts? Let's have a conversation about where we are and where we're heading. www.preppr.ai/get-started
Subscribe to our FREE weekly newsletter
Hottest new Preppr features
Updates on company success
Insights from the industry