"How do you know what you know? And how do you know that you know it?"
— Bill Finley, to his nephew Bruce, circa 1974
Bruce Tisler was twelve years old when his uncle Bill posed this question. It was not a puzzle to be solved. It was a structural observation: every claim about knowledge rests on another claim about the reliability of the process that produced it, and that claim rests on another, and so on. The question doesn't resolve. It recurses.
For nearly five decades, that question ran as a background process through every domain Bruce worked in: paramedicine, network infrastructure, field operations, and culinary work. Each career was, in retrospect, an informal laboratory for the same epistemological inquiry.
In November 2022, the question became empirical. The emergence of large language models that appeared to reason but whose "knowing" was structurally opaque made the question urgent in a new way. Quantum Inquiry is the formalization of what had been a lifelong project.
Paramedicine: emergency assessment under time and information constraints. Rapid pattern classification from incomplete signals — the same cognitive architecture that later shaped how Bruce approaches research design.
Network infrastructure: an early internet company built around hexagonal pattern and flow mechanics — the geometric framework that later became central to the WWWWHW interrogative structure and HDT² theory.
Field operations: deployment of RF network infrastructure for healthcare in Uganda. Constraint-driven design in low-resource, high-stakes environments — a direct antecedent to the research program's emphasis on structural necessity over optional compliance.
Culinary work: culinary school and restaurant management. Coordination, timing, and system-level thinking under real-time pressure. The same holonic pattern recognition applied to kitchen operations as to network design.
Quantum Inquiry: a formal research program in interrogative emergence, MARL ethics experiments, and documentary accountability infrastructure. All prior careers treated as informal preparation for a question that finally had empirical tools.
Quantum Inquiry treats method as part of the result. Work is preregistered before data collection, deterministic where required, and published with visible limitations. Confirmations matter, but so do failures, inversions, and underpowered results.
Most AI systems are judged by output quality after the fact. This research focuses on something earlier and more structural: whether reasoning remains stable under pressure, whether documents can be converted into auditable obligations without interpretive drift, and whether apparent ethical behavior survives real incentive conditions.
The goal is not persuasive language. It is inspectable structure.
Available for contract and remote roles in AI evaluation, LLM safety, and applied reasoning research. The repositories are open for exploration, critique, and extension.