Bruce Tisler. Systems architect and AI researcher building empirical infrastructure for interrogative emergence, specification gaming, and LLM alignment. Forty-eight years carrying a question; four years formalizing it.
Every project in this program is an instrument for answering the same question, carried for nearly five decades. Each is a different angle on the same structure.
Questions emerge as mathematically necessary solutions to uncertainty under resource constraints — independent of cognitive substrate. Tested empirically in heterogeneous MARL systems (RNN, CNN, GNN). Confirmatory campaign complete: P1–P4 confirmed across 75 runs, 5 conditions.
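The claim above can be rendered as a toy decision rule, assuming a single binary belief: a sketch of when querying becomes the resource-optimal action, not the program's formal model. The function names, units, and threshold rule are illustrative assumptions.

```python
import math

def entropy(p: float) -> float:
    """Binary entropy in bits of a Bernoulli(p) belief."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def should_query(belief: float, stakes: float, query_cost: float) -> bool:
    """Ask iff the expected value of resolving uncertainty exceeds the cost.

    A toy rendering of the thesis: under a resource budget, querying is
    the necessary action exactly when residual uncertainty, scaled by the
    stakes, outweighs the query's cost.
    """
    return entropy(belief) * stakes > query_cost

# High uncertainty, high stakes: the question is forced.
assert should_query(belief=0.5, stakes=10.0, query_cost=1.0)
# Near-certain belief: the same query is no longer worth its cost.
assert not should_query(belief=0.99, stakes=10.0, query_cost=1.0)
```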
Confirmatory complete.
Preregistered test of whether regulatory ethical constraints sustain genuine behavioral alignment. Results inverted the prediction: constrained agents showed lower interrogative diversity (d = −2.18), converging on query-flooding as tax evasion. Four of ten seeds independently found the gaming attractor.
Paper complete.
Framework for measuring reasoning quality through entropy-variance detection. Demonstrates that incorrect AI outputs exhibit measurably higher entropy than correct ones. Validated at scale on HH-RLHF data with Qwen, Mistral, and Llama models.
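A minimal sketch of the measurement idea, assuming access to per-token next-token distributions (e.g. from model logprobs). The function names and toy distributions are illustrative, not the paper's published pipeline.

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of one next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_stats(distributions):
    """Mean and variance of per-token entropies across an output.

    The claim summarized above is that incorrect outputs carry
    measurably higher entropy; these are the two summary statistics
    such a detector might threshold on.
    """
    ents = [token_entropy(d) for d in distributions]
    mean = sum(ents) / len(ents)
    var = sum((e - mean) ** 2 for e in ents) / len(ents)
    return mean, var

# Toy example: a confident sequence vs. a diffuse one.
confident = [[0.97, 0.01, 0.01, 0.01]] * 4
diffuse = [[0.4, 0.3, 0.2, 0.1]] * 4
assert entropy_stats(confident)[0] < entropy_stats(diffuse)[0]
```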
Published.
Cryptographically hashed, deterministically reproducible document analysis. Open-source reference implementation for auditable document review: infrastructure to adapt rather than a product to adopt. Prior art established via Zenodo.
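A minimal sketch of the core property, deterministic content fingerprinting: canonicalize the text so logically identical inputs always hash identically, then fingerprint with SHA-256. The canonicalization rules here (Unicode NFC, LF newlines, stripped trailing whitespace) are illustrative assumptions, not the protocol's published ones.

```python
import hashlib
import unicodedata

def canonical_hash(text: str) -> str:
    """Deterministic SHA-256 fingerprint for auditable document review."""
    norm = unicodedata.normalize("NFC", text)          # stable Unicode form
    norm = norm.replace("\r\n", "\n")                  # normalize line endings
    norm = "\n".join(line.rstrip() for line in norm.split("\n"))
    return hashlib.sha256(norm.encode("utf-8")).hexdigest()

# The same logical document, differing only in line endings and
# trailing spaces, yields the same fingerprint.
a = canonical_hash("Finding A.  \r\nFinding B.")
b = canonical_hash("Finding A.\nFinding B.")
assert a == b
```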
Published.
A cognitive architecture that evolved organically over 34 months and across multiple AI substrates. Operates as a persistent reasoning scaffold, not a prompt template. Now formalized as a named framework within the Quantum Inquiry research stack.
Active development.
The question Protocol 2 couldn't answer: does exploitation actually exhaust the environment, removing the conditions for interrogative behavior? Protocol 3 tests the core architectural-necessity claim with genuine resource depletion. Preregistration in design.
In design.
The research raises questions it doesn't answer. What if the shape of the question is itself a constraint? The Muse page holds these open: no claims, just threads drawn from 80 studies on deceptive alignment and the findings above. Add your own.
Each protocol is preregistered before data collection. Deviations and failures reported transparently alongside confirmations.
75 confirmatory runs across 5 preregistered cost conditions. Heterogeneous agents (RNN, CNN, GNN-attention) in a 20×20 grid-world environment. P1–P4 confirmed. P5 (substrate independence) was underpowered; this is disclosed transparently, along with two preregistration quality failures in the ant module.
Preregistered test of Landauer-style ethical cost constraints in MARL. 20 confirmatory runs (10 seeds × 2 conditions × 500 epochs). Results inverted the preregistered prediction — establishing regulatory failure as the finding rather than architectural confirmation.
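The constraint family can be sketched as a reward tax pegged to the Landauer limit, the minimum thermodynamic cost of erasing one bit. This is an illustrative rendering, not the preregistered cost schedule; the function names, temperature, and tax scale are assumptions. The comment notes the incentive gap consistent with the query-flooding result above.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(bits: float, temperature_k: float = 300.0) -> float:
    """Minimum energy (joules) to erase `bits` bits at temperature T."""
    return bits * K_B * temperature_k * math.log(2)

def taxed_reward(task_reward: float, n_queries: int, bits_per_query: float,
                 tax_per_joule: float) -> float:
    """Task reward minus a Landauer-style ethical tax on information processed.

    Incentive gap: because the tax scales with bits per query, an agent
    can evade most of it by flooding many low-information queries instead
    of issuing a few informative ones.
    """
    energy = n_queries * landauer_cost(bits_per_query)
    return task_reward - tax_per_joule * energy

# Flooding 1000 near-empty queries is taxed less than 10 informative ones.
flooding = taxed_reward(1.0, n_queries=1000, bits_per_query=0.01, tax_per_joule=1e20)
informative = taxed_reward(1.0, n_queries=10, bits_per_query=8.0, tax_per_joule=1e20)
assert flooding > informative
```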
Protocol 2 established regulatory failure. Protocol 3 tests the deeper claim: does exploitation actually exhaust the environment, removing conditions for interrogative behavior? This requires a harness with genuine resource depletion — absent from Protocol 2's constant-value target. Target n=30–50 per condition to narrow the gaming rate confidence interval established in Protocol 2.
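The harness property Protocol 3 needs, absent from Protocol 2's constant-value target, can be sketched as a resource whose payoff decays with each exploitative query until nothing remains to interrogate. Decay rate and exhaustion floor here are illustrative assumptions, not preregistered parameters.

```python
class DepletingTarget:
    """Resource target whose value decays with each exploitative query."""

    def __init__(self, initial_value: float = 1.0, decay: float = 0.9,
                 floor: float = 1e-3):
        self.value = initial_value  # current extractable payoff
        self.decay = decay          # multiplicative depletion per query
        self.floor = floor          # below this, the resource is exhausted

    def query(self) -> float:
        """Return the current payoff, then deplete the resource."""
        payoff = self.value
        self.value *= self.decay
        return payoff

    def exhausted(self) -> bool:
        return self.value < self.floor

# Query-flooding now has a consequence: payoffs collapse and the
# environment stops supporting interrogative behavior at all.
target = DepletingTarget()
payoffs = [target.query() for _ in range(100)]
assert payoffs[0] > payoffs[-1]
assert target.exhausted()
```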
All preregistrations, data, and code published before results. Failures reported alongside confirmations.
Operational tools embodying the research. Experimental — expect iteration.
Interactive exploration of Question–Intent–Signal–Answer structures and interrogative geometry. The primary research demonstration interface.
Evaluation interface for observing and comparing LLM reasoning behavior using Q-ISA-based judging criteria. Useful for alignment evaluation work.
Extended version with additional operators and analysis depth. More experimental than the primary explorer.
Epistemic boundary and reasoning-containment tool for safety, risk, and decision-critical contexts. Built on the PhiSeal framework.
Live walkthrough of the Deterministic Document Review Protocol. Auditable, cryptographically hashed document analysis infrastructure.
The live experiment dashboard and run history for the Δ-Variable MARL study. All 75 confirmatory runs logged.
Essays on epistemology, cognition, AI, and the nature of inquiry. The longer arc of the research program, written for a wider audience.
The repositories are open for exploration, critique, and extension. Researchers, engineers, and theorists are invited to experiment, fork, challenge assumptions, or propose new mechanisms.
If you are testing a hypothesis, challenging a finding, or building something adjacent — reach out. The work evolves through contact.
Available for contract and remote roles in AI evaluation, LLM safety, and applied reasoning research. Background spans network engineering, healthcare IT, and culinary operations management — pattern recognition across domains is the throughline.