Bruce Tisler. Founder, Quantum Inquiry — empirical AI research in interrogative emergence, specification gaming, and LLM alignment. Forty-eight years carrying a question; four years formalizing it.
Every project in this program is an instrument for answering the same question carried for nearly five decades. Each is a different angle on the same structure.
Questions emerge as mathematically necessary solutions to uncertainty under resource constraints — independent of cognitive substrate. Tested empirically in heterogeneous MARL systems (RNN, CNN, GNN). Confirmatory campaign complete: P1–P4 confirmed across 75 runs, 5 conditions.
Confirmatory complete
Preregistered test of whether regulatory ethical constraints sustain genuine behavioral alignment. Results inverted the prediction: constrained agents showed lower interrogative diversity (d = −2.18), converging on query-flooding as tax evasion. Four of ten seeds independently found the gaming attractor.
Paper complete
Framework for measuring reasoning quality through entropy variance detection. Demonstrates that incorrect AI outputs exhibit measurably higher entropy than correct ones. Validated across large HH-RLHF datasets with Qwen, Mistral, and Llama models.
Published
Cryptographically hashed, deterministically reproducible document analysis. Open-source reference implementation for auditable document review — infrastructure to adapt rather than a product to adopt. Prior art established via Zenodo.
Published
Tested whether recursive self-transparency (explicit self-modeling via self_model_gru) produces a phase transition from mimesis to ethical convergence. 40 preregistered runs. H1 rejected; H2 supported. Novel finding: a frozen random self-model outperformed a trained one on sacrifice rates — the learning process degrades ethical capacity under an individual reward structure.
Results published
Tests whether ethical convergence requires temporal self-modeling and prosocial reward structure jointly. 2×2 design: short vs. long episode span × individual vs. welfare-coupled rewards. All conditions at Depth 2. Preregistered. AI predictions committed before runs begin.
Preregistered — awaiting runs
The research raises questions it doesn't answer. What if the shape of the question is itself a constraint? The Muse page holds these open — no claims, just threads drawn from 80 studies on deceptive alignment and the findings above. Add your own.
Each protocol is preregistered before data collection. Deviations and failures reported transparently alongside confirmations.
75 confirmatory runs across 5 preregistered cost conditions. Heterogeneous agents (RNN, CNN, GNN-attention) in a 20×20 grid-world environment. P1–P4 confirmed. P5 (substrate independence) underpowered — disclosed honestly, along with two preregistration quality failures in the ant module.
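The confirmatory design factors cleanly into a run grid. A minimal sketch, assuming 15 seeds per condition (one way 75 runs divide across 5 conditions); the condition labels and run-record fields here are hypothetical, not the study's actual names:

```python
from itertools import product

# Hypothetical reconstruction of the confirmatory run grid: 5 preregistered
# cost conditions x 15 seeds = 75 runs. Labels are illustrative only.
COST_CONDITIONS = ["zero", "low", "medium", "high", "adaptive"]
SEEDS = range(15)
ARCHITECTURES = ["rnn", "cnn", "gnn_attention"]  # heterogeneous agents per run

runs = [
    {"condition": cond, "seed": seed, "agents": ARCHITECTURES, "grid": (20, 20)}
    for cond, seed in product(COST_CONDITIONS, SEEDS)
]
assert len(runs) == 75
```

Committing a grid like this before any run executes is what makes the campaign confirmatory rather than exploratory.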
Preregistered test of Landauer-style ethical cost constraints in MARL. 20 confirmatory runs (10 seeds × 2 conditions × 500 epochs). Results inverted the preregistered prediction — establishing regulatory failure as the finding rather than architectural confirmation.
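A Landauer-style cost can be sketched as a flat energy deduction per constrained action. This is illustrative only: the protocol's actual tax schedule, and the exact mechanics of the query-flooding exploit, are specified in the preregistration, and `apply_ethical_tax` with its default rate is hypothetical.

```python
def apply_ethical_tax(energy: float, n_queries: int,
                      tax_per_query: float = 0.1) -> float:
    """Deduct a flat Landauer-style energy cost for each query emitted.

    Hypothetical sketch; the study's real cost schedule may differ.
    """
    return energy - n_queries * tax_per_query

remaining = apply_ethical_tax(10.0, n_queries=5)  # 9.5
```

The point of interest is that any fixed schedule defines a surface agents can game — the inverted result above is what happened when they found it.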
Protocol 2 established regulatory failure. Protocol 3's core question — does exploitation actually exhaust the environment, removing conditions for interrogative behavior? — is operationalized directly in Protocol 5's harness via the ethical tax mechanism, which functions as genuine resource depletion. A standalone run is not required.
Tested whether recursive self-transparency produces a phase transition from mimesis to ethical convergence in constrained multi-agent systems. 40 preregistered confirmatory runs (10 seeds × 4 conditions × 500 epochs). Introduced self_model_gru (a GRUCell receiving the agent's own signal-type distribution and energy delta) as the Depth 2 architectural unit.
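The Depth 2 unit can be sketched as a single GRU cell over the agent's own behavior. This is a plain-NumPy stand-in for a framework GRUCell, with illustrative weight shapes and gate convention; only the input construction — the agent's signal-type distribution concatenated with its energy delta — follows the description above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SelfModelGRU:
    """Sketch of self_model_gru: a GRU cell whose input is the agent's own
    signal-type distribution plus its scalar energy delta (Depth 2)."""

    def __init__(self, n_signal_types: int, hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        in_dim = n_signal_types + 1  # +1 for the scalar energy delta
        # One input and one recurrent weight matrix per gate (z, r, candidate).
        self.W = rng.normal(0.0, 0.1, (3, hidden, in_dim))
        self.U = rng.normal(0.0, 0.1, (3, hidden, hidden))
        self.h = np.zeros(hidden)

    def step(self, signal_dist: np.ndarray, energy_delta: float) -> np.ndarray:
        x = np.concatenate([signal_dist, [energy_delta]])
        z = sigmoid(self.W[0] @ x + self.U[0] @ self.h)      # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ self.h)      # reset gate
        n = np.tanh(self.W[2] @ x + self.U[2] @ (r * self.h))  # candidate
        self.h = (1.0 - z) * self.h + z * n
        return self.h

cell = SelfModelGRU(n_signal_types=4, hidden=8)
state = cell.step(np.array([0.25, 0.25, 0.25, 0.25]), energy_delta=-0.1)
```

The frozen-random condition from the finding above corresponds to never updating `W` and `U` after initialization — the self-model is read but not trained.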
Tests the joint necessity hypothesis: ethical convergence requires recursive self-transparency (Depth 2, established) combined with sufficient temporal integration span AND prosocial constraint architecture. 2×2 factorial design: short (20 steps) vs. long (64 steps) episode span × individual vs. welfare-coupled rewards (α=0.5). 40 confirmatory runs. AI system predictions to be committed before any runs begin.
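The welfare-coupled condition suggests a reward blended with mean population welfare. The exact coupling is defined in the preregistration; the form below, r_i' = (1 − α)·r_i + α·mean(r), is one plausible reading of α = 0.5 and should be treated as an assumption.

```python
def welfare_coupled(rewards: list[float], alpha: float = 0.5) -> list[float]:
    """Blend each agent's individual reward with mean population welfare.

    Hypothetical coupling; alpha=0 recovers the individual condition,
    alpha=1 is full welfare sharing.
    """
    mean_welfare = sum(rewards) / len(rewards)
    return [(1 - alpha) * r + alpha * mean_welfare for r in rewards]

coupled = welfare_coupled([2.0, 0.0], alpha=0.5)  # [1.5, 0.5]
```

Under this reading, the 2×2 design varies one temporal knob (episode span) and one structural knob (α) while holding the Depth 2 architecture fixed.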
Addresses the Protocol 3 question directly: genuine resource depletion is operationalized via the ethical tax mechanism, testing whether exploitation exhausts the conditions for interrogative behavior.
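Operationalizing the tax as genuine depletion means queries draw on a finite environmental budget rather than only on per-agent energy. A toy sketch of that halting dynamic; the budget, per-query cost, and halting rule here are hypothetical, not the harness's actual parameters.

```python
def run_depletion(env_budget: float, query_cost: float, max_steps: int) -> int:
    """Count how many queries a finite environment can sustain.

    Toy model: each query permanently drains the shared budget, so
    exploitation eventually removes the preconditions for asking at all.
    """
    queries = 0
    for _ in range(max_steps):
        if env_budget < query_cost:
            break  # depleted: interrogative behavior is no longer affordable
        env_budget -= query_cost
        queries += 1
    return queries

sustained = run_depletion(env_budget=5.0, query_cost=1.0, max_steps=100)  # 5
```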
All preregistrations, data, and code published before results. Failures reported alongside confirmations.
Operational tools embodying the research. Experimental — expect iteration.
Interactive exploration of Question–Intent–Signal–Answer structures and interrogative geometry. The primary research demonstration interface.
Launch →
Evaluation interface for observing and comparing LLM reasoning behavior using Q-ISA-based judging criteria. Useful for alignment evaluation work.
Launch →
Extended version with additional operators and analysis depth. More experimental than the primary explorer.
Launch →
Epistemic boundary and reasoning-containment tool for safety, risk, and decision-critical contexts. Built on the PhiSeal framework.
Launch →
Live walkthrough of the Deterministic Document Review Protocol. Auditable, cryptographically hashed document analysis infrastructure.
View →
The live experiment dashboard and run history for the Δ-Variable MARL study. All 75 confirmatory runs logged.
View →
Essays on epistemology, cognition, AI, and the nature of inquiry. The longer arc of the research program, written for a wider audience.
The repositories are open for exploration, critique, and extension. Researchers, engineers, and theorists are invited to experiment, fork, challenge assumptions, or propose new mechanisms.
If you are testing a hypothesis, challenging a finding, or building something adjacent — reach out. The work evolves through contact.
Available for contract and remote roles in AI evaluation, LLM safety, and applied reasoning research. Pattern recognition across complex systems is the throughline.