Quantum Inquiry

Access to this material is discretionary and may be limited or removed at any time. If this work is relevant to your organization, please contact me directly at brucetisler@quantuminquiry.org.

Questions as Measurable Structures

HDT² treats inquiry structure as a first-class uncertainty object and uses entropy-band calibration to determine whether answering is epistemically conforming, analogous to selective prediction frameworks that guarantee bounded risk by abstaining outside calibrated regimes.
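The selective-prediction analogy can be made concrete with a minimal decision rule. This is a sketch, not the HDT² implementation: answer only when a measured entropy falls inside a calibrated band, abstain otherwise. The default band endpoints are hypothetical placeholders, not calibrated values.

```python
def should_answer(entropy, band=(0.2, 1.0)):
    """Selective-prediction style gate: answer only when the measured
    entropy lies inside the calibrated band; abstain otherwise.

    The default band is a hypothetical placeholder, not a value
    produced by any actual calibration run.
    """
    low, high = band
    return low <= entropy <= high
```

Abstaining outside the band is what bounds risk: errors concentrate in the uncalibrated regime, and the gate simply refuses to answer there.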

Central Hypothesis

Core Claim: Questions possess measurable internal structure that constrains inference processes prior to answer generation, independently of surface linguistic form.

Research Status Dashboard

This investigation asks a specific question: Do questions have measurable internal structure that constrains inference before answers are generated?

Not whether questions matter, or whether framing changes outcomes—those are known. The claim here is sharper: that there exist properties of interrogative inputs, invariant under paraphrase, that predictably alter downstream inference in ways independent of token statistics, prompt length, or standard framing controls.

This is a testable hypothesis. The Research Status Dashboard tracks exactly three things:

  • What's implemented: Working instruments, processed datasets, operational systems
  • What's observed: Recurring patterns across models—not yet validated against confounds
  • What's next: The decisive paraphrase-invariance experiment that will corroborate or falsify the claim
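One way such a paraphrase-invariance experiment could be harnessed is sketched below. The metric here is a deliberately crude placeholder: character-frequency Shannon entropy stands in for the real structural measure, which is not specified in this document. Only the harness logic is the point.

```python
import math
from statistics import pvariance

def toy_metric(question):
    """Placeholder structural score: character-frequency Shannon
    entropy of the question string. The real HDT² metric would
    replace this function; the harness logic is what matters."""
    counts = {}
    for ch in question.lower():
        counts[ch] = counts.get(ch, 0) + 1
    n = len(question)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def invariance_gap(paraphrase_sets):
    """Compare metric variance within paraphrase sets to variance
    across distinct questions. Paraphrase invariance predicts the
    within-set variance is much smaller than the between-set one."""
    within = [pvariance([toy_metric(q) for q in s]) for s in paraphrase_sets]
    means = [sum(toy_metric(q) for q in s) / len(s) for s in paraphrase_sets]
    between = pvariance(means)
    return sum(within) / len(within), between
```

If the hypothesis holds under a genuine structural metric, the within-set term should shrink toward zero while the between-set term stays large; with the toy metric above, no such separation is expected.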

Independent Research Program

Quantum Inquiry operates as an independent research initiative focused on measuring cognitive processes through entropy dynamics. The work spans theoretical epistemology, practical AI evaluation systems, and experimental validation through controlled computational experiments.

HDT² Framework

A patent-pending system for measuring reasoning quality through entropy variance detection. Demonstrates that incorrect AI outputs exhibit measurably higher entropy than correct ones across multiple language models.
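The quantity named above can be illustrated with a toy computation. The actual HDT² instrument is not reproduced here; the per-step distributions below are invented solely to show what "entropy variance" over a generation means.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (nats) of one next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_variance(step_distributions):
    """Variance of per-step entropies across a generated sequence.

    `step_distributions` is a list of next-token probability
    distributions, one per generated token. Higher variance is the
    signal the text above associates with unstable, often incorrect,
    outputs.
    """
    ents = [shannon_entropy(p) for p in step_distributions]
    mean = sum(ents) / len(ents)
    return sum((e - mean) ** 2 for e in ents) / len(ents)

# A confident, stable generation: near-deterministic at every step.
stable = [[0.97, 0.01, 0.01, 0.01]] * 4
# An unstable one: confidence swings between steps.
unstable = [[0.97, 0.01, 0.01, 0.01], [0.25, 0.25, 0.25, 0.25]] * 2
```

The stable sequence has zero entropy variance; the swinging one does not, which is the discrimination signal the framework builds on.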

Geometric Epistemology

Treats questions as mathematical objects with measurable properties. Develops formal instruments for analyzing question structure, collision dynamics, and field formation in inquiry spaces.

Experimental Validation

Systematic testing across Qwen, Mistral, and Llama models using controlled GPU infrastructure. All data, code, and results published openly for independent verification.

From Chain-of-Thought Unfaithfulness to Persona-Conditional Alignment

Connects empirical chain-of-thought unfaithfulness to a deployment-facing evaluation harness. Measures persona sensitivity, monitoring-cue conditionality, and intervention-based faithfulness to map where safety properties break under realistic prompt and serving-layer variation.

Theoretical Framework

The research develops several interconnected theoretical components, each addressing different aspects of measurable inquiry:

Ω-Δ-Φ-Ψ Cycle

A four-phase framework for analyzing how questions transform into knowledge through structured reasoning gates. Models the relationship between inquiry structure, transformation dynamics, and epistemic validation.

Entropy Band Calibration

Empirically validated method for establishing quality thresholds in AI reasoning. Uses Shannon entropy variance to distinguish between high-confidence and unstable outputs, achieving reliable discrimination between correct and incorrect responses.
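A minimal version of such a calibration step is sketched below, under the strong assumption that correct and incorrect groups separate cleanly; real data would require handling overlap rather than rejecting it. The entropy values in the example are invented.

```python
def calibrate_threshold(correct_entropies, incorrect_entropies):
    """Midpoint rule: place the threshold halfway between the highest
    entropy observed on correct outputs and the lowest observed on
    incorrect ones. Applies only when the two groups separate; real
    calibration would instead trade off the overlap region."""
    hi_correct = max(correct_entropies)
    lo_incorrect = min(incorrect_entropies)
    if hi_correct >= lo_incorrect:
        raise ValueError("groups overlap; midpoint rule does not apply")
    return (hi_correct + lo_incorrect) / 2
```

Outputs scoring below the threshold would be treated as high-confidence; those above it, as unstable and flagged for abstention or review.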

Question Collision Theory

Analyzes what happens when incompatible questions or reasoning patterns interact. Models the geometric dynamics of inquiry conflicts and the formation of structured question fields.

Reflective Architecture

Multi-agent systems designed for adversarial validation and epistemic stress-testing. Implements specialized roles for challenge generation, critique synthesis, and meta-level evaluation.


Live Demonstrations

Operational, interactive tools that embody HDT² concepts, inquiry geometry, and cognitive boundary design. These demos are live systems used to explore, test, and practice structured reasoning in real time.

Q-ISA Explorer

Interactive exploration of Question–Intent–Signal–Answer structures and interrogative geometry.

Launch Demo →

Q-ISA Explorer v160

Extended and experimental version of the Q-ISA Explorer with additional operators and analysis depth.

Launch Demo →

Q-ISA LLM Judge Explorer

Evaluation interface for observing and comparing LLM reasoning behavior using Q-ISA-based judging criteria.

Launch Demo →

Φ-SEAL GPT (Phi-SEAL GPT)

A live epistemic boundary and reasoning-containment tool designed for safety, risk, and decision-critical contexts.

Launch Demo →

Generic Dyslexic-Aware Tutor

A cognitive practice harness supporting dyslexic and non-linear thinkers in learning and interview-style reasoning.

Launch Demo →

Note: These are experimental research tools. Expect rough edges, incomplete features, and ongoing iteration.

Open Invitation to Researchers and Collaborators

The Quantum Inquiry repositories are open for exploration, experimentation, and critique. Each project—HDT², Edos, HSIQ Reflector GPT, the diagnostic tools, and the emerging governance layers—is published in the spirit of transparent development: ideas under pressure, systems in motion, reasoning exposed rather than concealed.

Researchers, engineers, theorists, and curious builders are invited to:

  • Experiment with any of the existing systems
  • Fork, extend, or reconfigure the architectures
  • Propose new mechanisms grounded in HDT² or adjacent to it
  • Develop original research that draws on the theory, protocols, or reflective stacks
  • Discuss, critique, or refine any part of the work, whether conceptual, technical, or philosophical

No contribution is too small or too exploratory. Inquiry evolves through contact, and these repositories exist as a shared space for that contact.

If you are testing a hypothesis, challenging an assumption, iterating on a failure mode, or building something unexpected, you are welcome here. The only requirement is epistemic honesty: document what you try, show what breaks, and let the work speak in its unfinished form.

For discussion, collaboration, or research alignment, open an issue, start a thread, or reach out through the channels provided. This is an open lab. If the work moves you, you are already part of it.