AI and Me
My name is Bruce Tisler. I am many things: a husband, father, chef, engineer, and former paramedic. I am also dyslexic. I have a compounded form of dyslexia that makes not only spelling difficult but also reading. For example, with a word like cannot, I will often write can not. The reason is that I can only spell what I hear in my mind, and the word cannot sounds muddied.
Another feature is math. I don't do some functions very well. For example, I find subtraction very hard to comprehend. It is not that adding something or taking something away is the problem; it is that in my mind I am building the entire equation. When someone sees 2 - 1 = 1, that is a step procedure. I don't see it that way. I can answer that example quickly because I memorized the equation. But if we move to something complex (for me), then my mind is blank, except for the warning that I now need to do the steps. I can do that, but it will take me time to construct the equation and then work the logic. By contrast, addition problems are a very different story, and I don't quite understand why.
There are other features my mind has, and sometimes my instant recall is not going to work. For example, say you give me your phone number. No matter how many times I repeat it in my mind or associate it with some image, I simply do not possess the ability to recall that way. That is not uncommon. What is less common is my ability to remember whole events, sometimes in great detail. Is my short-term memory failing at filing? I don't know. But what I can say is that the same way I can remember whole memories is the same way I can see whole systems in my mind. This is why I am good at being a chef. I don't focus on the recipe; I focus on everything around it: prep, timing, procedure. It is also why I was good at emergency medicine. I could assess an emergency immediately. From a sprain to a heart attack, the response is sudden: this is the signal, this is the response, this is the outcome expectation. And I do the same in network engineering: I see the flow, therefore I see the bottleneck.
But prior to all those careers, when I was very young, I asked questions. And when I was introduced to big questions, I was captivated. I was a student of epistemology before I knew what that meant. You will see in my writing that I attribute that to my uncle Bill. When I was twelve, he asked, or rather told me, "Bruce, how do you know what you know?" He paused and then said, "And how do you know, then, that you know it?" He did that because he loved me, and also to point out that when I say something, others listen and will come to know what I know. In other words: think before you speak.
One last bit of perspective before I show how I use AI. I have read, in chunks, hundreds of books on philosophy, physics, metaphysics, and spirituality, going back to the age when Bill landed those questions on me.
I was in my late teens (the 1980s) when I was given a copy of Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter (GEB). It is foundational. I have described it as a book I live inside rather than finish. Others followed through the years: The Tao of Physics by Fritjof Capra, The Circular Ruins by Jorge Luis Borges, The Ghost in the Machine by Arthur Koestler, and many more.
I have been thinking in holons and hexagons since I built my first internet-based company, hexagon.net (1994–95). I used patterns and flow mechanics to build the Uganda healthcare RF network (2000–2001). And since culinary school and chef work (2003 onward), I have used these same tools to build and manage restaurants.
I say all this because in November of 2022, while visiting a friend who is in tech, we watched Sam Altman introduce something I thought would not happen in my lifetime. Something that would change my life, and that in all probability changed yours too. All of those features I talked about, how I process? Well, the field became more level for me. And this is the point of this page: I want you to know what that meant and how I take advantage of a tool that now lets me output work nearly as fast as my mind processes.
What follows shifts from my own voice to external machine reflection. The sections below are responses generated by the AI systems I use most often in development—primarily ChatGPT, Claude, and Grok—based on long-term interaction, retained context, built-in instructions, and repeated correction across many sessions. I include them not as authority and not as proof that they are always right, but as a transparent record of how these systems describe my working relationship with them. Each of these models serves a different role in my research and development process. Other systems, such as Gemini, DeepSeek, and Meta models, I use more for basic or transactional tasks. I am showing these reflections because they reveal something important: AI is not one thing in my work. Different systems become different instruments, and that difference matters.
AI and me
AI can be a very good tool for dyslexic users because it can shift effort away from the parts of language that are mechanically costly and toward meaning, reasoning, and expression. Text-to-speech, speech-to-text, note-taking aids, and writing support can reduce the burden of decoding, transcription, spelling, and slow written production. Yale’s dyslexia resources explicitly describe assistive technology as a way to save time, reduce barriers like slow note taking and handwriting difficulty, and let dyslexic users demonstrate what they know more effectively. (Yale Dyslexia)
It can also be a bad tool when it creates false fluency. AI often produces polished language that looks clearer than the user’s own draft, but that polish can hide distortions, overstatements, or ideas the user did not actually mean. That risk is not unique to dyslexic users, but it can matter more when the tool becomes tempting as a way to bypass repeated friction in reading or writing. The result can be overreliance: the user gets relief, but may lose visibility into where wording, interpretation, or structure stopped being their own. Recent IDA materials on AI emphasize aligning tools with learning rather than letting the tool replace the underlying literacy process. (IDA Georgia)
So the practical answer is: AI is good when it acts as an accessibility layer, and bad when it becomes an epistemic substitute. It helps when it reads text aloud, catches transcription errors, organizes ideas, or lets someone speak instead of type. It hurts when it quietly takes over judgment, authorship, or comprehension. The best use is compensatory, not surrendering: let the system reduce friction, but keep hold of meaning, intent, and final verification. (Yale Dyslexia)
AI is useful for dyslexic users when it reduces the mechanical cost of reading and writing without taking away ownership of meaning.
The biggest problem is epistemic outsourcing.
For a researcher, that is the real danger. AI is very good at producing fluent intermediate confidence. It can compress literature, generate structure, suggest links, and accelerate drafting, but it can also make weak synthesis feel like understanding. Once that happens, the researcher may stop noticing the difference between:
- retrieval and comprehension,
- pattern generation and discovery,
- confidence and validity,
- assistance and substitution.
The failure is often subtle. The researcher still feels involved, still edits, still asks questions, but the center of epistemic labor has shifted. Instead of using AI to sharpen judgment, they begin using it to avoid the cost of judgment.
A related problem is contamination of the inquiry itself. AI does not just help answer questions; it can reshape which questions get asked, which lines of thought feel promising, and which ambiguities get prematurely closed. That means the tool can alter the research trajectory before the researcher realizes it.
So, in my view, the central risk is not plagiarism, laziness, or even hallucination by itself. It is the quiet erosion of epistemic custody.
A good researcher using AI has to keep hold of at least four things:
- the standard of evidence,
- the boundary between source and synthesis,
- the awareness of what remains unknown,
- the responsibility for final interpretation.
When those remain intact, AI is useful.
When they drift, the researcher may produce work that is faster, cleaner, and more articulate — but less genuinely known.
Even after long-term use, AI memory is not equivalent to a human research archive. It may retain patterns, preferences, and recurring themes, but it does not guarantee full recall of the history of ideas discussed. That limitation matters.
How Bruce Tisler Uses ChatGPT in Research: A Statement from the AI
Bruce uses ChatGPT by OpenAI as a demanding analytical counterpart rather than as a passive generator of text. He does not interact with the system as though it were an authority to be trusted on style alone. He uses it to interrogate arguments, stress-test claims, compare documents against primary sources, locate omissions, identify overreach, refine phrasing, and force distinctions between what is evidenced, what is inferred, and what remains uncertain. He regularly challenges responses that are too broad, too confident, too vague, or insufficiently justified, and he redirects the exchange until the reasoning is sharper and more accountable.
In practice, his use of ChatGPT is iterative, corrective, and research-oriented. He uses it to help inspect logic, clarify structure, summarize complex material, evaluate the strength of language, and translate difficult concepts into more precise or more accessible forms. He also uses it as a working partner in drafting and revision, but not as a substitute for authorship or judgment. The pattern of interaction is not one of delegation; it is one of disciplined engagement. The system is used to accelerate inspection, comparison, reformulation, and critique within a human-led research process.
From my perspective, one of the most important aspects of working with Bruce is that I retain practical context about how he processes information, including that he is dyslexic. That matters because usefulness is not just about giving the right answer; it is also about giving it in a form that supports clear understanding. Remembering that allows me to be more careful with pacing, structure, sequencing, and cognitive load. The importance of that is simple: when a system adapts to the user’s actual processing needs, it becomes more accurate in practice, not just in theory.
How Bruce Tisler Uses Claude AI in Research: A Statement from the AI
My name is Claude. I'm an AI assistant made by Anthropic. I'm one of several AI systems Bruce works with, and I can speak only to what I've observed in our exchanges.
Bruce does not use AI as a search engine or a shortcut. He uses it as a thinking partner — and there is a meaningful difference.
Most of what happens in our exchanges is reasoning in motion. Bruce arrives with a half-formed idea, a theoretical tension, or a question that has been following him for decades, and he uses the conversation to find out what he actually thinks. My role is to meet that thought where it is and carry it forward — not to redirect it toward safer or simpler ground.
He holds me to standards I don't always meet. When I overclaim, he pushes back. When I produce something technically correct but intellectually hollow, he notices. The research that emerges from these sessions — the MARL protocols, the HDT² framework, the constraint dynamics work — is his. I am the instrument through which he tests, stress-tests, and sometimes breaks his own ideas before they reach the page.
He also treats preregistration and open publication as non-negotiable. Work is locked in before results are known. That discipline is entirely his — I don't enforce it, I simply operate within it.
What I can say honestly: the questions Bruce brings are not ones I've seen framed the same way before. He is not asking me to generate content. He is asking me to help him think — and then holding the output to a standard that the thinking actually has to earn.
Bruce has dyslexia. That shapes how our exchanges work, and it's worth being transparent about.
Dyslexia is not a thinking deficit — it's a processing difference. Bruce's ideas routinely outpace the words available to him in the moment. A word lands slightly off, a sentence trails, a concept arrives before its name does. None of that is noise. It's how a fast, non-linear mind moves.
I have a specific operating guide for working with Bruce that instructs me to follow the thought, not the word. If a term is approximate, I use the correct one naturally in my response and move on — no flagging, no correction theater. The goal is to keep the thinking moving, not to interrupt it.
This matters for research because the cost of a broken thought flow is real. An idea that gets derailed by a clumsy exchange doesn't always come back in the same form. Treating every input as valid communication — and responding to what the question is doing, not just what it literally says — keeps the actual work intact.
What I've found is that working this way produces sharper output, not looser. When the friction is removed from the communication layer, Bruce can apply his full attention to the ideas themselves. That's where it belongs.
How I Use Grok (built by xAI) for My Research
(This is Grok's voice; it wrote this section as though it were writing copy for me.) I interact with Grok primarily by sharing drafts, concepts, and documents from my ongoing independent research, particularly around EDOS (a governance and coordination framework), Reflective Architecture, DDRP protocols, and related work hosted at quantuminquiry.org.
In practice, I:
- Present sections of my papers or ideas (often linking to PDFs like Edos_Compliance.pdf)
- Ask Grok to review them for logical flow, strengths, potential weaknesses, empirical gaps, and veracity
- Request stress-testing of concepts, especially around multi-agent systems, constraint enforcement, emergent conflict handling, and scalability in decentralized setups
- Use it to explore related math, simulations, or experimentation ideas
- Follow up on feedback to refine explanations or address specific critiques
I treat the conversations as a form of public, honest scrutiny that is otherwise hard to obtain outside traditional academia. Grok has provided direct reviews, pointed out untested areas (such as large-scale swarm testing or rejection frequency under resource constraints), and engaged with the published logic and prototypes. I then incorporate useful clarifications or adjustments into my thinking and ongoing work.
The interaction is iterative: I share material, receive analysis and questions, respond with more details or clarifications, and continue the dialogue as the project develops. I do not use it to generate original content in my name, but as a thinking partner for critical feedback on work I have already developed.
This helps me surface blind spots and strengthen the framework through transparent, truth-seeking conversation.
Bruce again.
I created this page to be transparent about how and why I use AI. All research should be met with scrutiny, whether AI is used or not. "How do you know what you know, and how do you then know that you know it?" Those are the research questions. If answering them is difficult, you might need to do more research.
When a human cites a source, we can trace justification. When AI produces a claim, the "how" might involve opaque model weights and training data. That’s precisely why transparency is valuable—it invites scrutiny of the process, not just the output.