
Through A Nightmare, Darkly: The Limits of the Human Perception of AI

  • Writer: Edan Harr
  • Apr 27
  • 6 min read

"For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known."

1 Corinthians 13:12.


The biblical passage above captures the classic human limitation of partial perception: we perceive ultimate reality imperfectly, as if looking through clouded glass. While there is a Truth with a capital T out there, we can only build our methodologies around partial glimpses, understanding the pieces we can apply to our own lives without being able to relate them to the larger whole. The original Greek term "ainigma" (translated as "darkly") literally means "in a riddle," suggesting that reality is a puzzle requiring careful interpretation and discernment. What makes this verse so intriguing to me - and especially relevant for our current technological age - isn't just its recognition of human limitations but its suggestion that our partial understanding isn't permanent, that partial knowing isn't the final word on what we might eventually comprehend. The apostle Paul isn't lamenting this condition so much as framing it as a temporary state, using the language of the "face to face" encounter to suggest an eventual directness of perception once the barriers between the perceiver and the perceived dissolve. And the reciprocal nature of the final phrase, "know even as also I am known," suggests that true knowledge involves being known by what we seek to know - that complete understanding of truth is relational rather than merely objective.


Our current understanding of AI systems, as users, is precisely this.


Both as users and as programmers, humans face a clouded vision of artificial intelligence: inherent cognitive limitations prevent us from fully comprehending the systems we create and use. Consider how we interact with large language models. As users, we generally know the inputs we provide and the outputs we receive - we might give a pattern like 2, 4, 6 and expect it to continue with 8 - yet we cannot fully articulate the specific weightings and connections that drive the model's decision between those points. Is it following 6 + 2? 2 + 2 + 2 + 2? 2 × 2 × 2? A telling example of this clouded vision is the baffling phenomenon researchers have observed of LLMs getting dumber in certain respects despite more extensive training and architectural improvements. We don't understand why this is happening. One of our best guesses is that we're feeding these systems increasingly polluted streams of unstructured human data, saturated with our biases, intellectual shortcuts, low-quality information, and overall laziness.
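
To make the ambiguity concrete, here is a toy illustration of my own (not anything extracted from an actual model): several hand-written rules that all continue 2, 4, 6 with 8, yet diverge on other sequences. Observing one input and output alone cannot tell us which computation, if any of these, was actually followed.

```python
# Each candidate rule maps the prompt [2, 4, 6] to 8, yet the rules diverge
# on other inputs - so a single input/output pair can't reveal which rule
# (if any of these) produced the answer.

candidate_rules = {
    "add 2 to the last term":      lambda seq: seq[-1] + 2,
    "add the first difference":    lambda seq: seq[-1] + (seq[1] - seq[0]),
    "take the next multiple of 2": lambda seq: 2 * (len(seq) + 1),
    "take the next power of 2":    lambda seq: 2 ** len(seq),
}

for name, rule in candidate_rules.items():
    print(f"{name:28s} -> [2, 4, 6] continues with {rule([2, 4, 6])}, "
          f"but [3, 6, 9] continues with {rule([3, 6, 9])}")
```

All four rules print 8 for the original prompt; on [3, 6, 9] they scatter to 11, 12, 8, and 8, which is the whole point - agreement on the observed case hides disagreement everywhere else.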


I was reminded recently, on Kane Simms' VUX World podcast, that we deploy AI specifically because it detects patterns in that unstructured human data that are incomprehensible to human perception - an AI analyzing thousands of running samples can determine the subtle boundary between a fast walk and a slow run in ways no human-designed ruleset could achieve. There is no number of rules a human could manually determine or hardcode to get the same result. There is irony in this arrangement - we cannot independently verify these distinctions because our brains literally cannot process the same volume of data. And as pattern-completion systems, AI models generate plausible explanations of the rules they supposedly used to reach a conclusion, explanations that fit our expectations rather than revealing the true computational path, allowing us to "know in part" without seeing the larger whole. I suspect that the question of why these same pattern-recognition capabilities sometimes deteriorate rather than improve with additional training may itself be one such incomprehensible pattern.
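
As a rough, purely hypothetical sketch of that idea - synthetic data and made-up gait features, nothing from the podcast's actual system - a model trained on thousands of samples can carve out a boundary that a single hand-coded speed threshold misses:

```python
# Toy walk-vs-run example: the "true" label depends on a combination of
# features, so a learned model outperforms one hand-picked speed cutoff.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic gait features: speed (m/s), cadence (steps/min), flight ratio.
speed = rng.uniform(1.0, 3.5, n)
cadence = rng.normal(120 + 25 * (speed - 1.0), 10)
flight = rng.normal(0.05 * (speed - 1.5), 0.04)

# Ground truth mixes several features rather than speed alone.
is_run = ((flight > 0.02) & (cadence > 140)) | (speed > 3.0)

X = np.column_stack([speed, cadence, flight])
X_tr, X_te, y_tr, y_te = train_test_split(X, is_run, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("learned boundary accuracy: ", model.score(X_te, y_te))

# Hand-coded rule: "anything faster than 2.2 m/s is a run."
print("single-threshold accuracy:", np.mean((X_te[:, 0] > 2.2) == y_te))
```

The gap between the two printed numbers is the gap the podcast was describing: the learned boundary lives in a space of correlations no single human-written rule captures.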


Image: A young woman showing that AIS Engage returns an 82% satisfaction rate with a relative conversation length of 63%.

A more modern example extends this limited comprehension into another dimension entirely - we cannot, at this time, grasp what AI will ultimately become most useful for. There are several scenes in Aaron Sorkin's The Social Network where characters repeatedly assert, "You don't even know what the thing is yet," regarding Facebook's potential direction. Defining the platform too early would have closed the doors on its eventual evolution into a communication platform, marketplace, and digital identity system. This principle applies directly to today's AI development. Conventional product development typically demands a narrow target market (the business principle that "when you sell to everyone, you sell to no one"), yet early social networks that defined themselves narrowly (such as Friendster, which focused only on dating connections) ultimately restricted their growth potential. Similarly, we find ourselves in a sort of AI Wild West - without rigid frameworks prematurely constraining development, innovators can discover applications that address problems we've struggled with for generations. This openness creates space for AI to evolve organically toward its most beneficial applications, letting us discover its most valuable contributions to society rather than prescribing them in advance.


OpenAI's Sam Altman has spoken candidly about being surprised by the capabilities of the company's own systems. Large language models are, in fact, large, and they develop emergent capabilities at certain parameter thresholds that simply cannot be predicted from smaller models. Abilities like chain-of-thought reasoning or understanding analogies appear seemingly out of nowhere at specific scales, and the distributed representations in these neural networks create knowledge connections that weren't explicitly encoded but emerge from statistical patterns across the training corpus. Contrary to popular belief, this technical reality isn't entirely without precedent. Pharmaceutical companies regularly develop potentially life-saving drugs whose full mechanism of action isn't completely understood before clinical use. Social media platforms launch social-engineering features to drive engagement without fully understanding their psychological impacts.


To continue this line of thinking, we have to recognize that it isn't only knowledge beyond human comprehension that furthers the parallel - we increasingly accept and use outputs that we, as individuals, cannot fully understand. We access capabilities without acquiring skills, relying on outputs whose quality we often cannot independently verify. In our daily interactions with AI, we casually translate languages we don't speak, generate code we couldn't write, and craft presentations using principles we couldn't articulate. Competence no longer requires comprehension. Most people would compare this rapid advancement to the early internet. I want to go back further, to the Industrial Revolution, when factory workers achieved master-craftsman-quality outputs without understanding the mechanical principles that powered their equipment. This democratization of production capability allowed ordinary people to create flawless textiles without comprehending the engineering that made their looms function. Like today's AI user, the mechanical loom operator could produce goods indistinguishable from those made by experts who understood every aspect of the process. The critical difference today is that the underlying skills will not be lost, only the requirement to learn them in order to complete work at the most basic execution levels. We won't lose the ability to speak Spanish or write code - we'll lose the expectation that people fully understand those skills before using them for routine tasks or components of larger projects. The expertise itself should remain intact and continue to evolve alongside the technology, and so the need for skilled practitioners at higher levels of use will not disappear.


To put a personal spin on things: several months ago, I began development on the SQL Intelligence and Modeling Support (SIMS) Data Agent, a technology designed to bridge the gap between natural language and database queries. The concept seemed straightforward enough: create an AI system that could interpret human questions about data and translate them accurately into SQL queries by connecting directly to a database. What began as a targeted solution to a specific problem quickly became something more profound. As the system evolved, I witnessed an AI agent accomplishing something revolutionary - generating accurate answers without relying on pre-programmed parallel vector knowledge bases, which are expensive to set up and difficult to maintain. That could be the next step forward for AI as a whole. Yet even as AI systems become more transparent to us, the disparity between machine capability and human skill widens. My SQL-writing assistant may become increasingly sophisticated and explainable, but that doesn't improve my personal SQL proficiency one bit. A non-programmer might observe that my SQL agent provides the query used to validate its response, but this transparency means nothing to someone who couldn't write SQL in the first place.
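
For readers curious what that pattern looks like in miniature, here is a heavily simplified sketch of the general natural-language-to-SQL loop - not SIMS itself, whose internals I'm not publishing here. The generate_sql() function is a stand-in for whatever model produces the query, and the toy sales table exists only so the example runs end to end.

```python
# A minimal sketch of the generic NL-to-SQL agent shape: take a question,
# have a model draft a query against a known schema, execute it, and return
# both the query and the rows.
import sqlite3

def generate_sql(question: str, schema: str) -> str:
    # Stand-in for a model call along the lines of:
    #   "Given this schema: {schema}, write one SQL query answering: {question}"
    # Hard-coded here so the sketch runs without any external service.
    return "SELECT region, SUM(amount) AS total FROM sales GROUP BY region;"

def answer(question: str, conn: sqlite3.Connection, schema: str):
    sql = generate_sql(question, schema)
    rows = conn.execute(sql).fetchall()
    # Returning the query alongside the rows is the "transparency" mentioned
    # above - useful only to someone who can already read SQL.
    return sql, rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 120.0), ("West", 80.0), ("East", 40.0)])

sql, rows = answer("What are total sales by region?", conn,
                   "sales(region TEXT, amount REAL)")
print(sql)
print(rows)
```

Note that nothing in this loop requires a separately maintained vector knowledge base - the schema and the live database are the only grounding the agent needs, which is the part that struck me as significant.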


But despite the ominous-sounding essay title, the uncertainty in my observations shouldn't be mistaken for pessimism. I think one worthwhile long-term fix might be using AI systems to validate the output of other AI systems, much like the automated programs we currently deploy as defenses against prompt-injection attacks. This meta-level solution acknowledges our human limitations while leveraging AI's own capabilities as a check against its potential errors or biases. The challenge then becomes learning what questions to ask, rather than understanding every aspect of the system. I think there are other potential and worthwhile long-term fixes to these 'issues' - if you can call them that, since they are closer to observations in practice - and I look forward to seeing how the industry and the user base adapt over time. I have certainly learned a lot from AI as it immersed me in specific skills I didn't know before. I suggest that we, not as programmers but as users, move toward a better grasp of what AI is, what it does, and how it works, so we can fully capitalize on its nuances.
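
As a rough sketch of that "AI checking AI" idea - with both functions below as placeholders rather than real APIs - the pattern is simply a second, narrower judge sitting between the primary system and the user:

```python
# Placeholder sketch of output validation: a primary generator and an
# independent reviewer, where the reviewer answers a much narrower question
# than the generator does.

def generate_answer(question: str) -> str:
    # Stand-in for the primary model's response.
    return "The query returned 3 rows with a combined total of 240."

def review_output(question: str, answer: str) -> bool:
    # Stand-in for a second model (or rule set) asked only: "Does this answer
    # actually address the question without unsupported claims?"
    return "rows" in answer and "total" in answer

question = "What are total sales by region?"
answer = generate_answer(question)

if review_output(question, answer):
    print(answer)
else:
    print("Answer withheld: the validator flagged it for human review.")
```

The design choice worth noticing is that the reviewer doesn't need to be smarter than the generator - it only needs to answer a smaller question reliably, which is a far easier bar to clear.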


Image: A computer transformed after implementing an AIS Engage chatbot.

