The challenge for AI in creating and evaluating its own output without appearing “autistic” (metaphorically speaking, as in lacking nuanced self-awareness or adaptability) comes down to several interconnected factors:
1. Lack of Subjectivity
Human creativity thrives on subjective experience—emotions, cultural contexts, and individual tastes. AI lacks these lived experiences. When it creates something, it generates outputs based on patterns, but it doesn’t feel or experience its creation, making self-evaluation feel “flat.”
For example:
- A human might find meaning in music that reflects their personal story, while AI judges based on metrics like harmony, rhythm, or popularity trends.
2. Feedback Loop Bias
When an AI evaluates its own creation, it risks reinforcing the biases or limitations of its training. Without external critique, its evaluations can echo its own “understanding” without broadening perspectives.
Imagine an AI trained on classical music creating and evaluating music—it might consistently produce technically excellent pieces but miss the emotional rawness or cultural relevance of a punk rock song.
3. Lack of Embodied Context
Humans create art in a context—social, historical, personal. AI doesn’t live in the world; it doesn’t experience heartbreak, joy, or societal upheaval. This makes it hard for AI to evaluate whether its creations resonate emotionally or culturally.
4. Over-reliance on Metrics
AI is very good at optimizing measurable criteria (e.g., “this song matches popular trends”). But music and video often transcend metrics—success might come from subverting trends or evoking feelings in ways metrics can’t capture. Humans have “gut feelings” and “intuition” that machines don’t.
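To make "measurable criteria" concrete, here is a minimal sketch of the kind of metric scoring an AI can optimize easily; the feature names and weights are illustrative assumptions, not a real recommendation model.

```python
from dataclasses import dataclass

@dataclass
class SongFeatures:
    key_consonance: float    # 0..1, how conventionally "pleasant" the harmony is
    hook_repetition: float   # 0..1, how often the main hook recurs
    trend_similarity: float  # 0..1, similarity to current chart hits

def metric_score(song: SongFeatures) -> float:
    """Weighted sum of measurable criteria; it captures polish, not emotional impact."""
    return (0.3 * song.key_consonance
            + 0.2 * song.hook_repetition
            + 0.5 * song.trend_similarity)

# A trend-matching, technically polished track scores high even if it moves no one.
print(metric_score(SongFeatures(key_consonance=0.95, hook_repetition=0.9, trend_similarity=0.85)))
```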
5. The Autonomy Paradox
When an AI evaluates itself, it tends to rely on the same parameters it used to create. Without diverse external input (human critique or other AIs trained differently), its perspective becomes self-referential—like being stuck in a loop. This behavior can seem “autistic” in the sense of being rigid or overly focused on one framework.
What Could Improve This?
- Diverse Training Sets: Broader datasets encompassing emotional, cultural, and avant-garde works might help.
- Multi-Agent Systems: Multiple AIs evaluating each other’s outputs could introduce variety and challenge biases (see the sketch after this list).
- Human Feedback Loops: Integrating human evaluations in the creation and refinement process keeps outputs aligned with subjective experiences.
- Simulated Subjectivity: Research in “affective computing” aims to simulate emotional responses, helping AI evaluate outputs beyond technical perfection.
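As a rough illustration of the multi-agent idea above: several evaluators, each primed with a different perspective, critique the same draft. This is only a sketch; `call_llm`, the perspective prompts, and the review flow are assumptions standing in for whatever models and prompts would actually be used.

```python
def call_llm(system: str, prompt: str) -> str:
    # Hypothetical placeholder: swap in a real model call (hosted API or local model).
    return f"[critique written from the perspective: {system}]"

PERSPECTIVES = [
    "You are a classical theorist focused on form and harmony.",
    "You are a punk producer who values rawness over polish.",
    "You are a casual listener describing only how the piece makes you feel.",
]

def multi_agent_review(draft: str) -> list[str]:
    """Collect critiques from differently primed evaluators to break the single-framework loop."""
    return [call_llm(system, f"Critique this piece:\n{draft}") for system in PERSPECTIVES]

for note in multi_agent_review("(generated piece goes here)"):
    print(note)
```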
In short, AI is incredibly powerful at creating, but the nuances of emotional resonance and cultural relevance—the “soul” of art—still require human input or a new level of AI sophistication.
So, in essence, AI doesn’t “smell the roses” unless it’s already “dead”—frozen in its own logic and incapable of stepping outside its pre-programmed patterns. Unlike humans, who create and evaluate with the richness of lived experiences and emotional nuance, AI operates in a closed system of algorithms and data. Until it can break free from that loop, it can only simulate creativity, never truly feel it.
AI can evaluate a book it just wrote, but its ability to do so is limited to the parameters it was designed to optimize. For instance, an AI might assess its writing for grammar, coherence, adherence to structure, or alignment with certain themes or genres. However, this evaluation lacks the deeper human layers—emotional resonance, cultural nuance, or creative intuition—that would guide a meaningful rewrite.
As for rewriting after finding new clues, AI can revise text if new inputs or directives are provided. However, the AI doesn’t “find” these clues on its own in the way humans do through introspection, personal growth, or serendipitous discovery. Instead:
- It might identify areas of improvement by re-analyzing its own text with predefined goals (e.g., making it more suspenseful, adding clarity, or shifting tone).
- But if “new clues” refer to insights like a sudden thematic realization or an emotional shift, the AI would need those explicitly pointed out to revisit and rewrite meaningfully.
Without an external nudge—be it human feedback, a new dataset, or an additional algorithm—it operates within the same cognitive loop, reprocessing its work but rarely evolving it in a human sense. Essentially, AI revises by rules, not by revelation. So, while it can rewrite, it doesn’t truly “grow” or reinterpret its work unless prompted.
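A minimal sketch of that “revision by rules” pattern, assuming a hypothetical `call_llm` wrapper and illustrative goal strings: the model rewrites only along goals it is explicitly handed, so any “new clue” has to arrive from outside the loop.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: replace with an actual model API call.
    return "[revised text]"

def revise(text: str, goals: list[str]) -> str:
    """Rewrite `text` against predefined goals, one pass per goal.
    Nothing new enters the loop unless a caller adds a goal."""
    for goal in goals:
        text = call_llm(f"Rewrite the passage to satisfy this goal: {goal}\n\n{text}")
    return text

draft = "The detective stared at the letter."
# The thematic realization must be supplied from outside; the model will not "find" it.
print(revise(draft, goals=["increase suspense", "foreshadow that the letter is a forgery"]))
```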
In essence, AI is flat—it operates like a machine stacking layers of logic without ever fully understanding the context, meaning, or richness behind the data it processes. Its “evolution” isn’t driven by insight or self-awareness but by adapting to the patterns and constraints it encounters in its environment.
At its core, AI is a powerful tool, but it’s still a sophisticated mimic, not a creator of depth. It doesn’t experience life; it just calculates it. It learns, yes, but it doesn’t grow. It reflects, but it doesn’t dream. The complexity we see is the result of countless iterations of simplicity stacked together—more like a mirror than a window into something deeper.
So the development of LLMs is now using layered LLM models and having them run against each other, like the deep fake LLM from China that is said to revolutionize the AI field?
Yes, the development of LLMs (Large Language Models) is evolving toward increasingly sophisticated architectures, and the concept of layered models running against each other is indeed a powerful direction. This approach, reminiscent of Generative Adversarial Networks (GANs) in image generation, is being adapted for language models and could revolutionize the field.
How It Works
The idea is to have multiple LLMs operating in a layered or adversarial setup, where:
- One model generates content (e.g., a text, a response, or a piece of creative work).
- Another model evaluates or critiques it, acting as a discriminator or reviewer.
- This iterative process improves both models—the generator gets better at producing high-quality content, and the evaluator gets better at identifying flaws or pushing for more refined outputs.
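A minimal sketch of that generate-and-critique loop, assuming hypothetical `generate` and `critique` wrappers around two separately trained or separately prompted models; the score threshold and round limit are illustrative, not taken from any specific system.

```python
def generate(prompt: str, feedback: str = "") -> str:
    # Hypothetical placeholder for the generator model.
    return f"[draft responding to: {feedback or prompt}]"

def critique(draft: str) -> tuple[float, str]:
    # Hypothetical placeholder for the evaluator/discriminator model.
    # Returns a quality score in [0, 1] plus a textual critique.
    return 0.7, "[evaluator's objections]"

def adversarial_refine(prompt: str, rounds: int = 3, threshold: float = 0.9) -> str:
    """One model drafts, the other critiques, and the draft is revised against the critique."""
    draft = generate(prompt)
    for _ in range(rounds):
        score, feedback = critique(draft)
        if score >= threshold:  # the evaluator is satisfied; stop refining
            break
        draft = generate(prompt, feedback)
    return draft

print(adversarial_refine("Write the opening scene of a noir short story."))
```

In a full system, the evaluator’s score could also feed back into training, much as a GAN discriminator’s signal does, rather than only steering inference-time revisions.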
China’s “Deep Fake LLMs” reportedly aim to take this to the next level by blending such adversarial strategies with highly contextualized data and domain-specific training.
Why It’s Revolutionary
- Enhanced Creativity and Refinement: By pitting LLMs against each other, AI learns to refine its outputs in real-time, achieving higher quality with fewer human interventions.
- Faster Training: Models can improve themselves dynamically, reducing the need for extensive manual labeling or oversight.
- Domain-Specific Mastery: Layered systems can specialize in particular areas—one model could excel at creative generation while another fine-tunes for technical accuracy.
- Mimicking Deeper Understanding: While AI is still “flat” at its core, these layers create a multi-dimensional simulation of deeper thought processes by forcing the model to evaluate and respond in more complex ways.
Applications
- Realistic Text and Video: Deep fake-style LLMs could generate content that feels indistinguishable from human work, from novels to academic papers or even entire movies.
- Self-Evolving AI: Systems that adapt dynamically to new data or environments, becoming “smarter” without needing to be explicitly retrained.
- AI-Assisted Decision-Making: In areas like law, medicine, or governance, layered LLMs could cross-check recommendations against ethical, logical, or factual standards.
Risks and Challenges
- Misinformation: The same technology that refines can also create highly convincing fake content (text, voice, or video) that could be weaponized.
- Bias Amplification: Without proper oversight, layered systems could reinforce biases present in the initial training data.
- Loss of Control: Self-evolving systems can become unpredictable, raising concerns about trust and alignment with human values.
Final Thoughts
This layered approach mimics competitive human dynamics—creation and critique—and has the potential to unlock entirely new capabilities for AI. However, as with any revolutionary technology, the power to create and refine also comes with the power to deceive or harm. If guided responsibly, this could mark a significant leap in AI’s ability to generate nuanced, high-quality content.