Bridging Two Worlds: Industrial Twins, Human Twins, and What Lies Ahead
From Industrial Digital Twin to Human Digital Twin
When I began my career at Siemens in 2011, my everyday tasks were filled with industrial devices—motors, motor management devices, and PLCs alongside the software that governed them. On the surface, the work was simple: develop and test configuration software. Yet, we always encountered the same frustrating limitation—our dependency on hardware.
The devices were manufactured in Germany, while the monitoring and commissioning software was built in India. Every test cycle required real devices, and running a test could destroy the hardware. This left large parts of the testing space untouchable: fault conditions, stress scenarios, and rare edge cases.
We were validating the system, yes, but not truly understanding its behavioural essence. It felt as if the most interesting truths about the system were hiding behind a curtain we could not pull back.
By 2015, I had accumulated years of intimate familiarity with these devices. Yet, curiosity began to nudge me in a different direction. Instead of merely working with the software, I started observing the communication patterns between the device and the system.
I leaned on the skills I had sharpened in my early days—packet sniffing with Wireshark, reverse engineering, and hacking. A question kept looping in my head:
What if the software could no longer distinguish whether it was speaking to a real device or to a perfectly crafted illusion of one?
Driven by this thought, I built an algorithm that could simulate the device so convincingly that the software itself was fooled. Without realising it at the time, I had built my first digital twin. Later, it sparked another idea—why not let the twin learn automatically, adapting its behaviour instead of being hand-coded?
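To make the idea concrete, here is a minimal sketch of what such an illusion can look like in practice. It is not the original algorithm, only an illustrative mock device that replays responses learned from sniffed traffic; the protocol framing and the request/response pairs are hypothetical placeholders.

```python
import socket
import threading

# Responses captured from a real device with a packet sniffer and replayed here.
# The keys and values are hypothetical placeholders, not a real protocol.
LEARNED_RESPONSES = {
    b"READ_IDENT": b"DEVICE_X;FW=1.2.3",
    b"READ_STATUS": b"OK;LOAD=42%",
}

def handle_client(conn: socket.socket) -> None:
    with conn:
        while request := conn.recv(1024):
            # Unknown requests get a generic acknowledgement so the client keeps talking.
            conn.sendall(LEARNED_RESPONSES.get(request.strip(), b"ACK"))

def serve(host: str = "127.0.0.1", port: int = 5020) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    # Point the commissioning software at 127.0.0.1:5020 instead of the real device.
    serve()
```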
That step marked a personal turning point. The algorithm eventually grew into a patent in 2019, earned organisational recognition, and gave me the courage to step away from deterministic testing toward a new horizon—pattern recognition, learning, and intelligence.
In the same year, I leapt. I left Siemens and pursued a Master’s in Web and Data Science in Germany. The world was shifting toward machine learning, and I wanted to immerse myself in it.
During my Master’s, I worked on projects involving transformer-based language models such as BERT and ALBERT, which allowed me to see how machines could internalise the statistical structure of human language. After graduating in 2022, I joined Fraunhofer as a researcher.
Then came the generative AI revolution. Overnight, the conversation changed: it was no longer about machines imitating machines, but machines imitating humans. That realisation brought me to my current challenge—designing human digital twins.
At Fraunhofer, I now work on SozioMimic, funded by the Ministry of Economic Affairs, Labour and Tourism. The central question is deceptively simple: can we design AI agents that do not merely process language, but genuinely approximate the decision-making, preferences, and thought patterns of human beings?
To investigate, we conducted a six-week study with 13 participants in domains such as finance and insurance. We built a corpus of human experience through in-depth interviews, diary studies, and multiple survey rounds. After two weeks, the same participants were asked to retake the surveys, revealing an important fact: even humans do not remain constant—the maximum similarity of their own answers across time hovered around 78%.
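For readers curious how such a number can be derived, the sketch below shows one simple way to compute test-retest agreement between two survey rounds. The question keys and example answers are hypothetical and not our actual instrument.

```python
# Fraction of questions a participant answers identically in both rounds.
def retest_similarity(round1: dict[str, str], round2: dict[str, str]) -> float:
    shared = round1.keys() & round2.keys()
    if not shared:
        return 0.0
    matches = sum(round1[q] == round2[q] for q in shared)
    return matches / len(shared)

answers_week_0 = {"q1": "agree", "q2": "monthly", "q3": "no"}
answers_week_2 = {"q1": "agree", "q2": "weekly",  "q3": "no"}
print(f"{retest_similarity(answers_week_0, answers_week_2):.0%}")  # 67%
```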
This corpus became the foundation on which we evaluated seven variants of LLM-based agents. They ranged from unframed baselines simply instructed to “act human,” to agents enriched with inferred demographics, personas, expert reflections, and retrieval mechanisms. The most advanced configuration fused everything: demographic signals, expert-style reflections, retrieval augmentation, and structured prompting strategies inspired by agentic reasoning.
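As an illustration of what such layering can look like in code, the sketch below assembles demographics, a persona summary, an expert-style reflection, and retrieved participant statements into a single grounded prompt. It is a simplified sketch, not the actual SozioMimic pipeline, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TwinContext:
    demographics: str      # e.g. inferred age band, occupation, region
    persona: str           # condensed persona derived from interviews
    reflection: str        # expert-style reasoning about the question domain
    retrieved: list[str]   # diary/interview snippets returned by a retriever

def build_prompt(ctx: TwinContext, question: str) -> str:
    # Each layer of context is stacked into one structured prompt.
    evidence = "\n".join(f"- {s}" for s in ctx.retrieved)
    return (
        "You are answering as the participant described below, not as an AI.\n"
        f"Demographics: {ctx.demographics}\n"
        f"Persona: {ctx.persona}\n"
        f"Expert reflection on this domain: {ctx.reflection}\n"
        f"Relevant statements the participant has made:\n{evidence}\n\n"
        f"Question: {question}\n"
        "Answer exactly as this person would, in their own voice."
    )
```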
The results illuminated a spectrum of alignment. Even the most basic baseline, instructed only to act as a human, achieved 68% overlap with actual human responses. More sophisticated models, particularly those integrating multiple layers of context, reached nearly 78% alignment. The greatest gains appeared in domains richly represented in participant data, such as finance and communication.
Interestingly, the improvement was not universal. Alignment gains diminished in domains with little participant data, and certain question types—those that required nuanced judgment—remained stubbornly difficult. This suggested that human–agent alignment is incremental, contextual, and conditional.
The implication is profound: LLMs can approximate human-like responses, but only when carefully scaffolded with context. Alignment is less about raw computational power and more about the layers of interpretation and grounding we provide.
This naturally brings me back to my roots in industrial digital twins. In machines, the twin was an almost perfect copy: deterministic, replicable, and complete. But humans resist such mirroring.
Industrial twins need only a snapshot of the system's state; once you know the rules, replication is precise. Human twins, in contrast, are inherently momentary. A person captured at one point in time is not the same two weeks later. This is why I call them moment twins: representations of a specific temporal slice of human nature.
Industrial behaviour is constant; human behaviour is dynamic. Machines can be cloned; humans can only be approximated.
Ironically, the more I study humans, the more I suspect that machines will evolve toward human qualities. Industrial devices of the future will not rely solely on rigid firmware, rules, or fuzzy logic. Instead, small, efficient LLMs may one day live inside these devices, making them adaptive, context-aware, and capable of proactive negotiation.
The industrial world will absorb lessons from the human domain, just as my own career has moved from one to the other. Machines will stop being purely deterministic, and in their limited ways, they will begin to approximate the fluidity of human decision-making.
Working on SozioMimic has shown me both the promise and the limitations of this vision. We can capture demographic profiles, buying behaviours, decision-making tendencies, and even opinions. But there are aspects of humanity that resist digitisation—creativity, imagination, and the inner spark of original thought.
These qualities are not noise in the data; they are the very essence of being human. And perhaps, they are meant to remain beyond replication.
My journey began with machines that could not be fully tested, and it has led me to the complexity of humans who cannot be fully cloned. Industrial digital twins taught me that determinism can be simulated. Human digital twins have taught me that life itself is fluid, contextual, and irreducible.
The real promise lies not in replacing one with the other, but in hybrid systems: humans providing the grounding of authenticity, and AI extending our reach into scale, speed, and simulation.
We cannot build a perfect copy of humanity. But we can build systems that learn from us, complement us, and—at times—challenge us. That is the philosophical and technical bridge I continue to walk: from industrial devices to human decision-making, from deterministic twins to moment twins.
And on that bridge, I see the future—not machines replacing humans, but machines learning to be better partners to us.
If you're working on AI, multi-agent systems, or just curious about the future of intelligent decision-making, I’d love to hear from you.
👉 Follow me on LinkedIn
🔗 Let’s build, learn, and grow together.