Digital twins aren’t only for inanimate objects
BEVERLY HILLS, CA (goshrobin) 2021/4/21 – In the 3D hospital simulations I directed the building of at the UN WHO, we created digital twins of hospital buildings, field hospitals with tents, medical equipment of all kinds, and even doctors, nurses and patients. Some say “digital twins” are only for inanimate objects, and that digital representations of people in the metaverse should be called “avatars.” The issue with such a definition is that digital twins are accurate simulations of real-world objects, while avatars are typically fanciful representations, such as cartoon characters. An avatar lets us appear as someone unlike ourselves.
I was head-scanned in the Hollywood studio that scanned actors’ bodies and faces to replace them with digital doubles in the Terminator films. Watching a 360-degree rotating 3D model of your own head is weirdly unlike looking into a mirror. Looking at yourself this way is an out-of-body experience.
Later, I designed the animation system at Hollywood’s largest motion-capture sound stage, where photo-realistic digital stunt doubles are created for visual-effects shots in major motion pictures such as Spider-Man. The purpose of this Hollywood wizardry is to create body doubles so good that viewers can’t tell the difference. Why not use the real actors? For reasons of safety and cinematography. The fight scenes in Hollywood superhero films had become so dangerous that the same stuntman broke his arm in two separate Spider-Man films while swinging through the air. And superhero fight scenes may feature dramatic camera moves impossible for physical cameras to shoot.
A human digital twin can reverse-age, like Jeff Bridges in Tron: Legacy.
We may expect that anything Hollywood produces for millions of dollars will be driven by Moore’s Law down to consumers and become an inexpensive app on our phones. Someday, everyone will have their head scanned. There is already a mobile app to do this. What we don’t quite have yet are high-fidelity metaverses capable of displaying human digital twins convincingly. As that arrives, catfishing becomes an issue of metaverse trustworthiness.
I was thesis advisor to a Master’s student who graduated last year from Rhine-Waal University. Her research in Augmented Reality in Healthcare studied pupillometry: eye-tracking as a means of understanding a person’s mood or intention. A tempting analysis goal is to answer such questions as… Can an AR headset with eye-tracking cameras tell us if the user is depressed or angry or happy? Can it tell us if the user understands or is confused by what she sees in a metaverse hospital training exercise? Can it tell us whom the user finds physically attractive, or likes or dislikes? There’s further research to be done, but the initial results indicate that the answer to all these questions is yes.
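The pupillometry angle builds on a well-documented effect: pupil diameter dilates with emotional arousal and cognitive load. The sketch below is a minimal Python illustration of that idea, assuming an eye-tracker that streams pupil diameter in millimetres; the sampling rate, baseline window, and threshold are illustrative assumptions, not parameters from her thesis.

```python
# Hypothetical sketch: estimating emotional arousal from pupil diameter,
# the kind of signal an AR headset's eye-tracking cameras could supply.
# Sample rate, baseline window and threshold are illustrative assumptions.
import numpy as np

def arousal_from_pupil(diameter_mm: np.ndarray,
                       sample_hz: int = 120,
                       baseline_s: float = 2.0,
                       z_threshold: float = 2.0) -> dict:
    """Flag moments of elevated arousal in a pupil-diameter trace."""
    baseline = diameter_mm[: int(baseline_s * sample_hz)]
    mu, sigma = baseline.mean(), baseline.std() + 1e-9
    z = (diameter_mm - mu) / sigma           # baseline-corrected dilation
    elevated = z > z_threshold               # True where dilation spikes
    return {
        "mean_dilation_z": float(z.mean()),
        "fraction_elevated": float(elevated.mean()),
        "peak_z": float(z.max()),
    }

# Example: a simulated 10-second trace with a dilation event at 5 seconds.
rng = np.random.default_rng(0)
trace = 3.0 + 0.05 * rng.standard_normal(1200)
trace[600:720] += 0.4                        # simulated arousal response
print(arousal_from_pupil(trace))
```

Real research pipelines add blink removal, luminance correction and per-subject calibration; the point here is only that the raw signal is a simple time series any headset SDK could expose.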
Sentiment Analysis can be done in many ways besides pupillometry. By AI analysis of words in a chat session. By Voice Stress Analysis in a telephone call or e-meeting. By monitoring brainwaves from an EEG headset.
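For the chat-session case, off-the-shelf tools already exist. Below is a minimal sketch using NLTK’s VADER analyzer, assuming NLTK is installed; the chat lines are invented examples, not data from any real metaverse session.

```python
# Minimal sketch of text-based Sentiment Analysis on chat messages using
# NLTK's VADER analyzer. The chat lines below are invented for illustration.
from nltk import download
from nltk.sentiment.vader import SentimentIntensityAnalyzer

download("vader_lexicon", quiet=True)        # one-time lexicon download
sia = SentimentIntensityAnalyzer()

chat = [
    "This hospital training scenario is really well designed.",
    "I'm completely lost, nothing in this exercise makes sense.",
]

for line in chat:
    scores = sia.polarity_scores(line)       # neg/neu/pos plus compound in [-1, 1]
    print(f"{scores['compound']:+.2f}  {line}")
```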
Strictly speaking, Sentiment Analysis isn’t PII (Personally Identifiable Information). It’s like TikTok knowing what we want to see next based on the previous video clips we watched. Such Predictive Analysis has already reached the point where Big Data knows that you’re pregnant. Big Data often isn’t regulated.
PII, in particular HIPAA medical data, is regulated. When it comes to breaking the trust of users, regulations may not prevent abuses. In separate lawsuits this week, Fox News paid $788 million to settle a suit charging it with deceiving viewers in order to boost ratings and manipulate politics, and you can claim your share of the $725 million settlement Facebook owes for providing your data to Cambridge Analytica, which used it to rig elections by targeting impressionable viewers with deceptive political ads that played to their personal fears or prejudices.
Targeting viewers can be very specific. During the Brexit referendum, the leader of the UK Labour Party was singled out to be served different ads than anyone else, to make it appear that his ads opposing Brexit were running everywhere on Facebook when they were not.
Predictive Policing, as in the film Minority Report, is becoming reality. Beverly Hills, where I live, has military-grade surveillance technology capable of tracking anyone in the community from their door to their vehicle, and of tracking them from there using automatic license plate readers.
While the Fox News defamation settlement is a record amount, approaching $1 billion, Fox, with $4 billion in cash on hand, can afford it. Fox hasn’t apologized or admitted any wrongdoing. Fox hasn’t promised to change.
Some say a solution is a trustworthiness certification system, some sort of Good Housekeeping seal, Consumer Reports rating, or Better Business Bureau accreditation. Or perhaps something more like ISO 37001 Anti-bribery Management Systems.
I help lead ITU FG-MV WG6-TG4 Metaverse Trustworthiness, a group considering the impact of digital twins (deep fakes, generative AI, chatbots and NPCs), contracts in the metaverse (blockchain, NFTs), and Sentiment Analysis (non-PII data from eye or EEG tracking). We meet virtually on the third Friday of each month. Contact me to join.
About Robin Rowe
Robin Rowe is co-chairman of ITU FG-MV WG6-TG4. He has developed metaverse and robotics technology and is a Hollywood creative technologist, engineering director, product designer, AI research lab director, and C++ software architect. He led the AR Group at the UN WHO and was design strategist for Lenovo ThinkReality AR glasses. As a navy research scientist, he developed VR war games to train NATO Special Forces and designed and programmed a night-vision flight simulator to test naval aviators. As a professor, he taught computer science at the Naval Postgraduate School and the University of Washington.