Robots are coming to Marine units, but how can jarheads learn to trust their machine battle buddies?
A recently publicized paper by a Marine major explored that question with an experiment involving real Marines.
Maj. Daniel Yurkovich’s main takeaway: If Marines don’t understand the artificial intelligence they’re working with, and don’t train with it regularly, they won’t trust it.
Yurkovich took on the problem in his thesis at the Naval Postgraduate School, a paper he titled “This is My Robot: There are many like it but this one is mine,” a shout-out to the Rifleman’s Creed.
Though the paper was published in June 2020, it was its inclusion at the National Defense Industrial Association’s annual Interservice/Industry Training, Simulation and Education Conference in November that brought it wider attention.
The major noted that while the Corps is putting time and money into new robotic systems, it hasn’t focused on how best to integrate those systems with flesh-and-blood leathernecks.
“Autonomous systems are only useful when they are used, and a large determinant of use is trust. In many cases, systems go unused due to the human’s skepticism regarding its trustworthiness,” Yurkovich wrote. “As machines transition from teleoperated towards partially or fully autonomous; the capabilities, limitations, and reasoning behaviors of the machines will further mystify users and inhibit trust.”
Back in February 2020, the major ran an experiment using 40 Marine volunteers from School of Infantry-East, all but three of them in their final weeks of training.
He split the Marines into two groups. One group “trained” the robot’s brain to perform the tasks and functions they wanted it to carry out; the other worked with “out-of-the-box” AI, in which the device’s control module came preloaded and Marines had no involvement in “teaching” it how to function.
For example, he wrote that Group A members were told: “Currently, the robot is programmed to leave and return to the spot outside of our current building. Your training of the robot in the game will determine how the robot will behave in the courtyard and objective building.”
And then Group B members were told, “Currently, the robot is programmed to leave and return to the spot outside of our current building. The coding from the engineer will determine how the robot will behave in the courtyard and objective building.”
Yurkovich wrote that the training scenarios would have Marines training, teaching and partnering with a kind of transferable robot “brain” called a Removable AI Device, or RAID. A Marine could access the RAID frequently for training, then load it into the robot’s hardware “body,” kept in a kind of secure “Robopool,” much as a Humvee driver draws a vehicle from the motor pool.
“Now, the functioning robot and Marine have become a live team with calibrated trust and tendencies built within a simulated environment. Upon completion of the live task or operation, the Marine conducts an after-action review (AAR) with his robotic teammate,” he wrote.
With a small pool of trainees and limited experiment time, the research did not reach a definitive conclusion on which method, pre-training or out-of-the-box, was more effective. But the major’s thesis is backed up by a recent study in the U.S. Air Force’s Journal of Indo-Pacific Affairs, first reported by the website Defense One.
The 2019 survey of 800 officer cadets and midshipmen at the Australian Defence Force Academy showed that “a significant majority would be unwilling to deploy alongside fully autonomous” weapons systems.
Yurkovich has pitched a larger study to determine what kind of training best helps Marines trust those robots.
Todd South has written about crime, courts, government and the military for multiple publications since 2004 and was named a 2014 Pulitzer finalist for a co-written project on witness intimidation. Todd is a Marine veteran of the Iraq War.