The space industry’s continued focus on, and advances in, safe reusable launch vehicles have ushered in a new, affordable age of space flight, enabling a wider range of enterprises and organizations to launch and operate space-based assets in low Earth orbit and beyond. Ensuring and extending the mission life cycles of these orbital assets, including launch vehicles, satellites, and space stations, will require a new generation of adaptive, robust, and autonomous robotic systems. These systems will merge proven techniques in orbital dynamics, relative motion, robotic kinematics, and spacecraft rendezvous and docking with new advances in machine learning, computer vision, data communications, and related fields. Together, these efforts aim to provide future enterprises with the capability to perform On-Orbit Servicing, Assembly, and Manufacturing (OSAM): repair of failed or damaged space assets, in-space assembly of new platforms, and manufacturing of components. However, the means to validate the individual hardware and software components of these technologies and to test the collaborative “system of systems” at large scale are still largely in development. This paper is a comprehensive survey and assessment of current and near-future technical developments in the fields of space simulation and validation, orbital robotics, and space-based automation, identifying the gaps and capabilities necessary for large-scale industry validation and employment of these systems. Finally, it illustrates some of the ongoing research being conducted at Virginia Tech’s space labs to address these gaps.
Using the same synthetic environment enables us to generate vast amounts of image-based training data for a convolutional neural network (CNN) that identifies and tracks objects of interest. These can be objects that already exist on orbit, such as solar panels, the Canadarm, or entire spacecraft, or objects that exist only in computer-aided design (CAD) files, such as future space station schematics, repair parts, or construction materials. For this particular proof of concept, we introduced truss sections into an Unreal environment in the vicinity of the ISS, above a hyper-realistic 24K render of planet Earth. In doing so, we have created a labeled dataset of over 10,000 unique images of both synthetic and real-world orbital structures that can be used to enable further computer vision development in the space domain. Once a model is sufficiently trained on virtual trusses, it is evaluated against real-world 3D-printed truss structures in the CUBE space, with the same Unreal environment projected along the perimeter. Initial results using this method are promising with regard to neural network performance, yielding mean average precisions (mAP) within 15 percent of models trained on real (non-synthetic) imagery. By introducing additional techniques such as domain randomization into the synthetic datasets, we hope to further close the gap in CNN performance when using artificially generated imagery; the full results will be published no later than March 2022. If successful, such a system would prove a cost-effective means of hardware-in-the-loop testing for image-based mechanical actuation and machine learning for space vehicles. Such capabilities will set the stage for autonomous and semi-autonomous space missions; a machine-learning-aided refueling and/or repair mission to the recently launched James Webb Space Telescope (JWST) is just one example of such an endeavor.
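To make the domain-randomization idea concrete, the following is a minimal sketch of the kind of photometric perturbation applied to rendered frames before training. The function name and the specific perturbation ranges are illustrative assumptions, not the paper's actual pipeline; the intent is only to show how randomizing brightness, color balance, and sensor noise forces a detector trained on synthetic imagery to generalize to real camera frames.

```python
import numpy as np

def domain_randomize(image, rng):
    """Apply simple photometric domain randomization to a synthetic RGB frame.

    Randomly perturbs global brightness, per-channel color balance, and adds
    Gaussian sensor noise. Ranges below are illustrative placeholders.
    """
    img = image.astype(np.float32)
    img *= rng.uniform(0.6, 1.4)                  # global brightness jitter
    img *= rng.uniform(0.8, 1.2, size=(1, 1, 3))  # per-channel color balance
    img += rng.normal(0.0, 8.0, size=img.shape)   # simulated sensor noise
    return np.clip(img, 0, 255).astype(np.uint8)

# Usage: perturb a stand-in rendered frame (random pixels here for brevity).
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = domain_randomize(frame, rng)
```

In a full pipeline, a fresh randomization would be sampled per training image (and often combined with geometric and texture randomization inside the renderer itself), so the network never sees the same rendering conditions twice.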
This paper establishes the current body of work across three separate but complementary fields of research: system simulation and validation, machine learning, and robotics, while providing an introduction to these wide-ranging fields of study. Additionally, we detail how they can be applied to near-future space-domain applications and their potential impacts on the aerospace industry as a whole. Lastly, we identify key gaps in the current literature and describe ongoing and future work to address them.