Title: Mobile Edge Scan Hive
Where: Bristol (UK)
When (time-plan): January to September 2020 (9 months)
Anyone can be captured and preserved, through a collaborative process, as a 3D-scanned ‘sculpture’ left behind in an augmented reality layer in the testbed environment.
Users are invited to help create a 3D scan of a human subject, without the need for an expensive studio environment. One person volunteers to be scanned and must hold a pose while the other participants use the smartphone app to capture imagery of them, following instructions presented in augmented reality. The server continuously monitors the progress of the scan via the participants’ devices and automatically decides when the scan is complete.
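The description above does not specify how the server judges that a scan is complete. A plausible approach is to track the angular coverage of capture positions around the subject, declaring the scan done once every sector has enough photos. The sketch below illustrates this idea; the sector count, per-sector threshold, and function names are illustrative assumptions, not the deployed logic.

```python
import math

SECTORS = 24          # divide the circle around the subject into 15-degree sectors
MIN_PER_SECTOR = 3    # assumed minimum number of photos needed per sector

def sector_of(camera_xy, subject_xy):
    """Angular sector (0..SECTORS-1) of a camera position around the subject."""
    dx = camera_xy[0] - subject_xy[0]
    dy = camera_xy[1] - subject_xy[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / SECTORS))

def scan_complete(camera_positions, subject_xy):
    """True once every sector around the subject has enough captures."""
    counts = [0] * SECTORS
    for pos in camera_positions:
        counts[sector_of(pos, subject_xy)] += 1
    return all(c >= MIN_PER_SECTOR for c in counts)
```

In practice the AR engine on each device would report camera poses in a shared world frame, so the server can run a check like this continuously as images arrive.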
Unknown to the participants, the network and edge computing platform is working hard to receive and process every image captured from each client device via a photogrammetry engine. A basic 3D model of the scan is churned out rapidly and transmitted to all client devices, allowing participants to view the ‘sculpture’ as quickly as possible in AR. Simultaneously, the engine is working on a higher-fidelity version of the 3D model, which is transferred as soon as it is complete, replacing the previous one. The resulting user experience is centred on a high quality, crowd-sourced 3D photogrammetry scan of a person, locked in its original location in the world.
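The two-pass flow above (a fast draft model pushed first, then replaced by a high-fidelity version) can be sketched as follows. `build_draft`, `build_high_fidelity` and `push_to_clients` are hypothetical stand-ins for the photogrammetry engine and the network layer, not the actual platform API.

```python
from concurrent.futures import ThreadPoolExecutor

def build_draft(images):
    # Stand-in for the fast, low-fidelity photogrammetry pass.
    return {"quality": "draft", "n_images": len(images)}

def build_high_fidelity(images):
    # Stand-in for the slower, high-fidelity pass.
    return {"quality": "full", "n_images": len(images)}

def process_scan(images, push_to_clients):
    with ThreadPoolExecutor() as pool:
        # Start the high-fidelity reconstruction in the background...
        hifi_future = pool.submit(build_high_fidelity, images)
        # ...while the quick draft is pushed immediately, so participants
        # can view the 'sculpture' in AR with minimal delay.
        push_to_clients(build_draft(images))
        # Replace the draft as soon as the full model is ready.
        push_to_clients(hifi_future.result())
```

The key design point is that the draft and the refined model are produced concurrently, so the perceived latency is set by the draft pass alone.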
This 3D model is just the latest piece added to the testbed’s cumulative ‘sculpture garden’ of AR 3D scans: the shape, position and timestamp of each capture are archived on the server, so it can be retrieved by the authors at any time.
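The archive described above (shape, position and timestamp per capture) might be modelled server-side as below. Field and class names are assumptions for illustration; the real platform's storage schema is not specified in the text.

```python
from dataclasses import dataclass, field
import time

@dataclass
class SculptureRecord:
    mesh_uri: str       # location of the processed 3D model (the 'shape')
    position: tuple     # world anchor of the scan, e.g. (lat, lon, alt)
    timestamp: float = field(default_factory=time.time)

class SculptureGarden:
    """Cumulative archive of AR scans, retrievable at any time."""

    def __init__(self):
        self._records = []

    def archive(self, record):
        self._records.append(record)

    def retrieve_all(self):
        # Returned in capture order; callers can filter by position or time.
        return list(self._records)
```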
This solution also provides a compelling dataset: researchers are able to study not only the performance of the system itself but also the detailed spatial information recorded by the AR engine on every client device, along with every 3D scan processed on the edge computing platform.