Instead of a complex infrastructure deployment, latent computing resources could be used to support the edge computing needs of the IoT.
The data processing and storage needs of the internet of things (IoT) will quickly outgrow our current model of centralized, data center-based computing. Additionally, latency-sensitive use cases demand geographically distributed resources as a matter of physics. This decentralization, known as edge computing, is seen as an integral part of future networks. But where does all of this equipment go?
To draw a parallel, think of the myriad problems carriers have had deploying small cells at scale. Access to power, backhaul and site acquisition, as well as varying regulatory regimes, have significantly slowed operators' network densification ambitions. Now apply that same paradigm to the servers, gateways and other network equipment that comprise edge computing at scale. Where is all of this hardware supposed to live? And, following a traditional real estate-based deployment model, could it even be deployed with enough velocity to support rapid commercialization and monetization of 5G?