Exploring hardware accelerator offload for the Internet of Things
dc.contributor.author | Cooke, Ryan A. | |
dc.contributor.author | Fahmy, Suhaib A. | |
dc.date.accessioned | 2021-06-21T09:47:16Z | |
dc.date.available | 2021-06-21T09:47:16Z | |
dc.date.issued | 2020 | |
dc.description.abstract | The Internet of Things is manifested through a large number of low-capability connected devices. This means that for many applications, computation must be offloaded to more capable platforms. While such offload has typically targeted cloud datacenters accessed over the Internet, this is not feasible for latency-sensitive applications. In this paper we investigate the interplay between three factors that contribute to overall application latency when offloading computations in IoT applications. First, different platforms can reduce computation latency by differing amounts. Second, these platforms can be traditional server-based or emerging network-attached, and these exhibit differing data ingestion latencies. Finally, where these platforms are deployed in the network has a significant impact on the network traversal latency. All these factors contribute to overall application latency, and hence to the efficacy of computational offload. We show that network-attached acceleration scales better to more distant network locations and smaller base computation times than traditional server-based approaches. | en |
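The abstract frames overall offload latency as the combination of network traversal, data ingestion, and (accelerated) computation time. Below is a minimal sketch, not taken from the paper, of a simple additive latency model illustrating that framing; all function names, parameters, and numeric values are illustrative assumptions.

    # Minimal sketch (not from the paper): an additive latency model for
    # computational offload, illustrating the three factors named in the abstract.
    # All parameter names and values are illustrative assumptions.

    def offload_latency(network_rtt_ms: float,
                        ingestion_ms: float,
                        base_compute_ms: float,
                        speedup: float) -> float:
        """Total application latency for one offloaded request (ms).

        network_rtt_ms  -- round-trip network traversal to the offload platform
        ingestion_ms    -- data ingestion overhead of the platform
        base_compute_ms -- computation time on the IoT device itself
        speedup         -- factor by which the platform reduces computation time
        """
        return network_rtt_ms + ingestion_ms + base_compute_ms / speedup


    if __name__ == "__main__":
        # Hypothetical comparison: a server-hosted accelerator with higher
        # ingestion overhead vs. a network-attached accelerator with lower
        # ingestion overhead, at the same network location and speedup.
        for label, ingest_ms in [("server-hosted", 2.0), ("network-attached", 0.2)]:
            total = offload_latency(network_rtt_ms=1.0, ingestion_ms=ingest_ms,
                                    base_compute_ms=10.0, speedup=20.0)
            print(f"{label}: {total:.2f} ms")

Under this toy model, as the fixed terms (network traversal, ingestion) shrink relative to the accelerated computation term, differences in ingestion latency dominate, which is consistent with the abstract's claim about smaller base computation times and more distant network locations.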
dc.identifier.doi | 10.1515/itit-2020-0017 | |
dc.identifier.pissn | 2196-7032 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/36576 | |
dc.language.iso | en | |
dc.publisher | De Gruyter | |
dc.relation.ispartof | it - Information Technology: Vol. 62, No. 5-6 | |
dc.subject | Internet of Things | |
dc.subject | edge computing | |
dc.subject | hardware acceleration | |
dc.title | Exploring hardware accelerator offload for the Internet of Things | en |
dc.type | Text/Journal Article | |
gi.citation.endPage | 214 | |
gi.citation.publisherPlace | Berlin | |
gi.citation.startPage | 207 |