dc.contributor.author | Perera, Kevini | |
dc.contributor.author | Hettihewa, Chamod | |
dc.contributor.author | Wickramasinghe, Manupa | |
dc.contributor.author | Sandanayake, Ashan | |
dc.contributor.author | Rajapaksha, Chamali | |
dc.contributor.author | Pathirana, Pubudu | |
dc.date.accessioned | 2025-08-29T11:22:19Z | |
dc.date.available | 2025-08-29T11:22:19Z | |
dc.date.issued | 2024-09-29 | |
dc.identifier.uri | https://ir.kdu.ac.lk/handle/345/8876 | |
dc.description.abstract | Artificial intelligence and deep learning are
gaining traction in edge computing to extract insights from
Internet of Things (IoT) devices. Hardware accelerators
like Field Programmable Gate Arrays (FPGAs) accelerate
deep learning efficiently due to their energy efficiency,
parallelism, flexibility, and reconfigurability. However,
resource constraints of FPGAs pose deployment
challenges. This research explores the dynamic
deployment of hardware-accelerated applications on the
Kria KV260 platform with a Xilinx Kria K26
system-on-module, equipped with a Zynq multiprocessor
system-on-chip. It presents an innovative solution for
dynamically reconfiguring deep neural networks by running
multiple neural networks and Deep Learning Processing
Units (DPUs) concurrently. This research advances edge
computing using FPGAs to facilitate efficient deployment
of neural networks in resource-constrained edge
environments. | en_US |
dc.language.iso | en | en_US |
dc.subject | FPGA | en_US |
dc.subject | Neural Networks | en_US |
dc.subject | DPU | en_US |
dc.subject | Hardware Accelerator | en_US |
dc.title | Edge Computing using FPGA with the Deployment of Neural Networks for General Purpose Application | en_US |
dc.type | Proceeding article | en_US |
dc.identifier.faculty | Faculty of Engineering | en_US |
dc.identifier.journal | 17th International Research Conference (KDU IRC) 2024 | en_US |
dc.identifier.volume | 25-30 | en_US |