Yazar "Rajabi, Shakiba" seçeneğine göre listele

Now showing 1 - 2 of 2
  • Item
    An advanced deep reinforcement learning algorithm for three-layer D2D-edge-cloud computing architecture for efficient task offloading in the Internet of things
    (Elsevier, 2024) Moghaddasi, Komeil; Rajabi, Shakiba; Gharehchopogh, Farhad Soleimanian; Ghaffari, Ali
    The Internet of Things (IoTs) has transformed the digital landscape by interconnecting billions of devices worldwide, paving the way for smart cities, homes, and industries. With the exponential growth of IoT devices and the vast amount of data they generate, concerns have arisen regarding efficient task-offloading strategies. Traditional cloud and edge computing methods, paired with basic Machine Learning (ML) algorithms, face several challenges in this regard. In this paper, we propose a novel approach to task offloading in a Device-to-Device (D2D)-Edge-Cloud computing using the Rainbow Deep Q-Network (DQN), an advanced Deep Reinforcement Learning (DRL) algorithm. This algorithm utilizes advanced neural networks to optimize task offloading in the three-tier framework. It balances the trade-offs among D2D, Device-to-Edge (D2E), and Device/Edge-to-Cloud (D2C/E2C) communications, benefiting both end users and servers. These networks leverage Deep Learning (DL) to discern patterns, evaluate potential offloading decisions, and adapt in real time to dynamic environments. We compared our proposed algorithm against other state-of-the-art methods. Through rigorous simulations, we achieved remarkable improvements across key metrics: an increase in energy efficiency by 29.8%, a 27.5% reduction in latency, and a 43.1% surge in utility.
  • Item
    An advanced deep reinforcement learning algorithm for three-layer D2D-edge-cloud computing architecture for efficient task offloading in the Internet of things
    (Elsevier Inc., 2024) Moghaddasi, Komeil; Rajabi, Shakiba; Gharehchopogh, Farhad Soleimanian; Ghaffari, Ali
    The Internet of Things (IoTs) has transformed the digital landscape by interconnecting billions of devices worldwide, paving the way for smart cities, homes, and industries. With the exponential growth of IoT devices and the vast amount of data they generate, concerns have arisen regarding efficient task-offloading strategies. Traditional cloud and edge computing methods, paired with basic Machine Learning (ML) algorithms, face several challenges in this regard. In this paper, we propose a novel approach to task offloading in a Device-to-Device (D2D)-Edge-Cloud computing using the Rainbow Deep Q-Network (DQN), an advanced Deep Reinforcement Learning (DRL) algorithm. This algorithm utilizes advanced neural networks to optimize task offloading in the three-tier framework. It balances the trade-offs among D2D, Device-to-Edge (D2E), and Device/Edge-to-Cloud (D2C/E2C) communications, benefiting both end users and servers. These networks leverage Deep Learning (DL) to discern patterns, evaluate potential offloading decisions, and adapt in real time to dynamic environments. We compared our proposed algorithm against other state-of-the-art methods. Through rigorous simulations, we achieved remarkable improvements across key metrics: an increase in energy efficiency by 29.8%, a 27.5% reduction in latency, and a 43.1% surge in utility. © 2024
