Yazar "Ghaffari, Ali" seçeneğine göre listele
Listeleniyor 1 - 19 / 19
Item: A cost-effective and machine-learning-based method to identify and cluster redundant mutants in software mutation testing (Apr, 10.1007/s11227-024-06107-8, 2024) (Springer, 2024)
Authors: Arasteh, Bahman; Ghaffari, Ali

Item: An advanced deep reinforcement learning algorithm for three-layer D2D-edge-cloud computing architecture for efficient task offloading in the Internet of Things (Elsevier, 2024)
Authors: Moghaddasi, Komeil; Rajabi, Shakiba; Gharehchopogh, Farhad Soleimanian; Ghaffari, Ali
The Internet of Things (IoT) has transformed the digital landscape by interconnecting billions of devices worldwide, paving the way for smart cities, homes, and industries. With the exponential growth of IoT devices and the vast amount of data they generate, concerns have arisen regarding efficient task-offloading strategies. Traditional cloud and edge computing methods, paired with basic Machine Learning (ML) algorithms, face several challenges in this regard. In this paper, we propose a novel approach to task offloading in a Device-to-Device (D2D)-Edge-Cloud computing architecture using the Rainbow Deep Q-Network (DQN), an advanced Deep Reinforcement Learning (DRL) algorithm. This algorithm utilizes advanced neural networks to optimize task offloading in the three-tier framework. It balances the trade-offs among D2D, Device-to-Edge (D2E), and Device/Edge-to-Cloud (D2C/E2C) communications, benefiting both end users and servers. These networks leverage Deep Learning (DL) to discern patterns, evaluate potential offloading decisions, and adapt in real time to dynamic environments. We compared our proposed algorithm against other state-of-the-art methods. Through rigorous simulations, we achieved remarkable improvements across key metrics: an increase in energy efficiency by 29.8%, a 27.5% reduction in latency, and a 43.1% surge in utility.

Item: An advanced deep reinforcement learning algorithm for three-layer D2D-edge-cloud computing architecture for efficient task offloading in the Internet of Things (Elsevier Inc., 2024)
Authors: Moghaddasi, Komeil; Rajabi, Shakiba; Gharehchopogh, Farhad Soleimanian; Ghaffari, Ali
The Internet of Things (IoT) has transformed the digital landscape by interconnecting billions of devices worldwide, paving the way for smart cities, homes, and industries. With the exponential growth of IoT devices and the vast amount of data they generate, concerns have arisen regarding efficient task-offloading strategies. Traditional cloud and edge computing methods, paired with basic Machine Learning (ML) algorithms, face several challenges in this regard. In this paper, we propose a novel approach to task offloading in a Device-to-Device (D2D)-Edge-Cloud computing architecture using the Rainbow Deep Q-Network (DQN), an advanced Deep Reinforcement Learning (DRL) algorithm. This algorithm utilizes advanced neural networks to optimize task offloading in the three-tier framework. It balances the trade-offs among D2D, Device-to-Edge (D2E), and Device/Edge-to-Cloud (D2C/E2C) communications, benefiting both end users and servers. These networks leverage Deep Learning (DL) to discern patterns, evaluate potential offloading decisions, and adapt in real time to dynamic environments. We compared our proposed algorithm against other state-of-the-art methods. Through rigorous simulations, we achieved remarkable improvements across key metrics: an increase in energy efficiency by 29.8%, a 27.5% reduction in latency, and a 43.1% surge in utility. © 2024
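The two records above describe DQN-based offloading decisions among local/D2D, edge, and cloud targets. The following minimal Python sketch illustrates the general idea of an epsilon-greedy Q-network choosing an offloading target; it is not the papers' Rainbow DQN, and the state features, reward, and network size are illustrative assumptions.

    # Epsilon-greedy Q-network over three offloading targets: 0 = local/D2D, 1 = edge, 2 = cloud.
    import numpy as np

    rng = np.random.default_rng(0)
    STATE_DIM, N_ACTIONS, HIDDEN = 4, 3, 16          # [task size, deadline, battery, link quality]
    W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
    W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))

    def q_values(state):
        h = np.tanh(state @ W1)                      # tiny two-layer value network
        return h @ W2

    def choose_action(state, eps=0.1):
        if rng.random() < eps:                       # exploration
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(q_values(state)))      # exploitation

    def td_update(state, action, reward, next_state, gamma=0.95, lr=1e-2):
        # One-step temporal-difference update on the output layer only (kept deliberately simple).
        global W2
        target = reward + gamma * np.max(q_values(next_state))
        q = q_values(state)
        grad = np.zeros_like(q)
        grad[action] = q[action] - target
        h = np.tanh(state @ W1)
        W2 -= lr * np.outer(h, grad)

    state = np.array([0.6, 0.2, 0.8, 0.5])           # hypothetical normalized device state
    action = choose_action(state)
    td_update(state, action, reward=-0.3, next_state=state)
    print("chosen offloading target:", action)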
Item: An intrusion detection system on the internet of things using deep learning and multi-objective enhanced gorilla troops optimizer (Springer, 2024)
Authors: Asgharzadeh, Hossein; Ghaffari, Ali; Masdari, Mohammad; Gharehchopogh, Farhad Soleimanian
In recent years, Intrusion Detection Systems (IDSs) have performed a vital function in improving security and anomaly detection. The effectiveness of deep learning-based methods has been proven in extracting better features and achieving more accurate classification than other methods. In this paper, a feature extraction method with a convolutional neural network on the Internet of Things (IoT), called FECNNIoT, is designed and implemented to better detect anomalies on the IoT. Also, a binary multi-objective enhanced version of the Gorilla Troops Optimizer, called BMEGTO, is developed for effective feature selection. Finally, the combination of FECNNIoT, BMEGTO, and a KNN-based classification technique has led to a hybrid method called CNN-BMEGTO-KNN. In the next step, the proposed model is implemented on two benchmark datasets, NSL-KDD and TON-IoT, and tested with respect to the accuracy, precision, recall, and F1-score criteria. The proposed CNN-BMEGTO-KNN model reaches 99.99% and 99.86% accuracy on the TON-IoT and NSL-KDD datasets, respectively. In addition, the proposed BMEGTO method identifies about 27% and 25% of the features of the NSL-KDD and TON-IoT datasets, respectively, as effective features.
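The entry above combines metaheuristic feature selection with a KNN classifier. The sketch below shows the generic wrapper-style idea of scoring binary feature masks by KNN accuracy; it is not the BMEGTO algorithm, and the synthetic dataset and random mask sampling (a stand-in for the metaheuristic's population) are assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rng = np.random.default_rng(0)

    def fitness(mask):
        if not mask.any():                       # an empty mask selects nothing -> worst score
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, mask], y_tr)
        return clf.score(X_te[:, mask], y_te)

    best_mask, best_score = None, -1.0
    for _ in range(30):                          # stand-in for the metaheuristic's search loop
        mask = rng.random(X.shape[1]) < 0.5      # random binary feature mask
        score = fitness(mask)
        if score > best_score:
            best_mask, best_score = mask, score

    print(f"selected {best_mask.sum()} features, held-out accuracy {best_score:.3f}")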
Item: Application of human activity/action recognition: a review (Springer, 2025)
Authors: Sedaghati, Nazanin; Ardebili, Sondos; Ghaffari, Ali
Human activity recognition (HAR) is a crucial domain in computer science and artificial intelligence that involves the detection, classification, and prediction of human activities using sensor data from accelerometers, gyroscopes, and similar sources. This field utilizes time-series signals from sensors present in smartphones and wearable devices to extract human activities. Various types of sensors, including inertial HAR sensors, physiological sensors, location sensors, cameras, and temporal sensors, are employed in diverse environments within this domain. It finds valuable applications in areas such as smart homes, elderly care, the Internet of Things (IoT), personal care, social sciences, rehabilitation engineering, fitness, and more. With the advancement of computational power, deep learning algorithms have been recognized as effective and efficient methods for detecting and solving well-established HAR issues. In this research, a review of various deep learning algorithms is presented with a focus on distinguishing between two key aspects: activity and action. Action refers to specific, short-term movements and behaviors, while activity refers to a set of related, continuous actions over time. The reviewed articles are categorized based on the type of algorithms and applications, specifically sensor-based and vision-based. The total number of reviewed articles in this research is 80 sources, categorized into 42 references. By offering a detailed classification of relevant articles, this comprehensive review analyzes the work of the scientific community in the HAR domain using deep learning algorithms. It serves as a valuable guide for researchers and enthusiasts to gain a better understanding of the advancements and challenges within this field.

Item: The Application of Hybrid Krill Herd Artificial Hummingbird Algorithm for Scientific Workflow Scheduling in Fog Computing (Springer Singapore Pte Ltd, 2023)
Authors: Abdalrahman, Aveen Othman; Pilevarzadeh, Daniel; Ghafouri, Shafi; Ghaffari, Ali
Fog Computing (FC) provides processing and storage resources at the edge of the Internet of Things (IoT). By doing so, FC can help reduce latency and improve the reliability of IoT networks. The energy consumption of servers and computing resources is one of the factors that directly affect conservation costs in fog environments. Energy consumption can be reduced by efficacious scheduling methods so that tasks are offloaded to the best possible resources. To deal with this problem, a binary model based on the combination of the Krill Herd Algorithm (KHA) and the Artificial Hummingbird Algorithm (AHA) is introduced as Binary KHA-AHA (BAHA-KHA). KHA is used to improve AHA. Also, the BAHA-KHA local optimum problem for task scheduling in FC environments is addressed using the dynamic voltage and frequency scaling (DVFS) method. The Heterogeneous Earliest Finish Time (HEFT) method is used to determine the order of task flow execution. The goal of the BAHA-KHA model is to minimize the number of resources, the communication between dependent tasks, and energy consumption. In this paper, the FC environment is considered to address the workflow scheduling issue, reducing energy consumption and minimizing makespan on fog resources. The results were tested on five different workflows (Montage, CyberShake, LIGO, SIPHT, and Epigenomics). The evaluations show that the BAHA-KHA model has the best performance in comparison with the AHA, KHA, PSO, and GA algorithms. The BAHA-KHA model reduced the makespan by about 18% and the energy consumption by about 24% in comparison with GA.
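The workflow-scheduling entry above uses HEFT to order tasks before placing them on fog resources. The following self-contained sketch computes HEFT's standard upward rank on a toy DAG; the task graph, computation costs, and single-valued communication costs are illustrative assumptions, not one of the benchmark workflows.

    from functools import lru_cache

    # task -> average computation cost across resources
    comp = {"A": 4.0, "B": 3.0, "C": 2.5, "D": 5.0, "E": 1.0}
    # (task, successor) -> average communication cost
    comm = {("A", "B"): 2.0, ("A", "C"): 1.0, ("B", "D"): 3.0, ("C", "D"): 1.5, ("D", "E"): 2.0}
    succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

    @lru_cache(maxsize=None)
    def upward_rank(task):
        # rank_u(t) = comp(t) + max over successors s of (comm(t, s) + rank_u(s))
        if not succs[task]:
            return comp[task]
        return comp[task] + max(comm[(task, s)] + upward_rank(s) for s in succs[task])

    order = sorted(succs, key=upward_rank, reverse=True)   # HEFT schedules tasks by decreasing rank
    print({t: round(upward_rank(t), 2) for t in order})
    print("execution order:", order)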
Item: DDoS attack detection techniques in IoT networks: a survey (Springer, 2024)
Authors: Pakmehr, Amir; Aßmuth, Andreas; Taheri, Negar; Ghaffari, Ali
The Internet of Things (IoT) is a rapidly emerging technology that has become more valuable and vital in our daily lives. This technology enables connection and communication between objects and devices and allows these objects to exchange information and perform intelligent operations with each other. However, due to the scale of the network, the heterogeneity of the network, the insecurity of many of these devices, and privacy protection, it faces several challenges. In the last decade, distributed denial of service (DDoS) attacks in IoT networks have become one of the growing challenges that require serious attention and investigation. DDoS attacks take advantage of the limited resources available on IoT devices, which disrupts the functionality of IoT-connected applications and services. This article comprehensively examines the effects of DDoS attacks in the context of the IoT, which cause significant harm to existing systems. Also, this paper investigates several solutions to identify and deal with this type of attack. Finally, this study suggests a broad line of research in the field of IoT security, dedicated to examining how to adapt to current challenges and predicting future trends. © The Author(s) 2024.

Item: Detecting and mitigating security anomalies in software-defined networking (SDN) using gradient-boosted trees and floodlight controller characteristics (Elsevier B.V., 2025)
Authors: Jafarian, Tohid; Ghaffari, Ali; Seyfollahi, Ali; Arasteh, Bahman
Cutting-edge and innovative software solutions are provided to address network security, network virtualization, and other network-related challenges in highly congested SDN-powered networks. However, these networks are susceptible to the same security issues as traditional networks. For instance, SDNs are significantly vulnerable to distributed denial of service (DDoS) attacks. Previous studies have suggested various anomaly detection techniques based on machine learning, statistical analysis, or entropy measurement to combat DDoS attacks and other security threats in SDN networks. However, these techniques face challenges such as collecting sufficient and relevant flow data, extracting and selecting the most informative features, and choosing the best model for identifying and preventing anomalies. This paper introduces a new and advanced multi-stage modular approach for anomaly detection and mitigation in SDN networks. The approach consists of four modules: data collection, feature selection, anomaly classification, and anomaly response. The approach utilizes the NetFlow standard to gather data and generate a dataset, employs the Information Gain Ratio (IGR) to select the most valuable features, uses gradient-boosted trees (GBT), and leverages Representational State Transfer Application Programming Interfaces (REST APIs) and the Static Entry Pusher within the Floodlight controller to construct an exceptionally efficient structure for detecting and mitigating anomalies in SDN designs. We conducted experiments on a synthetic dataset containing 15 types of anomalies, such as DDoS attacks, port scans, and worms. We compared our model with four existing techniques: SVM, KNN, DT, and RF. Experimental results demonstrate that our model outperforms the existing techniques in terms of enhancing Accuracy (AC) and Detection Rate (DR) while simultaneously reducing Classification Error (CE) and False Alarm Rate (FAR), achieving 98.80%, 97.44%, 1.2%, and 0.38%, respectively.
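The SDN anomaly-detection entry above ranks features by Information Gain Ratio and then classifies with gradient-boosted trees. The sketch below shows that generic two-step pipeline on synthetic data; the gain-ratio computation over quantile-binned columns, the feature/bin counts, and the dataset are assumptions, not the paper's NetFlow setup.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def gain_ratio(feature, y, bins=10):
        # discretize the column, then divide information gain by the feature's intrinsic value
        binned = np.digitize(feature, np.quantile(feature, np.linspace(0, 1, bins + 1)[1:-1]))
        h_y, h_f = entropy(y), entropy(binned)
        cond = sum((binned == b).mean() * entropy(y[binned == b]) for b in np.unique(binned))
        return 0.0 if h_f == 0 else (h_y - cond) / h_f

    X, y = make_classification(n_samples=600, n_features=25, n_informative=8, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

    scores = np.array([gain_ratio(X_tr[:, j], y_tr) for j in range(X_tr.shape[1])])
    top = np.argsort(scores)[::-1][:10]                  # keep the 10 highest-ranked features

    gbt = GradientBoostingClassifier(random_state=1).fit(X_tr[:, top], y_tr)
    print("held-out accuracy with IGR-selected features:", round(gbt.score(X_te[:, top], y_te), 3))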
Item: Detecting and mitigating security anomalies in software-defined networking (SDN) using gradient-boosted trees and floodlight controller characteristics (Elsevier, 2024)
Authors: Jafarian, Tohid; Ghaffari, Ali; Seyfollahi, Ali; Arasteh, Bahman
Cutting-edge and innovative software solutions are provided to address network security, network virtualization, and other network-related challenges in highly congested SDN-powered networks. However, these networks are susceptible to the same security issues as traditional networks. For instance, SDNs are significantly vulnerable to distributed denial of service (DDoS) attacks. Previous studies have suggested various anomaly detection techniques based on machine learning, statistical analysis, or entropy measurement to combat DDoS attacks and other security threats in SDN networks. However, these techniques face challenges such as collecting sufficient and relevant flow data, extracting and selecting the most informative features, and choosing the best model for identifying and preventing anomalies. This paper introduces a new and advanced multi-stage modular approach for anomaly detection and mitigation in SDN networks. The approach consists of four modules: data collection, feature selection, anomaly classification, and anomaly response. The approach utilizes the NetFlow standard to gather data and generate a dataset, employs the Information Gain Ratio (IGR) to select the most valuable features, uses gradient-boosted trees (GBT), and leverages Representational State Transfer Application Programming Interfaces (REST APIs) and the Static Entry Pusher within the Floodlight controller to construct an exceptionally efficient structure for detecting and mitigating anomalies in SDN designs. We conducted experiments on a synthetic dataset containing 15 types of anomalies, such as DDoS attacks, port scans, and worms. We compared our model with four existing techniques: SVM, KNN, DT, and RF. Experimental results demonstrate that our model outperforms the existing techniques in terms of enhancing Accuracy (AC) and Detection Rate (DR) while simultaneously reducing Classification Error (CE) and False Alarm Rate (FAR), achieving 98.80%, 97.44%, 1.2%, and 0.38%, respectively.

Item: Effective test-data generation using the modified black widow optimization algorithm (Springer, 2024)
Authors: Arasteh, Bahman; Ghaffari, Ali; Khadir, Milad; Torkamanian-Afshar, Mahsa; Pirahesh, Sajad
Software testing is one of the software development activities and is used to identify and remove software bugs. Most small-sized projects may be manually tested to find and fix any bugs. In large and real-world software products, manual testing is a time- and money-consuming process. Finding a minimal subset of input data (as test data) in the shortest amount of time that achieves maximal branch coverage is an NP-complete problem in this field. Different heuristic-based methods have been used to generate test data. In this paper, the black widow optimization algorithm is used to address and solve the test-data generation problem. The branch coverage criterion was used as the fitness function to optimize the generated data. The experimental results obtained on the standard benchmarks show that the proposed method generates more effective test data than the simulated annealing, genetic algorithm, ant colony optimization, particle swarm optimization, and artificial bee colony algorithms. According to the results, with 99.98% average coverage, a 99.96% success rate, and 9.36 required iterations, the method was able to outperform the other methods.
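The test-data generation entry above uses branch coverage as the fitness function for a metaheuristic search. The toy sketch below scores candidate input suites by the fraction of instrumented branches they cover and improves them with a simple (1+1) mutation loop; the function under test, mutation step, and search budget are assumptions, and the paper uses a black widow optimization algorithm rather than this simple search.

    import random

    def function_under_test(a, b, covered):
        if a > 10:
            covered.add("a>10")
        else:
            covered.add("a<=10")
        if b % 2 == 0:
            covered.add("b even")
        else:
            covered.add("b odd")
        if a > 10 and b % 2 == 0:
            covered.add("both")

    ALL_BRANCHES = {"a>10", "a<=10", "b even", "b odd", "both"}

    def coverage(suite):
        covered = set()
        for a, b in suite:
            function_under_test(a, b, covered)
        return len(covered) / len(ALL_BRANCHES)        # branch-coverage fitness in [0, 1]

    random.seed(0)
    suite = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(2)]
    for _ in range(200):                                # (1+1) search loop
        i = random.randrange(len(suite))
        a, b = suite[i]
        mutant = suite[:i] + [(a + random.randint(-5, 5), b + random.randint(-5, 5))] + suite[i + 1:]
        if coverage(mutant) >= coverage(suite):
            suite = mutant

    print("test suite:", suite, "coverage:", coverage(suite))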
Item: Efficient software mutation test by clustering the single-line redundant mutants (Emerald Group Publishing Ltd, 2024)
Authors: Arasteh, Bahman; Ghaffari, Ali
Purpose: Reducing the number of generated mutants by clustering redundant mutants, reducing the execution time by decreasing the number of generated mutants, and reducing the cost of mutation testing are the main goals of this study.
Design/methodology/approach: In this study, a method is suggested to identify and prune the redundant mutants. In this method, first, the program source code is analyzed by the developed parser to filter out the effectless instructions; then the remaining instructions are mutated by the standard mutation operators. The single-line mutants are partially executed by the developed instruction evaluator. Next, a clustering method is used to group the single-line mutants with the same results. There is only one complete run per cluster.
Findings: The results of experiments on the Java benchmarks indicate that the proposed method causes a 53.51 per cent reduction in the number of mutants and a 57.64 per cent time reduction compared to similar experiments in the MuJava and MuClipse tools.
Originality/value: Developing a classifier that takes the source code of the program and classifies the program's instructions into effective and effectless classes using a dependency graph; filtering out the effectless instructions reduces the total number of mutants generated. Developing and implementing an instruction parser and instruction-level mutant generator for Java programs; the mutant generator takes an instruction in the original program as a string and generates its single-line mutants based on the standard mutation operators in MuJava. Developing a stack-based evaluator that takes an instruction (original or mutant) and the test data and evaluates its result without executing the whole program.
Competing interests: The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. The authors have no relevant financial or non-financial conflict of interest.
Authors' contribution statement: The proposed method was developed and discretized by the authors. The designed algorithm was implemented and coded by the authors. The implemented method was adapted and benchmarked by the authors. The data and results analysis were performed by the authors. The manuscript of the paper was written by the authors.
Ethical and informed consent for data used: The data used in this research does not belong to any other person or third party and was prepared and generated by the researchers themselves during the research. The data of this research will be accessible to other researchers.
Data availability access: The data relating to the current study is available on Google Drive and can be freely accessed via the following link: https://drive.google.com/drive/folders/1d69XSBZ-ioInjPw9L4qp-3BBe2jlkkfv?usp=share_link
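The mutation-testing entry above clusters single-line mutants that produce identical results under partial evaluation, so only one representative per cluster needs a complete run. The sketch below illustrates that grouping step on a single toy expression; the mutation operators, expression, and test inputs are assumptions, not the paper's Java tooling.

    from collections import defaultdict

    # Original instruction: result = a + b * 2; each mutant replaces one operator.
    mutants = {
        "a + b * 2": lambda a, b: a + b * 2,   # original, kept for reference
        "a - b * 2": lambda a, b: a - b * 2,
        "a * b * 2": lambda a, b: a * b * 2,
        "a + b / 2": lambda a, b: a + b / 2,
        "a + b + 2": lambda a, b: a + b + 2,
    }
    test_data = [(1, 2), (4, 2)]

    clusters = defaultdict(list)
    for label, fn in mutants.items():
        signature = tuple(fn(a, b) for a, b in test_data)   # partial-evaluation result vector
        clusters[signature].append(label)

    for signature, members in clusters.items():
        print(f"cluster {signature}: run {members[0]!r} fully, skip {members[1:]}")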
Item: Intrusion detection in internet of things using improved binary golden jackal optimization algorithm and LSTM (Springer, 2023)
Authors: Hanafi, Amir Vafid; Ghaffari, Ali; Rezaei, Hesam; Valipour, Aida; Arasteh, Bahman
Internet of Things (IoT) technology has gained a reputation in recent years due to its ease of use and adaptability. Due to the amount of sensitive and significant data exchanged over the global Internet, intrusion detection is a challenging task in the vast IoT network. A variety of hostile behaviors and attacks are now detected by intrusion detection systems (IDSs), which are difficult or impossible for a single method to identify. An Improved Binary Golden Jackal Optimization (IBGJO) algorithm and a Long Short-Term Memory (LSTM) network are used in this paper to develop a new IDS model for IoT networks. First, the GJO is improved by opposition-based learning (OBL). A binary mode of the improved GJO algorithm is used to select features from IDS data in order to determine the best feature subset. IBGJO uses the OBL strategy to improve the performance of the GJO and prevents the algorithm from getting trapped in local optima by controlling the initial population. LSTM is used in the IBGJO-LSTM model to classify samples. Although high detection rates are achieved by machine learning techniques, the efficiency of these methods decreases as the size of the dataset increases. To overcome these problems, deep learning methods are more suitable for distinguishing samples from huge amounts of data. The proposed model was assessed using the NSL-KDD and CICIDS2017 datasets. On CICIDS2017 and NSL-KDD, the proposed model was 98.21% accurate. The results show that the recognition accuracy of the proposed model is higher than that of the BGJO-LSTM, Binary Whale Optimization Algorithm-LSTM (BWOA-LSTM), and Binary Sine Cosine Algorithm-LSTM (BSCA-LSTM) models. This is likely because the binary mode of the improved GJO algorithm is able to more effectively select the most relevant features from the IDS data and the LSTM is able to more accurately classify the samples. Also, the proposed model has significantly higher accuracy than Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naive Bayes (NB).
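The entry above improves the golden jackal optimizer with opposition-based learning (OBL) applied to the initial population. The sketch below shows the generic OBL trick: for each random candidate x in [lb, ub], also evaluate its opposite lb + ub - x and keep the better of the two. The sphere objective and population size are illustrative assumptions, not the IBGJO setup.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM, POP, LB, UB = 5, 8, -10.0, 10.0

    def objective(x):
        return float(np.sum(x ** 2))           # toy minimization target (sphere function)

    population = rng.uniform(LB, UB, size=(POP, DIM))
    opposites = LB + UB - population            # element-wise opposite points

    initial = []
    for cand, opp in zip(population, opposites):
        initial.append(cand if objective(cand) <= objective(opp) else opp)
    initial = np.array(initial)

    print("best initial fitness with OBL:", round(min(objective(x) for x in initial), 3))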
Item: An optimal task scheduling method in IoT-Fog-Cloud network using multi-objective moth-flame algorithm (Springer, 2023)
Authors: Salehnia, Taybeh; Seyfollahi, Ali; Raziani, Saeid; Noori, Azad; Ghaffari, Ali; Alsoud, Anas Ratib; Abualigah, Laith
Nowadays, cloud and fog computing have been leveraged to enhance Internet of Things (IoT) performance. The outstanding potential of cloud platforms accelerates the processing and storage of the aggregated big data from IoT equipment. Emerging fog-based schemes can improve the quality of service delivered to IoT applications and mitigate excessive delays and security challenges. Also, since energy consumption directly causes CO2 emissions from fog and cloud nodes, an efficient task scheduling method reduces energy consumption. In this regard, the growing need for an efficient task scheduling mechanism that considers the optimal management of IoT resources is increasingly felt. IoT task scheduling based on fog-cloud computing plays a crucial role in responding to users' requests, and optimal task scheduling can improve system performance. Therefore, this study uses an IoT task request scheduling method that assigns requests to resources with the Multi-Objective Moth-Flame Optimization (MOMFO) algorithm. It enhances the quality of IoT services based on fog-cloud computing by reducing task completion time, system throughput time, and energy consumption. If energy consumption is diminished, the percentage of CO2 emissions is also reduced. Then, the proposed scheduling method for solving the task scheduling problem is evaluated using the datasets. A comparison between the proposed scheme and Particle Swarm Optimization (PSO), the Firefly Algorithm (FA), the Salp Swarm Algorithm (SSA), the Harris Hawks Optimizer (HHO), and the Artificial Bee Colony (ABC) is performed to assess the performance. According to the experiments, the proposed solution reduces the completion time of IoT tasks and the throughput time, thus cutting down the delay due to task processing, energy consumption, and CO2 emissions, and increasing the system's performance rate.
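The scheduling entry above optimizes completion time and energy together. The small sketch below evaluates a task-to-node assignment on those two objectives and scores it with a weighted sum; node speeds, power figures, task sizes, and the scalarization weights are made-up assumptions, and a multi-objective optimizer such as MOMFO would search the assignment space rather than enumerate it.

    import itertools

    tasks = [4.0, 2.0, 6.0, 3.0]                         # task sizes (e.g., giga-instructions)
    speed = {"fog1": 2.0, "fog2": 1.5, "cloud": 5.0}     # node processing rates
    power = {"fog1": 10.0, "fog2": 8.0, "cloud": 40.0}   # node power draw while busy

    def evaluate(assignment):
        busy = {n: 0.0 for n in speed}
        for size, node in zip(tasks, assignment):
            busy[node] += size / speed[node]             # execution time added to that node
        makespan = max(busy.values())
        energy = sum(power[n] * t for n, t in busy.items())
        return makespan, energy

    def weighted_fitness(assignment, w_time=0.6, w_energy=0.4):
        makespan, energy = evaluate(assignment)
        return w_time * makespan + w_energy * energy / 100.0   # crude normalization

    best = min(itertools.product(speed, repeat=len(tasks)), key=weighted_fitness)
    print("best assignment:", best, "-> (makespan, energy) =", evaluate(best))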
Item: QoS-based routing protocol and load balancing in wireless sensor networks using the markov model and the artificial bee colony algorithm (Springer, 2023)
Authors: Sefati, Seyed Salar; Abdi, Mehrdad; Ghaffari, Ali
Due to resource constraints in wireless sensor networks (WSNs), energy consumption and network lifetime are considered significant challenges, because sensors have a tiny battery and cannot be recharged. In WSNs, collected data is usually transferred to the Base Station (BS) directly or hop-by-hop. Therefore, load balancing and routing are among the main issues in WSNs. This paper proposes a new routing scheme with load-balancing capability using the Markov Model (MM) and the Artificial Bee Colony (ABC) algorithm. The LEACH algorithm is used to maintain load balancing between Cluster Heads (CHs). Then the Markov Model and Artificial Bee Colony (MMABC) algorithm is used to find the best candidate nodes of each cluster to be turned into a CH. The simulation results in MATLAB demonstrate that the proposed method surpasses the compared methods in terms of energy efficiency, number of alive nodes, and the number of packets delivered to the BS and CHs.

Item: Resource allocation in 5G cloud-RAN using deep reinforcement learning algorithms: A review (Wiley, 2024)
Authors: Khani, Mohsen; Jamali, Shahram; Sohrabi, Mohammad Karim; Sadr, Mohammad Mohsen; Ghaffari, Ali
This paper reviews recent research on resource allocation in 5G cloud-based radio access networks (C-RAN) using deep reinforcement learning (DRL) algorithms. It explores the potential of DRL for learning complex decision-making policies without human intervention. The paper first introduces the C-RAN architecture and resource allocation concepts, followed by an overview of DRL algorithms applied to C-RAN. It discusses the challenges and potential solutions in applying DRL to C-RAN resource allocation, including scalability, convergence, and fairness. The review concludes by highlighting open research directions for future investigation. By providing insights into the state-of-the-art techniques for resource allocation in 5G C-RAN using DRL, this paper emphasizes their potential impact on advancing 5G network technology.

Item: RM-RPL: reliable mobility management framework for RPL-based IoT systems (Springer, 2023)
Authors: Seyfollahi, Ali; Mainuddin, Md; Taami, Tania; Ghaffari, Ali
This paper presents the Reliable Mobility Management for RPL (RM-RPL) protocol, specifically developed to overcome the limitations of the Routing Protocol for Low-Power and Lossy Networks (RPL) in mobile IoT environments. RM-RPL incorporates a sophisticated mechanism to prevent the formation of loops, enabling mobile nodes to operate as both routers and parents within the network. It introduces a novel objective function that optimizes the selection of parent nodes and includes a mechanism to adjust the protocol's behavior when nodes are stationary. Furthermore, an algorithm is devised to properly acknowledge critical packets. The proposed model provides superior support for mobility, efficient routing, and dependable data transmission, rendering it highly suitable for diverse IoT applications. Through comprehensive evaluations, RM-RPL demonstrates exceptional performance in challenging scenarios characterized by large-scale networks, high density, and dynamic conditions. Comparative analysis reveals that RM-RPL significantly enhances the packet delivery ratio and exhibits commendable power consumption, end-to-end delay, and handover delay.
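The RM-RPL entry above centers on an objective function for parent selection in RPL. The sketch below shows the generic pattern of scoring candidate parents from a few link and node metrics and picking the lowest-cost one; the metrics, weights, and candidate values are illustrative assumptions, not RM-RPL's actual objective function.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        node_id: str
        etx: float        # expected transmission count of the link (lower is better)
        rank: int         # candidate's advertised distance from the root (lower is better)
        energy: float     # residual energy in [0, 1] (higher is better)

    def cost(c, w_etx=0.5, w_rank=0.3, w_energy=0.2):
        return w_etx * c.etx + w_rank * (c.rank / 256) + w_energy * (1.0 - c.energy)

    candidates = [
        Candidate("P1", etx=1.2, rank=256, energy=0.9),
        Candidate("P2", etx=1.0, rank=512, energy=0.4),
        Candidate("P3", etx=2.5, rank=256, energy=0.8),
    ]
    preferred = min(candidates, key=cost)
    print("preferred parent:", preferred.node_id, "cost:", round(cost(preferred), 3))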
Item: SDN-IoT: SDN-based efficient clustering scheme for IoT using improved Sailfish optimization algorithm (Peerj Inc, 2023)
Authors: Mohammadi, Ramin; Akleylek, Sedat; Ghaffari, Ali
The Internet of Things (IoT) includes billions of different devices and various applications that generate a huge amount of data. Due to inherent resource limitations, reliable and robust data transmission for a huge number of heterogeneous devices is one of the most critical issues for the IoT. Therefore, cluster-based data transmission is appropriate for IoT applications, as it promotes network lifetime and scalability. On the other hand, the Software Defined Network (SDN) architecture improves flexibility and makes the IoT respond appropriately to heterogeneity. This article proposes an SDN-based efficient clustering scheme for IoT using the Improved Sailfish Optimization (ISFO) algorithm. In the proposed model, clustering of IoT devices is performed using the ISFO model, and the model is installed on the SDN controller to manage the Cluster Head (CH) nodes of the IoT devices. The performance evaluation of the proposed model was performed for two scenarios with 150 and 300 nodes. The results show that for 150 nodes, the ISFO model reduced energy consumption by about 21.42% and 17.28% compared with LEACH and LEACH-E, respectively. For 300 nodes, ISFO reduced energy consumption by about 37.84% and 27.23% compared with LEACH and LEACH-E, respectively.
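The clustering entries above (MMABC and ISFO) both reduce to choosing good cluster heads. The compact sketch below illustrates the generic energy-aware idea of scoring every node by residual energy and centrality and promoting the top-k scorers to cluster heads; node coordinates, energies, weights, and k are toy assumptions, not either paper's algorithm.

    import math
    import random

    random.seed(0)
    nodes = [{"id": i,
              "x": random.uniform(0, 100), "y": random.uniform(0, 100),
              "energy": random.uniform(0.2, 1.0)} for i in range(20)]

    def avg_distance(node, others):
        return sum(math.hypot(node["x"] - o["x"], node["y"] - o["y"]) for o in others) / len(others)

    def ch_score(node, w_energy=0.7, w_dist=0.3):
        others = [o for o in nodes if o["id"] != node["id"]]
        # high residual energy and a central position make a good cluster head
        return w_energy * node["energy"] - w_dist * avg_distance(node, others) / 100.0

    k = 4
    cluster_heads = sorted(nodes, key=ch_score, reverse=True)[:k]
    print("cluster heads:", [n["id"] for n in cluster_heads])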
Item: Task and resource allocation in the internet of things based on an improved version of the moth-flame optimization algorithm (Springer, 2024)
Authors: Nematollahi, Masoud; Ghaffari, Ali; Mirzaei, A.
Internet of Things (IoT) technology is used to develop a wide range of applications and services, including intelligent healthcare systems and virtual reality applications. Low processing power limits IoT devices' capabilities. It is common practice to use cloud services for operations that would otherwise overload a user's device. High latency, high traffic, and high energy consumption remain, though. Given the above concerns, Fog Computing (FC) should be applied in the IoT to speed up time-sensitive data processing and management. In this study, a novel architecture for offloading jobs and allocating resources in the IoT is presented. Sensors, controllers, and FC servers are all part of the proposed system. The second layer uses the subtask pool approach to offload work and the Moth-Flame Optimization (MFO) algorithm combined with Opposition-based Learning (OBL) to distribute resources; this combination is known as OBLMFO. A stack cache approach is used to complete resource allocation in the second layer and avoid system load imbalance. In addition, the second layer relies on the blockchain to guarantee the accuracy of transaction data. In other words, the proposed architecture utilizes blockchain advantages to optimize resource distribution in the IoT. The evaluation of the OBLMFO model was carried out in the Python 3.9 environment with a large variety of distinct jobs. The results show that the OBLMFO model reduced the delay factor by 12.18% and the energy consumed by 6.22%.

Item: Task offloading in Internet of Things based on the improved multi-objective aquila optimizer (Springer London Ltd, 2024)
Authors: Nematollahi, Masoud; Ghaffari, Ali; Mirzaei, Abbas
The Internet of Things (IoT) is a network of tens of billions of physical devices that are all connected to each other. These devices often have sensors or actuators, small microprocessors, and ways to communicate. With the expansion of the IoT, the number of portable and mobile devices has increased significantly. Due to resource constraints, IoT devices are unable to complete tasks in full. To overcome this challenge, IoT devices must transfer tasks created in the IoT environment to cloud or fog servers. Fog computing (FC) is a computing paradigm that bridges the gap between the cloud and IoT devices and has lower latency compared to cloud computing. An algorithm for task offloading should have smart ways to make the best use of FC resources and cut down on latency. In this paper, an improved multi-objective Aquila optimizer (IMOAO) equipped with a Pareto front is proposed for task offloading from IoT devices to fog nodes with the aim of reducing the response time. To improve the MOAO algorithm, opposition-based learning (OBL) is used to diversify the population and discover optimal solutions. The IMOAO algorithm has been evaluated with respect to the number of tasks and the number of fog nodes in order to reduce the response time. The results show that the average response time and failure rate obtained by the IMOAO algorithm are lower compared to particle swarm optimization (PSO) and the firefly algorithm (FA). Also, the comparisons show that the IMOAO model has a lower response time compared to multi-objective bacterial foraging optimization (MO-BFO), ant colony optimization (ACO), particle swarm optimization (PSO), and FA.
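The last entry relies on a Pareto front over competing objectives. The short sketch below shows the underlying mechanism: a dominance test between candidate offloading plans scored on two minimization objectives (e.g., response time and energy) and extraction of the non-dominated set. The candidate values are made-up, not results from the paper.

    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    candidates = [(12.0, 30.0), (10.0, 45.0), (15.0, 25.0), (11.0, 31.0), (18.0, 40.0)]
    print("Pareto-optimal (time, energy) trade-offs:", pareto_front(candidates))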