
Browsing by Author "Arasteh, Bahman"

Now showing 1 - 20 of 56
  • A cost-effective and machine-learning-based method to identify and cluster redundant mutants in software mutation testing (Apr, 10.1007/s11227-024-06107-8, 2024)
    (Springer, 2024) Arasteh, Bahman; Ghaffari, Ali
  • A new accurate and fast convergence cuckoo search algorithm for solving constrained engineering optimization problems
    (IOS Press BV, 2024) Abdollahi, Mahdi; Bouyer, Asgarali; Arasteh, Bahman
    In recent years, the Cuckoo Optimization Algorithm (COA) has been widely used to solve various optimization problems due to its simplicity, efficacy, and capability to avoid getting trapped in local optima. However, COA has some limitations such as low convergence when it comes to solving constrained optimization problems with many constraints. This study proposes a new modified and adapted version of the Cuckoo optimization algorithm, referred to as MCOA, that overcomes the challenge of solving constrained optimization problems. The proposed adapted version introduces a new coefficient that reduces the egg-laying radius, thereby enabling faster convergence to the optimal solution. Unlike previous methods, the new coefficient does not require any adjustment during the iterative process, as the radius automatically decreases along the iterations. To handle constraints, we employ the Penalty Method, which allows us to incorporate constraints into the optimization problem without altering its formulation. To evaluate the performance of the proposed MCOA, we conduct experiments on five well-known case studies. Experimental results demonstrate that MCOA outperforms COA and other state-of-the-art optimization algorithms in terms of both efficiency and robustness. Furthermore, MCOA can reliably find the global optimal solution for all the tested problems within a reasonable iteration number. © 2024 – IOS Press. All rights reserved.
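The two ideas this abstract describes, handling constraints with the Penalty Method and letting the egg-laying radius shrink automatically with the iteration count, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' MCOA implementation; all function names here are invented for the example.

```python
import random

def penalized(f, constraints, x, penalty_coef=1e6):
    """Penalty method: add a large cost for every violated
    constraint, each written in the form g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + penalty_coef * violation

def shrinking_radius_search(f, constraints, x0, radius0=1.0, iters=200, seed=1):
    """Random local search whose sampling radius decays with the
    iteration counter, so no manual radius adjustment is needed."""
    rng = random.Random(seed)
    best = list(x0)
    best_cost = penalized(f, constraints, best)
    for t in range(1, iters + 1):
        radius = radius0 / t  # shrinks automatically along the iterations
        candidate = [xi + rng.uniform(-radius, radius) for xi in best]
        cost = penalized(f, constraints, candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

# Minimize (x - 2)^2 subject to x >= 1, written as 1 - x <= 0.
solution, cost = shrinking_radius_search(lambda x: (x[0] - 2) ** 2,
                                         [lambda x: 1 - x[0]], [0.0])
```

Because the penalty term dwarfs the objective for infeasible points, the search is pushed into the feasible region first and only then refines toward the optimum, while the decaying radius needs no tuning during the run, which is the property the abstract highlights.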
  • A software defect prediction method using binary gray wolf optimizer and machine learning algorithms
    (Pergamon-Elsevier Science, 2024) Wang, Hao; Arasteh, Bahman; Arasteh, Keyvan; Gharehchopogh, Farhad Soleimanian; Rouhi, Alireza
    Context: Software defect prediction means finding defect-prone modules before the testing process which will reduce testing cost and time. Machine learning methods can provide valuable models for developers to classify software faulty modules. Problem: The inherent problem of the classification is the large volume of the training dataset's features, which reduces the accuracy and precision of the classification results. The selection of the effective features of the training dataset for classification is an NP-hard problem that can be solved using heuristic algorithms. Method: In this study, a binary version of the Gray Wolf optimizer (bGWO) was developed to select the most effective features of the training dataset. By selecting the most influential features in the classification, the precision and accuracy of the software module classifiers can be increased. Contribution: Developing a binary version of the gray wolf optimization algorithm to optimally select the effective features and creating an effective defect predictor are the main contributions of this study. To evaluate the effectiveness of the proposed method, five real-world and standard datasets have been used for the training and testing stages of the classifier. Results: The results indicate that among the 21 features of the train datasets, the basic complexity, sum of operators and operands, lines of codes, number of lines containing code and comments, and sum of operands have the greatest effect in predicting software defects. In this research, by combining the bGWO method and machine learning algorithms, accuracy, precision, recall, and F1 criteria have been considerably increased.
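The core of a binary gray wolf optimizer for feature selection is (a) a transfer function that turns continuous wolf positions into 0/1 feature masks and (b) a wrapper objective that trades classification error against subset size. The sketch below shows only those two ingredients, with invented names and a sigmoid transfer function as commonly used in binary GWO variants; it is not the paper's bGWO code.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binarize(position, rng):
    """Map a continuous wolf position onto a 0/1 feature mask with a
    sigmoid transfer function, as binary GWO variants commonly do."""
    return [1 if rng.random() < sigmoid(v) else 0 for v in position]

def fitness(mask, error_rate, alpha=0.99):
    """Wrapper-style objective: trade the classifier's error rate off
    against the fraction of features kept, favouring small subsets."""
    selected = sum(mask)
    if selected == 0:
        return float("inf")  # an empty subset cannot train a classifier
    return alpha * error_rate + (1 - alpha) * selected / len(mask)

mask = binarize([0.0, 10.0, -10.0], random.Random(0))
```

In a full optimizer, `error_rate` would come from training a classifier on the features selected by `mask`; the small `1 - alpha` weight is what biases the search toward the compact subsets the abstract reports.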
  • A survey of beluga whale optimization and its variants: Statistical analysis, advances, and structural reviewing
    (Elsevier Ireland ltd, 2025) Lee, Sang-Woong; Haider, Amir; Rahmani, Amir Masoud; Arasteh, Bahman; Gharehchopogh, Farhad Soleimanian; Tang, Shengda; Liu, Zhe; Aurangzeb, Khursheed; Hosseinzadeh, Mehdi
    Optimization, as a fundamental pillar in engineering, computer science, economics, and many other fields, plays a decisive role in improving the performance of systems and achieving desired goals. Optimization problems involve many variables, various constraints, and nonlinear objective functions. Among the challenges of complex optimization problems is the extensive search space with local optima that prevents reaching the global optimal solution. Therefore, intelligent and collective methods are needed to solve problems, such as searching for large problem spaces and identifying near-optimal solutions. Metaheuristic algorithms are a successful method for solving complex optimization problems. Usually, metaheuristic algorithms, inspired by natural and social phenomena, try to find optimal or near-optimal solutions by using random searches and intelligent explorations in the problem space. Beluga Whale Optimization (BWO) is one of the metaheuristic algorithms for solving optimization problems that has attracted the attention of researchers in recent years. The BWO algorithm tries to optimize the search space and achieve optimal solutions by simulating the collective behavior of whales. A study and review of published articles on the BWO algorithm show that this algorithm has been used in various fields, including optimization of mathematical functions, engineering problems, and even problems related to artificial intelligence. In this article, the BWO algorithm is classified according to four categories (combination, improvement, variants, and optimization). An analysis of 151 papers shows that the BWO algorithm has the highest percentage (49%) in the improvement field. The combination, variants, and optimization fields comprise 12%, 7%, and 32%, respectively.
  • Advances in Manta Ray Foraging Optimization: A Comprehensive Survey
    (Springer Singapore Pte Ltd, 2024) Gharehchopogh, Farhad Soleimanian; Ghafouri, Shafi; Namazi, Mohammad; Arasteh, Bahman
    This paper comprehensively analyzes the Manta Ray Foraging Optimization (MRFO) algorithm and its integration into diverse academic fields. Introduced in 2020, the MRFO stands as a novel metaheuristic algorithm, drawing inspiration from manta rays' unique foraging behaviors-specifically cyclone, chain, and somersault foraging. These biologically inspired strategies allow for effective solutions to intricate physical challenges. With its potent exploitation and exploration capabilities, MRFO has emerged as a promising solution for complex optimization problems. Its utility and benefits have found traction in numerous academic sectors. Since its inception in 2020, a plethora of MRFO-based research has been featured in esteemed international journals such as IEEE, Wiley, Elsevier, Springer, MDPI, Hindawi, and Taylor & Francis, as well as at international conference proceedings. This paper consolidates the available literature on MRFO applications, covering various adaptations like hybridized, improved, and other MRFO variants, alongside optimization challenges. Research trends indicate that 12%, 31%, 8%, and 49% of MRFO studies are distributed across these four categories respectively.
  • A bioinspired discrete heuristic algorithm to generate the effective structural model of a program source code
    (Elsevier, 2023) Arasteh, Bahman; Sadegi, Razieh; Arasteh, Keyvan; Gunes, Peri; Kiani, Farzad; Torkamanian-Afshar, Mahsa
    When the source code of a software is the only product available, program understanding has a substantial influence on software maintenance costs. The main goal in code comprehension is to extract information that is used in the software maintenance stage. Generating the structural model from the source code helps to alleviate the software maintenance cost. Software module clustering is thought to be a viable reverse engineering approach for building structural design models from source code. Finding the optimal clustering model is an NP-complete problem. The primary goals of this study are to minimize the number of connections between created clusters, enhance internal connections inside clusters, and enhance clustering quality. The previous approaches' main flaws were their poor success rates, instability, and inadequate modularization quality. The Olympiad optimization algorithm was introduced in this paper as a novel population-based and discrete heuristic algorithm for solving the software module clustering problem. This algorithm was inspired by the competition of a group of students to increase their knowledge and prepare for an Olympiad exam. The suggested algorithm employs a divide-and-conquer strategy, as well as local and global search methodologies. The effectiveness of the suggested Olympiad algorithm to solve the module clustering problem was evaluated using ten real-world and standard software benchmarks. According to the experimental results, on average, the modularization quality of the generated clustered models for the ten benchmarks is about 3.94 with 0.067 standard deviations. The proposed algorithm is superior to the prior algorithms in terms of modularization quality, convergence, and stability of results. Furthermore, the results of the experiments indicate that the proposed algorithm can be used to solve other discrete optimization problems efficiently. (c) 2023 The Author(s). Published by Elsevier B.V. 
on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
  • A Bioinspired Test Generation Method Using Discretized and Modified Bat Optimization Algorithm
    (Mdpi, 2024) Arasteh, Bahman; Arasteh, Keyvan; Kiani, Farzad; Sefati, Seyed Salar; Fratu, Octavian; Halunga, Simona; Tirkolaee, Erfan Babaee
    The process of software development is incomplete without software testing. Software testing expenses account for almost half of all development expenses. The automation of the testing process is seen to be a technique for reducing the cost of software testing. An NP-complete optimization challenge is to generate the test data with the highest branch coverage in the shortest time. The primary goal of this research is to provide test data that covers all branches of a software unit. Increasing the convergence speed, the success rate, and the stability of the outcomes are other goals of this study. An efficient bioinspired technique is suggested in this study to automatically generate test data utilizing the discretized Bat Optimization Algorithm (BOA). Modifying and discretizing the BOA and adapting it to the test generation problem are the main contributions of this study. In the first stage of the proposed method, the source code of the input program is statistically analyzed to identify the branches and their predicates. Then, the developed discretized BOA iteratively generates effective test data. The fitness function was developed based on the program's branch coverage. The proposed method was implemented along with the previous one. The experiments' results indicated that the suggested method could generate test data with about 99.95% branch coverage with a limited amount of time (16 times lower than the time of similar algorithms); its success rate was 99.85% and the average number of required iterations to cover all branches is 4.70. Higher coverage, higher speed, and higher stability make the proposed method suitable as an efficient test generation method for real-world large software.
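The fitness function the abstract mentions, developed from the program's branch coverage, can be illustrated with a toy instrumented unit under test. Everything below is a hypothetical example (the program, the branch ids, and the function names are invented); it only shows the kind of objective a discretized search would maximize.

```python
def run_and_trace(test_input):
    """A toy unit under test, instrumented by hand: running an input
    returns the set of branch ids it covers (hypothetical program)."""
    covered = set()
    a, b = test_input
    if a > b:
        covered.add("B1")
    else:
        covered.add("B2")
    if a + b > 10:
        covered.add("B3")
    else:
        covered.add("B4")
    return covered

ALL_BRANCHES = {"B1", "B2", "B3", "B4"}

def coverage_fitness(suite):
    """Fitness of a candidate test suite: the fraction of program
    branches it covers, which the search tries to drive to 1.0."""
    covered = set()
    for t in suite:
        covered |= run_and_trace(t)
    return len(covered) / len(ALL_BRANCHES)
```

A single input can only cover one side of each predicate, so a suite of complementary inputs is needed to reach full coverage, which is why the fitness is computed over the whole suite rather than per test.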
  • Cache Aging with Learning (CAL): A Freshness-Based Data Caching Method for Information-Centric Networking on the Internet of Things (IoT)
    (Multidisciplinary Digital Publishing Institute (MDPI), 2025) Hazrati, Nemat; Pirahesh, Sajjad; Arasteh, Bahman; Sefati, Seyed Salar; Fratu, Octavian; Halunga, Simona
    Information-centric networking (ICN) changes the way data are accessed by focusing on the content rather than the location of devices. In this model, each piece of data has a unique name, making it accessible directly by name. This approach suits the Internet of Things (IoT), where data generation and real-time processing are fundamental. Traditional host-based communication methods are less efficient for the IoT, making ICN a better fit. A key advantage of ICN is in-network caching, which temporarily stores data across various points in the network. This caching improves data access speed, minimizes retrieval time, and reduces overall network traffic by making frequently accessed data readily available. However, IoT systems involve constantly updating data, which requires managing data freshness while also ensuring their validity and processing accuracy. The interactions with cached data, such as updates, validations, and replacements, are crucial in optimizing system performance. This research introduces an ICN-IoT method to manage and process data freshness in ICN for the IoT. It optimizes network traffic by sharing only the most current and valid data, reducing unnecessary transfers. Routers in this model calculate data freshness, assess its validity, and perform cache updates based on these metrics. Simulation results across four models show that this method enhances cache hit ratios, reduces traffic load, and improves retrieval delays, outperforming similar methods. The proposed method uses an artificial neural network to make predictions. These predictions closely match the actual values, with a low error margin of 0.0121. This precision highlights its effectiveness in maintaining data currentness and validity while reducing network overhead. © 2025 by the authors.
  • Clustered design-model generation from a program source code using chaos-based metaheuristic algorithms
    (Springer Science and Business Media Deutschland GmbH, 2022) Arasteh, Bahman
    Comprehension of the structure of software will facilitate maintaining the software more efficiently. Clustering software modules, as a reverse engineering technique, is assumed to be an effective technique in extracting comprehensible structural-models of software from the source code. Finding the best clustering model of a software system is regarded as a NP-complete problem. Minimizing the connections among the created clusters, maximizing the internal connections within the created clusters and maximizing the clustering quality are considered to be the most important objectives in software module clustering (SMC). Poor success rate, low stability and modularization quality are regarded as the major drawbacks of the previously proposed methods. In this paper, five different heuristic algorithms (Bat, Cuckoo, Teaching–Learning-Based, Black Widow and Grasshopper algorithms) are proposed for optimal clustering of software modules. Also, the effects of chaos theory in the performance of these algorithms in this problem have been experimentally investigated. The results of conducted experiments on the eight standard and real-world applications indicate that performance of the BWO, PSO, and TLB algorithms are higher than the other algorithms in SMC problem; also, the performance of these algorithm increased when their initial population were generated with logistic chaos method instead of random method. The average MQ of the generated clusters for the selected benchmark set by BWO, PSO and TLB are 3.155, 3.120 and 2.778, respectively.
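The logistic chaos initialization that the abstract credits with improving the algorithms' performance is a concrete, small technique: instead of drawing the initial population from a uniform random generator, coordinates are taken from the logistic map's chaotic orbit. The sketch below is a generic illustration of that idea, not the paper's code; the function names are invented.

```python
def logistic_map_sequence(n, x0=0.7, r=4.0):
    """Logistic map x_{k+1} = r * x_k * (1 - x_k); with r = 4 the orbit
    is chaotic and spreads over (0, 1), which motivates its use for
    seeding metaheuristic populations instead of a uniform RNG."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_population(pop_size, dim, low, high, x0=0.7):
    """Initial population whose coordinates come from the logistic map,
    scaled into the search bounds [low, high]."""
    values = iter(logistic_map_sequence(pop_size * dim, x0))
    return [[low + (high - low) * next(values) for _ in range(dim)]
            for _ in range(pop_size)]
```

The appeal of the chaotic seed is that the orbit is deterministic yet non-repeating and covers the unit interval densely, which tends to give a more diverse starting population than independent uniform draws.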
  • Constraint-based heuristic algorithms for software test generation
    (Elsevier, 2024) Arasteh, Bahman; Aghaei, Babak; Ghanbarzadeh, Reza; Kalan, Reza
    While software testing is essential for enhancing a software system's quality, it can be time-consuming and costly during developing software. Automation of software testing can help solve this problem, streamlining time-consuming testing tasks. However, generating automated test data that maximally covers program branches is a complex optimization problem referred to as NP-complete and should be addressed appropriately. Although a variety of heuristic algorithms have already been suggested to create test suites with the greatest coverage, they have issues such as insufficient branch coverage, low rate of success in generating test data with high coverage, and unstable results. The main objective of the current chapter is to investigate and compare the coverage, success rate (SR), and stability of various heuristic algorithms in software structural test generation. To achieve this, the effectiveness of seven algorithms, genetic algorithm (GA), simulated annealing (SA), ant colony optimizer (ACO), particle swarm optimizer (PSO), artificial bee colony (ABC), shuffle frog leaping algorithm (SFLA), and imperialist competitive algorithm (ICA), are examined in automatically generating test data, and their performance is compared on the basis of various criteria. The experiment results demonstrate the superiority of the SFLA, ABC, and ICA to other examined algorithms. Overall, SFLA outperforms all other algorithms in coverage, SR, and stability. © 2024 Elsevier Inc. All rights reserved.
  • Cybersecurity in a Scalable Smart City Framework Using Blockchain and Federated Learning for Internet of Things (IoT)
    (Multidisciplinary Digital Publishing Institute (MDPI), 2024) Sefati, Seyed Salar; Craciunescu, Razvan; Arasteh, Bahman; Halunga, Simona; Fratu, Octavian; Tal, Irina
    Highlights: What are the main findings? Implementation of blockchain enhances the security and scalability of smart city frameworks. Federated Learning enables efficient and privacy-preserving data sharing among IoT devices. What are the implications of the main finding? The proposed framework significantly reduces the risk of data breaches in smart city infrastructures. Improved data privacy and security can foster greater adoption of IoT technologies in urban environments. Smart cities increasingly rely on the Internet of Things (IoT) to enhance infrastructure and public services. However, many existing IoT frameworks face challenges related to security, privacy, scalability, efficiency, and low latency. This paper introduces the Blockchain and Federated Learning for IoT (BFLIoT) framework as a solution to these issues. In the proposed method, the framework first collects real-time data, such as traffic flow and environmental conditions, then normalizes, encrypts, and securely stores it on a blockchain to ensure tamper-proof data management. In the second phase, the Data Authorization Center (DAC) uses advanced cryptographic techniques to manage secure data access and control through key generation. Additionally, edge computing devices process data locally, reducing the load on central servers, while federated learning enables distributed model training, ensuring data privacy. This approach provides a scalable, secure, efficient, and low-latency solution for IoT applications in smart cities. A comprehensive security proof demonstrates BFLIoT’s resilience against advanced cyber threats, while performance simulations validate its effectiveness, showing significant improvements in throughput, reliability, energy efficiency, and reduced delay for smart city applications. © 2024 by the authors.
  • Data replication in distributed systems using Olympiad optimization algorithm
    (Univ Nis, 2023) Arasteh, Bahman; Bouyer, Asgarali; Ghanbarzadeh, Reza; Rouhi, Alireza; Mehrabani, Mahsa Nazeri; Tirkolaee, Erfan Babaee
    Achieving timely access to data objects is a major challenge in big distributed systems like the Internet of Things (IoT) platforms. Therefore, minimizing the data read and write operation time in distributed systems has elevated to a higher priority for system designers and mechanical engineers. Replication and the appropriate placement of the replicas on the most accessible data servers is a problem of NP-complete optimization. The key objectives of the current study are minimizing the data access time, reducing the quantity of replicas, and improving the data availability. The current paper employs the Olympiad Optimization Algorithm (OOA) as a novel population-based and discrete heuristic algorithm to solve the replica placement problem which is also applicable to other fields such as mechanical and computer engineering design problems. This discrete algorithm was inspired by the learning process of student groups who are preparing for the Olympiad exams. The proposed algorithm, which is divide-and-conquer-based with local and global search strategies, was used in solving the replica placement problem in a standard simulated distributed system. The 'European Union Database' (EUData) was employed to evaluate the proposed algorithm, which contains 28 nodes as servers and a network architecture in the format of a complete graph. It was revealed that the proposed technique reduces data access time by 39% with around six replicas, which is vastly superior to the earlier methods. Moreover, the standard deviation of the results of the algorithm's different executions is approximately 0.0062, which is lower than the other techniques' standard deviation within the same experiments.
  • Detecting and mitigating security anomalies in software-defined networking (SDN) using gradient-boosted trees and floodlight controller characteristics
    (Elsevier B.V., 2025) Jafarian, Tohid; Ghaffari, Ali; Seyfollahi, Ali; Arasteh, Bahman
    Cutting-edge and innovative software solutions are provided to address network security, network virtualization, and other network-related challenges in highly congested SDN-powered networks. However, these networks are susceptible to the same security issues as traditional networks. For instance, SDNs are significantly vulnerable to distributed denial of service (DDoS) attacks. Previous studies have suggested various anomaly detection techniques based on machine learning, statistical analysis, or entropy measurement to combat DDoS attacks and other security threats in SDN networks. However, these techniques face challenges such as collecting sufficient and relevant flow data, extracting and selecting the most informative features, and choosing the best model for identifying and preventing anomalies. This paper introduces a new and advanced multi-stage modular approach for anomaly detection and mitigation in SDN networks. The approach consists of four modules: data collection, feature selection, anomaly classification, and anomaly response. The approach utilizes the NetFlow standard to gather data and generate a dataset, employs the Information Gain Ratio (IGR) to select the most valuable features, uses gradient-boosted trees (GBT), and leverages Representational State Transfer Application Programming Interfaces (REST API) and Static Entry Pusher within the floodlight controller to construct an exceptionally efficient structure for detecting and mitigating anomalies in SDN design. We conducted experiments on a synthetic dataset containing 15 types of anomalies, such as DDoS attacks, port scans, worms, etc. We compared our model with four existing techniques: SVM, KNN, DT, and RF. Experimental results demonstrate that our model outperforms the existing techniques in terms of enhancing Accuracy (AC) and Detection Rate (DR) while simultaneously reducing Classification Error (CE) and False Alarm Rate (FAR) to 98.80 %, 97.44 %, 1.2 %, and 0.38 %, respectively.
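The Information Gain Ratio (IGR) step the abstract uses for feature selection has a standard definition: a feature's information gain over the class labels, normalized by the feature's own entropy so that many-valued features are not unfairly favoured. Below is a minimal, generic sketch of that metric for discrete features, not the paper's implementation.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def gain_ratio(feature_values, labels):
    """Information Gain Ratio of a discrete feature: its information
    gain over the class labels, divided by the feature's own entropy
    (the split information)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        gain -= (len(subset) / n) * entropy(subset)
    split_info = entropy(feature_values)
    return gain / split_info if split_info > 0 else 0.0
```

Ranking flow features by this score and keeping the top-scoring ones is the kind of selection the module described here would perform before training the gradient-boosted trees.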
  • Detecting SQL injection attacks by binary gray wolf optimizer and machine learning algorithms
    (Springer London Ltd, 2024) Arasteh, Bahman; Aghaei, Babak; Farzad, Behnoud; Arasteh, Keyvan; Kiani, Farzad; Torkamanian-Afshar, Mahsa
    SQL injection is one of the important security issues in web applications because it allows an attacker to interact with the application's database. SQL injection attacks can be detected using machine learning algorithms. The effective features should be employed in the training stage to develop an optimal classifier with optimal accuracy. Identifying the most effective features is an NP-complete combinatorial optimization problem. Feature selection is the process of selecting the training dataset's smallest and most effective features. The main objective of this study is to enhance the accuracy, precision, and sensitivity of the SQLi detection method. In this study, an effective method to detect SQL injection attacks has been proposed. In the first stage, a specific training dataset consisting of 13 features was prepared. In the second stage, two different binary versions of the Gray-Wolf algorithm were developed to select the most effective features of the dataset. The created optimal datasets were used by different machine learning algorithms. Creating a new SQLi training dataset with 13 numeric features, developing two different binary versions of the gray wolf optimizer to optimally select the features of the dataset, and creating an effective and efficient classifier to detect SQLi attacks are the main contributions of this study. The results of the conducted tests indicate that the proposed SQL injection detector obtain 99.68% accuracy, 99.40% precision, and 98.72% sensitivity. The proposed method increases the efficiency of attack detection methods by selecting 20% of the most effective features.
  • Discovering overlapping communities using a new diffusion approach based on core expanding and local depth traveling in social networks
    (Taylor & Francis Ltd, 2023) Bouyer, Asgarali; Sabavand Monfared, Maryam; Nourani, Esmaeil; Arasteh, Bahman
    This paper proposes a local diffusion-based approach to find overlapping communities in social networks based on label expansion using local depth first search and social influence information of nodes, called the LDLF algorithm. It is vital to start the diffusion process in local depth, traveling from specific core nodes based on their local topological features and strategic position for spreading community labels. Correspondingly, to avoid assigning excessive and unessential labels, the LDLF algorithm prudently removes redundant and less frequent labels for nodes with multiple labels. Finally, the proposed method finalizes the node's label based on the Hub Depressed index. Thanks to requiring only two iterations for label updating, the proposed LDLF algorithm runs in low time complexity while eliminating random behavior and achieving acceptable accuracy in finding overlapping communities for large-scale networks. The experiments on benchmark networks prove the effectiveness of the LDLF method compared to state-of-the-art approaches.
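The Hub Depressed index used to finalize node labels has a simple closed form: the number of common neighbours of two nodes divided by the larger of their two degrees. A minimal sketch over an adjacency dict follows; the graph and names are invented for illustration and this is not the LDLF code.

```python
def hub_depressed_index(adj, x, y):
    """Hub Depressed index of nodes x and y: common neighbours divided
    by the larger of the two degrees, so similarity scores involving a
    hub are depressed relative to two comparable small nodes."""
    common = adj[x] & adj[y]
    return len(common) / max(len(adj[x]), len(adj[y]))

# Small undirected graph given as an adjacency dict of neighbour sets.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
```

Dividing by the larger degree (rather than the smaller, as the Hub Promoted index does) penalizes attachment to hubs, which helps a label-finalization step avoid pulling every node toward the community of a high-degree neighbour.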
  • A discrete heuristic algorithm with swarm and evolutionary features for data replication problem in distributed systems
    (Springer London Ltd, 2023) Arasteh, Bahman; Allahviranloo, Tofigh; Funes, Peri; Torkamanian-Afshar, Mahsa; Khari, Manju; Catak, Muammer
    Availability and accessibility of data objects in a reasonable time is a main issue in distributed systems like cloud computing services. As a result, the reduction of data-related operation times in distributed systems such as data read/write has become a major challenge in the development of these systems. In this regard, replicating the data objects on different servers is one commonly used technique. In general, replica placement plays an essential role in the efficiency of distributed systems and can be implemented statically or dynamically. Estimation of the minimum number of data replicas and the optimal placement of the replicas is an NP-complete optimization problem. Hence, different heuristic algorithms have been proposed for optimal replica placement in distributed systems. Reducing data processing costs as well as the number of replicas, and increasing the reliability of the replica placement algorithms are the main goals of this research. This paper presents a discrete and swarm-evolutionary method using a combination of shuffle-frog leaping and genetic algorithms to data-replica placement problems in distributed systems. The experiments on the standard dataset show that the proposed method reduces data access time by up to 30% with about 14 replicas; whereas the generated replicas by the GA and ACO are, respectively, 24 and 30. The average reduction in data access time by GA and ACO 21% and 18% which shows less efficiency than the SFLA-GA algorithm. Regarding the results, the SFLA-GA converges on the optimal solution before the 10th iteration, which shows the higher performance of the proposed method. Furthermore, the standard deviation among the results obtained by the proposed method on several runs is about 0.029, which is lower than other algorithms. Additionally, the proposed method has a higher success rate than other algorithms in the replica placement problem.
  • Discretized optimization algorithms for finding the bug-prone locations of a program source code
    (Elsevier, 2024) Arasteh, Bahman; Sefati, Seyed Salar; Shami, Shiva; Abdollahian, Mehrdad
    The number of discovered bugs determines the efficacy of software test data. Software mutation testing is an important technique in software engineering because it is used to evaluate the effectiveness of test suites. Syntactic changes are made to the program source code to generate buggy versions (mutants), which are then run alongside the original program using the test data. However, a key disadvantage of mutation testing is its high processing cost, which remains a difficult problem in software engineering. The major goal of this study is to investigate the performance of different heuristic algorithms applied to mutation testing. According to the 80/20 rule, 80% of a program's bugs are found in only 20% of its code, the bug-prone portion. Different heuristic algorithms have been proposed to identify the bug-prone and sensitive locations of a program source code. Mutants are then injected only into the identified bug-prone instructions and data, guaranteeing that mutation operators are applied only to code portions that are prone to bugs. Experimental evaluation on typical benchmark programs shows the effectiveness of different heuristic algorithms in reducing the number of generated mutants; a decrease in the number of created mutants reduces the total cost of mutation testing. Another feature of the heuristic-based mutation testing technique is its independence from the platform and testing tool. Experimental findings show that applying the heuristic strategy in testing tools such as MuJava, MuClipse, Jester, and Jumble results in a considerable reduction in the mutants created during testing. © 2024 Elsevier Inc. All rights reserved.
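    The 80/20 targeting idea can be sketched in a few lines: rank program lines by a bug-proneness score, keep the top 20%, and apply a mutation operator only there. The scoring values and the single operator used below (flipping `+` to `-`, a classic arithmetic-operator-replacement mutant) are illustrative assumptions, not the paper's exact heuristics.

    ```python
    # Illustrative 80/20 mutant targeting: mutate only the lines a
    # heuristic has ranked as most bug-prone (toy scores, one toy operator).

    def select_bug_prone(scores, fraction=0.2):
        """Return the indices of the top `fraction` highest-scored lines."""
        k = max(1, int(len(scores) * fraction))
        return set(sorted(range(len(scores)), key=lambda i: -scores[i])[:k])

    def generate_mutants(lines, targets):
        """One mutant per '+' occurrence, but only on targeted lines."""
        mutants = []
        for i in targets:
            line = lines[i]
            for pos, ch in enumerate(line):
                if ch == "+":
                    mutants.append((i, line[:pos] + "-" + line[pos + 1:]))
        return mutants

    program = [
        "total = a + b",
        "print(total)",
        "avg = (a + b) / 2",
        "log('done')",
        "checksum = x + y + z",
    ]
    scores = [0.9, 0.1, 0.4, 0.05, 0.8]      # hypothetical proneness scores
    targets = select_bug_prone(scores, 0.2)  # 20% of 5 lines -> 1 line
    mutants = generate_mutants(program, targets)
    ```

    Full mutation of this toy program would touch every `+` on every line; restricting the operators to the targeted lines is what cuts the mutant count, and hence the testing cost.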
  • A divide and conquer based development of gray wolf optimizer and its application in data replication problem in distributed systems
    (Springer, 2023) Fan, Wenguang; Arasteh, Bahman; Bouyer, Asgarali; Majidnezhad, Vahid
    One of the main problems of large distributed systems, such as the IoT, is the high access time to data objects. Replicating the data objects on various servers is a traditional strategy. Replica placement, which can be implemented statically or dynamically, is generally crucial to the effectiveness of distributed systems. Producing the minimum number of data copies and placing them on appropriate servers so as to minimize access time is an NP-complete optimization problem, and various heuristic techniques for efficient replica placement in distributed systems have been proposed. The main objectives of this research are to decrease the cost of data processing operations, decrease the number of copies, and improve the accessibility of the data objects. In this study, a discretized, group-based gray wolf optimization algorithm with swarm and evolutionary features was developed for the replica placement problem. The proposed algorithm divides the wolf population into subgroups, and each subgroup locally searches a different region of the solution space. According to experiments conducted on the standard benchmark dataset, the suggested method provides about a 40% reduction in data access time with about five replicas. Also, the reliability of the suggested method across different executions is considerably higher than that of previous methods.
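    A minimal sketch of the grouped search idea, assuming the classic grey wolf position update on a toy one-dimensional objective: the population is split into subgroups, and each subgroup is guided by its own three best wolves (alpha/beta/delta). The group count, the decay schedule for the exploration factor `a`, and the objective are illustrative choices, and the paper's discretization step is omitted.

    ```python
    import random

    def fitness(x):
        return x * x                          # toy objective: minimize x^2

    def gwo_step(group, a):
        """Classic grey-wolf update: every wolf moves toward an average of
        positions estimated from the group's three best wolves."""
        leaders = sorted(group, key=fitness)[:3]
        updated = []
        for x in group:
            estimates = []
            for leader in leaders:
                r1, r2 = random.random(), random.random()
                A = 2 * a * r1 - a            # |A| shrinks as `a` decays
                C = 2 * r2
                estimates.append(leader - A * abs(C * leader - x))
            updated.append(sum(estimates) / 3)
        return updated

    random.seed(7)
    wolves = [random.uniform(-10, 10) for _ in range(12)]
    groups = [wolves[0::3], wolves[1::3], wolves[2::3]]   # three subgroups
    best = min(wolves, key=fitness)
    for it in range(30):
        a = 2 - 2 * it / 30                   # exploration factor: 2 -> 0
        groups = [gwo_step(g, a) for g in groups]
        for g in groups:
            for x in g:
                if fitness(x) < fitness(best):
                    best = x                  # keep the best wolf seen so far
    ```

    Keeping the subgroups separate lets each one exploit its own region of the search space; a full divide-and-conquer variant would periodically exchange wolves between subgroups to share information.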
  • Duzen: generating the structural model from the software source code using shuffled frog leaping algorithm
    (SPRINGER LONDON, 2022) Arasteh, Bahman; Karimi, Mohammad Bagher; Sadegi, Razieh
    The cost of software maintenance is heavily influenced by program understanding. When the source code is the only product available, maintainers spend a significant amount of effort trying to understand the structure and behavior of the software. Program module clustering is a useful reverse-engineering technique for obtaining the software structural model from the source code. Finding the best clustering is regarded as an NP-hard optimization problem, and several meta-heuristic methods have been employed to solve it. The fundamental flaws of the prior approaches were their insufficient performance and effectiveness. The major goals of this research are to achieve improved software clustering quality and stability. A new method (Duzen) is proposed in this research for improving software module clustering; this technique employs the shuffled frog leaping algorithm as a meta-heuristic memetic algorithm. The Duzen results were investigated and compared with those produced by earlier approaches. The proposed method was shown to obtain better clustering quality than the others, with higher stability and convergence to optimal solutions in fewer iterations. Furthermore, it achieved a higher mean result and a faster clustering execution time.
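    A common fitness function for software module clustering in this line of work is a modularization quality (MQ) score that rewards intra-cluster dependencies and penalizes inter-cluster ones. The TurboMQ-style variant below and the toy dependency graph are illustrative assumptions, not taken from the paper.

    ```python
    # Illustrative modularization-quality (MQ) fitness for module clustering:
    # MQ = sum over clusters of 2*intra / (2*intra + inter).

    def modularization_quality(edges, clustering):
        """Score a clustering of modules against a dependency edge list."""
        cluster_of = {m: c for c, mods in enumerate(clustering) for m in mods}
        intra = [0] * len(clustering)
        inter = [0] * len(clustering)
        for a, b in edges:
            if cluster_of[a] == cluster_of[b]:
                intra[cluster_of[a]] += 1
            else:
                inter[cluster_of[a]] += 1
                inter[cluster_of[b]] += 1
        mq = 0.0
        for i in range(len(clustering)):
            if intra[i] or inter[i]:
                mq += 2 * intra[i] / (2 * intra[i] + inter[i])
        return mq

    # Toy dependency graph with two natural modules {a,b,c} and {d,e,f}.
    edges = [("a", "b"), ("b", "c"), ("a", "c"),
             ("d", "e"), ("e", "f"), ("d", "f"),
             ("c", "d")]                      # one cross-module call
    good = [{"a", "b", "c"}, {"d", "e", "f"}]
    bad = [{"a", "d"}, {"b", "e"}, {"c", "f"}]
    ```

    A meta-heuristic such as the shuffled frog leaping algorithm would search over candidate clusterings, using a score like this as the fitness to maximize; here the natural two-cluster split scores well above the scrambled one.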


This site is protected by a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


İstinye Üniversitesi, İstanbul, TÜRKİYE

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2025 LYRASIS
