
Browsing by Author "Bouyer, Asgarali"

Showing 1 - 20 of 23
  • A new accurate and fast convergence cuckoo search algorithm for solving constrained engineering optimization problems
    (IOS Press BV, 2024) Abdollahi, Mahdi; Bouyer, Asgarali; Arasteh, Bahman
    In recent years, the Cuckoo Optimization Algorithm (COA) has been widely used to solve various optimization problems due to its simplicity, efficacy, and capability to avoid getting trapped in local optima. However, COA has some limitations such as low convergence when it comes to solving constrained optimization problems with many constraints. This study proposes a new modified and adapted version of the Cuckoo optimization algorithm, referred to as MCOA, that overcomes the challenge of solving constrained optimization problems. The proposed adapted version introduces a new coefficient that reduces the egg-laying radius, thereby enabling faster convergence to the optimal solution. Unlike previous methods, the new coefficient does not require any adjustment during the iterative process, as the radius automatically decreases along the iterations. To handle constraints, we employ the Penalty Method, which allows us to incorporate constraints into the optimization problem without altering its formulation. To evaluate the performance of the proposed MCOA, we conduct experiments on five well-known case studies. Experimental results demonstrate that MCOA outperforms COA and other state-of-the-art optimization algorithms in terms of both efficiency and robustness. Furthermore, MCOA can reliably find the global optimal solution for all the tested problems within a reasonable iteration number. © 2024 – IOS Press. All rights reserved.
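    The two mechanisms named above, a penalty term for constraint handling and an egg-laying radius that shrinks automatically over iterations, can be illustrated with a minimal Python sketch; the toy objective, constraint, and linear decay rule below are assumptions for illustration, not the authors' actual MCOA formulation.

        import numpy as np

        def penalized_fitness(x, objective, constraints, penalty=1e6):
            # Static penalty method: add a large cost for every violated constraint g(x) <= 0.
            violation = sum(max(0.0, g(x)) for g in constraints)
            return objective(x) + penalty * violation

        def shrinking_radius(r0, iteration, max_iter):
            # Illustrative decay: the egg-laying radius decreases as iterations proceed,
            # so no manual adjustment is needed during the run.
            return r0 * (1.0 - iteration / max_iter)

        # Toy constrained problem: minimize x^2 + y^2 subject to x + y >= 1.
        objective = lambda x: x[0] ** 2 + x[1] ** 2
        constraints = [lambda x: 1.0 - x[0] - x[1]]        # rewritten in g(x) <= 0 form
        candidate = np.array([0.3, 0.8])
        print(penalized_fitness(candidate, objective, constraints))
        print(shrinking_radius(r0=2.0, iteration=50, max_iter=100))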
  • A novel contrastive multi-view framework for heterogeneous graph embedding
    (Springer science and business media deutschland GmbH, 2025) Noori, Azad; Balafar, Mohammad Ali; Bouyer, Asgarali; Salmani, Khosro
    Heterogeneous graphs, characterized by diverse node types and relational structures, serve as powerful tools for representing intricate real-world systems. Understanding the complex relationships within these graphs is essential for various downstream tasks. However, traditional graph embedding methods often struggle to represent the rich semantics and heterogeneity of these graphs effectively. In order to tackle this challenge, we present an innovative self-supervised multi-view network (SMHGNN) designed for the embedding of heterogeneous graphs. SMHGNN utilizes three complementary views: meta-paths, meta-structures, and the network schema, to thoroughly capture the complex relationships and interactions among nodes in heterogeneous graphs. The proposed SMHGNN utilizes a self-supervised learning paradigm, thereby obviating the necessity for annotated data while simultaneously augmenting the model's capability for generalization. Furthermore, we present an innovative mechanism for the identification of positive and negative samples, which is predicated on a scoring matrix that integrates both node characteristics and graph topology, thereby proficiently differentiating authentic positive pairs from false negatives. Comprehensive empirical evaluations conducted on established benchmark datasets elucidate the enhanced efficacy of our SMHGNN in comparison with cutting-edge methodologies in heterogeneous graph embedding. (The proposed method achieved a 0.75-2.4% improvement in node classification and a 0.78-2.7% improvement in clustering across diverse datasets when compared to previous state-of-the-art methods.).
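    As a hedged sketch of how a positive/negative split derived from a scoring matrix can feed a contrastive objective over two views of the same nodes, the snippet below implements a generic InfoNCE-style loss; the random embeddings, temperature, and identity positive mask are placeholders, not the SMHGNN design.

        import torch
        import torch.nn.functional as F

        def info_nce(view_a, view_b, positive_mask, temperature=0.5):
            # view_a, view_b: (N, d) node embeddings from two views of the same graph.
            # positive_mask: (N, N) boolean matrix, True where a pair is treated as positive
            # (e.g. derived from a scoring matrix combining node features and topology).
            a = F.normalize(view_a, dim=1)
            b = F.normalize(view_b, dim=1)
            logits = a @ b.t() / temperature               # pairwise cosine similarities
            exp_logits = torch.exp(logits)
            pos = (exp_logits * positive_mask).sum(dim=1)
            return (-torch.log(pos / exp_logits.sum(dim=1))).mean()

        n, d = 8, 16
        za, zb = torch.randn(n, d), torch.randn(n, d)
        mask = torch.eye(n, dtype=torch.bool)              # simplest case: each node is its own positive
        print(info_nce(za, zb, mask).item())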
  • A cascade information diffusion prediction model integrating topic features and cross-attention
    (Elsevier, 2023) Liu, Xiaoyang; Wang, Haotian; Bouyer, Asgarali
    Information cascade prediction is a crucial task in social network analysis. However, previous research has focused only on the impact of social relationships on cascade information diffusion, while ignoring the differences caused by the characteristics of the cascade information itself, which limits prediction performance. We propose a novel cascade information diffusion prediction model (Topic-HGAT). Firstly, we extract different topic features to enhance the learned cascade information representation. To implement this, we use hypergraphs to better characterize cascade information and dynamically learn multiple diffusion sub-hypergraphs according to the time process. Secondly, we introduce cross-attention mechanisms to learn each other's feature representations from the perspectives of both user representation and cascade representation, thereby achieving deep fusion of the two features. This addresses the poor feature fusion caused by simply computing self-attention on the learned user and cascade representations in previous studies. Finally, we conduct comparative experiments on four real datasets, including Twitter and Douban. Experimental results show that the proposed Topic-HGAT model achieves the highest improvements of 2.91% and 1.59% on the Hits@100 and MAP@100 indicators, respectively, compared to eight other baseline models, verifying the rationality and effectiveness of the proposed Topic-HGAT model.
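    The cross-attention step described above, letting user representations attend to cascade representations and vice versa, can be sketched with a standard multi-head attention layer; the dimensions, number of heads, and random inputs below are illustrative assumptions rather than the Topic-HGAT architecture.

        import torch
        import torch.nn as nn

        class CrossAttention(nn.Module):
            """Illustrative two-way cross-attention between user and cascade features."""

            def __init__(self, dim=64, heads=4):
                super().__init__()
                self.user_to_cascade = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.cascade_to_user = nn.MultiheadAttention(dim, heads, batch_first=True)

            def forward(self, user_feats, cascade_feats):
                # Each side queries the other, so both representations are enriched
                # before any downstream fusion (e.g. pooling and concatenation).
                u, _ = self.user_to_cascade(user_feats, cascade_feats, cascade_feats)
                c, _ = self.cascade_to_user(cascade_feats, user_feats, user_feats)
                return u, c

        users = torch.randn(2, 10, 64)      # (batch, num_users, dim)
        cascades = torch.randn(2, 15, 64)   # (batch, cascade_length, dim)
        u_out, c_out = CrossAttention()(users, cascades)
        print(u_out.shape, c_out.shape)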
  • Data replication in distributed systems using Olympiad optimization algorithm
    (Univ Nis, 2023) Arasteh, Bahman; Bouyer, Asgarali; Ghanbarzadeh, Reza; Rouhi, Alireza; Mehrabani, Mahsa Nazeri; Tirkolaee, Erfan Babaee
    Achieving timely access to data objects is a major challenge in big distributed systems like Internet of Things (IoT) platforms. Therefore, minimizing the data read and write operation time in distributed systems has been elevated to a higher priority for system designers and mechanical engineers. Replication and the appropriate placement of the replicas on the most accessible data servers is an NP-complete optimization problem. The key objectives of the current study are minimizing the data access time, reducing the quantity of replicas, and improving data availability. The current paper employs the Olympiad Optimization Algorithm (OOA), a novel population-based and discrete heuristic algorithm, to solve the replica placement problem, which is also applicable to other fields such as mechanical and computer engineering design problems. This discrete algorithm was inspired by the learning process of student groups preparing for Olympiad exams. The proposed algorithm, which is divide-and-conquer-based with local and global search strategies, was used to solve the replica placement problem in a standard simulated distributed system. The 'European Union Database' (EUData), which contains 28 server nodes and a complete-graph network architecture, was employed to evaluate the proposed algorithm. It was revealed that the proposed technique reduces data access time by 39% with around six replicas, which is vastly superior to the earlier methods. Moreover, the standard deviation of the results of the algorithm's different executions is approximately 0.0062, which is lower than that of the other techniques in the same experiments.
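    As an illustration of the kind of objective such a replica-placement search optimizes, the sketch below scores a candidate replica set by the total shortest-path distance from every server to its nearest replica; the toy random topology and the cost model are assumptions, not the EUData setup or the OOA itself.

        import networkx as nx

        def access_cost(graph, replica_nodes):
            # Sum, over all servers, of the distance to the closest replica.
            dist = nx.multi_source_dijkstra_path_length(graph, set(replica_nodes))
            return sum(dist.values())

        # Toy topology: a small random network standing in for the 28-node EUData graph.
        g = nx.erdos_renyi_graph(28, 0.3, seed=1)
        print(access_cost(g, {0, 7, 19}))          # cost of one candidate placement
        print(access_cost(g, {3, 7, 12, 19, 25}))  # more replicas usually lower the access cost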
  • Discovering overlapping communities using a new diffusion approach based on core expanding and local depth traveling in social networks
    (Taylor & Francis Ltd, 2023) Bouyer, Asgarali; Sabavand Monfared, Maryam; Nourani, Esmaeil; Arasteh, Bahman
    This paper proposes a local diffusion-based approach to find overlapping communities in social networks based on label expansion using local depth first search and social influence information of nodes, called the LDLF algorithm. It is vital to start the diffusion process in local depth, traveling from specific core nodes based on their local topological features and strategic position for spreading community labels. Correspondingly, to avoid assigning excessive and unessential labels, the LDLF algorithm prudently removes redundant and less frequent labels for nodes with multiple labels. Finally, the proposed method finalizes the node's label based on the Hub Depressed index. Thanks to requiring only two iterations for label updating, the proposed LDLF algorithm runs in low time complexity while eliminating random behavior and achieving acceptable accuracy in finding overlapping communities for large-scale networks. The experiments on benchmark networks prove the effectiveness of the LDLF method compared to state-of-the-art approaches.
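    The Hub Depressed index used for the final label decision is a standard neighborhood-similarity measure; a minimal version for an unweighted graph is shown below (the example graph is arbitrary).

        import networkx as nx

        def hub_depressed_index(g, u, v):
            # HD(u, v) = |N(u) ∩ N(v)| / max(deg(u), deg(v)):
            # common neighbors normalized by the larger degree, which lowers
            # the score of pairs involving high-degree hubs.
            common = len(set(g[u]) & set(g[v]))
            return common / max(g.degree(u), g.degree(v))

        g = nx.karate_club_graph()
        print(hub_depressed_index(g, 0, 1))
        print(hub_depressed_index(g, 0, 33))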
  • A divide and conquer based development of gray wolf optimizer and its application in data replication problem in distributed systems
    (Springer, 2023) Fan, Wenguang; Arasteh, Bahman; Bouyer, Asgarali; Majidnezhad, Vahid
    One of the main problems of big distributed systems, like IoT, is the high access time to data objects. Replicating the data objects on various servers is a traditional strategy. Replica placement, which can be implemented statically or dynamically, is generally crucial to the effectiveness of distributed systems. Producing the minimum number of data copies and placing them on appropriate servers to minimize access time is an NP-complete optimization problem. Various heuristic techniques for efficient replica placement in distributed systems have been proposed. The main objectives of this research are to decrease the cost of data processing operations, decrease the number of copies, and improve the accessibility of the data objects. In this study, a discretized and group-based gray wolf optimization algorithm with swarm and evolutionary features was developed for the replica placement problem. The proposed algorithm divides the wolf population into subgroups, and each subgroup locally searches a different region of the solution space. According to experiments conducted on the standard benchmark dataset, the suggested method provides about a 40% reduction in the data access time with about five replicas. Also, the reliability of the suggested method across different executions is considerably higher than that of the previous methods.
  • Efficient superpixel-based brain MRI segmentation using multi-scale morphological gradient reconstruction and quantum clustering
    (Elsevier ltd, 2025) Golzari Oskouei, Amin; Abdolmaleki, Nasim; Bouyer, Asgarali; Arasteh, Bahman; Shirini, Kimia
    Segmentation of brain MRI images is a fundamental task in medical image analysis. However, existing clustering methods often face significant challenges, including high computational complexity in calculating distances between cluster centers and pixels at each iteration, sensitivity to initial parameters and noise, and inadequate consideration of local spatial structures. This paper introduces an innovative method, Efficient Superpixel-Based Brain MRI Segmentation using Multi-Scale Morphological Gradient Reconstruction and Quantum Clustering, designed to address these challenges. The aim is to develop an efficient and robust segmentation technique that enhances accuracy while mitigating computational and parameter-related issues. To achieve this, we propose a multi-scale morphological gradient reconstruction operation that generates precise superpixel images, thereby improving the representation of local spatial features. These superpixel images are then used to compute histograms, effectively compressing the original color image data. Quantum clustering is subsequently applied to these superpixels using histogram parameters, leading to the desired segmentation outcomes. Experimental results demonstrate that our method outperforms state-of-the-art clustering techniques in terms of both segmentation accuracy and processing speed. These findings underscore the proposed approach's potential to overcome traditional methods' limitations, offering a promising solution for brain MRI segmentation in medical imaging.
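    A rough sketch of the multi-scale morphological gradient idea: gradients are computed with structuring elements of increasing size and then combined into the edge map from which superpixels can be grown. The combination by pointwise minimum, the scale range, and the synthetic image are assumptions for illustration, not the paper's exact reconstruction operator.

        import numpy as np
        import cv2

        def multiscale_morph_gradient(image, scales=(1, 2, 3, 4)):
            # Morphological gradient (dilation minus erosion) at several structuring-element
            # sizes, combined by a pointwise minimum to suppress noise while keeping strong edges.
            grads = []
            for r in scales:
                se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
                grads.append(cv2.morphologyEx(image, cv2.MORPH_GRADIENT, se))
            return np.minimum.reduce(grads)

        # Synthetic 8-bit slice standing in for a brain MRI image.
        img = (np.random.rand(128, 128) * 255).astype(np.uint8)
        print(multiscale_morph_gradient(img).shape)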
  • A fast module identification and filtering approach for influence maximization problem in social networks
    (Elsevier Science Inc, 2023) Beni, Hamid Ahmadi; Bouyer, Asgarali; Azimi, Sevda; Rouhi, Alireza; Arasteh, Bahman
    In this paper, we explore influence maximization, one of the most widely studied problems in social network analysis. However, developing an effective algorithm for influence maximization is still a challenging task given its NP-hard nature. To tackle this issue, we propose the CSP (Combined modules for Seed Processing) algorithm, which aims to identify influential nodes. In CSP, graph modules are initially identified by a combination of criteria such as the clustering coefficient, degree, and common neighbors of nodes. Nodes with the same label are then clustered together into modules using label diffusion. Subsequently, only the most influential modules are selected using a filtering method based on their diffusion capacity. The algorithm then merges neighboring modules into distinct modules and extracts a candidate set of influential nodes using a new metric to quickly select seed sets. The number of selected nodes for the candidate set is restricted by a defined limit measure. Finally, seed nodes are chosen from the candidate set using a novel node scoring measure. We evaluated the proposed algorithm on both real-world and synthetic networks, and our experimental results indicate that the CSP algorithm outperforms other competitive algorithms in terms of solution quality and speedup on the tested networks.
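    To illustrate the kind of node score that combines the clustering coefficient, degree, and common-neighbor information when forming candidate seeds, a toy heuristic is sketched below; the particular weighting is an arbitrary assumption and not the CSP metric.

        import networkx as nx

        def candidate_score(g, node):
            # Toy combination of the three criteria named in the abstract: degree (reach),
            # clustering coefficient (local density), and the average number of neighbors
            # shared with adjacent nodes (embeddedness).
            deg = g.degree(node)
            cc = nx.clustering(g, node)
            neigh = list(g[node])
            common = sum(len(set(g[node]) & set(g[v])) for v in neigh) / max(len(neigh), 1)
            return deg * (1 + cc) + common

        g = nx.karate_club_graph()
        top = sorted(g.nodes, key=lambda n: candidate_score(g, n), reverse=True)[:5]
        print(top)   # a rough candidate set of potentially influential nodes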
  • Feature-weighted fuzzy clustering methods: An experimental review
    (Elsevier B.V., 2025) Golzari Oskouei, Amin; Samadi, Negin; Khezri, Shirin; Najafi Moghaddam, Arezou; Babaei, Hamidreza; Hamini, Kiavash; Fath Nojavan, Saghar; Bouyer, Asgarali; Arasteh, Bahman
    Soft clustering, a widely utilized method in data analysis, offers a versatile and flexible strategy for grouping data points. Most soft clustering algorithms assume that all the features present in the feature space of a dataset are of equal importance and neglect their degree of informativeness or irrelevance. Distinguishing between the relative importance of features in providing an optimal clustering structure has become a very challenging task. Many feature weighting methods have been proposed to deal with this problem in the field of soft clustering, which can be broadly categorized into six major types: feature reduction-based, entropy-based, variance-based, membership-based, optimization-based, and meta-heuristic-based. This paper comprehensively reviews the most significant fuzzy clustering algorithms that employ feature weighting techniques. A taxonomy of the feature weighting-based fuzzy clustering algorithms is presented. Furthermore, all state-of-the-art approaches are implemented in Python and compared in terms of clustering performance by conducting various experimental evaluation schemes. In this comprehensive experimental analysis, 26 state-of-the-art clustering algorithms are evaluated on two synthetic and 18 benchmark UCI datasets based on Accuracy (ACC), Normalized Mutual Information (NMI), Precision (PR), Recall (RE), F1, Silhouette (SI) and Davies-Bouldin (DB) evaluation criteria. Moreover, the significance of the experimental comparisons is examined using Friedman and Holm's post-hoc statistical tests. The experimental analysis demonstrates the superior performance of variance-based feature weighting algorithms in most datasets. All the tested algorithms are implemented in Python, and the related source codes are shared publicly at https://github.com/Amin-Golzari-Oskouei/FWSCA. © 2024 Elsevier B.V.
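    The evaluation protocol described above, external criteria on labeled datasets plus internal criteria, can be reproduced in a few lines with scikit-learn; the dataset and the plain k-means clusterer below are placeholders rather than any of the 26 reviewed algorithms.

        from sklearn.cluster import KMeans
        from sklearn.datasets import load_iris
        from sklearn.metrics import (normalized_mutual_info_score, silhouette_score,
                                     davies_bouldin_score)

        X, y_true = load_iris(return_X_y=True)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        # One external criterion (needs ground truth) and two internal criteria (do not).
        print("NMI:", normalized_mutual_info_score(y_true, labels))
        print("SI :", silhouette_score(X, labels))
        print("DB :", davies_bouldin_score(X, labels))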
  • FIP: A fast overlapping community-based influence maximization algorithm using probability coefficient of global diffusion in social networks
    (Elsevier Ltd, 2023) Bouyer, Asgarali; Ahmadi Beni, Hamid; Arasteh, Bahman; Aghaee, Zahra; Ghanbarzadeh, Reza
    Influence maximization is the process of identifying a small set of influential nodes in a complex network to maximize the number of activated nodes. Due to critical issues such as accuracy, stability, and time complexity in selecting the seed set, many studies and algorithms have been proposed in the recent decade. However, most influence maximization algorithms run into major challenges such as suboptimal seed node selection, unsuitable influence spread, and high time complexity. This paper intends to solve these challenges by decreasing the search space to reduce the time complexity. Furthermore, it selects the seed nodes with a more optimal influence spread with respect to the characteristics of the community structure, the diffusion capability of overlapping and hub nodes within and between communities, and the probability coefficient of global diffusion. The proposed algorithm, called the FIP algorithm, primarily detects the overlapping communities, weighs the communities, and analyzes the emotional relationships of the community's nodes. Moreover, the search space for choosing the seed nodes is limited by removing insignificant communities. Then, the candidate nodes are generated using the effect of the probability of global diffusion. Finally, the role of important nodes and the diffusion impact of overlapping nodes in the communities are measured to select the final seed nodes. Experimental results on real-world and synthetic networks indicate that the proposed FIP algorithm significantly outperforms other algorithms in terms of efficiency and runtime.
  • A hybrid chaos-based algorithm for data object replication in distributed systems
    (Taylor & Francis Ltd, 2024) Arasteh, Bahman; Gunes, Peri; Bouyer, Asgarali; Rouhi, Alireza; Ghanbarzadeh, Reza
    One of the primary challenges in distributed systems, such as cloud computing, lies in ensuring that data objects are accessible within a reasonable timeframe. To address this challenge, the data objects are replicated across multiple servers. Estimating the minimum quantity of data replicas and their optimal placement is considered an NP-complete optimization problem. The primary objectives of the current research include minimizing data processing costs, reducing the quantity of replicas, and maximizing the applied algorithms' reliability in replica placement. This paper introduces a hybrid chaos-based swarm approach using the modified shuffle-frog leaping algorithm with a new local search strategy for replicating data in distributed systems. Taking into account the algorithm's performance in static settings, the introduced method reduces the expenses associated with replica placement. The results of the experiment conducted on a standard data set indicate that the proposed approach can decrease data access time by about 33% when using approximately seven replicas. When executed several times, the suggested method yields a standard deviation of approximately 0.012 for the results, which is lower than the result existing algorithms produce. Additionally, the new approach's success rate is higher in comparison with existing algorithms used in addressing the problem of replica placement.
  • Identifying influential nodes based on new layer metrics and layer weighting in multiplex networks
    (Springer London Ltd, 2024) Bouyer, Asgarali; Mohammadi, Moslem; Arasteh, Bahman
    Identifying influential nodes in multiplex complex networks is of critical importance for viral marketing and other real-world information diffusion applications. However, selecting suitable influential spreaders in multiplex networks is more complex due to the existence of multiple layers. Each layer of a multiplex network has its particular importance. According to this research, an important layer with strong spreaders is one positioned in a well-connected neighborhood, with more active edges, active critical nodes, a higher ratio of active nodes and their connections to all possible connections, and greater intersection of intra-layer communication compared to other layers. In this paper, we have formulated a layer weighting method based on the mentioned layer parameters and proposed an algorithm for mapping and computing the rank of nodes based on their spreading capability in multiplex networks. Thus, the result of layer weighting is used in mapping and compressing centrality vector values to a scalar value for calculating the centrality of nodes in multiplex networks by a coupled set of equations. In addition, based on this new method, the important layer parameters are combined for the first time to compute the influence of nodes from different layers. Experimental results on both synthetic and real-world networks show that the proposed layer weighting and mapping method is significantly more effective at detecting highly influential spreaders than the compared methods. These results validate the importance of a suitable layer weighting measure for identifying potential spreaders in multiplex networks.
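    A minimal sketch of the mapping step described above: per-layer centrality vectors are compressed to one scalar score per node using layer weights. The two-layer toy example, the use of degree centrality, and the fixed weights are assumptions, not the paper's weighting formula.

        import networkx as nx
        import numpy as np

        def weighted_node_scores(layers, layer_weights):
            # layers: graphs over the same node set; layer_weights: one weight per layer.
            nodes = sorted(layers[0].nodes)
            centrality = np.array([[nx.degree_centrality(g)[n] for n in nodes] for g in layers])
            weights = np.array(layer_weights) / np.sum(layer_weights)
            return dict(zip(nodes, weights @ centrality))   # one scalar score per node

        layer_a = nx.erdos_renyi_graph(30, 0.15, seed=1)
        layer_b = nx.erdos_renyi_graph(30, 0.05, seed=2)
        scores = weighted_node_scores([layer_a, layer_b], layer_weights=[0.7, 0.3])
        print(sorted(scores, key=scores.get, reverse=True)[:5])   # top-5 candidate spreaders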
  • Identifying top influential spreaders based on the influence weight of layers in multiplex networks
    (Pergamon-Elsevier Science Ltd, 2023) Zhou, Xiaohui; Bouyer, Asgarali; Maleki, Morteza; Mohammadi, Moslem; Arasteh, Bahman
    Detecting influential nodes in multiplex networks is a complex task due to the presence of multiple layers. In this study, we propose a method for identifying important layers with strong spreaders based on several key parameters. These include a layer's position within a well-connected neighborhood, the number of active edges and critical nodes, the ratio of active nodes to all possible connections, and the intersection of intra-layer communication compared to other layers. To accomplish this, we have formulated a layer weighting method which takes into account these parameters, and developed an algorithm for mapping and computing the rank of nodes based on their spreading capability within multiplex networks. The resulting layer weighting is then used to map and compress centrality vector values to a scalar value, allowing us to calculate node centrality in multiplex networks via a coupled set of equations. Moreover, our method combines the important layer parameters to compute the influence of nodes from different layers. Our experimental results, conducted on both synthetic and real-world networks, demonstrate that the proposed approach significantly outperforms existing methods in detecting high influential spreaders. These findings highlight the importance of using a suitable layer weighting measure for identifying potential spreaders in multiplex networks.
  • Local core expanding-based label diffusion and local deep embedding for fast community detection algorithm in social networks
    (Pergamon-elsevier science ltd, 2024) Bouyer, Asgarali; Shahgholi, Pouya; Arasteh, Bahman; Tirkolaee, Erfan Babaee
    Community detection is a key task in social network analysis, as it reveals the underlying structure and function of the network. Various global and local techniques exist for uncovering community structures in social networks wherein diffusion-based algorithms are proposed as novel methods for local community detection, particularly suited for large-scale networks. The efficacy of diffusion processes and initial detection is paramount in the successful identification of community structures within social networks. This effectiveness hinges significantly on the meticulous selection of the label diffuser core, which serves as the foundation for propagating labels through the network, and the precise labeling of boundary nodes. Addressing the constraints of current community detection algorithms, notably their time complexity and efficiency, this paper proposes a novel local community detection algorithm that combines core expansion with label diffusion, and deep embedding techniques. In the proposed method, a new centrality measure is introduced for appropriate core selection to facilitate precise label diffusion in the initial phase. Subsequently, a deep embedding technique is employed for updating labels of boundary and core nodes using the GraphSage embedding method. Finally, a rapid merging step is executed to amalgamate initially proximate communities into finalized community structures in large-scale social networks. We evaluate our algorithm on 14 real-world and 4 synthetic networks and show that it outperforms existing methods in terms of NMI, F-measure, ARI, and modularity. According to numerical results, the proposed method shows approximately 1.04 %, 1.03 %, and 1.12 % improvement in F-measure, NMI, and ARI measures respectively, compared to the second-best method, LBLD, in the networks with ground-truth. In addition, our method is able to accurately identify communities in large-scale networks such as Orkut, YouTube, and LiveJournal, where it ranks among the top-performing methods. Our approach exhibits the best performance in terms of ARI compared to other algorithms under comparison.
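    A bare-bones version of the GraphSAGE mean-aggregation step mentioned above, which updates a node's embedding from its own features and the mean of its neighbors' features; the random features, single layer, and weight initialization are illustrative only.

        import numpy as np
        import networkx as nx

        def graphsage_mean_layer(g, features, weight):
            # One GraphSAGE layer with a mean aggregator: concatenate a node's feature with
            # the mean of its neighbors' features, apply a linear map, then a ReLU.
            out = {}
            for v in g.nodes:
                neigh = list(g[v])
                agg = np.mean([features[u] for u in neigh], axis=0) if neigh else np.zeros_like(features[v])
                out[v] = np.maximum(np.concatenate([features[v], agg]) @ weight, 0.0)
            return out

        g = nx.karate_club_graph()
        d_in, d_out = 8, 4
        feats = {v: np.random.rand(d_in) for v in g.nodes}
        W = np.random.randn(2 * d_in, d_out) * 0.1
        print(graphsage_mean_layer(g, feats, W)[0])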
  • Meet User's Service Requirements in Smart Cities Using Recurrent Neural Networks and Optimization Algorithm
    (Ieee-Inst Electrical Electronics Engineers Inc, 2023) Sefati, Seyed Salar; Arasteh, Bahman; Halunga, Simona; Fratu, Octavian; Bouyer, Asgarali
    Despite significant advancements in Internet of Things (IoT)-based smart cities, service discovery, and composition continue to pose challenges. Current methodologies face limitations in optimizing Quality of Service (QoS) in diverse network conditions, thus creating a critical research gap. This study presents an original and innovative solution to this issue by introducing a novel three-layered recurrent neural network (RNN) algorithm. Aimed at optimizing QoS in the context of IoT service discovery, our method incorporates user requirements into its evaluation matrix. It also integrates long short-term memory (LSTM) networks and a unique black widow optimization (BWO) algorithm, collectively facilitating the selection and composition of optimal services for specific tasks. This approach allows the RNN algorithm to identify the top-K services based on QoS under varying network conditions. Our methodology's novelty lies in implementing LSTM in the hidden layer and employing backpropagation through time (BPTT) for parameter updates, which enables the RNN to capture temporal patterns and intricate relationships between devices and services. Further, we use the BWO algorithm, which simulates the behavior of black widow spiders, to find the optimal combination of services to meet system requirements. This algorithm factors in both the attractive and repulsive forces between services to isolate the best candidate solutions. In comparison with existing methods, our approach shows superior performance in terms of latency, availability, and reliability. Thus, it provides an efficient and effective solution for service discovery and composition in IoT-based smart cities, bridging a significant gap in current research.
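    A small sketch of the kind of recurrent scorer described above: an LSTM reads a sequence of QoS observations per candidate service and a linear head produces a score used to rank the top-K services. The feature layout, dimensions, and random data are placeholders, not the paper's three-layer architecture or the BWO composition step.

        import torch
        import torch.nn as nn

        class QoSScorer(nn.Module):
            def __init__(self, n_features=6, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):
                # x: (num_services, time_steps, n_features) QoS history per candidate service.
                _, (h, _) = self.lstm(x)            # final hidden state summarizes the sequence
                return self.head(h[-1]).squeeze(-1)

        services = torch.randn(20, 10, 6)           # 20 services, 10 time steps, 6 QoS features
        scores = QoSScorer()(services)
        print(torch.topk(scores, k=5).indices.tolist())   # indices of the top-5 services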
  • A Modified Horse Herd Optimization Algorithm and Its Application in the Program Source Code Clustering
    (Wiley-Hindawi, 2023) Arasteh, Bahman; Gunes, Peri; Bouyer, Asgarali; Gharehchopogh, Farhad Soleimanian; Banaei, Hamed Alipour; Ghanbarzadeh, Reza
    Maintenance is one of the costliest phases in the software development process. If architectural design models are accessible, software maintenance can be made more straightforward. When the software's source code is the only available resource, comprehending the program profoundly impacts the costs associated with software maintenance. The primary objective of comprehending the source code is extracting information used during the software maintenance phase. Generating a structural model based on the program source code is an effective way of reducing overall software maintenance costs. Software module clustering is considered a tremendous reverse engineering technique for constructing structural design models from the program source code. The main objectives of clustering modules are to reduce the quantity of connections between clusters, increase connections within clusters, and improve the quality of clustering. Finding the perfect clustering model is considered an NP-complete problem, and many previous approaches had significant issues in addressing this problem, such as low success rates, instability, and poor modularization quality. This paper applied the horse herd optimization algorithm, a distinctive population-based and discrete metaheuristic technique, in clustering software modules. The proposed method's effectiveness in addressing the module clustering problem was examined by ten real-world standard software test benchmarks. Based on the experimental data, the quality of the clustered models produced is approximately 3.219, with a standard deviation of 0.0718 across the ten benchmarks. The proposed method surpasses former methods in convergence, modularization quality, and result stability. Furthermore, the experimental results demonstrate the versatility of this approach in effectively addressing various real-world discrete optimization challenges.
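    The modularization quality (MQ) being optimized can be computed as below using one common formulation (TurboMQ), in which each cluster contributes a factor based on its internal versus boundary-crossing edges; the toy dependency graph is made up for illustration.

        import networkx as nx

        def turbo_mq(graph, clusters):
            # clusters: list of node sets. Each cluster factor is 2*mu / (2*mu + eps), where
            # mu counts intra-cluster edges and eps counts edges crossing the cluster boundary.
            mq = 0.0
            for c in clusters:
                mu = graph.subgraph(c).number_of_edges()
                eps = sum(1 for u, v in graph.edges if (u in c) != (v in c))
                if mu or eps:
                    mq += 2 * mu / (2 * mu + eps)
            return mq

        # Toy module-dependency graph with two tightly connected groups.
        g = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
        print(turbo_mq(g, [{0, 1, 2}, {3, 4, 5}]))    # good clustering -> higher MQ
        print(turbo_mq(g, [{0, 3}, {1, 4}, {2, 5}]))  # poor clustering -> lower MQ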
  • Program source code comprehension by module clustering using combination of discretized gray wolf and genetic algorithms
    (Elsevier Ltd, 2022) Arasteh, Bahman; Abdi, Mohammad; Bouyer, Asgarali
    Maintenance is a critical and costly phase of the software lifecycle. Understanding the structure of software makes it much easier to maintain. Clustering the modules of software is regarded as a useful reverse engineering technique for constructing software structural models from source code. Minimizing the connections between produced clusters, maximizing the internal connections within the clusters, and maximizing the clustering quality are the most important objectives in software module clustering. Finding the optimal software clustering model is regarded as an NP-complete problem. The low success rate, limited stability, and poor modularization quality are the main drawbacks of the previous methods. In this paper, a combination of the gray wolf optimization algorithm and genetic algorithms is suggested for efficient clustering of software modules. An extensive series of experiments on 14 standard benchmarks was conducted to evaluate the proposed method. The results illustrate that applying the combination of gray wolf and genetic algorithms to the software-module clustering problem increases the quality of clustering. In terms of modularization quality and convergence speed, the proposed hybrid method outperforms the other heuristic approaches.
  • A quality-of-service aware composition-method for cloud service using discretized ant lion optimization algorithm
    (Springer London Ltd, 2024) Arasteh, Bahman; Aghaei, Babak; Bouyer, Asgarali; Arasteh, Keyvan
    In cloud systems, service providers supply a pool of resources in the form of web services, and these services are merged to provide the required composite services. Composing a quality-of-service-aware web service is akin to the knapsack problem, which is NP-hard. Different artificial intelligence and heuristic methods have been used to achieve optimal or near-optimal composite services. In this paper, the Ant Lion optimization algorithm was modified and discretized to choose the appropriate web services from the existing services and to provide the optimal composite services. The QWS dataset, a collection of 2507 real-world web services, was used to evaluate the proposed method. In this study, response time, availability, throughput, success capability, reliability, and latency were used as the web service quality metrics. The results of the conducted experiments confirm that the composite service provided by the proposed method has considerably higher quality than those of the other related algorithms. Hence, the proposed method can be used in the cloud resource discovery layer.
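    The fitness of a candidate composition is typically an aggregation of the member services' QoS values; a common convention for a sequential workflow (summing response time and latency, multiplying availability and reliability, taking the minimum throughput) is sketched below as an illustration rather than the paper's exact scoring.

        import math

        def composite_qos(services):
            # Aggregate QoS of a sequential composition: additive, multiplicative,
            # and bottleneck attributes are combined differently.
            return {
                "response_time": sum(s["response_time"] for s in services),      # additive
                "latency": sum(s["latency"] for s in services),                  # additive
                "availability": math.prod(s["availability"] for s in services),  # multiplicative
                "reliability": math.prod(s["reliability"] for s in services),    # multiplicative
                "throughput": min(s["throughput"] for s in services),            # bottleneck
            }

        candidate = [
            {"response_time": 120, "latency": 15, "availability": 0.99, "reliability": 0.98, "throughput": 40},
            {"response_time": 80, "latency": 10, "availability": 0.97, "reliability": 0.99, "throughput": 55},
        ]
        print(composite_qos(candidate))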
  • Review of heterogeneous graph embedding methods based on deep learning techniques and comparing their efficiency in node classification
    (Springer Wien, 2024) Noori, Azad; Balafar, Mohammad Ali; Bouyer, Asgarali; Salmani, Khosro
    Graph embedding is an advantageous technique for reducing computational costs and effectively using graph information in machine learning tasks like classification, clustering, and link prediction. As a result, it has become a key method in various research areas. However, different embedding methods may be used depending on the variety of graphs available. One of the most commonly used graph types is the heterogeneous graph (HG) or heterogeneous information network (HIN), which presents unique challenges for graph embedding approaches due to its diverse set of nodes and edges. Several methods have been proposed for heterogeneous graph embedding in recent years to overcome these challenges. This paper aims to review the latest techniques used for this purpose, divided into two main parts: the first part describes the fundamental concepts and obstacles in heterogeneous graph embedding, while the second part compares the most critical methods. Finally, the results are summarized, outlining the challenges and opportunities for future directions.
  • Time and cost-effective online advertising in social Internet of Things using influence maximization problem
    (Springer, 2024) Molaei, Reza; Fard, Kheirollah Rahsepar; Bouyer, Asgarali
    Recently, a novel concept called the Social Internet of Things (SIoT) has emerged, which combines the Internet of Things (IoT) and social networks. SIoT plays a significant role in various aspects of modern human life, including smart transportation, online healthcare systems, and viral marketing. One critical challenge in SIoT-based advertising is identifying the most effective objects for maximizing advertising impact. This research paper introduces a highly efficient heuristic algorithm named Influence Maximization-Cost Minimization for Advertising in the Social Internet of Things (IMCMoT), inspired by real-world advertising strategies. The IMCMoT algorithm comprises three essential steps: initial preprocessing, candidate object selection, and final seed set identification. In the initial preprocessing phase, objects that are not suitable for advertising purposes are eliminated. Reducing the problem space not only minimizes computational overhead but also reduces execution time. Inspired by real-world advertising, we then select influential candidate objects based on their effective sociality rate, which accounts for both the object's sociality rate and relevant selection cost factors. By integrating these factors simultaneously, our algorithm enables organizations to reach a broader audience at a lower cost. Finally, in identifying the final seed set, our algorithm considers the overlap of neighbors between candidate objects and their neighbors. This approach helps minimize the costs associated with spreading duplicate advertisements. Through experimental evaluations conducted on both real-world and synthetic networks, our algorithm demonstrates superior performance compared to other state-of-the-art algorithms. Specifically, it outperforms existing methods in terms of influence spread, reduces advertising cost by more than 2-3 times, and reduces duplicate advertising. Additionally, the running time of the IMCMoT algorithm is acceptable, further highlighting its practicality and efficiency.