Yazar "Yoon, Jin Hee" seçeneğine göre listele

Showing 1 - 3 of 3
  • Item
    Design of hierarchical neural networks using deep LSTM and self-organizing dynamical fuzzy-neural network architecture
    (IEEE-Inst Electrical Electronics Engineers, 2024) Zhou, Kun; Oh, Sung-Kwun; Qiu, Jianlong; Pedrycz, Witold; Seo, Kisung; Yoon, Jin Hee
    Time series forecasting is an essential and challenging task, especially for large-scale time-series (LSTS) forecasting, which plays a crucial role in many real-world applications. Due to the instability of time series data and the randomness (noise) of their characteristics, it is difficult for the polynomial neural network (PNN) and its modifications to achieve accurate and stable time series prediction. In this study, we propose a novel hierarchical neural network (HNN) structure realized by long short-term memory (LSTM) and two classes of self-organizing dynamical fuzzy neural network architectures, fuzzy rule-based polynomial neurons (FPNs) and polynomial neurons, constructed by varying the generation of nodes as well as network layers. The proposed HNN combines the deep learning method with the PNN method for the first time and extends it to time series prediction as a modification of PNN. LSTM extracts the temporal dependencies present in each time series and enables the model to learn its representation. FPNs are designed to capture the complex nonlinear patterns present in the data space by utilizing fuzzy C-means (FCM) clustering and least-square-error-based learning of polynomial functions. The self-organizing hierarchical network architecture generated by the Elitism-based Roulette Wheel Selection strategy ensures that candidate neurons exhibit sufficient fitting ability while enriching the diversity of heterogeneous neurons, addressing the issue of multicollinearity and providing opportunities to select better prediction neurons. In addition, L2-norm regularization is applied to mitigate the overfitting problem. Experiments are conducted on nine real-world LSTS datasets including three practical applications. The results show that the proposed model exhibits high prediction performance, outperforming many state-of-the-art models. (An illustrative sketch of the fuzzy polynomial neuron building block appears after this listing.)
  • Item
    Design of progressive fuzzy polynomial neural networks through gated recurrent unit structure and correlation/probabilistic selection strategies
    (Elsevier, 2023) Wang, Zhen; Oh, Sung-Kwun; Wang, Zheng; Fu, Zunwei; Pedrycz, Witold; Yoon, Jin Hee
    This study focuses on two critical design aspects of a progressive fuzzy polynomial neural network (PFPNN): the influence of the gated recurrent unit (GRU) structure and the implementation of fitness-based candidate neuron selection (FCNS) through two probabilistic strategies. The primary objectives are to enhance modeling accuracy and to reduce the computational load associated with nonlinear regression tasks. Compared with the existing fuzzy rule-based modeling architecture, the proposed dynamic model consists of the GRU structure and the hybrid fuzzy polynomial architecture. In the initial two layers of the PFPNN, we introduce three types of polynomial and fuzzy rules into the GRU neurons (GNs) and fuzzy polynomial neurons (FPNs), which can effectively reveal potential complex relationships in the data space. The synergy of the FCNS strategies and the L2 regularization learning method is used to design a progressive regression model adept at melding the GRU structure with a self-organizing architecture. The proposed GRU structure and polynomial-based neurons significantly improve the modeling accuracy for time-series datasets. The rational use of the FCNS strategies reinforces the network structure and uncovers the potential performance of the network's neurons. Furthermore, the inclusion of L2-norm regularization provides additional stability to the proposed model and mitigates the overfitting issue commonly encountered in many existing learning methods. We validated the proposed neural networks using six time-series, four machine learning, and two real-world datasets. The PFPNN outperformed other models in the comparison studies on 83.3% of the datasets, emphasizing its superiority in terms of developing a stable deep structure from diverse candidate neurons and reducing computational overhead. © 2023 Elsevier B.V. All rights reserved. (A sketch of fitness-based candidate selection via elitism and roulette-wheel sampling appears after this listing.)
  • Item
    Reinforced Interval Type-2 Fuzzy Clustering-Based Neural Network Realized Through Attention-Based Clustering Mechanism and Successive Learning
    (IEEE-Inst Electrical Electronics Engineers Inc, 2024) Liu, Shuangrong; Oh, Sung-Kwun; Pedrycz, Witold; Yang, Bo; Wang, Lin; Yoon, Jin Hee
    In this article, a novel attention-based reinforced interval type-2 fuzzy clustering neural network (ARIT2FCN) is developed to improve the generalization performance of fuzzy clustering-based neural networks (FCNNs). Commonly, fuzzy rules in FCNNs are generated through a clustering-based rule generator. However, the generated fuzzy rules may not fully describe the given data, because the clustering-based rule generator does not simultaneously consider the intracluster homogeneity and intercluster heterogeneity of both the data characteristics and the label information when defining the membership functions (MFs) of the fuzzy rules. This prevents the fuzzy rules from accurately quantifying the interclass heterogeneity and intraclass homogeneity and degrades the performance of FCNNs. The ARIT2FCN is proposed with the aid of an attention-based clustering mechanism and a successive learning method. The attention-based clustering mechanism is designed to define MFs by simultaneously considering data characteristics and label information. The successive learning method is adopted to construct the desired fuzzy rules that can capture the interclass heterogeneity and intraclass homogeneity. Moreover, L2-norm regularization is used to alleviate the overfitting effect. The performance of ARIT2FCN is evaluated on machine learning datasets against 16 comparative methods. In addition, two real-world problems are adopted to validate the effectiveness of ARIT2FCN. Experimental results demonstrate that ARIT2FCN outperforms the comparative methods, and statistical tests also support its superiority. (A sketch of the interval type-2 membership idea appears after this listing.)
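
Both PNN-style records above build their layers from fuzzy rule-based polynomial neurons (FPNs): fuzzy C-means memberships gate local polynomial models whose coefficients are fitted by L2-regularized least squares. The sketch below is a minimal, NumPy-only illustration of that single building block under those stated assumptions; the quadratic basis, hyperparameters, and toy data are illustrative, and this is not the authors' implementation.

```python
# Minimal sketch of one fuzzy rule-based polynomial neuron (FPN): fuzzy C-means
# memberships gate per-rule quadratic models fitted by ridge (L2) least squares.
# Illustrative only; names, basis, and data are assumptions, not the papers' code.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: returns cluster centers and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))                 # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]         # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
    return centers, U

def quad_features(X):
    """Second-order polynomial basis: [1, x_i, x_i * x_j]."""
    n, p = X.shape
    cross = np.stack([X[:, i] * X[:, j] for i in range(p) for j in range(i, p)], axis=1)
    return np.hstack([np.ones((n, 1)), X, cross])

class FuzzyPolynomialNeuron:
    def __init__(self, n_rules=3, l2=1e-2):
        self.n_rules, self.l2 = n_rules, l2

    def fit(self, X, y):
        self.centers, U = fcm(X, c=self.n_rules)
        Phi = quad_features(X)
        self.coefs = []
        for r in range(self.n_rules):                          # one ridge fit per fuzzy rule
            A = Phi * U[:, r][:, None]                         # membership-weighted design matrix
            G = A.T @ Phi + self.l2 * np.eye(Phi.shape[1])
            self.coefs.append(np.linalg.solve(G, A.T @ y))
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** 2 * (1.0 / d ** 2).sum(axis=1, keepdims=True))   # memberships (m = 2)
        local = np.stack([quad_features(X) @ c for c in self.coefs], axis=1)
        return (U * local).sum(axis=1)                         # membership-weighted rule outputs

# Toy usage on a synthetic nonlinear target.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)
print(FuzzyPolynomialNeuron().fit(X, y).predict(X[:5]))
```
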
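The first record names Elitism-based Roulette Wheel Selection and the second describes fitness-based candidate neuron selection (FCNS) through probabilistic strategies. The sketch below shows one common strategy of this kind, assumed here for illustration: keep a few elite candidate neurons outright, then fill the remaining slots by fitness-proportional (roulette-wheel) sampling so weaker but diverse neurons still have a chance of entering the next layer. The fitness values and sizes are made up; this is not the papers' exact procedure.

```python
# Illustrative elitism + roulette-wheel selection over candidate neurons.
import numpy as np

def select_candidates(fitness, n_select, n_elite=2, rng=None):
    """Return indices of selected candidate neurons (higher fitness = better)."""
    if rng is None:
        rng = np.random.default_rng(0)
    fitness = np.asarray(fitness, dtype=float)
    order = np.argsort(fitness)[::-1]
    elites = order[:n_elite]                           # elitism: always keep the best
    rest = order[n_elite:]
    # Roulette wheel over the remaining candidates, proportional to (shifted) fitness.
    w = fitness[rest] - fitness[rest].min() + 1e-9
    picked = rng.choice(rest, size=n_select - n_elite, replace=False, p=w / w.sum())
    return np.concatenate([elites, picked])

# Example: 10 candidate neurons scored by validation RMSE (lower is better),
# converted to a "higher is better" fitness before selection.
rmse = np.array([0.42, 0.35, 0.51, 0.33, 0.47, 0.39, 0.60, 0.36, 0.44, 0.38])
print(select_candidates(fitness=-rmse, n_select=5))
```

The design choice illustrated is the trade-off both abstracts point at: pure greedy selection risks multicollinear, near-duplicate neurons, while purely random selection wastes capacity, so elites guarantee fitting ability and the probabilistic draw preserves diversity.
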
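The third record builds on interval type-2 fuzzy sets, where each fuzzy set is bounded by a lower and an upper membership function. The sketch below illustrates only that basic idea with Gaussian sets of uncertain width and a simple interval-weighted rule output; the attention-based clustering mechanism and successive learning of ARIT2FCN are not reproduced, and all parameter values are assumptions.

```python
# Interval type-2 Gaussian memberships: a lower and an upper membership per set,
# here from a shared center with uncertain width. Textbook-style illustration only.
import numpy as np

def it2_gaussian_membership(x, center, sigma_lo, sigma_hi):
    """Return (lower, upper) memberships of x for interval type-2 Gaussian sets."""
    lower = np.exp(-0.5 * ((x - center) / sigma_lo) ** 2)      # narrower -> lower bound
    upper = np.exp(-0.5 * ((x - center) / sigma_hi) ** 2)      # wider -> upper bound
    return lower, upper

def it2_rule_output(x, centers, sig_lo, sig_hi, consequents):
    """Interval-weighted average of rule consequents; midpoint taken as crisp output."""
    lo, hi = it2_gaussian_membership(x, centers, sig_lo, sig_hi)
    y_lo = (lo * consequents).sum() / lo.sum()
    y_hi = (hi * consequents).sum() / hi.sum()
    return 0.5 * (y_lo + y_hi), (y_lo, y_hi)

# Three rules with uncertain widths (sigma_lo < sigma_hi) and crisp consequents.
centers = np.array([-1.0, 0.0, 1.0])
sig_lo, sig_hi = np.array([0.3, 0.3, 0.3]), np.array([0.6, 0.6, 0.6])
consequents = np.array([-2.0, 0.0, 2.0])
print(it2_rule_output(0.4, centers, sig_lo, sig_hi, consequents))
```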
