Unified Hybrid Self-Supervised Architecture Combining Contrastive and Non-Contrastive Learning for Multi-Level Visual Representation
Keywords:
Contrastive learning, Representation learning, Non-contrastive learning, Unlabeled image datasets

Abstract
This paper presents a hybrid self-supervised learning framework for fine-grained visual representation learning on the STL-10 dataset. To address the challenges of limited labeled data and complex image variations, the proposed approach integrates four self-supervised paradigms (BYOL, SimCLR, MoCo v3, and DINO) into a unified architecture. The model uses a shared backbone with projection and predictor heads, along with a target network updated via an exponential moving average (EMA), to extract robust representations from unlabeled images. Data augmentations, including random resized cropping (RRC), horizontal flipping, color jittering, and grayscale conversion, are employed to generate multiple correlated views for combined contrastive and non-contrastive learning. Experimental results demonstrate stable convergence with an average BYOL loss of –0.88. Downstream evaluation confirms the high quality of the learned embeddings, achieving a test accuracy of 87.20%, a recall of 87.20%, and an F1 score of 87.19%. Furthermore, the framework attained a mean Average Precision (mAP) of 93.70%, indicating highly discriminative and transferable feature representations. We demonstrate that hybrid self-supervision effectively leverages mutually complementary learning paradigms, yielding superior representation quality and faster convergence compared to single-method baselines.
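The abstract describes the framework's core mechanics: a shared online backbone with projection and predictor heads, an EMA-updated target network, correlated augmented views, and a combined contrastive and non-contrastive objective. The sketch below illustrates one way these pieces could fit together in PyTorch; the module sizes, momentum value, temperature, loss weighting, and helper names (HybridSSL, hybrid_loss, info_nce) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the hybrid online/target setup summarized in the abstract.
# Hyperparameters and layer sizes below are assumptions for illustration only.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms

# Two correlated views via the augmentations named in the abstract
# (random resized crop, horizontal flip, color jitter, grayscale).
augment = transforms.Compose([
    transforms.RandomResizedCrop(96),  # STL-10 images are 96x96
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def mlp(in_dim, hidden=2048, out_dim=256):
    # Projection / prediction head: Linear -> BN -> ReLU -> Linear.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden),
                         nn.ReLU(inplace=True), nn.Linear(hidden, out_dim))

class HybridSSL(nn.Module):
    """Online backbone + projector + predictor, with an EMA target network."""
    def __init__(self, backbone, feat_dim, momentum=0.996):
        super().__init__()
        self.online_backbone = backbone
        self.online_projector = mlp(feat_dim)
        self.predictor = mlp(256)
        # Target network: EMA copy of backbone + projector, no gradients.
        self.target_backbone = copy.deepcopy(backbone)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in list(self.target_backbone.parameters()) + list(self.target_projector.parameters()):
            p.requires_grad = False
        self.momentum = momentum

    @torch.no_grad()
    def update_target(self):
        # EMA update: target <- m * target + (1 - m) * online.
        for o, t in zip(self.online_backbone.parameters(), self.target_backbone.parameters()):
            t.data = self.momentum * t.data + (1 - self.momentum) * o.data
        for o, t in zip(self.online_projector.parameters(), self.target_projector.parameters()):
            t.data = self.momentum * t.data + (1 - self.momentum) * o.data

    def forward(self, v1, v2):
        # Online path produces predictions; target path produces stop-gradient targets.
        p1 = self.predictor(self.online_projector(self.online_backbone(v1)))
        p2 = self.predictor(self.online_projector(self.online_backbone(v2)))
        with torch.no_grad():
            z1 = self.target_projector(self.target_backbone(v1))
            z2 = self.target_projector(self.target_backbone(v2))
        return p1, p2, z1, z2

def byol_loss(p, z):
    # Non-contrastive term: negative cosine similarity between prediction and target.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def info_nce(p, z, temperature=0.2):
    # Contrastive term (SimCLR/MoCo style): positives on the diagonal, rest of the batch as negatives.
    p, z = F.normalize(p, dim=-1), F.normalize(z.detach(), dim=-1)
    logits = p @ z.t() / temperature
    labels = torch.arange(p.size(0), device=p.device)
    return F.cross_entropy(logits, labels)

def hybrid_loss(p1, p2, z1, z2, alpha=0.5):
    # Symmetrized combination of the non-contrastive and contrastive objectives.
    noncon = byol_loss(p1, z2) + byol_loss(p2, z1)
    con = info_nce(p1, z2) + info_nce(p2, z1)
    return alpha * noncon + (1 - alpha) * con

# Example usage (hypothetical): wrap a torchvision ResNet-18 as the shared backbone.
# backbone = torchvision.models.resnet18(); backbone.fc = nn.Identity()
# model = HybridSSL(backbone, feat_dim=512)
# After each optimizer step, call model.update_target() to refresh the EMA weights.
```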