Volume 8 (Том 8)
Browsing Volume 8 by Author "Shvai, Nadiia"
Robustness of Neural Decision Trees to Noise in Input Data for Image Classification Tasks (2025)
Mokryi, Mykhailo; Shvai, Nadiia

Neural networks, particularly convolutional neural networks (CNNs), have demonstrated high effectiveness in image classification tasks. However, they are known to be vulnerable to input data perturbations and have weak interpretability due to their black-box nature. In contrast, traditional decision trees (DTs) provide transparent decision-making processes but are limited to low-dimensional or tabular data, restricting their applicability in computer vision tasks such as image classification. To address this gap, a hybrid architecture known as Neural Decision Trees (NDTs) has emerged, combining the strong generalization and learning capabilities of neural networks with the transparent hierarchical inference and interpretability of DTs. This article investigates the robustness of NDTs to noise in input data for image classification tasks. Despite extensive studies covering the robustness of both CNNs and traditional DTs against various forms of input perturbation, the robustness of NDT models remains largely underexplored. This study applies two training methods to improve robustness: constant noise learning and incremental noise learning. Both were originally developed for CNNs but can be effectively applied to NDT-based architectures, significantly improving model robustness to noisy images. These methods add samples perturbed by a Gaussian blur during the training stage. The noisy test set, consisting of images perturbed by a Gaussian blur, is used to evaluate robustness. A series of experiments were conducted on the CIFAR-10 dataset using the original training baseline and the robust training methods.
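The two training strategies described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the schedule shapes, sigma values, and the fraction of blurred samples are assumptions, and the blur is applied with SciPy's `gaussian_filter` to a NumPy image batch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def constant_sigma(epoch, sigma=1.0):
    """Constant noise learning: the same blur strength at every epoch."""
    return sigma

def incremental_sigma(epoch, max_epoch, max_sigma=2.0):
    """Incremental noise learning: blur strength grows linearly with the epoch."""
    return max_sigma * (epoch + 1) / max_epoch

def augment_batch(images, sigma, frac=0.5, rng=None):
    """Blur a random fraction of an (N, H, W, C) batch with a Gaussian kernel."""
    rng = rng or np.random.default_rng(0)
    out = images.copy()
    mask = rng.random(len(images)) < frac
    for i in np.flatnonzero(mask):
        # Blur spatial dimensions only; leave the channel axis untouched.
        out[i] = gaussian_filter(out[i], sigma=(sigma, sigma, 0))
    return out
```

In a training loop, `augment_batch` would be called each epoch with the sigma given by the chosen schedule; the constant schedule exposes sigma as the single knob behind the clean/noisy trade-off the abstract mentions.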
The results demonstrate that constant and incremental noise learning significantly improve the robustness of all tested NDT models to noisy images compared to their original training performance. While the ResNet18 baseline model demonstrates higher overall performance, the NDT models show comparable robustness improvements under the proposed robust training strategies. Constant noise learning offered an adjustable trade-off between performance on clean and noisy images, while incremental noise learning provided a more stable training process. The first method is considered preferable due to its simplicity of implementation. This study empirically confirms that NDT models can effectively use methods adapted from CNNs to improve their robustness against perturbations in input data. An NDT framework was developed to conduct training and validation through a standardized shared pipeline. It is available at: github.com/MikhailoMokryy/NDTFramework

Validating Architectural Hypotheses in Neural Decision Trees with Neural Architecture Search (2025)
Mykytyshyn, Artem; Shvai, Nadiia

This article introduces an automated and unbiased framework for validating architectural hypotheses for neural network models, with a particular focus on Neural Decision Trees (NDTs). The proposed methodology employs Neural Architecture Search (NAS) as an unbiased tool to explore architectural variations and empirically assess theoretical claims. To demonstrate this framework, we investigate a hypothesis found in the literature: that the complexity of decision nodes in NDTs decreases monotonically with tree depth. This assumption, initially motivated by the task of monocular depth estimation, suggests that deeper nodes in the tree require fewer parameters due to simpler split functions.
To rigorously test this hypothesis, we conduct a series of NAS campaigns on the CIFAR-10 image classification task, searching over convolutional and fully connected layers while all other architectural components are held constant to isolate the effect of node depth. By applying Tree-structured Parzen Estimator (TPE)-based NAS and evaluating over 300 architectures, we quantify complexity metrics across tree levels and analyze their correlations using Spearman's rank correlation coefficient. The results provide no statistical or visual evidence supporting the hypothesized trend: node complexity does not decrease with depth. Instead, complexity remains nearly constant across levels, regardless of tree depth or search-space size. These results suggest that assumptions derived from specific applications may not generalize to other domains, underscoring the importance of empirical validation and careful search-space design. The presented framework may serve as a foundation for verifying other structural assumptions across various neural network families and applications.
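The depth-versus-complexity check described above can be sketched as a rank correlation between a node's depth and its parameter count. The sketch below is illustrative only: it implements Spearman's coefficient as the Pearson correlation of ranks, assumes no tied values for simplicity (the article's actual analysis may use a tie-aware estimator such as `scipy.stats.spearmanr`), and the data passed to it would come from the NAS-evaluated architectures.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    Simplified variant that assumes no ties in either sequence.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Under the hypothesized monotonic decrease, correlating node depths against per-node parameter counts would yield a rho close to -1; the near-constant complexity the article reports corresponds to a rho close to 0.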