Наукові записки НаУКМА. Комп'ютерні науки
The periodical professional journal "Наукові записки НаУКМА. Комп'ютерні науки" is a print mass medium and an open-access journal whose activity is based on publishing scientific articles in the fields of modern cybernetics, informatics, and programming. Founded in 1996, the journal publishes articles of theoretical and practical orientation in Ukrainian or English, at the author's choice. Until 2018, the journal was part of the multi-series edition "Наукові записки НаУКМА" and appeared under the title "Наукові записки НаУКМА. Комп'ютерні науки".
Browsing Наукові записки НаУКМА. Комп'ютерні науки by Title
Now showing 1 - 20 of 150
Item: A hybrid AI model for financial market prediction (2025). Authors: Voitishyn, Mykyta; Kuzmenko, Dmytro.
Financial time series modeling is increasingly complex due to volatility, unexpected breakouts, and the impact of external factors, such as macroeconomic indicators, investor sentiment, company fundamentals, and extreme shocks, like geopolitical events or market manipulations. This paper introduces a hybrid artificial intelligence framework that integrates traditional statistical methods, machine learning models, and Bayesian neural networks (BNNs) to improve predictive performance and uncertainty quantification in financial forecasting. The model leverages a variety of engineered features, including rolling statistics, technical indicators, anomaly scores, interpolated macroeconomic data, and transformer-based sentiment scores. A complete ablation study compares various architectures, including ARIMA, SARIMA, MLR, SNN, and BNN, across multiple prediction windows (1, 3, 5 days) and feature combinations. Results show that while linear models yield the lowest MSE for short-term predictions, they fail to capture non-linear dependencies and uncertainty. In contrast, BNNs offer more reliable mid-term predictions by estimating predictive distributions. The best BNN configuration (Normal distribution, constant variance, TanH activation, 1 hidden layer) achieved an MSE of 0.00022, confirming the advantage of uncertainty-adjusted modeling. Sentiment analysis and anomaly detection were especially impactful when combined with macroeconomic indicators, improving signal reliability and behavioral insight. Our findings highlight the importance of integrating diverse data sources and accounting for predictive uncertainty in financial applications. Additionally, the experiments revealed that compact network architectures often outperform deeper ones when paired with engineered features. All experiments were systematically tracked to ensure reproducibility and facilitate future model benchmarking.
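To make the winning configuration concrete, the following is a minimal sketch, assuming PyTorch, of a one-hidden-layer TanH regressor that outputs the mean of a Normal predictive distribution with a single learned (constant) variance and is trained by negative log-likelihood; the paper's actual BNN, features, and data pipeline are not reproduced here, and all sizes are illustrative.

```python
# Hypothetical sketch: one-hidden-layer TanH regressor modeling the target as
# Normal(mu(x), sigma^2) with one shared (constant) variance parameter, trained
# by minimizing the Gaussian negative log-likelihood.
import torch
import torch.nn as nn

class ConstantVarianceRegressor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        # One shared log-variance parameter => constant predictive variance.
        self.log_var = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.distributions.Normal:
        mu = self.mean_net(x)
        sigma = torch.exp(0.5 * self.log_var)
        return torch.distributions.Normal(mu, sigma)

def train_step(model, optimizer, x, y):
    optimizer.zero_grad()
    dist = model(x)
    loss = -dist.log_prob(y).mean()  # Gaussian negative log-likelihood
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with synthetic placeholders for the engineered features and target.
x = torch.randn(256, 12)
y = torch.randn(256, 1)
model = ConstantVarianceRegressor(n_features=12)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    train_step(model, opt, x, y)
```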
Item: Automated Approach for the Importing the New Photo Set to Private Photo Album to Make it More Searchable (2020). Authors: Nikulin, Dmytro; Buchko, Olena.
This paper describes several Information Retrieval (IR) multimedia systems that work primarily with private photo albums and provides a comparison of these systems in terms of functionality. The systems chosen for comparison are among the best known and most widely used: Mylio, Google Photos, Apple Photos, Synology Moments, and digiKam. Based on this comparison, the paper defines the end user's needs and then identifies and prioritizes (based on the authors' experience and opinion) the issues in private IR multimedia systems that any technical solution should address. The following issues were identified as the most important from the end user's perspective: first, missing EXIF metadata (divided into two sub-issues: a missing taken date and missing GEO data for the place where the photo was taken); second, linking a photo to an author entity (based on the camera brand and model given in the metadata); and, finally, linking a set of photos to a predefined type of event, such as birthdays, anniversaries, holidays, or trips (based on tagging the photos in a semi-automated way). For each issue, the paper provides technical approaches and algorithms (in pseudocode), as well as relevant tools where applicable; each tool mentioned is briefly described and accompanied by a reference for further reading. The paper also describes the human involvement in the process where a fully automated process is not possible. In conclusion, the paper summarizes the key points identified while analyzing and addressing these issues and proposes a technical solution (to be used when importing a new photo set into existing IR multimedia systems), with a context diagram representing the user, the user's devices with multimedia data, the system that imports data from those devices, and the IR multimedia system that stores the entire private collection and is used to search for the target photo set. Finally, it defines the workflow steps describing the main activity flow (involving both humans and software) to be implemented in a future technical solution.
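As an illustration of the first issue (missing EXIF metadata), here is a small sketch of the kind of fallback logic such an import step might use, assuming a reasonably recent Pillow; the tag handling and the fallback to the file modification time are illustrative assumptions, not the paper's own pseudocode.

```python
# Hypothetical sketch: read the "taken date" and GPS block from EXIF and fall
# back to the file's modification time when the date is missing (one of the
# sub-issues named above). Tag ids follow the EXIF specification.
from datetime import datetime
from pathlib import Path

from PIL import Image

EXIF_IFD, GPS_IFD = 0x8769, 0x8825
DATETIME_ORIGINAL, DATETIME = 0x9003, 0x0132

def taken_date_and_gps(path: str):
    exif = Image.open(path).getexif()
    raw = exif.get_ifd(EXIF_IFD).get(DATETIME_ORIGINAL) or exif.get(DATETIME)
    if raw:
        taken = datetime.strptime(str(raw), "%Y:%m:%d %H:%M:%S")
    else:
        # Fallback assumption: use the file's modification time.
        taken = datetime.fromtimestamp(Path(path).stat().st_mtime)
    gps = exif.get_ifd(GPS_IFD)  # empty dict when no GEO data was recorded
    return taken, dict(gps) or None

if __name__ == "__main__":
    date, gps = taken_date_and_gps("IMG_0001.jpg")
    print(date, "GEO data present:", gps is not None)
```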
Item: Development of an iOS Application for Task Planning with Consideration of the User's Emotional State (2025). Authors: Piz, Mariana; Nahirna, Alla.
The article focuses on the development of a digital tool designed to address the problem of decreased productivity caused by emotional exhaustion. The main objective of the study is to create an iOS application for task planning that takes into account the user's emotional state, offers mindful breaks for emotional awareness and recovery, and provides analytics on emotional trends. The research includes a comparative analysis of existing software solutions in the areas of time management and mental well-being. During the development process, modern frameworks, tools, and architectural patterns for iOS development were analyzed. An adaptive planning algorithm was implemented that takes into account both the user's emotional feedback and the attributes of tasks. As a result of the research, a mobile application named Moodpace was developed using Swift, with SwiftUI for building the user interface, SwiftData for data persistence, and the MVVM architectural pattern to ensure a maintainable code structure. During the development process, SwiftLint was used for static code analysis, and SwiftFormat was integrated for automatic code formatting. The app was localized into Ukrainian using String Catalog. The developed application is designed to help users manage their tasks while maintaining balance with their mental well-being. It is suitable for everyday use and especially beneficial for individuals with flexible schedules.

Item: Distributed Systems Technical Audit (2020). Authors: Gorokhovskyi, Kyrylo; Zhylenko, Oleksii; Franchuk, Oleh.
Modern enterprise systems consist of many deployment artifacts, which is why the microservices architecture is extremely popular now. The use of distributed systems is growing rapidly despite their increased complexity and the difficulty of building them. The main reason for this trend is that the advantages of distributed systems outweigh their disadvantages. Nevertheless, releasing a product to the market is not the final step of the software development life cycle. The next important step is maintenance, which lasts much longer than development. System failures and delays in finding and fixing a problem can cause huge financial and reputational expenses. In addition, new features introduced in response to market changes should be delivered on time. Prior to releasing a product to the market, we would like to know possible technical gaps in advance, to understand what to expect and perhaps fix some issues in order to save time and money in the future. In other words, we should be able to decide, relying on data, whether the product is ready for launch. Such analysis is also necessary when taking ownership of an existing product. A technical audit helps to uncover technical debt and assess the risks related to maintaining and extending the system. It should be considered a mandatory activity during release preparation and ownership transfer. Well-defined criteria help to conduct an audit smoothly and uncover most of the technical debt. A properly conducted technical audit reduces the risk of problems after release, but it does not guarantee the commercial success of the product or the absence of problems altogether. In this article, we define what distributed systems are; review the Monolithic, Microservices, and Serverless architectures; and describe what quality attributes are and what should be taken into account during technical audits. Next, we dive into the technical audit process and specify which aspects of the system must be considered during an audit. Then we iterate over checklist items to provide guidelines, based on industry best practices, that help to prepare for a software system audit.

Item: Energy conservation for autonomous agents using reinforcement learning (2025). Authors: Beimuk, Volodymyr; Kuzmenko, Dmytro.
Reinforcement learning (RL) has shown strong potential in autonomous racing for its adaptability to complex and dynamic driving environments. However, most research prioritizes performance metrics such as speed and lap time. Limited consideration is given to improving energy efficiency, despite its increasing importance in sustainable autonomous systems. This work investigates the capacity of RL agents to develop multi-objective driving strategies that balance lap time and fuel consumption by incorporating a fuel usage penalty into the reward function. To simulate realistic uncertainty, fuel usage is excluded from the observation space, forcing the agent to infer fuel consumption indirectly. Experiments are conducted using the Soft Actor-Critic algorithm in a high-fidelity racing simulator, Assetto Corsa, across multiple configurations of vehicles and tracks. We compare various penalty strengths against the non-penalized agent and evaluate fuel consumption, lap time, acceleration and braking profiles, gear usage, engine RPM, and steering behavior. Results show that mild to moderate penalties lead to significant fuel savings with minimal or no loss in lap time. Our findings highlight the viability of reward shaping for multi-objective optimization in autonomous racing and contribute to broader efforts in energy-aware RL for control tasks. Results and supplementary material are available on our project website.
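To make the reward-shaping idea above concrete, a minimal sketch follows, assuming a per-step reward that trades track progress against fuel burned; the coefficient, signal names, and scaling are illustrative assumptions rather than the paper's exact reward function.

```python
# Hypothetical sketch of a shaped per-step reward: progress along the track
# minus a penalty proportional to the fuel burned in that step. The penalty
# weight controls the trade-off between lap time and fuel consumption.
def shaped_reward(progress_delta: float, fuel_used: float,
                  fuel_penalty: float = 0.5) -> float:
    """progress_delta: normalized track progress gained this step;
    fuel_used: fuel consumed this step (hidden from the observation space)."""
    return progress_delta - fuel_penalty * fuel_used

# Sweeping the penalty strength mirrors the comparison described above.
for fuel_penalty in (0.0, 0.25, 0.5, 1.0):
    r = shaped_reward(progress_delta=0.02, fuel_used=0.01,
                      fuel_penalty=fuel_penalty)
    print(f"penalty={fuel_penalty}: reward={r:.4f}")
```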
Item: Euclidean Algorithm for Sound Generation (2021-12-10). Authors: Gorokhovskyi, Semen; Laiko, Artem.
The Euclidean algorithm has been known to humanity for more than two thousand years. During this period, many applications for it have been found across different disciplines, and music is one of them. Its application to music first appeared in 2005, when researchers found a correlation between world music rhythms and the output of the Euclidean algorithm, defining the concept of Euclidean rhythms. In the modern world, music can be created using many approaches. The first is the simple analogue approach: an analogue signal is a sound wave emitted by the vibration of a certain medium, while a signal recorded onto a computer hard drive or other digital storage is called digital and can have digital signal processing methods applied to it. The ability to convert an analogue signal, or to create and modulate digital sounds, opens many possibilities for sound design and production: sonic characteristics that were never accessible because of the limitations of analogue devices and instruments now become attainable. The sound generation process usually consists of modulating a waveform and its frequency and can be influenced by many factors, such as oscillation and the FX pipeline. Programs that process a synthesized or recorded signal are called VST plugins, and they rely on the concepts of digital signal processing. This paper aims to research the possible application of Euclidean rhythms and to integrate them into the sound generation process by creating a VST plugin that modulates the incoming signal with one of four basic wave shapes in order to achieve unique sonic qualities. The varying function allows modulation with one of four basic wave shapes (sine, triangle, square, and sawtooth), depending on the value received from the Euclidean rhythm generator; switching modulating functions introduces subharmonics, resulting in a richer and tighter sound, which can be seen on the spectrograms provided in the publication.
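To make the concept of a Euclidean rhythm concrete, here is a small sketch of one common way to compute E(k, n), i.e. k onsets spread as evenly as possible over n steps, using the rounding formulation; the plugin's own generator may differ, and this formulation yields the pattern only up to rotation.

```python
# Hypothetical sketch: E(k, n) places k onsets as evenly as possible over n
# steps. This rounding formulation produces the Euclidean rhythm up to a
# rotation of the pattern given by Bjorklund's algorithm.
def euclidean_rhythm(k: int, n: int) -> list[int]:
    if not 0 <= k <= n:
        raise ValueError("need 0 <= k <= n")
    return [1 if (i * k) % n < k else 0 for i in range(n)]

def as_string(pattern: list[int]) -> str:
    return "".join("x" if step else "." for step in pattern)

# E(3, 8) gives the familiar tresillo-like pattern.
print(as_string(euclidean_rhythm(3, 8)))   # x..x..x.
print(as_string(euclidean_rhythm(5, 8)))   # x.x.xx.x
```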
Item: Image Shadow Removal Based on Generative Adversarial Networks (2020). Authors: Andronik, Vladyslav; Buchko, Olena.
Accurate detection and removal of shadows in an image are complicated tasks, as it is difficult to determine whether a dark or gray area is actually caused by a shadow. This paper proposes an image shadow removal method based on generative adversarial networks. Our approach is trained in an unsupervised fashion, which means it does not depend on time-consuming data collection and labeling. This, together with training in a single end-to-end framework, significantly raises its practical relevance. Taking an existing method for unsupervised image transfer between different domains, we have researched its applicability to the shadow removal problem. Two networks have been used: the first network is used to add shadows to images, and the second is used for shadow removal. The ISTD dataset has been used for clarity of evaluation, because it provides ground-truth shadow-free images as well as shadow masks. For shadow removal, we have used the root mean squared error between generated and real shadow-free images in LAB color space. Evaluation is divided into region and global scores, where the former is applied to shadow regions and the latter to the whole image. Shadow detection is evaluated with the Intersection over Union, also known as the Jaccard index. It is computed between the generated and ground-truth binary shadow masks by dividing the area of overlap by the area of their union. We selected 100 random images for validation purposes. During the experiments, multiple hypotheses were tested; the majority of the tests we conducted concerned how to use an attention module and where to place it. Our network produces better results compared with the existing approach in the field. Analysis showed that attention maps obtained from the auxiliary classifier encourage the networks to concentrate on the regions that are more distinctive between domains. However, generative adversarial networks demand a more accurate and consistent architecture to solve the problem in a more efficient way.
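A small sketch of the two evaluation measures mentioned above, assuming NumPy and scikit-image; the image shapes, the random placeholders, and the region/global split are illustrative only.

```python
# Hypothetical sketch of the evaluation described above: RMSE between generated
# and ground-truth shadow-free images in LAB color space, and IoU (Jaccard
# index) between generated and ground-truth binary shadow masks.
import numpy as np
from skimage.color import rgb2lab

def lab_rmse(generated_rgb, target_rgb, region_mask=None):
    """RGB images in [0, 1]; region_mask selects shadow pixels for the
    'region' score, or is None for the 'global' score."""
    diff = rgb2lab(generated_rgb) - rgb2lab(target_rgb)
    sq = np.sum(diff ** 2, axis=-1)
    if region_mask is not None:
        sq = sq[region_mask.astype(bool)]
    return float(np.sqrt(sq.mean()))

def iou(pred_mask, gt_mask):
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0

# Usage with synthetic placeholders standing in for real image pairs.
img_a = np.random.rand(64, 64, 3)
img_b = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(lab_rmse(img_a, img_b), lab_rmse(img_a, img_b, mask), iou(mask, mask))
```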
Item: Information system assessment of the creditworthiness of an individual (2022). Authors: Nahirna, Alla; Chumachenko, Oleksandra; Pyechkurova, Olena.
Nowadays, banks' enterprise information systems provide modules for calculating the creditworthiness of businesses. Such systems are complex, difficult to maintain and develop, and require the involvement of large teams. In addition, they are complicated to change and update in accordance with changes in current legislation. At the same time, demand for consumer loans is high, and creating a separate module for calculating the creditworthiness of an individual is appropriate as a way to increase the system's adaptability to changes and updates. Calculating the creditworthiness of an individual is relevant not only for the banking system but also for other spheres, such as logistics and marketing. The work describes the created information system for calculating the creditworthiness of an individual, which determines the borrower's class based on data from credit history, credit rating, qualitative characteristics, the person's financial indicators, and the characteristics of the credit transaction. The use of the ASP.NET Core platform and the Vue.js framework to build a software module that can be used both independently and easily integrated into other corporate systems is demonstrated. The work also describes the major steps of designing and developing the system.

Item: Investigation of the Relationship Between Software Metrics Measurements and its Maintainability Degree (2020). Authors: Hlybovets, Andrii; Shapoval, Oleksandr.
The goal of this work is to practically apply methods of empirical software engineering and algorithms for data collection and analysis. The results include software measurement, the analysis and selection of direct and indirect metrics for research, and the identification of dependencies between direct and indirect metrics. Based on the obtained results, dependencies were built between software metrics and expert-assessed software properties selected by individual variation. Primary statistical analysis, expert estimations, and correlation and regression analysis were used to analyze the measurement results. Expert estimation is the dominant strategy when estimating software development effort; typically, effort estimates are over-optimistic, and there is strong over-confidence in their accuracy. Primary data analysis is the process of comprehending the collected data in order to answer research questions or to support or reject the research hypotheses that the study was originally designed to evaluate. Correlation analysis makes it possible to draw conclusions about which metrics and expert estimations are strongly coupled and which are not. Regression analysis involves both graphical construction and analytical research and allows a conclusion to be made about which metrics and expert estimations are the most strongly coupled. Analyzing regression lines for metrics with normal and non-normal distributions makes it possible to identify pairs of "metric - expert estimation". Metric relations have been calculated and measured to characterize such quality attributes as Understandability and Functionality Completeness. Understandability expresses the clarity of the system design: if the system is well designed, new developers are able to understand the implementation details easily and quickly begin contributing to the project. Functionality Completeness refers to the absence of omission errors in the program and database; it is evaluated against a specification of software requirements that defines the desired degree of generalization and abstraction. The relationship between metrics and expertise includes building direct relationships between direct metrics and expert estimates and between indirect metrics and expert estimates. Additionally, it has been determined whether common trends exist in the relationships between those direct metrics and expert estimates and between indirect metrics and expert estimates. The practical results of this work can be applied to software measurement to analyze which changes in the code (affecting a given metric) will increase or decrease which quality attribute.
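A brief sketch of the correlation and regression step described above, assuming SciPy and a table of metric values paired with expert estimates; the data here are synthetic placeholders, not measurements from the study.

```python
# Hypothetical sketch: correlate a software metric with expert estimates and
# fit a regression line, mirroring the analysis described above.
import numpy as np
from scipy import stats

metric = np.array([12, 35, 7, 50, 22, 41, 15, 29], dtype=float)  # placeholder metric values
expert = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 3.0, 4.0])      # expert estimates (1..5)

pearson_r, pearson_p = stats.pearsonr(metric, expert)
spearman_r, spearman_p = stats.spearmanr(metric, expert)
reg = stats.linregress(metric, expert)

print(f"Pearson r={pearson_r:.2f} (p={pearson_p:.3f}), "
      f"Spearman rho={spearman_r:.2f} (p={spearman_p:.3f})")
print(f"regression: expert ~ {reg.slope:.3f} * metric + {reg.intercept:.3f}, "
      f"R^2={reg.rvalue ** 2:.2f}")
```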
Item: Modern approaches to controllable emotional speech synthesis (2025). Authors: Ivashchenko, Dmytro; Marchenko, Oleksandr.
The generation of emotionally expressive and controllable speech is one of the most dynamic and technically demanding areas at the intersection of artificial intelligence, natural language processing, and speech synthesis. Recent progress in emotional text-to-speech (TTS) systems has enabled increasingly natural and emotionally nuanced voice generation, shifting from early concatenative methods to advanced neural models. This review provides a structured overview of the state of the art in controllable emotional TTS, highlighting key architectural paradigms. A special focus is placed on emotional control mechanisms, including discrete emotional tagging with categorical or dimensional labels, reference-based control, which conditions synthesis on prosodic or stylistic exemplars, and prompt-based techniques that leverage the capabilities of large language models for flexible and intuitive emotional specification. Despite substantial improvements in synthesis quality and emotional expressiveness, several critical challenges remain unresolved. These include the disentanglement of emotional, speaker, and prosodic features, the lack of standardized evaluation metrics for emotional clarity and naturalness, and the significant computational demands associated with training high-fidelity models. Furthermore, the scarcity of diverse and emotion-labeled speech data, especially for low-resource and morphologically rich languages, continues to limit the generalizability of current approaches. This review not only summarizes existing methods and their trade-offs but also outlines promising research directions, aiming to support the development of more robust, efficient, and emotionally expressive speech generation systems.

Item: MorphoNAS-Bench: a benchmark suite for morphogenetic neural network generation (2025). Author: Medvid, Serhii.
We present MorphoNAS-Bench, a benchmark and toolkit for neural architecture search (NAS) using a generative, developmentally inspired design space. Unlike current NAS benchmark datasets (NAS-Bench-101, NATS-Bench), which use static graph encodings of networks, in MorphoNAS-Bench networks are encoded as simple, compact genomes that drive morphogenetic development, allowing for a variety of richly defined, spatially embedded recurrent architectures that emerge through different forms of deterministic growth. The following local developmental rules are used in MorphoNAS as the key mechanisms of growth: morphogen diffusion, cell division, differentiation, and axon guidance. The seed benchmark dataset presented in this work consists of 1,000 genome-architecture pairs, taken from a pool of over 50,000 generation attempts using the following quality thresholds: a minimum of 5 neurons, 3 edges, and 70% out-degree coverage. The dataset was constructed using Latin Hypercube Sampling (LHS) with an orthogonal array design to ensure comprehensive parameter space coverage. The attempts were conducted using both fully stratified parameter sampling and a biologically inspired Genome.random() sampling method, ensuring a reasonable level of coverage of the search space while remaining plausible. Each sample includes detailed annotations of graph entropy, hierarchy scores, core-periphery structure, transitivity, reciprocity, and structural balance metrics. We share an analysis of emergent properties such as size, modularity, grouping, and efficiency, demonstrating that both generation strategies can produce structured, non-trivial networks. The provided Python toolkit offers the means to investigate how genomes develop into neural networks, together with the associated structural analysis, framing MorphoNAS-Bench as a reproducible and biologically inspired testbed for research exploring architecture diversity, evolution, and emergent structure in NAS.
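To illustrate the kind of structural annotations listed above, here is a small sketch computing a few of them with NetworkX on a toy directed graph standing in for one generated architecture; the benchmark's own metric definitions (graph entropy, hierarchy, structural balance) may well differ.

```python
# Hypothetical sketch: compute a few structural annotations (transitivity,
# reciprocity, a simple out-degree entropy) for a directed graph that stands
# in for one generated architecture.
import math
import networkx as nx

G = nx.gnp_random_graph(20, 0.15, seed=42, directed=True)

transitivity = nx.transitivity(G.to_undirected())  # on the undirected projection
reciprocity = nx.reciprocity(G)                    # fraction of mutual edges

degrees = [d for _, d in G.out_degree()]
total = sum(degrees) or 1
probs = [d / total for d in degrees if d > 0]
degree_entropy = -sum(p * math.log2(p) for p in probs)

print(f"nodes={G.number_of_nodes()} edges={G.number_of_edges()}")
print(f"transitivity={transitivity:.3f} reciprocity={reciprocity:.3f} "
      f"out-degree entropy={degree_entropy:.3f} bits")
```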
Item: A nucleolus-based approach for cloud resource allocation (2024). Author: Artiushenko, Bohdan.
Cloud computing has transformed organizational operations by enabling flexible resource allocation and reducing upfront hardware investments. However, the growing complexity of resource management, particularly for computing instances, has led to challenges in cost control and resource allocation. Fair allocation policies, such as max-min fairness and Dominant Resource Fairness, aim to distribute resources fairly among users. In recent years, the FinOps framework has emerged to address cloud cost management, empowering teams to manage their own resource usage and budgets. The allocation of resources among competing product teams within an organization can be modelled as a cooperative game, where teams with competing priorities must negotiate resource allocation based on their claims and the available budget. The article explores cloud resource allocation as a cooperative game, particularly in situations where the total budget is insufficient to meet all teams' demands. Several resource allocation methods are discussed, including the proportional rule and the nucleolus-based approach, which seeks to minimize the coalitions' incentives to deviate. The nucleolus method offers a stable and fair solution by distributing resources in a way that maximizes stability and reduces the likelihood of coalitions deviating from the overall allocation. This approach ensures that no team is allocated more than its claim and maintains fairness by adhering to principles such as claim boundaries, monotonicity, and resource constraints. Ultimately, the nucleolus-based method is proposed as an effective solution for allocating cloud resources in a cooperative and stable manner, ensuring that resource allocation is both fair and efficient.

Item: Robustness of Neural Decision Trees to noise in input data for image classification tasks (2025). Authors: Mokryi, Mykhailo; Shvai, Nadiia.
Neural networks, particularly convolutional neural networks (CNNs), have demonstrated high effectiveness in image classification tasks. However, they are known to be vulnerable to input data perturbations and have weak interpretability due to their black-box nature. In contrast, traditional decision trees (DTs) provide transparent decision-making processes but are limited to low-dimensional or tabular data, which restricts their applicability in computer vision tasks such as image classification. To address this gap, a hybrid architecture known as Neural Decision Trees (NDTs) has emerged, combining the strong generalization and learning capabilities of neural networks with the transparent hierarchical inference and interpretability of DTs. The article investigates the robustness of NDTs to noise in input data for image classification tasks. Despite extensive studies covering the robustness of both CNNs and traditional DTs against various forms of input perturbations, the robustness of NDT models remains a largely underexplored area. This study applies two robust training methods to improve robustness: constant noise learning and incremental noise learning. Originally developed for CNNs, they can be effectively applied to NDT-based architectures and significantly improve the models' robustness to noisy images. These methods involve adding samples perturbed via a Gaussian blur during the training stage. The noisy test set consists of images perturbed by a Gaussian blur and is used to evaluate robustness performance. A series of experiments was conducted on the CIFAR-10 dataset using the original training baseline and the robust training methods. The results demonstrate that constant and incremental noise learning significantly improve the robustness of all tested NDT models to noisy images compared with their original training performance. While the ResNet18 baseline model demonstrates higher overall performance, the NDT models show comparable robustness improvements using the proposed robust training strategies. Constant noise learning offered an adjustable trade-off between performance on clean and noisy images, while incremental noise learning provided a more stable training process. The first method is considered preferable due to its simplicity of implementation. This study empirically confirms that NDT models can effectively use methods adapted from CNNs to improve their robustness against perturbations in input data. An NDT framework was developed to conduct training and validation using a standardized shared pipeline. It is available at github.com/MikhailoMokryy/NDTFramework.
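A compact sketch of the two robust-training schedules described above, assuming torchvision transforms; the blur parameters, the application probability, and the epoch schedule are illustrative assumptions rather than the paper's exact settings.

```python
# Hypothetical sketch: per-epoch training transforms for the two robust
# training schemes described above. Constant noise learning blurs a fixed
# share of samples with a fixed strength; incremental noise learning ramps
# the blur strength up over epochs.
import torchvision.transforms as T

def constant_noise_transform(p: float = 0.5, sigma: float = 1.5) -> T.Compose:
    return T.Compose([
        T.RandomApply([T.GaussianBlur(kernel_size=5, sigma=sigma)], p=p),
        T.ToTensor(),
    ])

def incremental_noise_transform(epoch: int, total_epochs: int,
                                max_sigma: float = 2.0) -> T.Compose:
    # Blur strength grows linearly with the training epoch.
    sigma = max(1e-3, max_sigma * (epoch + 1) / total_epochs)
    return T.Compose([
        T.GaussianBlur(kernel_size=5, sigma=sigma),
        T.ToTensor(),
    ])

# Usage: rebuild the dataset transform at the start of each epoch.
for epoch in range(3):
    print(epoch, incremental_noise_transform(epoch, total_epochs=30))
```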
Item: Serverless Event-driven Applications Development Tools and Techniques (2020). Authors: Morenets, Ihor; Shabinskiy, Anton.
Serverless, a new cloud-based architecture, brings development and deployment flexibility to a new level by significantly decreasing the size of the deployment units. Nevertheless, it still has not been clearly defined for which applications it should be employed and how to use it most effectively, and this is the focus of this research. The study uses Microsoft Azure Functions, one of the popular mature tools, because of its stateful orchestrators, Durable Functions. The tool is used to present and describe four flexible serverless patterns with code examples. The first pattern is HTTP nanoservices; the example demonstrates how flexible the Function-as-a-Service model, which uses relatively small functions as deployment units, can be. The second usage scenario described is a small logic layer between a few other cloud services: thanks to its event-driven nature, serverless is well suited for tasks such as performing an action in one service after a specific event in another. New functions easily integrate with the API from the first example. The third scenario, distributed computing, relies on the ability of Durable Functions to launch a myriad of functions in parallel and then aggregate their results; a custom MapReduce implementation is presented in this section. The last pattern described in this research significantly simplifies concurrent work with mutable data by implementing the actor model: Durable Entities guarantee that messages are delivered reliably and in order, and also ensure the absence of deadlocks. The results of this work can be used as a practical guide to the main serverless concepts and usage scenarios. The main topic of future research is the development of a full-fledged serverless application using typical patterns in order to study the architecture in more depth.
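As an illustration of the fan-out/fan-in (MapReduce-style) orchestration described above, here is a sketch using the Python programming model of Azure Durable Functions; it is not the article's own example, and the activity names split_input, map_chunk, and reduce_results are hypothetical functions assumed to exist in the same function app.

```python
# Hypothetical sketch of a fan-out/fan-in (MapReduce) orchestration with the
# Azure Durable Functions Python SDK. The activity functions "split_input",
# "map_chunk", and "reduce_results" are assumed to be defined elsewhere.
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    chunks = yield context.call_activity("split_input", context.get_input())

    # Fan out: start one mapper per chunk, all running in parallel.
    map_tasks = [context.call_activity("map_chunk", chunk) for chunk in chunks]
    partial_results = yield context.task_all(map_tasks)

    # Fan in: aggregate the partial results in a single reducer activity.
    return (yield context.call_activity("reduce_results", partial_results))

main = df.Orchestrator.create(orchestrator_function)
```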
Item: Statical and Dynamical Software Analysis (2020). Authors: Sosnytskyi, Serhii; Glybovets, Mykola; Pechkurova, Olena.
The development of software built with quality in mind has become an important trend and a natural choice in many organisations. Currently, methods for measuring and assessing software quality, security, and trustworthiness cannot completely and effectively guarantee the safe and reliable operation of software systems. This article overviews static and dynamic software analysis methods, their main concepts, and families of techniques. It explains why a combination of several analysis techniques is necessary for software quality and gives examples of how static and dynamic analysis may be introduced into a modern agile software development life cycle. As a summary of the software analysis techniques represented in Table 1: due to the computability barrier, no technique can provide fully automatic, robust, and complete analyses. Testing sacrifices robustness. Assisted proving is not automatic (even if it is often partly automated, the main proof arguments generally need to be provided by a human). Model-checking approaches can achieve robustness and completeness only with respect to finite models, and they generally give up completeness when considering programs (the incompleteness is often introduced in the modeling stage). Static analysis gives up completeness (though it may be designed to be precise for large classes of programs of interest). Last, bug finding is neither robust nor complete. Another important dimension is scalability. In practice, all approaches have limitations regarding scalability, although these limitations vary depending on the intended applications (e.g., input programs, target properties, and algorithms used). Already implemented code can be analysed in a continuous integration environment by a tool like SonarQube. Properly configured metrics and quality gates provide fast and detailed feedback on incremental changes, from the development machine up to high-load enterprise production environments. Software analysis helps to improve quality and development speed in an Agile development life cycle at a reasonable cost.

Item: Technical comparison aspects of leading blockchain-based platforms on key characteristics (2018). Authors: Ivanov, Alexander; Babichenko, Yevhenii; Kanunnikov, Hlib; Karpus, Paul; Foiu-Khatskevych, Leonid; Kravchenko, Roman; Gorokhovskyi, Kyrylo; Nevmerzhitskyi, Ievhen.
Blockchain as a technology is rapidly developing, finding more and more new entry points into everyday life. It is one of the elements of the technical Revolution 4.0, and it is used in supply chains, the maintenance of various types of registers, access to software products, combating DDoS attacks, distributed storage, fundraising for projects, IoT, etc. Nowadays, there are many blockchain platforms in the world. They share a common technological root but have different applications. There are many indications that the number of new decentralized applications will increase in the future. Therefore, it is important to develop a methodology for determining the optimal blockchain-based platform to solve a specific problem. As an example, we consider the world-famous platforms Ethereum, NEM, and Stellar. Each of them allows developers to build decentralized applications, issue tokens, and execute transactions. At the same time, the key features of these blockchain-based platforms are not similar to one another, and these very features are considered in the article. Purpose. Identify the key parameters that characterize blockchain-based platforms. This will make it possible to present the complex blockchain technology in the form of a simple and understandable architecture. Based on these parameters and using the expertise of the article's authors, we will be able to develop a methodology for choosing the optimal blockchain-based platform for developing smart contracts and issuing tokens. Methods. Analysis of the complexity of using blockchain-based platforms; implementation of token issuance; use of test and public networks; execution of transactions; analysis of the development team and the community; analysis of the user interface and the developer interface. Discussion. By developing a platform comparison methodology to determine optimal characteristics, we can take the development process to a new level, allowing tasks to be solved quickly and effectively. Results. Creation of a methodology for comparing blockchain-based platforms.

Item: Validating architectural hypotheses in Neural Decision Trees with Neural Architecture Search (2025). Authors: Mykytyshyn, Artem; Shvai, Nadiia.
This article introduces an automated and unbiased framework for validating architectural hypotheses for neural network models, with a particular focus on Neural Decision Trees (NDTs). The proposed methodology employs Neural Architecture Search (NAS) as an unbiased tool to explore architectural variations and empirically assess theoretical claims. To demonstrate this framework, we investigate a hypothesis found in the literature: that the complexity of decision nodes in NDTs decreases monotonically with tree depth. This assumption, initially motivated by the task of monocular depth estimation, suggests that deeper nodes in the tree require fewer parameters due to simpler split functions. To rigorously test this hypothesis, we conduct a series of NAS campaigns on the CIFAR-10 image dataset over the fully connected layers of the decision nodes, while all other architectural components are held constant to isolate the effect of node depth. By applying Tree-structured Parzen Estimator (TPE)-based NAS and evaluating over 300 architectures, we quantify complexity metrics across tree levels and analyze their correlations using Spearman's rank coefficient. The results provide no statistical or visual evidence supporting the hypothesized trend: node complexity does not decrease with depth. Instead, complexity remains nearly constant across levels, regardless of tree depth or search space size. These results suggest that assumptions derived from specific applications may not generalize to other domains, underscoring the importance of empirical validation and careful search-space design. The presented framework may serve as a foundation for verifying other structural assumptions across various neural network families and applications.
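A condensed sketch of the validation loop described above, assuming Optuna's TPE sampler as the NAS driver and SciPy for the rank correlation; the search space, the placeholder objective, and the complexity metric (here simply the per-level node width) are stand-ins for the actual campaigns.

```python
# Hypothetical sketch: TPE-based search over per-level node widths of a small
# NDT, followed by a Spearman rank correlation between tree depth and the
# chosen node complexity. train_and_evaluate() is a placeholder for the real
# training and validation routine on CIFAR-10.
import optuna
from scipy.stats import spearmanr

N_LEVELS = 4

def train_and_evaluate(widths):
    # Placeholder objective standing in for validation accuracy.
    return 1.0 / (1.0 + sum(widths))

def objective(trial: optuna.Trial) -> float:
    widths = [trial.suggest_int(f"width_level_{d}", 8, 256, log=True)
              for d in range(N_LEVELS)]
    return train_and_evaluate(widths)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)

# Does node complexity (here: width) decrease with depth in the best trials?
depths, widths = [], []
for trial in sorted(study.trials, key=lambda t: t.value or 0, reverse=True)[:20]:
    for d in range(N_LEVELS):
        depths.append(d)
        widths.append(trial.params[f"width_level_{d}"])
rho, p_value = spearmanr(depths, widths)
print(f"Spearman rho={rho:.3f}, p={p_value:.3f}")
```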
Item: Automated generation and configuration of microservices to simplify the development process (2024). Author: Колінько, Павло.
The article reviews approaches and methods for the automated generation of application code and structure, in particular for applications based on a microservices architecture. It describes a software application developed for generating a microservices architecture using the Node.js platform. The development is based on the automated generation of program code and architecture built around the concept of scaffolding.

Item: Automated localization of applications in a microservices architecture (2024). Authors: Верета, Владислав; Ткаченко, Владислав.
This article analyzes various tools and services for the localization (translation) of web applications, as well as methods and approaches to their integration and scaling. Existing solutions for managing the localization of web services are reviewed. The article describes the developed architecture, which makes it possible to quickly adapt the service to the needs of different users and different projects and to integrate effectively with various platforms while keeping the service simple to scale. Based on this architecture, the EchoLocal web service was implemented. It improves the interaction of participants in the web application localization process, allows this process to be integrated optimally into the distributed microservices architecture of modern applications, and can be easily deployed within a private ecosystem. EchoLocal gives its users the ability to integrate and manage the localizations of their products in one place in real time. EchoLocal helps to reduce localization costs, simplify the adaptation of content to different language markets, and increase the efficiency of communication among all participants in the localization process.

Item: An automated system for detecting anomalies in business data (2025). Authors: Постніков, Михайло; Гороховський, Семен.
The article describes an analysis of the process of detecting anomalies in business data and of known software solutions, formulates requirements for such a system, and describes the developed automated software system for anomaly detection. The system consists of software modules, is highly adaptable, and is easy to modify and convenient to use. The developed system fully meets the previously stated requirements: ease of configuration is ensured by the user interface and an interactive process; flexibility and ease of customization by the chosen technologies and architectural abstractions; reliability by the separation of components through a task queue; and the functional requirements by the developed constituent modules. The system performs the stated task of automated anomaly detection in business data and meets modern standards in the data field.