Full Issue
Robotics, Automation and Control Systems
-
Creation of fully autonomous unmanned underwater vehicles and systems capable of performing various research and technological operations under conditions of uncertainty is a pressing issue. The key problem is automatic mission correction in real time based on data from onboard systems. The aim of this work is to develop a method for intelligent mission planning at the strategic control level of autonomous underwater robotic systems, enabling automatic generation of adaptive plans and their transformation into tactical-level executable commands for operation in changing environmental conditions. The authors define the principles of developing an intelligent mission planner for autonomous underwater robotic systems (AURS) at the strategic level and a mission manager that manages the mission at the tactical level, forming specific tasks for executors. A formal model has been developed for planning missions as a set of linear segments with preconditions and postconditions. The key aspect of the proposed solution is the use of an ontological approach to standardize the description of missions and ensure their software interpretation. A specialized mission development environment has been created on the IACPaaS cloud platform, allowing experts to form and adapt mission plans without delving into technical details. A set of tools with a modular architecture has been developed, ensuring scalability and adaptation of the solution for various classes of underwater vehicles and types of missions. Testing has shown that the proposed solution allows for the formation of flexible plans that take into account the diversity of situations and automatically select command sequences depending on the incoming data. The results obtained open up new possibilities for creating fully autonomous underwater systems capable of performing complex research and technological operations without constant operator control. Further research is aimed at improving the mission manager algorithms, as well as integrating the planner with other onboard support components.
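To make the planning model concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a mission as a sequence of linear segments guarded by preconditions and postconditions, with segment commands handed off to a tactical-level executor. All names (Segment, the state keys, the command strings) are hypothetical:

```python
# Sketch: a mission plan as linear segments with pre/postconditions.
# State updates would come from onboard sensors; here the state is static.
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, float]

@dataclass
class Segment:
    name: str
    precondition: Callable[[State], bool]    # must hold before the segment starts
    commands: List[str]                      # tactical-level commands for executors
    postcondition: Callable[[State], bool]   # must hold for the segment to count as done

def run_mission(plan: List[Segment], state: State) -> None:
    """Dispatch segments whose preconditions match the current world state."""
    for seg in plan:
        if not seg.precondition(state):
            print(f"skip {seg.name}: precondition not met, replanning needed")
            continue
        for cmd in seg.commands:
            print(f"[{seg.name}] -> {cmd}")   # hand off to the mission manager
        if not seg.postcondition(state):
            print(f"{seg.name}: postcondition failed, trigger mission correction")

plan = [
    Segment("descend", lambda s: s["depth"] < 5.0, ["set_depth 50"], lambda s: True),
    Segment("survey",  lambda s: s["battery"] > 0.3, ["follow_transect A-B"], lambda s: True),
]
run_mission(plan, {"depth": 1.0, "battery": 0.9})
```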
-
This article presents an integrated algorithm for multicriteria group decision-making based on an intuitionistic fuzzy hybrid averaging (IFHA) operator for selecting the optimal evacuation strategy. The algorithm is multilevel: at the first level, a task model is constructed in the form of an evacuation transportation network. At the second level, evacuation spaces are ranked to determine their optimal order based on various criteria, such as room capacity, ease of evacuation, safety level of the room as a shelter, time required to organize evacuation from the location, and distance from the source of danger. The ranking is carried out using multicriteria group decision-making and the IFHA operator to model the doubts and uncertainties of experts in evaluating evacuation criteria, alternatives, and expert importance. The algorithm operates on linguistic expert assessments, from which expert and criterion weights are calculated for effective decision-making. Assessments are aggregated by a modified algorithm that operates on criterion weights represented as intuitionistic fuzzy values rather than traditional crisp numbers, using newly developed modified operators for raising fuzzy numbers to a fuzzy power to account for the doubts and uncertainties of the expert group. At the third level, macroscopic dynamic-flow evacuation modeling is carried out, considering the possibility of accommodating evacuees in spaces that are not shelters. The advantage of the proposed algorithm is its ability to model the transportation of evacuees from hazardous areas under dynamic conditions, taking into account their placement at intermediate points to maximize the number of lives saved, while considering the uncertainty of the environment, the fuzzy nature of factors influencing evacuation decisions, and the doubts of the expert group in evaluating evacuation strategies. To confirm the effectiveness of the developed algorithm, evacuation modeling was conducted in a purpose-built software environment implemented in JavaScript. The developed IFHA-based decision-making algorithm, which operates with fuzzy criterion weights, was compared with existing algorithms, and a conclusion was drawn about its reliability. The dependence of the algorithm's runtime on input size was also assessed, confirming that the proposed algorithm can be applied to large buildings and transportation networks.
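For intuition, here is a minimal sketch of the standard intuitionistic fuzzy weighted averaging (IFWA) operator that underlies hybrid averaging; the paper's IFHA operator and its fuzzy-power extensions are more involved, so this only illustrates the core aggregation of intuitionistic fuzzy values (membership mu, non-membership nu). The ratings and weights are illustrative:

```python
# Sketch: IFWA aggregation of intuitionistic fuzzy values (mu, nu).
from math import prod

def ifwa(values, weights):
    """Aggregate (mu, nu) pairs with crisp weights summing to 1:
    mu = 1 - prod((1 - mu_i)^w_i),  nu = prod(nu_i^w_i)."""
    mu = 1.0 - prod((1.0 - m) ** w for (m, _), w in zip(values, weights))
    nu = prod(n ** w for (_, n), w in zip(values, weights))
    return mu, nu

# Three expert ratings of one evacuation space, e.g. for the "safety" criterion.
ratings = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
weights = [0.5, 0.3, 0.2]        # expert importance derived from linguistic assessments
print(ifwa(ratings, weights))    # aggregated (mu, nu) score used for ranking
```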
-
The article presents the results of a study of a hierarchical two-level vector control system for a multi-connected object whose evolution is described by a state vector that changes in response to actions on actuators, each of which includes a drive and a working mechanism. The control system under consideration is distinguished by an additional tuning loop for the virtual regulator at the upper and functional-logical levels. A mathematical model of the impulse response of the system's actuator is synthesized, taking into account dry friction, backlash, and limitations on the speed and position of the working element of the controlled object. The original actuator model is presented in Cauchy (state-space) form, and its impulse response is approximated by the impulse response of a second-order linear link that is optimal by the criterion of minimum approximation error. It is proved that the parameters of the linearized impulse response depend on the operating parameters of the drive. A model of a closed control system for the object as a whole is constructed, and it is shown that its parameters depend on the operating parameters of the drives, the desired value of the state vector of the control object, and the parameters of the virtual controllers implemented at the functional-logical and upper hierarchical levels. The obtained results demonstrate that a change in the operating parameters of the object can be compensated for by structural and parametric changes in the genetic control algorithm. A technique has been developed for synthesizing a genetic control algorithm for complex multi-loop objects, implemented by a controller at the upper level of the hierarchy on the basis of a neural network. It is shown that the proposed approach achieves a synergetic effect: control actions implemented by individual modifications of the control algorithm are less effective than those implemented by a composite algorithm that undergoes evolutionary changes during system operation. The correctness of the theoretical positions is confirmed by computational modeling of the neural-network-based virtual controller, which demonstrated a significant improvement in control characteristics through reductions in settling time and overshoot.
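The approximation step can be illustrated by a short sketch: fit the impulse response of a second-order linear link to a measured actuator response by least squares (minimum approximation error). The synthetic "measured" data below is a stand-in; the paper's actuator model additionally covers dry friction, backlash, and motion limits:

```python
# Sketch: least-squares fit of a second-order impulse response to data.
import numpy as np
from scipy.optimize import curve_fit

def h2(t, k, zeta, wn):
    """Impulse response of the underdamped link k*wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return k * wn**2 / wd * np.exp(-zeta * wn * t) * np.sin(wd * t)

t = np.linspace(0.0, 5.0, 500)
measured = h2(t, 1.0, 0.4, 3.0) + 0.02 * np.random.randn(t.size)  # stand-in for a recorded response

(k, zeta, wn), _ = curve_fit(h2, t, measured, p0=[1.0, 0.5, 2.0],
                             bounds=([0, 0.01, 0.1], [10, 0.99, 50]))
print(f"fitted k={k:.2f}, zeta={zeta:.2f}, wn={wn:.2f}")
```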
-
The paper proposes a method for estimating the state vector of an agent in a multi-agent biological system from noisy measurements using recurrent filters. It addresses the limited scalability of existing approaches to monitoring the behavior of laboratory rodents and the absence of a unified mathematical framework. A mathematical description of an agent in the biological system is provided, along with the formulation of the state estimation task. The mathematical model is built upon a nonlinear discrete-time system in state space. The solution is demonstrated using the example of skeletal keypoints of a Wistar rat, detected with a pre-trained detector. A fully connected neural network is proposed to parameterize the unknown dynamics of the system. The particle filter (a sequential Monte Carlo method) and the unscented Kalman filter were selected for comparative analysis. The methods were compared on a collected and preprocessed dataset comprising images with a resolution of 1060×548 pixels and annotations of rat skeletal keypoints. The experimental results demonstrate the high efficacy of the proposed method and its advantage over an analytical description of the system's nonlinear dynamics. Among the compared methods, dual estimation of both the state vector and the neural network parameters using two unscented Kalman filters achieved the lowest mean error of 6.4 pixels. However, for practical real-time applications, a single filter employing a pre-trained neural network proves more advantageous. Moreover, the unscented Kalman filter in this case demonstrated higher accuracy than the particle filter (mean error of 8.1 pixels vs. 12.0 pixels). The results of this study can be used to automate the registration of Wistar rat behavior by parameterizing the functions that link state vectors with the output vectors of individual and group behavior.
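A minimal sketch of the filtering setup, using the filterpy library, is given below for one skeletal keypoint. The transition function here is a constant-velocity placeholder; in the paper, the unknown dynamics are parameterized by a fully connected neural network, whose output would replace fx. The noise levels and initial state are illustrative:

```python
# Sketch: unscented Kalman filter tracking one keypoint in pixel coordinates.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):  # state: [px, py, vx, vy]; swap in the trained network here
    return np.array([x[0] + dt * x[2], x[1] + dt * x[3], x[2], x[3]])

def hx(x):      # the keypoint detector observes only pixel coordinates
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([530.0, 274.0, 0.0, 0.0])  # initial guess near the image center
ukf.R *= 5.0**2                              # detector noise (pixels^2), illustrative
ukf.Q *= 0.1                                 # process noise, illustrative

for z in [np.array([532.1, 275.3]), np.array([534.0, 276.8])]:  # noisy detections
    ukf.predict()
    ukf.update(z)
    print(ukf.x[:2])  # filtered keypoint position
```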
-
The rapid advancement of technology has a profound impact on logistics and freight transportation. Efficient management of transportation schedules is vital for businesses seeking to minimize costs, reduce delivery delays, and improve customer satisfaction. One of the most important challenges in this field is the Vehicle Routing Problem with Time Windows (VRPTW), which requires not only finding optimal delivery routes but also adhering to specific timing constraints for each customer or delivery point. Traditional optimization methods often struggle with the complexity and dynamic nature of real-world logistics, particularly when dealing with large-scale datasets and unpredictable factors such as traffic congestion or weather conditions. To address these limitations, this study introduces a machine learning-based system that enhances the performance of existing VRPTW solutions. Unlike conventional approaches that rely solely on heuristics or static planning, our system employs modern machine learning models to predict key time-related parameters – including transit time, availability time, and service time – based on historical and contextual data. These predictive capabilities allow the routing algorithms to make more informed decisions, resulting in more accurate and adaptable scheduling. Building on previous research involving Random Forest models, we propose a more robust framework that incorporates advanced preprocessing techniques and feature engineering to improve model accuracy. By training and evaluating the system using real-world datasets, we are able to simulate practical scenarios and validate the effectiveness of our approach. Experimental results show that our proposed method consistently outperforms other commonly used machine learning models in terms of Mean Absolute Error (MAE), thus confirming its potential for real-world applications. Overall, this study contributes a scalable and intelligent solution to a longstanding logistics problem, paving the way for more responsive and cost-effective transportation systems.
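The core idea can be sketched briefly: learn time-related parameters from historical trips, then let the routing heuristic consult the predictions instead of static estimates. The features, toy data, and the simple nearest-feasible-customer rule below are illustrative, not the paper's dataset or full framework:

```python
# Sketch: predicted transit times feeding a greedy VRPTW-style heuristic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 10, n),   # distance, km
    rng.uniform(0, 24, n),   # hour of day
    rng.uniform(0, 1, n),    # congestion index
])
y = 3.0 * X[:, 0] + 10.0 * X[:, 2] + rng.normal(0, 1, n)   # transit time, minutes
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def predicted_transit(dist, hour, congestion):
    return float(model.predict([[dist, hour, congestion]])[0])

# Greedy construction: visit the nearest customer whose time window is still
# reachable given the *predicted* (rather than static) transit time.
customers = {1: (2.0, (0, 60)), 2: (5.0, (30, 90)), 3: (8.0, (60, 120))}  # id: (dist, window)
t, route = 0.0, []
for cid, (dist, (earliest, latest)) in sorted(customers.items(), key=lambda c: c[1][0]):
    arrival = t + predicted_transit(dist, 9, 0.3)
    if arrival <= latest:
        t = max(arrival, earliest)   # wait if arriving before the window opens
        route.append(cid)
print("route:", route, "finish:", round(t, 1))
```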
Artificial Intelligence, Knowledge and Data Engineering
-
In neuroscience, neural engineering, and biomedical engineering, electroencephalography (EEG) is widely used because of its non-invasiveness, high temporal resolution, and affordability. However, noise and physiological artifacts, such as cardiac, myogenic, and ocular artifacts, frequently contaminate raw EEG data. Deep learning (DL)-based denoising techniques can reduce or eliminate these artifacts, which degrade the EEG signal; nevertheless, significant artifacts can still hinder performance, making noise removal a major requirement for accurate EEG analysis. To achieve robust artifact removal, an Optimized Hierarchical 1D Convolutional Neural Network (1D CNN) is introduced. For effective feature extraction, the hierarchical CNN combines max-pooling, ReLU activation, and adaptive convolutional windows. An Annealed Grasshopper Algorithm (AGA) is employed to optimize the network parameters, further improving artifact removal. To ensure comprehensive exploration and convergence toward ideal CNN settings, AGA combines the fine-tuning accuracy of Simulated Annealing (SA) with the global exploration capabilities of the Grasshopper Optimization Algorithm (GOA). This hybrid technique enables the network to eliminate artifacts more effectively across hierarchical levels, leading to a notable improvement in signal clarity and overall accuracy. The cleaned EEG data is represented by the recovered features in the last dense layer of the Hierarchical 1D CNN, which employs a sigmoid function. In experiments, the proposed method achieved a PSNR of 29.5 dB, MAE of 11.32, RMSE of 0.011, and CC of 0.93, outperforming prior works. The proposed method can improve the precision of EEG artifact removal, making it a useful addition to biomedical signal processing and neuro-engineering.
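A minimal PyTorch sketch in the spirit of the described architecture is shown below: stacked Conv1d/ReLU/MaxPool stages for hierarchical feature extraction and a dense head with a sigmoid mapping a noisy EEG window to a cleaned one. Layer sizes are illustrative; the paper tunes the network parameters with the annealed grasshopper algorithm (SA + GOA), which is not reproduced here:

```python
# Sketch: hierarchical 1D CNN denoiser for fixed-length EEG windows.
import torch
import torch.nn as nn

class Denoiser1D(nn.Module):
    def __init__(self, window: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window // 4), window),
            nn.Sigmoid(),   # output in [0, 1]; rescale to the EEG amplitude range
        )

    def forward(self, x):   # x: (batch, 1, window), normalized noisy EEG
        return self.head(self.features(x))

noisy = torch.rand(8, 1, 512)        # stand-in for contaminated EEG windows
print(Denoiser1D()(noisy).shape)     # torch.Size([8, 512])
```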
-
The article presents a method for identifying Russian-language texts generated by large language models (LLMs). The method was developed with a focus on short messages of 100 to 200 characters. The relevance of the work stems from the widespread use of generative models such as GPT-3.5, GPT-4o, LLaMA, GigaChat, DeepSeek, and Yandex GPT. The method is based on an ensemble of machine learning models and uses features at three levels: linguistic (structure, punctuation, morphology, lexical diversity), statistical (entropy, perplexity, n-gram frequency), and semantic (RuBERT embeddings). LightGBM, BiLSTM, and the pre-trained transformer model RuRoBERTa serve as base models, combined by stacking through logistic regression. The hybrid ensemble approach was chosen to capture features at different levels of the text hierarchy and to ensure reliable classification across topics of generated texts and across versions and types of language models. The ensemble is particularly advantageous for analyzing short texts, since LightGBM, which relies on averaged indicators, is less sensitive to length (the perplexity metric is already averaged over the entire text), while BiLSTM and RuRoBERTa are able to identify local, not just global, features of an LLM-generated text. The dataset of natural texts includes more than 2.8 million user comments from the VK social network. The LLM text dataset contains 700,000 texts generated by seven relevant large language models. Topic modeling (LDA) and role generation using prompt engineering were applied during text generation. The methodology was evaluated on open datasets of Russian-language LLM texts. The experimental results showed an accuracy of up to 0.95 in the binary classification task (Human–LLM) and up to 0.89 in the multi-class task of determining the generator model. The method demonstrates robustness to the diversity of sources, styles, and LLM versions.
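A minimal sketch of the stacking scheme follows: base models produce class probabilities that a logistic regression combines at the meta level. Here LightGBM stands on a synthetic feature matrix, and a plain linear model acts as a placeholder for the BiLSTM and RuRoBERTa base learners, since wiring in the neural models is out of scope for a sketch:

```python
# Sketch: stacked ensemble with a logistic-regression meta-learner.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                 # stand-in features (entropy, perplexity, ...)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = LLM-generated, 0 = human (toy labels)

clf = StackingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier(n_estimators=100)),
        ("linear", LogisticRegression(max_iter=1000)),  # placeholder for the neural base models
    ],
    final_estimator=LogisticRegression(),  # meta-level combiner
    stack_method="predict_proba",
)
clf.fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```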
Information Security
-
The research is devoted to solving the problem of synthesizing a model of a critical information infrastructure cloud platform with cyber immunity. The relevance of the research stems from the need to resolve a problematic situation characterized by contradictions in both practice and science. The contradiction in practice lies between the increased resilience requirements for critical information infrastructure cloud platforms and the growth of threats associated with the exploitation of previously unknown vulnerabilities and the circumvention of protective measures. The contradiction in science is that the required resilience of such platforms cannot be ensured using existing models and methods: existing approaches do not fully account for the specific features of critical information infrastructure cloud platforms, such as hierarchical architecture, the presence of undetected vulnerabilities, operation under targeted cyberattacks, increased resilience requirements, and the need for rapid restoration of normal operation. This paper aims to synthesize a new model of a critical information infrastructure cloud platform with cyber immunity. A hypothesis has been formulated that endowing cloud platforms with the property of cyber immunity has a positive effect on their resilience under cyberattacks. Research methods include methods of system analysis, probability theory, the theory of formal semantics, the theory of similarity and dimensional analysis, as well as cyber immunology methods. The concept of cyber immunity has been substantiated, which involves providing cloud platforms with the ability to counteract known and previously unknown cyberattacks, quickly restore normal operation, and memorize malicious input data so that it is never processed again. Indicators of the resilience of critical information infrastructure cloud platforms have also been substantiated. A new model of a critical information infrastructure cloud platform with cyber immunity has been developed. The scientific novelty of the proposed model lies in the introduction, for the first time, of components such as a semantic violation detector, a normal operation restorer, and cyber immune memory, which collectively implement the new emergent property of cyber immunity. Theoretical and experimental studies of the model have been conducted, confirming the proposed hypothesis. The practical significance of the research lies in technical recommendations on the architecture of the software complex, which can be applied in the development of means for protecting critical information infrastructure cloud platforms, in particular the GosTech cloud platform, against cyberattacks.
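A hypothetical sketch of the cyber immune memory idea is given below: inputs flagged by a detector are remembered (here by their hash) and rejected on every subsequent encounter without re-analysis. The detector is a trivial stand-in; the paper's semantic violation detector and normal operation restorer are far richer components:

```python
# Sketch: immune memory that blocks previously seen malicious inputs.
import hashlib

immune_memory: set[str] = set()

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def looks_malicious(data: bytes) -> bool:
    return b"DROP TABLE" in data   # stand-in for semantic violation detection

def handle_request(data: bytes) -> str:
    fp = fingerprint(data)
    if fp in immune_memory:
        return "rejected: known malicious input"        # memorized, never reprocessed
    if looks_malicious(data):
        immune_memory.add(fp)                           # memorize, then restore operation
        return "blocked: violation detected, restoring normal operation"
    return "processed"

print(handle_request(b"SELECT 1"))
print(handle_request(b"x'; DROP TABLE users;--"))
print(handle_request(b"x'; DROP TABLE users;--"))       # second attempt hits immune memory
```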
-
The paper addresses the task of static (without execution) comparison of binary executable files. A program and any of its procedures can be represented as a directed graph. For a program, the corresponding graph is a function (procedure) call graph, where the nodes are the functions themselves, and an edge from vertex a to vertex b denotes a call to function b from function a. For a procedure, such a graph is a control flow graph, where the vertices are basic blocks, and an edge between nodes a and b means that the commands of block b can be executed after the commands of block a. The study proposes an algorithm for comparing directed graphs, which is then applied to program comparison. The graph comparison algorithm is based on a node similarity function. For comparing procedure graphs, a fuzzy hash function and a cryptographic hash function are used as this similarity function. This method of comparing procedure graphs is then used, in turn, as the node similarity function for comparing program graphs. Based on the proposed algorithm, a method for program comparison has been developed and investigated in two experiments. In the first experiment, the method's behavior was studied when comparing programs compiled with different optimization options (O0, O1, O2, O3, and Os). In the second experiment, the possibility of identifying effective and resilient obfuscating transformations within a previously developed model was investigated. The first experiment yielded evidence supporting the hypothesis that similarity decreases as optimization increases from O1 to O3. The second experiment confirmed some previously obtained results concerning the effectiveness (ineffectiveness) and resilience (non-resilience) of obfuscating transformations.
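The graph-comparison idea can be sketched compactly: score two programs by greedily matching call-graph nodes under a node similarity function. In this sketch, difflib's sequence ratio stands in for the fuzzy/cryptographic hash comparison of procedure bodies used in the paper, and the two toy "programs" are short byte strings:

```python
# Sketch: greedy node matching as a program similarity score.
from difflib import SequenceMatcher

def node_similarity(body_a: bytes, body_b: bytes) -> float:
    """Stand-in fuzzy similarity of two procedure byte sequences, in [0, 1]."""
    return SequenceMatcher(None, body_a, body_b).ratio()

def graph_similarity(nodes_a: dict, nodes_b: dict) -> float:
    """Greedy one-to-one matching of nodes by best pairwise similarity."""
    remaining, total = dict(nodes_b), 0.0
    for body_a in nodes_a.values():
        if not remaining:
            break
        best = max(remaining, key=lambda k: node_similarity(body_a, remaining[k]))
        total += node_similarity(body_a, remaining.pop(best))
    return total / max(len(nodes_a), len(nodes_b))

p1 = {"main": b"\x55\x48\x89\xe5\xe8", "helper": b"\x53\x48\x83\xec\x08"}
p2 = {"main": b"\x55\x48\x89\xe5\xe9", "other": b"\x90\x90\xc3"}
print(f"similarity: {graph_similarity(p1, p2):.2f}")
```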