Prediction of the State of the Cardiovascular System Based on Identifying the Boundaries of the Dynamic System Realization

Introduction
Recently, research methods for rare experimental events have been intensively developed [1-9]. For a quantitative assessment of rare events, the time intervals between occurrences of successive events above (or below) some threshold Q are usually considered. In this case, both the probability density function (PDF) of these repeated intervals and their long-term dependencies (autocorrelation function, conditional repetition periods, etc.) are investigated. Numerical analysis necessarily considers moderately large thresholds Q, which provide good statistical estimates of the repeated intervals; these results are then extrapolated to very large thresholds, for which the statistics are very poor [1-5].
In [6], based on 24-hour Holter-monitoring data, it was shown that the linear and nonlinear long-term memory inherent in repeated heartbeat intervals leads to a power law for the PDF. As a consequence, a power law is also satisfied by the probability W_Q(t; ∆t) that within ∆t time units at least one repeated interval with an extreme event (record) exceeding the threshold Q (briefly, a Q-interval) appears, given that t time units have passed since the last Q-interval of the heartbeat. Using the probabilities W_Q(t; ∆t) together with long-term memory, a procedure for predicting Q-intervals (the RIA procedure) was proposed in [6-14] as an alternative to the traditional pattern recognition technique (PRT) based on short-term memory. The RIA approach does not require the available Q-interval statistics to be limited and in all cases gives the better result. Based on this approach, a procedure for predicting a large repeated Q-interval of the heartbeat (one whose record exceeds a certain large threshold Q) was developed in [15], using a preliminary selection of repeated Q-intervals with persistent (steadily growing) records based on the transformation [16,17] of the original signals (in this case, repeated Q-intervals) into a series of times to reach a given change threshold. However, the approach to predicting complex signals proposed in [17], based on identifying the boundary of the dynamic system realization, requires the preliminary selection of a "stable" segment at the end of the time series on which the predictor is trained, taking into account the latest dynamics of the process under study. In this regard, a computational procedure for the segmentation of nonstationary time signals with fractal properties was proposed in [18,19].

Minimal Coverage and Fractality Index
The fractal (Hausdorff) dimension of a set A is introduced by the formula

D = lim_{δ→0} ln N(δ) / ln(1/δ), (1.1)

where N(δ) is the minimum number of balls of radius δ covering the set A. The basis for definition (1.1) is the asymptotics N(δ) ~ δ^(−D) as δ → 0, which holds for fractal sets.
The fractal dimension of a time series can be calculated directly through the cell dimension D_c, which is also called the Minkowski dimension or the box dimension [20].
To determine the value of D_c, the plane on which the time-series graph is drawn is divided into cells of size δ, and the number N(δ) of cells containing at least one point of the graph is counted [21]. Within the observed range of scales, a time series can change the character of its behaviour many times.
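As an illustration, the cell-counting estimate of D_c described above can be sketched in a few lines; the function name, the dyadic range of cell sizes, and the normalization step are illustrative assumptions, not part of the cited method:

```python
import numpy as np

def box_counting_dimension(t, x, min_exp=2, max_exp=7):
    """Estimate the cell (box) dimension D_c of a time-series graph:
    the slope of log N(delta) versus log(1/delta), where N(delta) is
    the number of size-delta cells containing a point of the graph."""
    # Normalize the graph into the unit square so cell sizes are comparable.
    t = (t - t.min()) / (t.max() - t.min())
    x = (x - x.min()) / (x.max() - x.min())
    deltas, counts = [], []
    for k in range(min_exp, max_exp + 1):
        delta = 2.0 ** (-k)
        # Each point falls into the cell with these integer coordinates.
        cells = set(zip((t // delta).astype(int), (x // delta).astype(int)))
        deltas.append(delta)
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.array(deltas)), np.log(counts), 1)
    return slope

# A smooth curve is one-dimensional, so D_c should come out close to 1.
t = np.linspace(0.0, 1.0, 100_000)
d = box_counting_dimension(t, np.sin(2 * np.pi * t))
print(d)
```

For a fractal time series (e.g., a random-walk graph) the same estimate would exceed 1, approaching 2 for very irregular series.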
In order to associate the dynamics of the process corresponding to the series {X(t_i)} with the fractal dimension of this time series, it is necessary to determine the dimension D locally. For this, one must find an approximating sequence that is optimal in a certain sense for each fixed δ. For this purpose, the following minimal-coverage procedure was proposed in [3]. To calculate the fractal dimension of a time series, i.e., of the graph of a real function of one scalar variable y = f(t) defined on a certain interval [a, b], one can directly use the procedure for determining the cell dimension, considering cell coverage as a special case of covering with rectangles.
We introduce a uniform partition of the segment [a, b] by the points a = t_0 < t_1 < … < t_m = b with step δ = (b − a)/m. On each interval [t_{i−1}, t_i] let A_i(δ) denote the amplitude of the function f(t), i.e., the difference between its maximum and minimum values on that interval, and consider the total amplitude variation

V_f(δ) = Σ_{i=1}^{m} A_i(δ). (1.2)

The area of the coverage of the graph by rectangles with base δ and heights A_i(δ) is S_μ(δ) = V_f(δ)·δ. It is obvious that S_μ(δ) is the minimum area of coverage of the graph in the class of coverings with rectangles; therefore, such a coverage is called minimal [19].

By analogy with (1.1), for fractal graphs the asymptotics

S_μ(δ) ~ δ^(2−D_μ), δ → 0, (1.5)

holds. After multiplying both parts in (1.2) by δ and comparing with (1.5), we obtain the power law

V_f(δ) ~ δ^(−μ), δ → 0. (1.7)

From (1.5) and (1.7) we find D_μ = 1 + μ. The index μ is called [19] the fractality index, and the dimension D_μ is the dimension of the minimal coverage. The value of μ is related to the stability of the time series: the more stable the behavior of the original series (i.e., the more its oscillations stay near one level), the greater the value of μ, and vice versa.
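A minimal sketch of estimating the fractality index, assuming V_f(δ) is computed as the sum of window amplitudes (max minus min) over a uniform partition and that μ is read off as minus the slope of log V_f(δ) against log δ; the helper names and dyadic window widths are illustrative:

```python
import numpy as np

def amplitude_variation(x, m):
    """V_f(delta): sum over m equal windows of the window amplitude
    (max - min), following the minimal-coverage construction."""
    n = (len(x) // m) * m
    windows = x[:n].reshape(m, -1)
    return float(np.sum(windows.max(axis=1) - windows.min(axis=1)))

def fractality_index(x, exps=(3, 4, 5, 6, 7)):
    """Estimate mu from the power law V_f(delta) ~ delta**(-mu):
    minus the slope of log V_f(delta) versus log delta."""
    deltas = [2.0 ** (-k) for k in exps]    # window width as a fraction of [a, b]
    v = [amplitude_variation(x, 2 ** k) for k in exps]
    slope, _ = np.polyfit(np.log(deltas), np.log(v), 1)
    return -slope

rng = np.random.default_rng(0)
# A random walk is a realization with H = 1/2, so mu = 1 - H should be near 0.5.
walk = np.cumsum(rng.standard_normal(2 ** 14))
mu = fractality_index(walk)
print(mu)
```

A stable series oscillating near one level would yield a larger μ, consistent with the interpretation above.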
The function f(t) is called stable (quasistationary) on the interval (a, b) if its volatility, which has long been estimated [22] by the standard deviation of the increments in a selected time window, is invariant on this interval. The degradation point of the time series, i.e., the point at which stability (quasistationarity) is violated, can then be detected by a jump in volatility. To estimate the volatility change points that divide the original time series into quasistationary portions (segments), a computational segmentation procedure was developed in [18] that generalizes the iterated cumulative sums of squares (ICSS) method [23]. In general, the problem of segmenting a nonstationary time series, i.e., splitting it into non-intersecting adjacent fragments that are statistically homogeneous (or at least closer to homogeneity than the initial data), is known as the change-point detection problem [24,25].
The main advantage of the fractality index μ over other fractal indicators (in particular, the Hurst index H) is that the corresponding quantity V_f(δ) reaches the power-law asymptotic regime (1.7) quickly. The Hurst index H is determined based on the assumption that ⟨|f(t + δ) − f(t)|⟩ ~ δ^H, where the angle brackets denote averaging over the interval (a, b). For the comparison of μ and H, we introduce the average amplitude ⟨A(δ)⟩ = V_f(δ)/m and write the power representation

V_f(δ) ≈ b·δ^(−μ) (1.14)

with coefficient b. As is known [26], if f(t) is a realization of a Gaussian random process, then the Hurst index H is related to the dimension D_μ, and therefore to the index μ, by the relation H = 2 − D_μ = 1 − μ. Calculating μ and b in a sliding window gives an idea of the behavior of these local characteristics for the time series under study. Experimental studies conducted in [19] for various types of fractal time series show low variability of the values of μ and b, with b showing the lower variability.
Since the value of b is closely related to the distribution of the increments of the time series and, therefore, to its volatility, it is advisable, when applying the computational segmentation procedure [18] based on detecting a jump of the volatility function to divide the nonstationary series into "quasistationary" parts, to use both the original series f(t) and the series of values b(t), and to take as the estimate of a change point of the studied series the average of the change points obtained for the original series f(t) and for the series of values b(t), respectively.

Algorithm for Detecting Changes in Volatility
We give a brief description of the theoretical part of the method used in [18] for estimating the change point of the time-series volatility.
Consider the time-series model

X_t = μ(t) + σ(t)·a_t, t = 1, …, n, (2.1)

where μ(t) is the regression function, σ(t) is the conditional standard deviation (volatility), and a_t is a stationary sequence with zero mean and unit variance.
Following the procedure for finding a single change point of volatility [23], we set a_t = (X_t − μ(t))/σ_0(t), where σ_0(t) is the volatility in the absence of change points. Let the hypothesis H0 mean that the volatility has no change points. When this hypothesis holds, it follows from (2.1) that the a_t are identically distributed with unit variance. We introduce the cumulative sums of squares C_k = Σ_{t=1}^{k} a_t², k = 1, …, n, and determine the statistic V_k = n^(−1/2)·(C_k − (k/n)·C_n). If μ(t) and σ_0(t) are known, the least-squares method (LSM) for the volatility jump point k_0 can be used to obtain the estimate [27] k̂_0 = arg max_{1≤k≤n} |V_k|. The value of V_k will be close to 0 when the hypothesis H0 holds and is nonzero if the volatility changes. Simple calculations lead to the equality V_k = (C_n/√n)·D_k, where D_k = C_k/C_n − k/n, from which it follows that D_k can also be used to estimate the change point k_0: k̂_0 = arg max_{1≤k≤n} |D_k|. In [27], more general statistics V_k^(ν) for detecting changes in volatility are introduced.
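A hedged sketch of a single-change-point detector built on the cumulative sum of squares: the statistic D_k = C_k/C_n − k/n is the classical ICSS form, while the function name and the synthetic test signal are assumptions for illustration:

```python
import numpy as np

def volatility_change_point(a):
    """Estimate a single volatility change point from the ICSS-type
    statistic D_k = C_k/C_n - k/n, where C_k is the cumulative sum of
    squares of the standardized series a_t; the estimate is arg max |D_k|."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    c = np.cumsum(a ** 2)                        # C_k, k = 1, ..., n
    d = c / c[-1] - np.arange(1, n + 1) / n      # D_k
    return int(np.argmax(np.abs(d))) + 1, d      # 1-based index of the jump

rng = np.random.default_rng(1)
# Unit variance for 600 points, then the volatility jumps to 3.
a = np.concatenate([rng.standard_normal(600), 3 * rng.standard_normal(400)])
k0, _ = volatility_change_point(a)
print(k0)   # the estimate should land near the true change point 600
```

Under H0 the trajectory of D_k stays close to zero, so a large |D_k| both signals a change and locates it.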
To establish the asymptotic distribution of the statistic V_k^(ν) with known regression function μ(t) and conditional variance σ(t), additional notation is introduced. The approximation of the random variable on the right-hand side of (2.13) for ν = 1/2 is obtained using the formula from [28]. If there is a jump in the volatility of the function, i.e., the regression model is heteroscedastic (σ(t) ≠ σ_0(t)), the estimate of the change point is defined as in [27]. To estimate σ_0(t) in (2.3), it is advisable (especially when there is a certain number of outliers or when the observation distribution function has "heavy tails") to use the kernel absolute-deviation estimate [29] with a sequence of bandwidths b_n. As the kernel function K(t), one can adopt the standardized Epanechnikov kernel K(t) = (3/4)(1 − t²)·I(|t| ≤ 1) or the Gaussian kernel K(t) = (2π)^(−1/2)·exp(−t²/2); here I(A) is the indicator function of the set A. To estimate the bandwidths in (2.18), we use the method based on the χ² statistic R²(y, μ̂, σ̂, b_n) defined in [30]. Since R² depends on the estimated conditional variance, equality (2.21) can be used for the optimal choice of the bandwidth b_n; a kernel K(t, t′) depending only on the difference of its arguments can also be used. The verification of the process a_t for stationarity is carried out using the sample autocorrelation function (correlogram) [31], which for a stationary time series must decrease rapidly with increasing lag. In [27], a modified ICSS algorithm was proposed (let us call it MICSS), in which the statistic D_k is replaced by V_k^(ν); critical values for ν = 0 are given in [23], and for ν = 1/2 in [28]. For the significance level, α = 0.05 is usually applied.
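The kernel absolute-deviation idea can be sketched as follows; the exact estimator of [29] is not reproduced here, so the weighting scheme, the deviation measure, and the bandwidth value are illustrative assumptions:

```python
import numpy as np

def epanechnikov(t):
    """Standardized Epanechnikov kernel K(t) = 0.75*(1 - t**2)*I(|t| <= 1)."""
    return 0.75 * (1.0 - t ** 2) * (np.abs(t) <= 1.0)

def kernel_volatility(x, bandwidth):
    """Kernel absolute-deviation estimate of the local volatility:
    a kernel-weighted average of |x_s - median(x)| around each time t.
    Absolute deviations make the estimate robust to outliers."""
    n = len(x)
    dev = np.abs(x - np.median(x))
    idx = np.arange(n)
    sigma = np.empty(n)
    for i in range(n):
        w = epanechnikov((idx - i) / bandwidth)
        sigma[i] = np.sum(w * dev) / np.sum(w)
    return sigma

rng = np.random.default_rng(2)
x = np.concatenate([rng.standard_normal(500), 3 * rng.standard_normal(500)])
sigma = kernel_volatility(x, bandwidth=80)
print(sigma[:100].mean(), sigma[-100:].mean())  # low level, then roughly 3x higher
```

The resulting σ̂(t) curve rises sharply across the volatility jump, which is exactly the feature the change-point statistics exploit.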
In step 1 of the MICSS method, the statistic is computed on the sample of values a_t taken at t = 1, …, n. In equality (2.26), the estimate of σ is determined by a formula involving a sequence of positive integers l_n with 1 ≤ l_n ≤ n. The critical value is found by solving the nonlinear equation obtained from the right-hand side of (2.14) with h = l_n·δ: under the initial condition x = x_0 (for example, x_0 = 100), the solution x* of this equation is found numerically.

Modified Method for Predicting the State of the Cardiovascular System Based on Identifying the Boundaries of Quasistationary Sections of the Time Series
To solve the problem of predicting complex digital signals, it was proposed in [17] to use the last "stable" fragment of a time series, obtained by dividing it into quasistationary regions, for training a predictor. Such a task is known [33,34] as the problem of detecting a "disorder", i.e., a change in the probabilistic properties of the signal. For complex signals, such a definition can give an inaccurate boundary of the true change in signal dynamics. To detect the change in the dynamics of a complex signal, it was proposed to use an estimate of the local fractal dimension based on the fractality index μ of the time series [19].
As shown in Section 1, it is most appropriate to characterize a nonstationary time series by the two indicators b(t) and μ(t) that enter the power representation (1.14) of the amplitude variation V_f(δ) of the graph of the function f(t) corresponding to the original time series. The value of b(t) is closely related to the volatility (conditional variance) of the time series; an algorithm for the segmentation of a nonstationary time series according to a jump in this volatility was developed in [18].
Given this property of the function b(t), we can propose the following method for detecting the boundaries of quasistationary portions of the time series. From the original signal f(t), using a sliding window, a derived estimation signal b(t) is constructed that characterizes the local fractal dimension of the time series. The change points are then found in both the derived and the original signal by the algorithm of [18]. The positions of these points are compared and their average is taken as the boundary of adjacent quasistationary areas. The last generated fragment of quasistable signal dynamics is used as the training set for the predictor.
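A minimal end-to-end sketch of this boundary-detection scheme, assuming a simple ICSS-type change-point statistic and a sliding-window amplitude as a stand-in for the local coefficient b(t); all names, the window width, and the test signal are illustrative:

```python
import numpy as np

def icss_change_point(a):
    """Single change point as arg max |D_k|, with D_k = C_k/C_n - k/n."""
    c = np.cumsum(np.asarray(a, dtype=float) ** 2)
    k = np.arange(1, len(a) + 1)
    return int(np.argmax(np.abs(c / c[-1] - k / len(a)))) + 1

def local_amplitude(x, window):
    """Sliding-window (max - min) amplitude, used here as a stand-in
    for the local coefficient b(t) in V_f(delta) ≈ b * delta**(-mu)."""
    out = np.empty(len(x) - window)
    for i in range(len(out)):
        seg = x[i:i + window]
        out[i] = seg.max() - seg.min()
    return out

rng = np.random.default_rng(3)
# Signal whose dynamics change character at t = 700 out of n = 1200.
f = np.concatenate([rng.standard_normal(700), 4 * rng.standard_normal(500)])
w = 50
b = local_amplitude(f, w)
# Change points are found separately in the original and derived signals,
# and their average is taken as the quasistationary boundary.
k_f = icss_change_point(f - f.mean())
k_b = icss_change_point(b - b.mean()) + w // 2   # shift back to f's time axis
boundary = (k_f + k_b) // 2
print(boundary)
# The last fragment f[boundary:] would serve as the predictor's training set.
```

Averaging the two estimates hedges against the derived signal b(t) smearing the boundary by up to one window width, which is why the half-window shift is applied before averaging.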
To increase the efficiency of the forecast of a Q-event, i.e., a record of the heartbeat rhythmogram exceeding a large threshold Q, a combination of the repeated-intervals algorithm using long-term memory [6] with the method of times to reach a change threshold [16] is applied on the selected last quasistationary segment of the rhythmogram time series.
The RIA approach is based on the analysis of the nonlinear component of the long-term dependence of the repeated intervals r_j between events x_i that exceed a certain defined threshold Q (Q-events).
Sometimes, instead of the threshold Q, the average repeat interval (repetition period) R_Q is specified. In this approach, to predict random signals with fractal properties, the mathematical apparatus of interval statistics is used: the probability W_Q(t; ∆t) that at least one Q-interval appears during the time ∆t, given that a time t has passed since the last Q-event. Heartbeat records can be understood as Q-events, and the intervals between them as Q-intervals. Transformation (3.3) can be used for the preliminary selection of heartbeat records in the Q-interval prediction procedure based on the RIA technique, which speeds up that procedure and increases the reliability of the resulting prediction.
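The empirical estimation of W_Q(t; ∆t) from recurrence-interval statistics can be sketched as follows; the conditional-frequency estimator and the white-noise test signal are illustrative assumptions (real rhythmogram data would carry the long-term memory that the RIA approach exploits):

```python
import numpy as np

def recurrence_intervals(x, q):
    """Intervals r_j between successive Q-events x_i > q."""
    events = np.flatnonzero(np.asarray(x) > q)
    return np.diff(events)

def w_q(intervals, t, dt):
    """Empirical W_Q(t; dt): among recurrence intervals longer than t
    (i.e., t units have already passed without a Q-event), the fraction
    that end within the next dt units."""
    longer = intervals[intervals > t]
    return np.mean(longer <= t + dt) if len(longer) else np.nan

rng = np.random.default_rng(4)
x = rng.standard_normal(200_000)          # memoryless test signal
r = recurrence_intervals(x, q=2.0)        # mean interval R_Q is about 44 here
w0 = w_q(r, t=0, dt=44)
print(w0)   # probability of at least one Q-event within about one R_Q
```

For a memoryless signal, W_Q(t; ∆t) does not depend on the elapsed time t; the long-term memory of real heartbeat intervals makes it t-dependent, and it is precisely this dependence that the RIA prediction procedure uses.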