In this chapter, realization methods for one-dimensional regular state space models and descriptor systems are described. The presented algorithm is known as the algorithm of Kung~\cite{kung1978new} and uses the singular value decomposition to construct a numerical basis of the observability and state sequence matrices. The estimated observability and state sequence matrices are then used to estimate the a priori unknown system parameters such as the order, the system matrices and the state sequence.

Problem statement

We first formally define what we mean by `identification’ of a system. The first definition is for regular systems, followed by the definition for singular systems.

Given \(N\) measurements \(y[k]\in\real^q\) generated by the unknown regular state space model of order \(m\)

\begin{equation} \begin{aligned} x[k+1] = & A x[k] \\ y[k] = & C x[k]. \end{aligned} \end{equation}

Determine:

  • the order of the unknown system,
  • the system matrix \(A\in \real^{m\times m}\), output matrix \(C\in \real^{q\times m}\) and initial state \(x\) up to a similarity transformation.

For descriptor systems we have the following problem statement.

Given \(N\) measurements \(y[k]\in\real^q\) generated by the unknown descriptor system of order \(m\)

\begin{equation} \begin{aligned} E x[k+1] = & A x[k] \\ y[k] = & C x[k], \end{aligned} \end{equation}

where \((A,E)\) is a regular commuting matrix pencil. Determine:

  • the order of the unknown system,
  • the system matrices \(E\in \real^{m\times m}, A\in \real^{m\times m}\), output matrix \(C\in \real^{q\times m}\) and generalized state \(x\) up to a left and right transformation of full rank.

The two definitions differ only in the introduction of the matrix \(E\) for descriptor systems. Note that in Definition~\ref{def:ident-1S} we assume that both system matrices commute. This assumption can be made without loss of generality, as it is always possible to transform a regular descriptor system such that the system matrices commute.

Realization algorithm

Two realization algorithms are presented: one for regular systems and one for descriptor systems. The algorithm for regular systems is known as Kung's algorithm~\cite{kung1978new}. To identify descriptor systems we present a novel algorithm and compare it with the classical results formulated in~\cite{moonen1992subspace}.

Regular systems

Consider the time series \(y[k]\in\real^q\) with \(N\) measurements, such that \(0\leq k \leq N-1\), generated by a regular system of order \(m\). We formulate the identification algorithm with the goal of identifying the system as described in Definition~\ref{def:ident-1R}. The steps in the algorithm can be summarized as follows:

  • Create a rank deficient Hankel matrix.
  • Factorize the Hankel matrix as the product of an extended observability matrix and an extended state sequence matrix.
  • Calculate past and future observability matrices by removing respectively the last and first row of the extended observability matrix.
  • Estimate the system matrix via least squares.
  • Determine the output matrix and initial state.

In steps 1 and 2 we create and factorize the Hankel matrix with \(p\) block rows. Denote the Hankel matrix as \(Y_{0|p-1} \in \real^{qp\times (N-p+1)}\), which has rank \(m\). In order for the realization algorithm to succeed, this matrix must be rank deficient. This condition is further elaborated on in Section~\ref{sec:partial-real-1d}.
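To make this step concrete, the construction of such a block Hankel matrix can be sketched in Python/NumPy as follows; the function name and the array layout (one column of \(y\) per time step) are illustrative choices rather than part of the algorithm.

\begin{verbatim}
import numpy as np

def block_hankel(y, p):
    """Build the block Hankel matrix Y_{0|p-1} with p block rows from a
    q x N output sequence y, stored with one column per time step."""
    q, N = y.shape
    cols = N - p + 1
    Y = np.empty((q * p, cols))
    for i in range(p):
        # block row i holds the samples y[i], y[i+1], ..., y[i+cols-1]
        Y[i * q:(i + 1) * q, :] = y[:, i:i + cols]
    return Y
\end{verbatim}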

Next, we calculate the row and column space of the Hankel matrix by using the singular value decomposition

\begin{equation} Y_{0|p-1} = U \Sigma V^T = \begin{bmatrix} U_0 & U_1 \end{bmatrix} \begin{bmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_0^T \\ V_1^T \end{bmatrix} = U_0 \Sigma_0 V_0^T. \end{equation}

As shown in the previous chapters, the column and row space of a Hankel matrix are spanned by the columns of an extended observability matrix \(\obs\) and the rows of an extended state sequence matrix \(X\), respectively. We define two new matrices

\begin{equation} \hat{\obs} = U_0 \Sigma_0^{1/2}, \hat{X} = \Sigma_0^{1/2} V_0^T . \end{equation}
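A minimal sketch of this factorization step is given below; it assumes the order \(m\) is read off as the number of singular values above a relative tolerance, where the tolerance itself is an arbitrary choice.

\begin{verbatim}
import numpy as np

def factorize_hankel(Y, tol=1e-10):
    """Factorize Y_{0|p-1} as O_hat @ X_hat via a truncated SVD; the order m
    is estimated as the number of singular values above a relative tolerance."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    m = int(np.sum(s > tol * s[0]))
    S_half = np.diag(np.sqrt(s[:m]))
    O_hat = U[:, :m] @ S_half      # spans the column space of Y_{0|p-1}
    X_hat = S_half @ Vt[:m, :]     # spans the row space of Y_{0|p-1}
    return O_hat, X_hat, m
\end{verbatim}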

The columns of \(\hat{\obs}\) now span the same column space as the columns of \(\obs\); similarly, the rows of \(\hat{X}\) span the same row space as the rows of \(X\). We can therefore write

\begin{equation} \hat{\obs}P^{-1} = \obs,~P\hat{X} = {X}, \end{equation}

for some unknown invertible matrix \(P\), as

\begin{equation} Y_{0|p-1} = \obs X = \hat{\obs} \hat{X} = \hat{\obs} P^{-1}P\hat{X}. \end{equation}

The matrices \(\hat{\obs}\) and \(\hat{X}\) are thus equal to

\begin{equation} \hat{\obs} = \begin{bmatrix} CP \\ CAP \\ \vdots \\ CA^{p-1}P \end{bmatrix}, \hat{X} = \begin{bmatrix} P^{-1}x & P^{-1}Ax & \cdots & P^{-1}A^{n-p}x \end{bmatrix}. \end{equation}

We now introduce two new matrices, \(\underline{\hat{\obs}}\) and \(\overline{\hat{\obs}}\), respectively called the past and future matrices

\begin{equation} \underline{\hat{\obs}} = \begin{bmatrix} CP \\ CAP \\ \vdots \\ CA^{p-2}P \end{bmatrix},~ \overline{\hat{\obs}} = \begin{bmatrix} CAP \\ CA^2P \\ \vdots \\ CA^{p-1}P \end{bmatrix}. \end{equation}

The two matrices are related to each other via

\begin{equation}\label{eqn:1SI-estimation-A} \underline{\hat{\obs}} \hA = \overline{\hat{\obs}}, \end{equation}

where \(P^{-1}AP= \hA\). The least squares solution for \(\hA\) is unique if and only if \(\underline{\hat{\obs}}\) has full column rank. The solution is then

\begin{equation} \hA = \underline{\hat{\obs}}^{\dagger} \overline{\hat{\obs}}. \end{equation}

This is an estimate of the system matrix in the basis \(P\). The output matrix \(C\) can be estimated by taking the first \(q\) rows of \({\hat{\obs}}\), which equal \(CP = \hC\). Likewise, the initial state is equal to the first column of \(\hat{X}\), which is \(\hat{x} = P^{-1}x\). The estimated state space model is now equal to

\begin{equation} \begin{aligned} \hat{x}[k+1] = & \hA\hat{x}[k] \\ y[k] = & \hC \hat{x}[k], \end{aligned} \end{equation}

with \(\hat{x} = \hat{x}[0]\).
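The remaining steps of Kung's algorithm can be sketched as follows, starting from the factors \(\hat{\obs}\) and \(\hat{X}\) computed above; the function name and the use of a least squares solver in place of an explicit pseudo-inverse are illustrative choices.

\begin{verbatim}
import numpy as np

def kung_estimate(O_hat, X_hat, q):
    """Estimate (A_hat, C_hat, x_hat) from the factors of the Hankel matrix;
    the past matrix O_hat[:-q] must have full column rank."""
    O_past = O_hat[:-q, :]     # remove the last block row
    O_future = O_hat[q:, :]    # remove the first block row
    A_hat, *_ = np.linalg.lstsq(O_past, O_future, rcond=None)
    C_hat = O_hat[:q, :]       # first block row, equal to C P
    x_hat = X_hat[:, 0]        # first column, equal to P^{-1} x
    return A_hat, C_hat, x_hat
\end{verbatim}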

In the next section we present a more general algorithm to estimate descriptor systems. This algorithm can also be used when the system matrix \(E\) has full rank. In this case the descriptor system can be reduced to a regular state space model.

Descriptor systems

The realization of descriptor systems has been discussed in~\cite{moonen1992subspace}. In this thesis we first present an alternative realization algorithm that works under the same partial realization condition as the classical algorithm from~\cite{moonen1992subspace}. After introducing the new algorithm, we compare it with the classical one. The partial realization condition is derived in Section~\ref{sec:partial-real-1d}.

Extended algorithm of Kung

Our novel realization algorithm is a direct generalization of Kung's algorithm presented in the previous section. The outline of the algorithm is as follows:

  • Create a rank deficient Hankel matrix.
  • Factorize the Hankel matrix as the product of an extended observability matrix and an extended state sequence matrix.
  • Calculate past and future observability matrices by removing respectively the last and first row of the extended observability matrix.
  • Estimate the system matrices \(A\) and \(E\) in a single step via least squares.
  • Transform both matrices such that they form a commuting matrix pencil.
  • Estimate the output matrix and initial state.

Only the last three steps of the extended algorithm of Kung are different from the classical approach.

To start the algorithm, a rank deficient Hankel matrix with \(p\) block rows is constructed and factorized as the product of \(\hat{\obs}\) and \(\hat{X}\)

\begin{equation} Y_{0|p-1} = \hat{\obs} \hat{X}. \end{equation}

As demonstrated in the previous chapters, the column and row space of this Hankel matrix are spanned by the columns of an extended observability matrix \(\obs\) and the rows of an extended state sequence matrix \(X\) for descriptor systems, respectively. As before, the columns of \(\hat{\obs}\) span the same column space as the columns of \(\obs\), the rows of \(\hat{X}\) span the same row space as the rows of \(X\), and

\begin{equation} \hat{\obs}P^{-1} = \obs,~P\hat{X} = {X}, \end{equation}

for some unknown invertible matrix \(P\). The estimated observability and state sequence matrices of the descriptor system have the structure

\begin{equation} \hat{\obs} = \obs P = \begin{bmatrix} C\mathcal{A}_{p-1}^{0}P \\ C\mathcal{A}_{p-1}^{1}P \\ \vdots \\ C\mathcal{A}_{p-1}^{p-1} P \end{bmatrix}, \end{equation}

\begin{equation} \hat{X} = P^{-1}X = \begin{bmatrix} P^{-1}\mathcal{A}_{n-p-1}^{0}x & P^{-1}\mathcal{A}_{n-p-1}^{1}x & \cdots & P^{-1}\mathcal{A}_{n-p-1}^{n-p-1}x \end{bmatrix}, \end{equation}

with \(\mathcal{A}_{d}^{k} = A^{k}E^{d-k}\) for some \(d\). We now define three new matrices

\begin{equation} \hA = P^{-1}AP,~\hE = P^{-1}EP,~\hC=CP, \end{equation}

and the associated evolution operator \(\hat{\mathcal{A}}_{p-1}^k = \hA^k \hE^{p-1-k}\). Using these definitions, the matrices \(\hat{\obs}\) and \(\hat{X}\) are

\begin{equation} \hat{\obs} = \begin{bmatrix} \hC\hat{\mathcal{A}}_{p-1}^{0} \\ \hC\hat{\mathcal{A}}_{p-1}^{1} \\ \vdots \\ \hC\hat{\mathcal{A}}_{p-1}^{p-1} \end{bmatrix},~ \hat{X} = \begin{bmatrix} \hat{\mathcal{A}}_{n-p-1}^{0} \hx & \hat{\mathcal{A}}_{n-p-1}^{1}\hx & \cdots & \hat{\mathcal{A}}_{n-p-1}^{n-p-1}\hx \end{bmatrix}. \end{equation}

As before, we define the past and future matrices as

\begin{equation} \underline{\hat{\obs}} = \begin{bmatrix} \hC\hat{\mathcal{A}}_{p-1}^{0} \\ \hC\hat{\mathcal{A}}_{p-1}^{1} \\ \vdots \\ \hC\hat{\mathcal{A}}_{p-1}^{p-2} \end{bmatrix}, \overline{\hat{\obs}} = \begin{bmatrix} \hC\hat{\mathcal{A}}_{p-1}^{1} \\ \hC\hat{\mathcal{A}}_{p-1}^{2} \\ \vdots \\ \hC\hat{\mathcal{A}}_{p-1}^{p-1} \end{bmatrix}. \end{equation}

Both matrices are obtained by respectively removing the last and first block row of the estimated extended observability matrix. By using the properties of the evolution operator we find that the past and future matrices are related to each other via

\begin{equation} \underline{\hat{\obs}} \hA = \overline{\hat{\obs}} \hE. \end{equation}

To determine the a priori unknown matrices \(\hA\) and \(\hE\), the above equation is rewritten as

\begin{equation}\label{eqn:A-E} \begin{bmatrix} \underline{\hat{\obs}} & \overline{\hat{\obs}} \end{bmatrix} \begin{bmatrix} \hA \\ -\hE \end{bmatrix} = 0 = \Xi \begin{bmatrix} \hA \\ -\hE \end{bmatrix}. \end{equation}

We have just shown that the columns of the matrix obtained by stacking \(\hat{A}\) on top of \(-\hat{E}\) lie in the right null space of \(\Xi\). Because the matrix pencil \((A,E)\) is regular, this stacked matrix has full column rank, i.e.~rank \(m\). This implies that the nullity of \(\Xi\) is at least \(m\). Only when the nullity of \(\Xi\) is exactly \(m\) can both matrices \(\hat{A}\) and \(\hat{E}\) be recovered. In Section~\ref{sec:partial-real-1d} we will show that this is the case when the Hankel matrix is rank deficient.

Assume for now that this condition is met. A numerical basis of the right null space of \(\Xi\) can then be obtained via the singular value decomposition. It is not possible to obtain the two matrices \(\hat{A}\) and \(\hat{E}\) directly by solving Equation~\eqref{eqn:A-E}; instead, two matrices \(\hat{A}Q\) and \(\hE Q\) are obtained, where \(Q\) is an unknown matrix of full rank. The obtained matrix pencil \((\hat{A}Q,\hat{E}Q)\) has the same generalized eigenvalues as the pencil \((\hA,\hE)\) and is also regular. Because of this regularity there exists a matrix \(R\) such that \((R \hA Q,R \hE Q) = (\tilde{A},\tilde{E})\) is a standard pencil. This implies that \(\tilde{A}\) and \(\tilde{E}\) commute, such that the evolution operator

\begin{equation} \tilde{\mathcal{A}}_{p-1}^k = \tilde{A}^k\tilde{E}^{p-1-k} \end{equation}

exists and is well-defined. The matrices \(\tilde{A}\) and \(\tilde{E}\) are respectively the estimated matrices for \(A\) and \(E\).
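A sketch of this estimation step is given below; it assumes the order \(m\) is known, takes the estimated extended observability matrix as input, and uses the standardizing transformation \(R = (\hA Q + \mu\hE Q)^{-1}\) that is also employed in the worked example at the end of this chapter, with an arbitrarily chosen shift \(\mu\).

\begin{verbatim}
import numpy as np

def estimate_standard_pencil(O_hat, q, m, mu=0.5):
    """Estimate a commuting pair (A_tilde, E_tilde) from the estimated
    extended observability matrix O_hat of a descriptor system."""
    Xi = np.hstack([O_hat[:-q, :], O_hat[q:, :]])    # [past, future]
    Vt = np.linalg.svd(Xi)[2]
    nullbasis = Vt[-m:, :].T                         # 2m x m basis of the right null space
    A_q, E_q = nullbasis[:m, :], -nullbasis[m:, :]   # A_hat Q and E_hat Q, with Q unknown
    R = np.linalg.inv(A_q + mu * E_q)                # mu must avoid a singular combination
    return R @ A_q, R @ E_q                          # (A_tilde, E_tilde), a standard pencil
\end{verbatim}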

After the estimation of the system matrices we can estimate the output matrix and the generalized state vector. The estimated output matrix, denoted by \(\tilde{C}\), can be chosen randomly, provided that the resulting system with the regular commuting matrix pencil \((\tilde{A},\tilde{E})\) is observable. To obtain the generalized state vector we solve the following problem using least squares

\begin{equation} \begin{bmatrix} y[0] \\ y[1] \\ \vdots \\ y[n-1] \end{bmatrix} = \begin{bmatrix} \tilde{C} \tilde{\mathcal{A}}^0_{n-1} \\ \tilde{C} \tilde{\mathcal{A}}^1_{n-1} \\ \vdots \\ \tilde{C} \tilde{\mathcal{A}}^{n-1}_{n-1} \\ \end{bmatrix}\tilde{x}. \end{equation}

This system has a unique solution if and only if the system \((\tilde{C},\tilde{\mathcal{A}}_{n-1})\) is observable, which is guaranteed by the specific choice of \(\tilde{C}\).
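This last step can be sketched as follows, with a randomly drawn output matrix; the random seed and the helper name are illustrative choices.

\begin{verbatim}
import numpy as np

def estimate_output_and_state(y, A_t, E_t, seed=0):
    """Draw a random output matrix C_tilde and solve for the generalized
    state x_tilde by least squares, given the outputs y (q x n) and a
    commuting pencil (A_t, E_t)."""
    q, n = y.shape
    m = A_t.shape[0]
    C_t = np.random.default_rng(seed).standard_normal((q, m))
    Gamma = np.vstack([C_t @ np.linalg.matrix_power(A_t, k)
                           @ np.linalg.matrix_power(E_t, n - 1 - k)
                       for k in range(n)])
    x_t, *_ = np.linalg.lstsq(Gamma, y.T.reshape(-1), rcond=None)
    return C_t, x_t
\end{verbatim}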

The classical result: Mind the gap

Classically, descriptor systems are identified via a different algorithm. This algorithm relies on separating the causal and non-causal dynamics of the system. This is done by performing a partial column compression and identifying the so-called gap. For the analysis and estimation, the descriptor system is assumed to be in the Weierstrass canonical form, i.e.~the system is described by

\begin{equation}\label{eqn:moonen_2} \begin{bmatrix} \eye & 0 \\ 0 & N\end{bmatrix} \begin{bmatrix} x_{R}[k+1] \\ x_{S}[k+1] \end{bmatrix} = \begin{bmatrix} A_{R}& 0 \\ 0 & \eye \end{bmatrix} \begin{bmatrix} x_{R}[k] \\ x_{S}[k] \end{bmatrix}, \end{equation}

\begin{equation} y[k] = \begin{bmatrix} C_{R} & C_{S} \end{bmatrix}\begin{bmatrix} x_{R}[k] \\ x_{S}[k] \end{bmatrix}. \end{equation}

Denote the size of \(A_R\) by \(m_1\times m_1\) and the size of \(N\) by \(m_2\times m_2\). In this case, we showed that the extended observability matrix of the system is given by

\begin{equation} \obs = \begin{bmatrix} C_R & C_S N^{p-1} \\ C_R A_R & C_S N^{p-2} \\ \vdots & \vdots \\ C_R A_R^{p-2} & C_SN \\ C_R A_R^{p-1} & C_S \end{bmatrix}. \end{equation}

Recall that the matrix \(N\) is nilpotent, so there exists a value \(q\) such that \(N^{q-1}\neq0\) and \(N^{q}=0\). This value is called the nilpotency index~\cite{herstein1975topics} of \(N\). Using this property, the above matrix can be rewritten as

\begin{equation}\label{eqn:ident-1s-obs-canonical} \obs = \begin{bmatrix} C_R & 0 \\ C_R A_R & 0 \\ \vdots & \vdots \\ C_R A_R^{p-q-1} & 0 \\ C_R A_R^{p-q} & C_SN^{q-1} \\ \vdots & \vdots \\ C_R A_R^{p-1} & C_S \end{bmatrix} = \begin{bmatrix} \obs_R & \obs_S \end{bmatrix}. \end{equation}

In this basis, the two state space models \((C_R,A_R)\) and \((C_S,N)\) can be identified independently of each other by applying Kung's algorithm to the matrices \(\obs_R\) and \(\obs_S\), respectively. The system matrices are the solutions of

\begin{equation} \underline{\obs_R}A_R = \overline{\obs_R},~\overline{\obs_S}N = \underline{\obs_S}. \end{equation}

Note that the roles of the future and past matrices are reversed when estimating \(N\). This is because the dynamics of the singular part of the system run backwards. The output matrix \(C_R\) is equal to the first block row of \(\obs_R\), while \(C_S\) is equal to the last block row of \(\obs_S\). The state sequence matrix has a similar sparse structure, which can be exploited to estimate the initial state \(x_R\) and the final state \(x_S\).
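Assuming the two blocks \(\obs_R\) and \(\obs_S\) have already been separated, the two estimation problems can be sketched as follows; the parameter \texttt{q\_out} denotes the number of outputs (not the nilpotency index), and the function name is an illustrative choice.

\begin{verbatim}
import numpy as np

def estimate_weierstrass_blocks(O_R, O_S, q_out):
    """Estimate A_R and N from the separated observability blocks,
    together with the output matrices; note the reversed shift for N."""
    A_R, *_ = np.linalg.lstsq(O_R[:-q_out, :], O_R[q_out:, :], rcond=None)  # past A_R = future
    N, *_ = np.linalg.lstsq(O_S[q_out:, :], O_S[:-q_out, :], rcond=None)    # future N = past
    C_R = O_R[:q_out, :]     # first block row
    C_S = O_S[-q_out:, :]    # last block row
    return A_R, N, C_R, C_S
\end{verbatim}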

In general, however, we cannot compute the observability matrix in the canonical form of Equation~\eqref{eqn:ident-1s-obs-canonical} directly. Instead, this matrix is obtained by factorizing the Hankel matrix, and we have no control over the basis in which it is represented. The estimated observability matrix is thus related to the canonical observability matrix via

\begin{equation} \hat{\obs} = \obs P, \end{equation}

where \(P\) is an unknown matrix of full rank. It is however possible to separate the regular and singular dynamics via the so-called `mind the gap' procedure~\cite{dreesen2013back}, which was the topic of Section~\ref{sec:finding-the-gap}. When both parts have been separated, the algorithm of Kung can be used to estimate the system matrices, as described above.

Partial realization criteria

The presented extended and regular algorithms of Kung are only valid under a certain condition, called the partial realization condition. For both regular and singular systems this condition is exactly the same, with one small exception: when using the classical realization algorithm for descriptor systems, we also need to be able to determine the gap. The partial realization condition for regular and descriptor systems is formulated in the following theorem.

Given \(n\) measurements \(y[k]\in\real^q\) generated by the unknown deterministic system of order \(m\)

\begin{equation} \begin{aligned} E x[k+1] = & A x[k] \\ y[k] = & C x[k], \end{aligned} \end{equation}

where \((A,E)\) is a regular matrix pencil and \(E\) can be either singular or regular. The system can be identified by starting from the Hankel matrix \(Y_{0|p}\) if

\begin{equation} \rank{Y_{0|p-1}} = \rank{Y_{0|p}} = m. \end{equation}

When \(E\) is a regular matrix, the system can be transformed to

\begin{equation} \begin{aligned} x[k+1] = & E^{-1}A x[k] \\ y[k] = & C x[k], \end{aligned} \end{equation}

which is a regular system with system matrix \(E^{-1}A\), making Theorem~\ref{thm:prc-1D} valid for regular state space models.
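Numerically, the condition can be verified with a simple rank test on the two Hankel matrices, as in the following sketch; the rank tolerance is an arbitrary choice.

\begin{verbatim}
import numpy as np

def partial_realization_order(Y_pm1, Y_p, tol=1e-10):
    """Return the estimated order m when rank(Y_{0|p-1}) == rank(Y_{0|p}),
    and None when the partial realization condition does not hold."""
    r1 = np.linalg.matrix_rank(Y_pm1, tol=tol)
    r2 = np.linalg.matrix_rank(Y_p, tol=tol)
    return r1 if r1 == r2 else None
\end{verbatim}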

Theorem \ref{thm:prc-1D} is first proven for the classical algorithm of Kung for regular systems in Section~\ref{sec:1SI-condition-reg}, followed by the proof for the generalized algorithm of Kung for descriptor systems in Section~\ref{sec:1SI-condition-sing}.

Regular systems

As noted in the description of Kung's algorithm, the system matrix \(A\) of a regular state space model can only be estimated when Equation~\eqref{eqn:1SI-estimation-A} has a unique solution. For this to be the case, the past matrix must have full column rank. Assume that the partial realization condition is satisfied and

\begin{equation} \rank{Y_{0|p-1}} = \rank{Y_{0|p}} = m. \end{equation}

It follows that the column space of \(Y_{0|p-1}\) has dimension \(m\). This is exactly the space spanned by the columns of the past matrix obtained by factorizing the Hankel matrix \({Y_{0|p}}\), so the past matrix has full column rank \(m\). This proves that Equation~\eqref{eqn:1SI-estimation-A} has a unique solution.

Descriptor systems

In one step of the extended algorithm of Kung the system matrices \((A,E)\) are estimated. In order for this estimation to have a unique solution, we have shown that the nullity of \(\Xi\) must be exactly \(m\). We will show that this is true when the partial realization condition formulated in Theorem~\ref{thm:prc-1D} is met. To prove this, we first prove the following lemma.

Define

\begin{equation} \Xi = \begin{bmatrix} \underline{\obs_{0|p}} & \overline{\obs_{0|p}} \end{bmatrix}. \end{equation}

The rank of \(\Xi\) is equal to \(m\) if

\begin{equation} \rank{\obs_{0|p}} = \rank{\obs_{0|p-1}} = m, \end{equation}

for some value of \(p\).

Take a value \(p\) such that \(\rank{\obs_{0|p}} = \rank{\obs_{0|p-1}} = m\), and place the system in the Weierstrass canonical form. The extended observability matrix is then equal to

\begin{equation} {\obs_{0|p}} = \begin{bmatrix} C_R & C_S N^p \\ C_RA_R & C_S N^{p-1} \\ \vdots & \vdots \\ C_R A_R^{p-1} & C_SN \\ C_R A_R^{p} & C_S \\ \end{bmatrix} = \begin{bmatrix} \obs_{R_{0|p}} & \obs_{S_{0|p}} \end{bmatrix}. \end{equation}

Denote the rank of both matrices \(\obs_{R_{0|p}}\) and \(\obs_{S_{0|p}}\) respectively as \(m_R\) and \(m_S\). We have that \(m_R + m_S = m\). The structure of \(\obs_{0|p-1}\) is

\begin{equation} {\obs_{0|p-1}} = \begin{bmatrix} C_R & C_S N^{p-1} \\ C_RA_R & C_S N^{p-2} \\ \vdots & \vdots \\ C_R A_R^{p-2} & C_SN \\ C_R A_R^{p-1} & C_S \\ \end{bmatrix} = \begin{bmatrix} \obs_{R_{0|p-1}} & \obs_{S_{0|p-1}} \end{bmatrix}. \end{equation}

Because \(p\) was taken such that \(\rank{\obs_{0|p}} = \rank{\obs_{0|p-1}} = m\), we have that \(\rank{\obs_{R_{0|p-1}} }= m_R\) and \(\rank{\obs_{S_{0|p-1}}} = m_S\). We can now rewrite \(\Xi\) as

\begin{equation} \begin{aligned} \Xi = \begin{bmatrix} \underline{\obs_{0|p}}&\overline{\obs_{0|p}} \end{bmatrix} = & \begin{bmatrix} C_R & C_S N^p & C_RA_R & C_S N^{p-1} \\ C_RA_R & C_S N^{p-1} & C_RA_R^2 & C_S N^{p-2} \\ \vdots & \vdots & \vdots & \vdots \\ C_R A_R^{p-2} & C_SN^{2} & C_R A_R^{p-1} & C_SN \\ C_R A_R^{p-1} & C_SN & C_R A_R^{p} & C_S \\ \end{bmatrix} \\ \\ = & \begin{bmatrix} \obs_{R_{0|p-1}} & \obs_{S_{0|p-1}}N & \obs_{R_{0|p-1}}A_R & \obs_{S_{0|p-1}} \end{bmatrix} \end{aligned} \end{equation}

Since the column spaces of \(\obs_{S_{0|p-1}}N\) and \(\obs_{R_{0|p-1}}A_R\) are contained in the column spaces of \(\obs_{S_{0|p-1}}\) and \(\obs_{R_{0|p-1}}\), respectively, this shows that

\begin{equation} \rank{\Xi} = \rank{ \begin{bmatrix} \obs_{R_{0|p-1}} & \obs_{S_{0|p-1}} \end{bmatrix} } = \rank{\obs_{0|p-1}} = m. \end{equation}

This proves Lemma~\ref{lem:prc}.

Using the result from Lemma~\ref{lem:prc} we can prove the partial realization theorem.

The proof follows directly from the result of Lemma~\ref{lem:prc}. Assume that the partial realization condition is met and

\begin{equation} \rank{Y_{0|p}} = \rank{Y_{0|p-1}} = m, \end{equation}

for some \(p\). Since each Hankel matrix factors as an extended observability matrix times a state sequence matrix, we have \(m = \rank{Y_{0|p}} \leq \rank{\obs_{0|p}} \leq m\), and similarly for \(\obs_{0|p-1}\). The hypothesis of Lemma~\ref{lem:prc} is therefore satisfied and the matrix \(\Xi\) has rank \(m\). Since \(\Xi\) has \(2m\) columns, it follows that its right null space has dimension \(m\). This implies that Equation~\eqref{eqn:A-E} has a unique solution modulo a right non-singular multiplication, and that an estimate of the regular matrix pencil can be calculated.

Example: Realization problem

As an example, consider the descriptor system of order 4 with the following system matrices:

\begin{equation} A = \begin{bmatrix} -8 & 13 & -7 & 2 \\ -13 & 20 & -9 & 2 \\ -17 & 25 &-10 & 2 \\ -22 & 32 &-16 & 5 \end{bmatrix}, E = \begin{bmatrix} 7 & -6 & 4 & -2 \\ 19 & -19 & 13 & -6 \\ 45 & -48 & 32 & -14 \\ 61 & -65 & 42 & -18 \end{bmatrix}, \end{equation}

\begin{equation} C = \begin{bmatrix} -3 & 5 & -3 & 1 \end{bmatrix}, x = \begin{bmatrix} 6 \\ 10 \\ 17 \\ 23 \end{bmatrix}. \end{equation}

The matrix pencil \((A,E)\) is regular and both system matrices commute. We simulate the first 10 data points of this system. Two Hankel matrices \(Y_{0|3}\) and \(Y_{0|4}\) are created; the latter, with 5 block rows, is equal to

\begin{equation} Y_{0|4} = \begin{bmatrix} 2 & 5 & 13 & 35 & 97 & 275 \\ 5 & 13 & 35 & 97 & 275 & 793 \\ 13 & 35 & 97 & 275 & 793 & 2315 \\ 35 & 97 & 275 & 793 & 2315 & 6818 \\ 97 & 275 & 793 & 2315 & 6818 & 20197 \end{bmatrix} \end{equation}

Both matrices have rank \(4\), so the partial realization condition is satisfied. Next, we calculate the singular value decomposition of the Hankel matrix

\begin{equation} \begin{bmatrix} U_0 & U_1 \end{bmatrix} \begin{bmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_0^T \\ V_1^T \end{bmatrix} = U_0 \Sigma_0^{1/2} \Sigma_0^{1/2}V_0^T = Y_{0|4}. \end{equation}

The extended observability matrix is then given by

\begin{equation} \obs_{0|4} = U_0 \Sigma_0^{1/2}. \end{equation}

We can now construct the matrix \(\Xi\) by shifting \(\obs_{0|4}\) and concatenating the columns. As in Equation~\eqref{eqn:A-E}, its right null space is spanned by the columns of

\begin{equation} \begin{bmatrix} \hA Q \\ -\hE Q \end{bmatrix}, \end{equation}

for some unknown matrix \(Q\) of full rank. The estimated system matrices are then \(\tilde{A}=R\hA Q\) and \(\tilde{E}=R\hE Q\), where \(R\) is chosen such that \((\tilde{A},\tilde{E})\) is a standard matrix pencil. Such an \(R\) is given by \((\hA Q + \mu \hE Q)^{-1}\), for a value of \(\mu\) chosen such that the matrix inverse exists. The existence of such a \(\mu\) is guaranteed by the regularity of the matrix pencil.

In the last step of this identification, we estimate the output matrix and the initial state. Take for \(\tilde{C}\) a random matrix such that the estimated system is observable. The generalized state vector is then obtained by solving

\begin{equation} \begin{bmatrix} y[0] \\ y[1] \\ \vdots \\ y[9] \end{bmatrix} = \begin{bmatrix} \tilde{C} \tilde{\mathcal{A}}^0_{9} \\ \tilde{C} \tilde{\mathcal{A}}^1_{9} \\ \vdots \\ \tilde{C} \tilde{\mathcal{A}}^{9}_{9} \\ \end{bmatrix}\tilde{x}. \end{equation}

We solve this problem using least squares. In our example the residual is \(0.34\times10^{-24}\), which shows that the correct system has been estimated.
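For completeness, the entire computation can be reproduced with a short script operating on the ten output samples contained in \(Y_{0|4}\) above; the shift \(\mu = 0.5\) and the random output matrix are illustrative choices, and the printed residual should be negligibly small, in line with the value quoted above.

\begin{verbatim}
import numpy as np

# The ten output samples y[0], ..., y[9] contained in the Hankel matrix above.
y = np.array([2, 5, 13, 35, 97, 275, 793, 2315, 6818, 20197], dtype=float)

def hankel(y, p):
    """Hankel matrix with p rows built from the scalar output sequence y."""
    return np.array([y[i:i + len(y) - p + 1] for i in range(p)])

Y03, Y04 = hankel(y, 4), hankel(y, 5)
print(np.linalg.matrix_rank(Y03), np.linalg.matrix_rank(Y04))  # both 4, so the condition holds
m = 4

# Factorize Y_{0|4} and build Xi from its past and future parts.
U, s, Vt = np.linalg.svd(Y04)
O_hat = U[:, :m] @ np.diag(np.sqrt(s[:m]))
Xi = np.hstack([O_hat[:-1, :], O_hat[1:, :]])

# The last m right singular vectors of Xi span its right null space.
nullbasis = np.linalg.svd(Xi)[2][-m:, :].T
A_q, E_q = nullbasis[:m, :], -nullbasis[m:, :]

# Standardize the pencil; mu = 0.5 is arbitrary and must keep the sum invertible.
R = np.linalg.inv(A_q + 0.5 * E_q)
A_t, E_t = R @ A_q, R @ E_q

# Random output matrix and least squares for the generalized state.
C_t = np.random.default_rng(0).standard_normal((1, m))
Gamma = np.vstack([C_t @ np.linalg.matrix_power(A_t, k)
                       @ np.linalg.matrix_power(E_t, 9 - k) for k in range(10)])
x_t, *_ = np.linalg.lstsq(Gamma, y, rcond=None)
print("squared residual:", np.linalg.norm(Gamma @ x_t - y) ** 2)
\end{verbatim}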

Conclusion

In this chapter the realization of one-dimensional singular and regular systems was discussed. Apart from serving as further reference material and a stepping stone for introducing the general two-dimensional and $n$-dimensional realization algorithms, we also introduced a novel approach to estimate descriptor systems.

The classical realization algorithm for descriptor systems relies heavily on rank tests and on separating the regular and singular dynamics of the system via a so-called column compression. Furthermore, its partial realization condition is more strict, as the so-called gap must also be determined. The extended algorithm of Kung has the main advantage that the Hankel matrix must only be rank deficient and no gap must be determined, reducing the overall problem size. In practice, eliminating the need to identify the gap will be most important for multi-dimensional systems, as the size of the involved (recursive) Hankel matrices grows exponentially with the dimension of the time series.