In this chapter we introduce three realization algorithms based on the algorithm of Kung, one for regular two-dimensional commuting state space models and two for two-dimensional commuting descriptor systems. All algorithms follow the same approach as in the one-dimensional case, except that multiple shifts are needed to estimate the full system dynamics.

We start by formulating the problem statement, followed by the description of three realization algorithms. Next the partial realization condition for two-dimensional commuting state space models and descriptor systems is derived.

The results for multi-dimensional systems are briefly described at the end of this chapter.

Problem statement

The identification problem for two-dimensional regular commuting state space models is defined as follows.

Given \(n_1\times n_2\) measurements \(y[k,l]\in\real^q\) generated by the unknown regular system of order \(m\)

\begin{equation} \begin{aligned} x[k+1,l] = & Ax[k,l] \\ x[k,l+1] = & Bx[k,l] \\ y[k,l] = & C x[k,l], \end{aligned} \end{equation}

Determine:

  • the order of the unknown system,
  • the system matrices \(A,B\in \real^{m\times m}\), output matrix \(C\in \real^{q\times m}\) and initial state \(x\) up to a change of basis.

The identification problem for commuting descriptor systems is defined as follows.

Given \(n_1\times n_2\) measurements \(y[k,l]\in\real^q\) generated by the unknown descriptor system of order \(m\)

\begin{equation} \begin{aligned} E x[k+1,l] = & A x[k,l] \\ F x[k,l+1] = & B x[k,l] \\ y[k,l] = & C x[k,l], \end{aligned} \end{equation}

Determine:

  • the order of the unknown system,
  • the system matrices \(A,E,B,F\in \real^{m\times m}\), output matrix \(C\in \real^{q\times m}\) and generalized state \(x\) up to a similarity transformation.

Past-Future and Left-Right matrices

In the algorithm of Kung presented in the previous chapter, we defined past and future matrices by respectively removing the last and first block row of the observability matrix. For a one-dimensional system that evolves in time, this naming convention makes sense as the past matrix contains `older' data than the future matrix, i.e.~the past matrix contains the time series data \(y[0]\) up to \(y[p-1]\) whereas the future matrix contains the data \(y[1]\) up to \(y[p]\), for some value of \(p\).

When working with two-dimensional time series we can extend this concept and define past, future, left and right data. In this context the naming convention can be misleading as it implies that the data must explicitly depend on time and space, which does not need to be the case. For now we stick to this naming convention when working with two-dimensional systems as it is intuitive to work with. At the end of this chapter when dealing with more general $n$-dimensional systems we will generalize the naming convention to a more abstract notation.

For some values of \(\mu\) and \(\nu\), indicating the order of the extended recursive observability matrix, we define four new matrices, namely the past, future, left and right matrices:

  • past matrix: \(\Gamma_{\underline{0|\mu},0|\nu}\) = \(\shiftpast{\robs_{0|\mu,0|\nu}}\) = \(\Gamma_{0|\mu-1,0|\nu}^{\mu,\nu}\),
  • future matrix: \(\Gamma_{\overline{0|\mu},0|\nu}\) = \(\shiftfuture{\robs_{0|\mu,0|\nu}}\) = \(\Gamma_{1|\mu,0|\nu}^{\mu,\nu}\),
  • left matrix: \(\Gamma_{0|\mu,\underline{0|\nu}}\) = \(\shiftleft{\robs_{0|\mu,0|\nu}}\) = \(\Gamma_{0|\mu,0|\nu-1}^{\mu,\nu}\),
  • right matrix: \(\Gamma_{0|\mu,\overline{0|\nu}}\) = \(\shiftright{\robs_{0|\mu,0|\nu}}\) = \(\Gamma_{0|\mu,1|\nu}^{\mu,\nu}\).

Two notations are introduced and are used interchangeably depending on the context. Every one of these matrices can be obtained by removing specific rows of the extended observability matrix \(\Gamma_{0|\mu,0|\nu}\). This can be done by using the selector matrices \(S\), introduced in Chapter~\ref{ch:2DS}. As an example, the future matrix \(\Gamma_{\overline{0|\mu},0|\nu}\) is equal to

\begin{equation} \Gamma_{\overline{0|\mu},0|\nu} = \begin{bmatrix} \Gamma_{\overline{0|\mu},0} \\ \Gamma_{\overline{0|\mu},1} \\ \vdots \\ \Gamma_{\overline{0|\mu},\nu} \\ \end{bmatrix} = \begin{bmatrix} \Gamma_{1|\mu,0} \\ \Gamma_{1|\mu,1} \\ \vdots \\ \Gamma_{1|\mu,\nu} \\ \end{bmatrix} = (S_{0|\nu}^{\nu} \kron S_{1|\mu}^{\mu} )\Gamma_{0|\mu,0|\nu}, \end{equation}

whereas the left matrix is equal to

\begin{equation} \Gamma_{0|\mu,\underline{0|\nu}} = \begin{bmatrix} \Gamma_{{0|\mu},0} \\ \Gamma_{{0|\mu},1} \\ \vdots \\ \Gamma_{{0|\mu},\nu-1} \end{bmatrix} = (S_{0|\nu-1}^{\nu} \kron S_{0|\mu}^{\mu} )\Gamma_{0|\mu,0|\nu}. \end{equation}
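The row-selection mechanism can be sketched numerically. In the sketch below all sizes and names are illustrative, and the selector matrices are written out explicitly with an extra Kronecker factor \(I_q\), since each block row has height \(q\):

```python
import numpy as np

# Sketch of block-row selection via selector matrices (illustrative sizes).
mu1, nu1, q, m = 3, 2, 2, 4            # mu+1 = 3 and nu+1 = 2 block rows
rng = np.random.default_rng(0)
Gamma = rng.standard_normal((nu1 * mu1 * q, m))  # stand-in for Gamma_{0|mu,0|nu}

def selector(rows, total):
    """Matrix that keeps the listed block rows out of `total`."""
    S = np.zeros((len(rows), total))
    S[np.arange(len(rows)), rows] = 1.0
    return S

S_keep_all = selector(range(nu1), nu1)       # S_{0|nu}^{nu}: keep every block
S_drop_first = selector(range(1, mu1), mu1)  # S_{1|mu}^{mu}: drop the first
Gamma_future = np.kron(S_keep_all, np.kron(S_drop_first, np.eye(q))) @ Gamma
```

The Kronecker ordering mirrors the row ordering of the extended observability matrix: the second index varies slowest, the first index next, and the \(q\) output rows fastest.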

The past, future, left and right matrices are related to each other via the dynamic equations of the model classes. In the next two sections we derive these relations, which are later used to calculate a system realization.

Relation for commuting state space model

The past and future matrices are respectively equal to \(\Gamma_{\underline{0|\mu},0|\nu}\) and \(\Gamma_{\overline{0|\mu},0|\nu}\). When multiplying the first matrix from the right by the system matrix \(A\) we get

\begin{equation} \Gamma_{\underline{0|\mu},0|\nu} A = \begin{bmatrix} \Gamma_{\underline{0|\mu},0}A \\ \Gamma_{\underline{0|\mu},1}A \\ \vdots \\ \Gamma_{\underline{0|\mu},\nu}A \\ \end{bmatrix} = \begin{bmatrix} \Gamma_{\overline{0|\mu},0} \\ \Gamma_{\overline{0|\mu},1} \\ \vdots \\ \Gamma_{\overline{0|\mu},\nu} \\ \end{bmatrix} = \Gamma_{\overline{0|\mu},0|\nu}. \end{equation}

This follows from the properties of the observability matrix for one-dimensional systems.

A similar result can be obtained for the left and right matrices, which are respectively given by \(\Gamma_{0|\mu,\underline{0|\nu}}\) and \(\Gamma_{0|\mu,\overline{0|\nu}}\). Multiplying the left matrix by \(B\) yields

\begin{equation} \Gamma_{0|\mu,\underline{0|\nu}}B = \begin{bmatrix} \Gamma_{0|\mu,0}B \\ \Gamma_{0|\mu,1}B \\ \vdots \\ \Gamma_{0|\mu,\nu-1}B \\ \end{bmatrix} = \begin{bmatrix} \Gamma_{0|\mu,1} \\ \Gamma_{0|\mu,2} \\ \vdots \\ \Gamma_{0|\mu,\nu} \\ \end{bmatrix} = \Gamma_{0|\mu,\overline{0|\nu}}. \end{equation}

This illustrates that multiplying the left matrix by \(B\) yields the right matrix.

Relation for commuting descriptor system

When working with descriptor systems, the relation between the past, future, left and right matrices is slightly different. In this section we briefly state these relations, as they are used when estimating the system matrices. To demonstrate them, we first look at the effect of post-multiplying the observability matrix by one of the system matrices. Recall that for the regular commuting matrix pencil \((A,E)\)

\begin{equation} \mathcal{A}_p^k A = \mathcal{A}_{p+1}^{k+1} \quad\text{and}\quad \mathcal{A}_p^k E = \mathcal{A}_{p+1}^{k}. \end{equation}

Applying these properties to the matrix \(\Gamma_{\underline{0|\mu},l}\) we get

\begin{equation} \Gamma_{\underline{0|\mu},l} A = \begin{bmatrix} C \mathcal{A}^0_{\mu} \mathcal{B}^l_{\nu}A \\ C \mathcal{A}^1_{\mu} \mathcal{B}^l_{\nu}A \\ \vdots \\ C \mathcal{A}^{\mu-1}_{\mu} \mathcal{B}^l_{\nu}A \end{bmatrix} = \begin{bmatrix} C \mathcal{A}^1_{\mu+1} \mathcal{B}^l_{\nu} \\ C \mathcal{A}^2_{\mu+1} \mathcal{B}^l_{\nu} \\ \vdots \\ C \mathcal{A}^{\mu}_{\mu+1} \mathcal{B}^l_{\nu} \end{bmatrix} = \begin{bmatrix} C \mathcal{A}^1_{\mu} \mathcal{B}^l_{\nu}E \\ C \mathcal{A}^2_{\mu} \mathcal{B}^l_{\nu}E \\ \vdots \\ C \mathcal{A}^{\mu}_{\mu} \mathcal{B}^l_{\nu}E \end{bmatrix} = \Gamma_{\overline{0|\mu},l} E. \end{equation}

In general, the matrix \(\Gamma_{\underline{0|\mu},0|\nu}\) contains the block matrices \(\Gamma_{\underline{0|\mu},l}\) for all \(0\leq l \leq \nu\) and we have that

\begin{equation} \Gamma_{\underline{0|\mu},0|\nu}A = \Gamma_{\overline{0|\mu},0|\nu}E. \end{equation}

A similar relation can be established between the left and right matrices. We have that

\begin{equation} \Gamma_{{0|\mu},\underline{0|\nu}}B = \Gamma_{{0|\mu},\overline{0|\nu}}F. \end{equation}

Both relations are used in the next section where we discuss realization of commuting descriptor systems.

Realization algorithm

In this section we derive three realization algorithms to find a suitable state space model or descriptor system that fits a given set of observations.

Regular systems

Consider the two-dimensional time series \(y[k,l]\in\real^q\) with \(n_1 \times n_2\) measurements, such that \(0\leq k \leq n_1-1\) and \(0\leq l \leq n_2-1\), generated by a commuting state space model of order \(m\). We formulate the identification algorithm with the goal of identifying the system as described in Definition~\ref{def:ident-2R}. The steps in the algorithm can be summarized as follows:

  • Create a rank deficient recursive Hankel matrix \(Y_{0|\mu-1,0|\nu-1}\) such that

    \begin{equation*} \rank{Y_{0|\mu-1,0|\nu-1}} = \rank{Y_{0|\mu-2,0|\nu-1}} = \rank{Y_{0|\mu-1,0|\nu-2}} = m. \end{equation*}

  • Factorize the recursive Hankel matrix as the product of an extended recursive observability matrix and an extended recursive state sequence matrix.

  • Calculate past, future, left and right observability matrices.

  • Estimate the system matrices via least squares.

  • Determine an output matrix and initial state.

In steps 1 and 2 we create and factorize a recursive Hankel matrix of order \((\mu,\nu)\); the size of this matrix is \((\mu \nu q )\times (n_1-\mu+1)(n_2-\nu+1)\). It is important that the rank of this matrix has stabilized, i.e.~that

\begin{equation} \rank{Y_{0|\mu-1,0|\nu-1}} = \rank{Y_{0|\mu-2,0|\nu-1}} = \rank{Y_{0|\mu-1,0|\nu-2}} = m. \end{equation}

This condition is discussed in Section~\ref{sec:partial-real-2d}.

We now calculate the row and column spaces of the recursive Hankel matrix by using the singular value decomposition

\begin{equation} Y_{0|\mu-1,0|\nu-1} = U \Sigma V^T = \begin{bmatrix} U_0 & U_1 \end{bmatrix} \begin{bmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_0^T \\ V_1^T \end{bmatrix} = U_0 \Sigma_0 V_0^T. \end{equation}

As shown in the previous chapters, the row and column space of a recursive Hankel matrix are respectively spanned by an extended recursive state sequence and extended recursive observability matrix. We define two new matrices

\begin{equation} \hat{\robs}_{0|\mu-1,0|\nu-1} = U_0 \Sigma_0^{1/2}, \hat{\rX}_{{0|n_1-\mu,0|n_2-\nu} } = \Sigma_0^{1/2} V_0^T . \end{equation}

The columns of \(\hat{\robs}_{0|\mu-1,0|\nu-1}\) now span the same space as the columns of \({\robs}_{0|\mu-1,0|\nu-1}\); in the same way, the rows of \(\hat{\rX}_{0|n_1-\mu,0|n_2-\nu}\) span the same space as the rows of \({\rX}_{0|n_1-\mu,0|n_2-\nu}\). We can therefore write

\begin{equation} \hat{\robs}_{0|\mu-1,0|\nu-1}P^{-1} = {\robs}_{0|\mu-1,0|\nu-1},~P\hat{\rX}_{0|n_1-\mu,0|n_2-\nu} = {\rX}_{0|n_1-\mu,0|n_2-\nu}, \end{equation}

for some unknown invertible matrix \(P\). As shown in the previous section, the past and future matrices are related, as are the left and right matrices. Both system matrices can be estimated by solving

\begin{equation}\label{sec:R2D-real-2DR-eqn1} \hat{A} = \hat{\Gamma}_{\underline{0|\mu-1},{0|\nu-1}}^{\dagger}{\hat{\Gamma}}_{\overline{0|\mu-1},{0|\nu-1}} \end{equation}

\begin{equation}\label{sec:R2D-real-2DR-eqn2} \hat{B} = \hat{\Gamma}_{{0|\mu-1},\underline{0|\nu-1}}^{\dagger}{\hat{\Gamma}}_{{0|\mu-1},\overline{0|\nu-1}}, \end{equation}

where \(\dagger\) denotes the Moore--Penrose pseudo-inverse of the matrix. Note that the least squares solution is unique only when the matrix has full column rank. In Section \ref{sec:partial-real-2d} the partial realization condition is derived, which ensures that both matrices \(\hat{\Gamma}_{\underline{0|\mu-1},{0|\nu-1}}\) and \(\hat{\Gamma}_{{0|\mu-1},\underline{0|\nu-1}}\) have full rank. The matrices \(\hat{A}\) and \(\hat{B}\) are estimates of the system matrices. Both matrices satisfy the following relation

\begin{equation} \hat{A} = P^{-1}AP,~\hat{B} = P^{-1}BP. \end{equation}

The output matrix of the system can be estimated as the first \(q\) rows of \(\hat{\Gamma}_{0|\nu-1,0|\mu-1}\). This matrix is denoted by \(\hat{C} = CP\). In the same way, the initial state is equal to the first column of \(\hat{\rX}_{0|n_1-\mu,0|n_2-\nu}\) which is \(\hat{x} = P^{-1}x\). The estimated state space model is now equal to

\begin{equation} \begin{aligned} \hat{x}[k+1,l] = & \hA\hat{x}[k,l] \\ \hat{x}[k,l+1] = & \hB\hat{x}[k,l] \\ y[k,l] = & \hC \hat{x}[k,l]. \end{aligned} \end{equation}

It is clear that the estimated system is well-posed as the system matrices commute. In the next section we present two more algorithms to calculate a suitable realization for an unknown descriptor system based on observed data.
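The five steps of the regular algorithm can be condensed into a short numerical sketch. All sizes, seeds and names below are illustrative; the pair \(A,B\) is taken diagonal so that it commutes and the data come from a well-posed model, and since the estimates are only determined up to the change of basis \(P\), the check is on eigenvalues.

```python
import numpy as np

# Illustrative end-to-end sketch of the regular 2-D realization algorithm.
rng = np.random.default_rng(1)
m, q, n1, n2, mu, nu = 3, 2, 8, 8, 3, 3
A = np.diag([0.9, 0.7, -0.5])          # diagonal matrices commute,
B = np.diag([0.4, -0.8, 0.6])          # so the model is well-posed
C = rng.standard_normal((q, m))
x0 = rng.standard_normal(m)

# Data: y[k,l] = C A^k B^l x0.
y = np.array([[C @ np.linalg.matrix_power(A, k)
                 @ np.linalg.matrix_power(B, l) @ x0
               for l in range(n2)] for k in range(n1)])

# Recursive Hankel matrix: rows ordered (j outer, i inner), columns by (a, b).
cols = []
for b in range(n2 - nu + 1):
    for a in range(n1 - mu + 1):
        cols.append(np.concatenate([y[i + a, j + b]
                                    for j in range(nu) for i in range(mu)]))
Y = np.stack(cols, axis=1)

U, s, Vt = np.linalg.svd(Y)
mh = int(np.sum(s > 1e-8 * s[0]))      # estimated order
Gamma = U[:, :mh] * np.sqrt(s[:mh])    # extended recursive observability

G = Gamma.reshape(nu, mu, q, mh)
past, future = G[:, :-1].reshape(-1, mh), G[:, 1:].reshape(-1, mh)
left, right = G[:-1].reshape(-1, mh), G[1:].reshape(-1, mh)
A_hat = np.linalg.lstsq(past, future, rcond=None)[0]   # past * A = future
B_hat = np.linalg.lstsq(left, right, rcond=None)[0]    # left * B = right
```

The estimated pair \((\hat A,\hat B)\) is similar to \((A,B)\), so its eigenvalues match those of the true system and the two estimates commute up to numerical precision.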

Commuting descriptor systems

Two algorithms to identify descriptor systems are presented in this section. In the first algorithm both the singular and regular parts of the descriptor system are estimated in one step. In the second algorithm we separate the singular and regular parts of the dynamics and identify them in two steps; this second algorithm requires the calculation of the so-called `gap'.

Extended algorithm of Kung for commuting descriptor systems

The outline of the algorithm is as follows:

  • Create a rank deficient recursive Hankel matrix \(Y_{0|\mu-1,0|\nu-1}\) such that

    \begin{equation*} \rank{Y_{0|\mu-1,0|\nu-1}} = \rank{Y_{0|\mu-2,0|\nu-1}} = \rank{Y_{0|\mu-1,0|\nu-2}} = m. \end{equation*}

  • Factorize the recursive Hankel matrix as the product of an extended recursive observability matrix and an extended recursive state sequence matrix.

  • Calculate past, future, left and right observability matrices.

  • Create two new matrices, by concatenating the columns of the past and future matrices and the left and right matrices.

  • The right null space of both matrices is spanned by concatenated system matrices.

  • Place the system matrices in the canonical form.

  • Determine an output matrix and initial state.

In the first step of the algorithm a rank deficient recursive Hankel matrix \(Y_{0|\mu-1,0|\nu-1}\) is created and factorized as the product of an extended recursive observability matrix and extended recursive state sequence matrix. These calculations are done by computing the singular value decomposition of the recursive Hankel matrix

\begin{equation} Y_{0|\mu-1,0|\nu-1} = U \Sigma V^T = \begin{bmatrix} U_0 & U_1 \end{bmatrix} \begin{bmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_0^T \\ V_1^T \end{bmatrix} = U_0 \Sigma_0 V_0^T. \end{equation}

We define two new matrices, which are the estimates of the extended recursive observability and extended recursive state sequence matrices

\begin{equation} \hat{\robs}_{0|\mu-1,0|\nu-1} = U_0 \Sigma_0^{1/2}, \hat{\rX}_{0|n_1-\mu,0|n_2-\nu} = \Sigma_0^{1/2} V_0^T . \end{equation}

As for the regular realization algorithm, both matrices respectively span the same column and row space as the extended observability and extended state sequence matrices, but in a different basis, such that

\begin{equation} \hat{\robs}_{{0|\mu-1},0|\nu-1}P^{-1} = {\robs}_{{0|\mu-1},0|\nu-1},~P\hat{\rX}_{0|n_1-\mu,0|n_2-\nu} = {\rX}_{0|n_1-\mu,0|n_2-\nu}, \end{equation}

for some unknown matrix \(P\) of full rank. We now calculate the past, future, left and right matrices as defined above, by removing specific rows of the extended observability matrix. In Section~\ref{sec:relation-descriptor} we have shown that

\begin{equation}\label{eqn:past-future-est} \hat{\robs}_{\past{0|\mu-1},0|\nu-1} \hat{A} = \hat{\robs}_{\future{0|\mu-1},0|\nu-1} \hat{E}, \end{equation}

with

\begin{equation} \hat{A} = P^{-1} A P,~\hat{E} = P^{-1} E P. \end{equation}

Equation~\eqref{eqn:past-future-est} can be rewritten as

\begin{equation}\label{eqn:past-future-nullspace} \begin{bmatrix} \hat{\robs}_{\past{0|\mu-1},0|\nu-1} & \hat{\robs}_{\future{0|\mu-1},0|\nu-1} \end{bmatrix} \begin{bmatrix} \hat{A} \\ -\hat{E} \end{bmatrix} = \Xi_{1} \begin{bmatrix} \hat{A} \\ -\hat{E} \end{bmatrix} = 0. \end{equation}

We introduce \(\Xi_1\) as a shorthand notation for the concatenated past and future matrices. Equation~\eqref{eqn:past-future-nullspace} demonstrates that the matrix

\begin{equation}\label{eqn:concat-A-E} \begin{bmatrix} \hat{A} \\ -\hat{E} \end{bmatrix} \end{equation}

lies in the right null space of \(\Xi_1\). Due to the regularity of the matrix pencil \((A,E)\), which implies the regularity of \((\hA,\hE)\), we also know that the matrix in Equation~\eqref{eqn:concat-A-E} has full rank. We can thereby conclude that the nullity of \(\Xi_1\) is at least equal to \(m\). Recall that \(\Xi_1\) is a known matrix, and that we want to find a suitable realization for the matrices \(\hA\) and \(\hE\). This is done by calculating the right null space of \(\Xi_1\), via, for example, the singular value decomposition.

Apart from proving that the null space of \(\Xi_1\) is at least \(m\)-dimensional, we also need to prove that the null space is not larger than \(m\)-dimensional. Otherwise, it is not possible to estimate the system matrices in this way. In Section~\ref{sec:partial-real-2d} we derive the necessary condition such that this is the case.

Assume for now that this condition is met and that the entire right null space of \(\Xi_1\) is spanned by the matrix \(N\), such that

\begin{equation} \Xi_1 N = 0. \end{equation}

The number of columns of \(N\) is equal to the nullity of \(\Xi_1\). If and only if the number of columns of \(N\) is equal to \(m\) we have that

\begin{equation} N R^{-1} = \begin{bmatrix} \hA \\ -\hE \end{bmatrix}~\text{or}~N = \begin{bmatrix} \hA R \\ -\hE R \end{bmatrix} \end{equation}

with \(R \in \real^{m\times m}\) an invertible matrix. The first \(m\) rows of \(N\) are thus equal to \(\hA R\) whereas the last \(m\) rows are equal to \(-\hE R\). The obtained pencil \((\hA R, \hE R)\) has the same eigenvalues as \((A,E)\), but it is not guaranteed that it is a commuting pencil.
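A minimal sketch of this null-space step, on illustrative random data (we assume \(A\) is invertible so that a consistent past matrix can be constructed, and \(\Gamma\) has full column rank). Since the recovered pencil is only determined up to the right factor \(R\), the check compares generalized eigenvalues:

```python
import numpy as np

# Recover the pencil (A, E), up to a right factor R, from the null space of
# Xi_1 = [Gamma_past, Gamma_future], where Gamma_past A = Gamma_future E.
rng = np.random.default_rng(2)
m, rows = 3, 10
A = rng.standard_normal((m, m))
E = rng.standard_normal((m, m))
Gamma_future = rng.standard_normal((rows, m))
Gamma_past = Gamma_future @ E @ np.linalg.inv(A)   # enforces the relation

Xi1 = np.hstack([Gamma_past, Gamma_future])
_, s, Vt = np.linalg.svd(Xi1)
nullity = Xi1.shape[1] - int(np.sum(s > 1e-10 * s[0]))
N = Vt[Xi1.shape[1] - nullity:].T     # orthonormal basis of the null space
A_R, E_R = N[:m], -N[m:]              # equals (A R, E R) for some invertible R
```

Because \(\det(AR - \lambda ER) = \det(A - \lambda E)\det(R)\), the recovered pencil has the same eigenvalues as \((A,E)\).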

Similarly, the matrices \(\hB\) and \(\hF\) can be estimated by calculating the right null space of the concatenated left and right matrices. We have that

\begin{equation}\label{eqn:left-right-nullspace} \begin{bmatrix} \hat{\robs}_{{0|\mu-1},\past{0|\nu-1}} & \hat{\robs}_{{0|\mu-1},\future{0|\nu-1}} \end{bmatrix} \begin{bmatrix} \hat{B} \\ -\hat{F} \end{bmatrix} = \Xi_{2} \begin{bmatrix} \hat{B} \\ -\hat{F} \end{bmatrix} = 0. \end{equation}

Assume that the right null space of \(\Xi_2\) is spanned by \(M\). In this case we write

\begin{equation} M S^{-1} = \begin{bmatrix} \hB \\ -\hF \end{bmatrix}~\text{or}~M = \begin{bmatrix} \hB S \\ -\hF S \end{bmatrix}, \end{equation}

with \(S\in\real^{m\times m}\) invertible. This matrix exists if and only if the nullity of \(\Xi_2\) is exactly equal to \(m\), which holds under the assumption that the partial realization condition is satisfied (see Section~\ref{sec:partial-real-2d}).

At this point we have obtained two matrix pencils \((\hA R, \hE R)\) and \((\hB S, \hF S)\). Because the original system with system matrices \(A\), \(E\), \(B\) and \(F\) is well-posed, it follows that the system with system matrices \(\hA R\), \(\hE R\), \(\hB S\) and \(\hF S\) is also well-posed (Lemma~\ref{lem:2DS-manipulations}). According to Theorem~\ref{thm:2S-all-commuting} from Chapter~\ref{ch:2DS} we know that there exist three matrices \(P\), \(Q_1\) and \(Q_2\) such that all four matrices

\begin{equation} \begin{aligned} Q_1\hA R P = & \tilde{A}, & Q_1\hE R P = & \tilde{E} \\ Q_2\hB S P = & \tilde{B}, & Q_2\hF S P = & \tilde{F} \\ \end{aligned} \end{equation}

form a commuting family. Calculate these three matrices as described in Section~\ref{sec:descriptor2d} to obtain the following dynamic system

\begin{equation} \begin{aligned} \tilde{E}x[k+1,l] = \tilde{A}x[k,l] \\ \tilde{F}x[k,l+1] = \tilde{B}x[k,l] \\ \end{aligned} \end{equation}

where all four system matrices commute.

We still need to estimate the generalized state vector and the output matrix of this system. The output matrix can be chosen freely, taking into account that the estimated system must be observable. Denote this matrix by \(\tilde{C}\) and introduce two new evolution operators

\begin{equation} \tilde{\mathcal{A}}_{\mu-1}^k = \tilde{A}^k\tilde{E}^{\mu-1-k}~\text{and}~ \tilde{\mathcal{B}}_{\nu-1}^l = \tilde{B}^l\tilde{F}^{\nu-1-l}. \end{equation}

To determine the generalized state vector we have to solve the following linear problem

\begin{equation} \begin{bmatrix} y[0,0] \\ y[1,0] \\ \vdots \\ y[\mu-1,0] \\ \hline y[0,1] \\ \vdots \\ y[\mu-1,1] \\ \hline \vdots \\ \hline y[0,\nu-1] \\ \vdots \\ y[\mu-1,\nu-1] \\ \end{bmatrix} = \begin{bmatrix} \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^0\tilde{\mathcal{B}}_{\nu-1}^0 \\ \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^1\tilde{\mathcal{B}}_{\nu-1}^0 \\ \vdots \\ \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^{\mu-1}\tilde{\mathcal{B}}_{\nu-1}^{0} \\ \hline \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^0\tilde{\mathcal{B}}_{\nu-1}^{1} \\ \vdots \\ \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^{\mu-1}\tilde{\mathcal{B}}_{\nu-1}^{1} \\ \hline \vdots \\ \hline \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^0\tilde{\mathcal{B}}_{\nu-1}^{\nu-1} \\ \vdots \\ \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^{\mu-1}\tilde{\mathcal{B}}_{\nu-1}^{\nu-1} \\ \end{bmatrix} \tilde{x}. \end{equation}

This system of equations has a unique solution if and only if \(\tilde{C}\) was chosen correctly. The final calculated system realization is then equal to

\begin{equation} \begin{aligned} \tilde{E}\tilde{x}[k+1,l] = \tilde{A}\tilde{x}[k,l] \\ \tilde{F}\tilde{x}[k,l+1] = \tilde{B}\tilde{x}[k,l] \\ y[k,l] = \tilde{C} \tilde{x}[k,l], \end{aligned} \end{equation}

and the output of the system is

\begin{equation} y[k,l] = \tilde{C}\tilde{\mathcal{A}}_{\mu-1}^k \tilde{\mathcal{B}}_{\nu-1}^l \tilde{x}, \end{equation}

for \(0\leq k \leq \mu-1\) and \(0\leq l \leq \nu-1\).

The presented algorithm is especially useful in situations where we are only interested in the system matrices and the eigenvalues of the associated matrix pencils. In practice, calculating the transformations that ensure all four system matrices commute is not trivial. The next algorithm is more suitable in that case.

Mind the gap

The second algorithm presented in this thesis to calculate a realization of a commuting descriptor system is based on the classical algorithm of Kung for one-dimensional descriptor systems. When estimating one-dimensional descriptor systems we had to separate the regular and singular parts of the dynamics by applying a column compression to the estimated extended observability matrix; this procedure is called `mind the gap'. The benefit of this algorithm is that the estimated system realization is placed in the canonical form described by

\begin{equation}\label{eqn:R2D-descriptor-system} \begin{aligned} E x[k+1,l] = & A x[k,l], \\ F x[k,l+1] = & B x[k,l], \\ y[k,l] = & C x[k,l], \end{aligned} \end{equation}

with \((A,E)\) and \((B,F)\), two regular pencils, and

\begin{equation} E = \begin{bmatrix} \eye & & & \\ & \eye & & \\ & & E_{1} & \\ & & & E_{2} \\ \end{bmatrix}, A = \begin{bmatrix} A_{1}& & & \\ & A_{2} & & \\ & & \eye & \\ & & & \eye \\ \end{bmatrix}, \end{equation}

\begin{equation} F = \begin{bmatrix} \eye & & & \\ & F_{1} & & \\ & & \eye & \\ & & & F_{2} \\ \end{bmatrix}, B = \begin{bmatrix} B_{1}& & & \\ & \eye & & \\ & & B_{2} & \\ & & & \eye \\ \end{bmatrix}, \end{equation}

\begin{equation} C = \begin{bmatrix} C_{RR} & C_{RS} & C_{SR} & C_{SS} \end{bmatrix}, \end{equation}

where the matrices \(E_{1}\), \(E_{2}\), \(F_{1}\), and \(F_{2}\) are nilpotent, and

  • \(A_1\) and \(B_1\) commute,
  • \(A_2\) and \(F_1\) commute,
  • \(E_1\) and \(B_2\) commute,
  • \(E_2\) and \(F_2\) commute.

In the case of two-dimensional commuting descriptor systems we have shown that the solutions/eigenmodes of the system can be classified into four kinds:

  • strictly regular solutions (RR),
  • strictly singular solutions (SS),
  • solutions that are regular in the first independent variable and singular in the second (SR) and
  • solutions that are singular in the first independent variable and regular in the second (RS).

The realization algorithm goes as follows:

  • Create a rank deficient recursive Hankel matrix \(Y_{0|\mu-1,0|\nu-1}\) such that

    \begin{equation*} \rank{Y_{0|\mu-1,0|\nu-1}} = \rank{Y_{0|\mu-2,0|\nu-1}} = \rank{Y_{0|\mu-1,0|\nu-2}} = m. \end{equation*}

  • Factorize the recursive Hankel matrix as the product of an extended recursive observability matrix and an extended recursive state sequence matrix.

  • Calculate the gaps of the recursive observability matrix and apply the inverse transformation to the state sequence matrix.

  • Split the recursive observability matrix and recursive state sequence matrix into four matrices, each one associated with a different kind of solution (\(RR\), \(RS\), \(SR\) and \(SS\)).

  • Apply the classical algorithm of Kung to each of these 4 sub-problems.

  • Combine the 4 separate results into a single two-dimensional descriptor system.

The first step of the algorithm, as per usual, is to factorize a rank deficient recursive Hankel matrix \(Y_{0|\mu-1,0|\nu-1}\) as the product of an extended recursive observability matrix and extended recursive state sequence matrix. As before, these calculations are done by computing the singular value decomposition of the recursive Hankel matrix

\begin{equation} Y_{0|\mu-1,0|\nu-1} = U \Sigma V^T = \begin{bmatrix} U_0 & U_1 \end{bmatrix} \begin{bmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_0^T \\ V_1^T \end{bmatrix} = U_0 \Sigma_0 V_0^T. \end{equation}

We define two new matrices, which are the estimates of the extended recursive observability and extended recursive state sequence matrices

\begin{equation} \hat{\robs}_{0|\mu-1,0|\nu-1} = U_0 \Sigma_0^{1/2}, \hat{\rX}_{0|n_1-\mu,0|n_2-\nu} = \Sigma_0^{1/2} V_0^T . \end{equation}

As for the regular realization algorithm, both matrices respectively span the same column and row space as the extended observability and extended state sequence matrices, but in a different basis, such that

\begin{equation} \hat{\robs}_{{0|\mu-1},0|\nu-1}P^{-1} = {\robs}_{{0|\mu-1},0|\nu-1},~P\hat{\rX}_{0|n_1-\mu,0|n_2-\nu} = {\rX}_{0|n_1-\mu,0|n_2-\nu}, \end{equation}

for some unknown matrix \(P\) of full rank. In Section \ref{sec:2DS-mind-the-gap} we have demonstrated that the extended observability matrix can be transformed to a matrix

\begin{equation} \begin{bmatrix} \robs_{RR} & \robs_{SR} & \robs_{RS} & \robs_{SS} \end{bmatrix}, \end{equation}

by using a column compression and selector matrices. In this basis, the observability matrix is explicitly written as

\begin{equation}\label{eqn:2DS-obs-wcf-full-2Dreal} \resizebox{.88\hsize}{!}{\( \begin{bmatrix} C_{RR} & {C_{SR}E_{{1}}^{\mu-1}} & {C_{RS}F_{{1}}^{\nu-1}} & {C_{SS}E_{{2}}^{\mu-1}F_{{2}}^{\nu-1}} \\ C_{RR}A_{{1}} & C_{SR}E_{{1}}^{\mu-2} & C_{RS}A_{{2}}F_{{1}}^{\nu-1} & C_{SS}E_{{2}}^{\mu-2}F_{{2}}^{\nu-1} \\ \vdots & \vdots & \vdots & \vdots \\ {C_{RR}A_{{1}}^{\mu-1}} & {C_{SR}} & {C_{RS}A_{{2}}^{\mu-1}F_{{1}}^{\nu-1}} & {C_{SS}F_{{2}}^{\nu-1}} \\ \hline C_{RR}B_{1} & {C_{SR}E_{{1}}^{\mu-1}}B_{2} & {C_{RS}F_{{1}}^{\nu-2}} & {C_{SS}E_{{2}}^{\mu-1}F_{{2}}^{\nu-2}} \\ C_{RR}A_{{1}}B_{1} & C_{SR}E_{{1}}^{\mu-2}B_{2} & C_{RS}A_{{2}}F_{{1}}^{\nu-2} & C_{SS}E_{{2}}^{\mu-2}F_{{2}}^{\nu-2} \\ \vdots & \vdots & \vdots & \vdots \\ {C_{RR}A_{{1}}^{\mu-1}}B_{1} & {C_{SR}}B_{2} & {C_{RS}A_{{2}}^{\mu-1}F_{{1}}^{\nu-2}} & {C_{SS}F_{{2}}^{\nu-2}} \\ \hline \vdots & \vdots & \vdots & \vdots \\ \hline C_{RR}B_{1}^{\nu-1} & {C_{SR}E_{{1}}^{\mu-1}}B_{2}^{\nu-1} & C_{RS} & {C_{SS}E_{{2}}^{\mu-1}} \\ C_{RR}A_{{1}}B_{1}^{\nu-1} & C_{SR}E_{{1}}^{\mu-2}B_{2}^{\nu-1} & C_{RS}A_{{2}} & C_{SS}E_{{2}}^{\mu-2} \\ \vdots & \vdots & \vdots & \vdots \\ {C_{RR}A_{{1}}^{\mu-1}}B_{1}^{\nu-1} & {C_{SR}}B_{2}^{\nu-1} & {C_{RS}A_{{2}}^{\mu-1}} & C_{SS} \end{bmatrix}. \)} \end{equation}

Each of the four block columns is the observability matrix of a sub-system that occurs in the canonical form of the two-dimensional commuting descriptor system. These four systems are shown in Table~\ref{tbl:overview-of-partial-moddels} from Chapter~\ref{ch:2DS}. We will now briefly discuss how all sub-systems can be estimated separately from each other.

\paragraph{Estimating the $RR$-sub-system.} To estimate the $RR$-sub-system of the state space model the classical realization algorithm for two-dimensional commuting state space models is applied to the matrix \(\robs_{RR}\). This matrix has full rank as \(\hat{\robs}_{0|\mu-1,0|\nu-1}\) also has full rank. The classical realization algorithm for regular systems can be used to estimate \(A_1\), \(B_1\), \(C_{RR}\) and \(x_{RR}\). Both system matrices can be obtained by solving the following two equations

\begin{equation} \shiftpast{\robs_{RR}}A_1 = \shiftfuture{\robs_{RR}},~ \shiftleft{\robs_{RR}}B_1 = \shiftright{\robs_{RR}}. \end{equation}

\paragraph{Estimating the $SR$-sub-system and the $RS$-sub-systems.} In a similar way as before, we can estimate the system parameters \(E_1\), \(B_2\) and \(C_{SR}\) of the $SR$-sub-system. When estimating \(E_1\), the shift has to be reversed, as the dynamics run backwards. To obtain both system matrices we solve

\begin{equation} \shiftfuture{\robs_{SR}}E_1 = \shiftpast{\robs_{SR}},~ \shiftleft{\robs_{SR}}B_2 = \shiftright{\robs_{SR}}. \end{equation}

In a similar way, we can estimate the system dynamics of the $RS$-sub-system. The system matrices \(A_2\) and \(F_1\) are estimated by solving

\begin{equation} \shiftpast{\robs_{RS}}A_2 = \shiftfuture{\robs_{RS}},~ \shiftright{\robs_{RS}}F_1 = \shiftleft{\robs_{RS}}. \end{equation}

\paragraph{Estimating the $SS$-sub-system.} The $SS$-sub-system is estimated by performing two backward shifts. Both system matrices are obtained by solving

\begin{equation} \shiftfuture{\robs_{SS}}E_2 = \shiftpast{\robs_{SS}},~ \shiftright{\robs_{SS}}F_2 = \shiftleft{\robs_{SS}}. \end{equation}
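The reversed-shift solves can be illustrated on a toy nilpotent example (sizes and names are ours, purely illustrative): because the block rows carry decreasing powers of the nilpotent matrix, the roles of the shifted matrices are exchanged in the least squares problem.

```python
import numpy as np

# Toy reversed-shift estimate: the block rows are C E1^(mu-1-k), so shifting
# forward in k lowers the power and future * E1 = past (backward dynamics).
rng = np.random.default_rng(3)
m, q, mu = 3, 2, 4
E1 = np.zeros((m, m))
E1[0, 1] = E1[1, 2] = 1.0             # nilpotent single Jordan block
C = rng.standard_normal((q, m))

blocks = [C @ np.linalg.matrix_power(E1, mu - 1 - k) for k in range(mu)]
Gamma = np.vstack(blocks)
past, future = Gamma[:-q], Gamma[q:]  # drop last / first block row
E1_hat = np.linalg.lstsq(future, past, rcond=None)[0]
```

Note that the forward solve (regressing the future on the past) would fail here, since the past matrix loses rank when the highest nilpotent power vanishes.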

Partial realization condition

The partial realization condition for the presented algorithms is the condition that guarantees that a solution exists. When calculating a system realization for a commuting state space model, a system of equations must be solved. The partial realization condition ensures that this system has a unique solution.

Given \(n_1\times n_2\) measurements \(y[k,l]\in\real^q\) generated by the well-posed unknown deterministic system of order \(m\)

\begin{equation}\label{eqn:cpchsnthhhtstkpdgyx} \begin{aligned} E x[k+1,l] = & A x[k,l] \\ F x[k,l+1] = & B x[k,l] \\ y[k,l] = & C x[k,l], \end{aligned} \end{equation}

where \((A,E)\) and \((B,F)\) are both regular matrix pencils and \(E\) and \(F\) can be either singular or regular. The system can be identified by starting from the Hankel matrix \(Y_{0|\mu-1,0|\nu-1}\) if

\begin{equation} \rank{Y_{0|\mu-1,0|\nu-1}} = \rank{Y_{0|\mu-2,0|\nu-1}} = \rank{Y_{0|\mu-1,0|\nu-2}} = m. \end{equation}

When both matrices \(E\) and \(F\) are of full rank, the system in Equation \eqref{eqn:cpchsnthhhtstkpdgyx} can be transformed to the commuting state space model

\begin{equation} \begin{aligned} x[k+1,l] = & E^{-1}A x[k,l] \\ x[k,l+1] = & F^{-1}B x[k,l] \\ y[k,l] = & C x[k,l]. \end{aligned} \end{equation}

The commutativity of \(E^{-1}A\) and \(F^{-1}B\) follows from the well-posedness of \eqref{eqn:cpchsnthhhtstkpdgyx}. We will now prove this theorem for the three presented realization algorithms.
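This reduction is easy to check numerically. In the sketch below the four matrices are drawn from a common eigenbasis, which is one simple way to construct a well-posed example (all matrices are illustrative):

```python
import numpy as np

# With E and F invertible, the descriptor model reduces to the regular model
# x[k+1,l] = (E^-1 A) x[k,l], x[k,l+1] = (F^-1 B) x[k,l]; well-posedness
# then forces E^-1 A and F^-1 B to commute.
rng = np.random.default_rng(4)
m = 3
V = rng.standard_normal((m, m))       # shared eigenbasis (assumed invertible)
Vi = np.linalg.inv(V)
A = V @ np.diag([0.9, 0.5, -0.3]) @ Vi
E = V @ np.diag([1.0, 2.0, 0.5]) @ Vi
B = V @ np.diag([0.2, -0.7, 0.4]) @ Vi
F = V @ np.diag([1.5, 1.0, 3.0]) @ Vi

A_reg = np.linalg.solve(E, A)         # E^-1 A
B_reg = np.linalg.solve(F, B)         # F^-1 B
commutator = A_reg @ B_reg - B_reg @ A_reg
```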

Regular commuting state space models

In the algorithm of Kung for two-dimensional commuting state space models presented in Section~\ref{sec:R2D-real-2DR}, both system matrices are estimated in Equations~\eqref{sec:R2D-real-2DR-eqn1} and \eqref{sec:R2D-real-2DR-eqn2}. In order for these systems of equations to have a unique solution, both the past and left matrices \(\hat{\Gamma}_{\underline{0|\mu-1},{0|\nu-1}}\) and \(\hat{\Gamma}_{{0|\mu-1},\underline{0|\nu-1}}\) must be of full rank. The past and left matrices are respectively equal to

\begin{equation} \hat{\Gamma}_{{0|\nu-2},{0|\mu-1}}~\text{and}~\hat{\Gamma}_{{0|\nu-1},{0|\mu-2}} \end{equation}

which span the same column space, and therefore have the same rank as

\begin{equation} {\Gamma}_{{0|\nu-2},{0|\mu-1}}~\text{and}~{\Gamma}_{{0|\nu-1},{0|\mu-2}}. \end{equation}

Both matrices are of full rank if the condition in Theorem \ref{thm:prc-2D} is satisfied.
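As a sketch of how the rank condition of Theorem \ref{thm:prc-2D} can be checked numerically, the snippet below simulates a small commuting state space model and compares the ranks of the three recursive Hankel matrices; the helper `recursive_hankel` is a hypothetical implementation assuming scalar output and rows indexed by the two shift indices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, mu, nu = 3, 4, 4
n1 = n2 = 8

# Commuting (A, B) from a shared eigenbasis; scalar output (q = 1).
T = rng.standard_normal((m, m))
A = T @ np.diag(rng.standard_normal(m)) @ np.linalg.inv(T)
B = T @ np.diag(rng.standard_normal(m)) @ np.linalg.inv(T)
C = rng.standard_normal((1, m))
x0 = rng.standard_normal(m)

# Simulate y[k, l] = C A^k B^l x0.
y = np.empty((n1, n2))
for k in range(n1):
    for l in range(n2):
        y[k, l] = (C @ np.linalg.matrix_power(A, k)
                     @ np.linalg.matrix_power(B, l) @ x0).item()

def recursive_hankel(y, mu, nu):
    """Recursive Hankel matrix: rows indexed by the shifts (a, b),
    columns by the starting offsets (c, d)."""
    n1, n2 = y.shape
    return np.array([[y[a + c, b + d]
                      for c in range(n1 - mu + 1)
                      for d in range(n2 - nu + 1)]
                     for a in range(mu) for b in range(nu)])

r = np.linalg.matrix_rank(recursive_hankel(y, mu, nu))
r1 = np.linalg.matrix_rank(recursive_hankel(y, mu - 1, nu))
r2 = np.linalg.matrix_rank(recursive_hankel(y, mu, nu - 1))
print(r, r1, r2)  # all three equal m for generic data
```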

Descriptor systems

The analysis of the partial realization condition for descriptor systems is carried out in the Weierstrass canonical form for two-dimensional descriptor systems, as shown in Equation~\eqref{eqn:R2D-descriptor-system}. In order to prove the partial realization condition, we first prove the following lemma:

Define two matrices

\begin{equation} \Xi_1 = \begin{bmatrix} \hat{\robs}_{\past{0|\mu-1},0|\nu-1} & \hat{\robs}_{\future{0|\mu-1},0|\nu-1} \end{bmatrix} \end{equation}

and

\begin{equation} \Xi_2 = \begin{bmatrix} \hat{\robs}_{{0|\mu-1},\past{0|\nu-1}} & \hat{\robs}_{{0|\mu-1},\future{0|\nu-1}} \end{bmatrix} \end{equation}

Both matrices have rank \(m\) if

\begin{equation} \rank{\robs_{0|\mu-1,0|\nu-1}} = \rank{\robs_{0|\mu-2,0|\nu-1}}= \rank{\robs_{0|\mu-1,0|\nu-2}} = m. \end{equation}

To prove this lemma, we analyze the descriptor system in the canonical form. We have already shown that the nullity of both \(\Xi_1\) and \(\Xi_2\) is at least \(m\) by demonstrating that the null space of both matrices is spanned by two vertically concatenated matrices of regular matrix pencils. Because both matrices \(\Xi_1\) and \(\Xi_2\) have \(2m\) columns, the rank-nullity theorem then shows that

\begin{equation}\label{eqn:thsaotnhacv} \rank{\Xi_1} \leq m~\text{and}~\rank{\Xi_2}\leq m. \end{equation}

To prove the rank equality presented in Lemma \ref{lem:2Dreal-lem-lem}, we only have to prove that

\begin{equation}\label{eqn:thsaotnhacv2} \rank{\Xi_1} \geq m~\text{and}~\rank{\Xi_2}\geq m. \end{equation}

Combining the inequalities from Equations \eqref{eqn:thsaotnhacv} and \eqref{eqn:thsaotnhacv2} proves Lemma \ref{lem:2Dreal-lem-lem}. In this thesis we only prove the results for \(\Xi_1\); the results for \(\Xi_2\) follow directly by symmetry.

In the Weierstrass canonical form \(\Xi_1\) can be written as

\begin{equation} \Xi_1 = \begin{bmatrix} \shiftpast{\robs_{RR}} & \shiftpast{\robs_{RS}} & \shiftpast{\robs_{SR}} & \shiftpast{\robs_{SS}} & \shiftfuture{\robs_{RR}} & \shiftfuture{\robs_{RS}} & \shiftfuture{\robs_{SR}} & \shiftfuture{\robs_{SS}} \end{bmatrix} \end{equation}

The most straightforward way to prove this result is by an explicit construction of all involved matrices and calculating their rank. This is also the approach taken in this thesis. The matrix \(\robs_{0|\mu-2,0|\nu-1}\) is equal to

\begin{equation} \begin{bmatrix} C_{RR} & {C_{SR}E_{{1}}^{\mu-2}} & {C_{RS}F_{{1}}^{\nu-1}} & {C_{SS}E_{{2}}^{\mu-2}F_{{2}}^{\nu-1}} \\ C_{RR}A_{{1}} & C_{SR}E_{{1}}^{\mu-3} & C_{RS}A_{{2}}F_{{1}}^{\nu-1} & C_{SS}E_{{2}}^{\mu-3}F_{{2}}^{\nu-1} \\ \vdots & \vdots & \vdots & \vdots \\ {C_{RR}A_{{1}}^{\mu-2}} & {C_{SR}} & {C_{RS}A_{{2}}^{\mu-2}F_{{1}}^{\nu-1}} & {C_{SS}F_{{2}}^{\nu-1}} \\ \hline C_{RR}B_{1} & {C_{SR}E_{{1}}^{\mu-2}}B_{2} & {C_{RS}F_{{1}}^{\nu-2}} & {C_{SS}E_{{2}}^{\mu-2}F_{{2}}^{\nu-2}} \\ C_{RR}A_{{1}}B_{1} & C_{SR}E_{{1}}^{\mu-3}B_{2} & C_{RS}A_{{2}}F_{{1}}^{\nu-2} & C_{SS}E_{{2}}^{\mu-3}F_{{2}}^{\nu-2} \\ \vdots & \vdots & \vdots & \vdots \\ {C_{RR}A_{{1}}^{\mu-2}}B_{1} & {C_{SR}}B_{2} & {C_{RS}A_{{2}}^{\mu-2}F_{{1}}^{\nu-2}} & {C_{SS}F_{{2}}^{\nu-2}} \\ \hline \vdots & \vdots & \vdots & \vdots \\ \hline C_{RR}B_{1}^{\nu-1} & {C_{SR}E_{{1}}^{\mu-2}}B_{2}^{\nu-1} & C_{RS} & {C_{SS}E_{{2}}^{\mu-2}} \\ C_{RR}A_{{1}}B_{1}^{\nu-1} & C_{SR}E_{{1}}^{\mu-3}B_{2}^{\nu-1} & C_{RS}A_{{2}} & C_{SS}E_{{2}}^{\mu-3} \\ \vdots & \vdots & \vdots & \vdots \\ {C_{RR}A_{{1}}^{\mu-2}}B_{1}^{\nu-1} & {C_{SR}}B_{2}^{\nu-1} & {C_{RS}A_{{2}}^{\mu-2}} & C_{SS} \end{bmatrix}, \end{equation}

which was assumed to have full rank. Up to a permutation of its block columns, this matrix is the same as

\begin{equation} \begin{bmatrix} \shiftpast{\robs_{RR}} & \shiftfuture{\robs_{RS}} & \shiftpast{\robs_{SR}} & \shiftfuture{\robs_{SS}} \end{bmatrix}, \end{equation}

which is a subset of the columns of \(\Xi_1\), proving that the rank of \(\Xi_1\) is at least \(m\).

When the nullity of \(\Xi_1\) and \(\Xi_2\) is \(m\), Equations \eqref{eqn:past-future-nullspace} and \eqref{eqn:left-right-nullspace} have a unique solution, modulo a right non-singular transformation, which is a sufficient condition for the presented algorithm to work.

Realization theory for higher dimensional systems

The realization of higher dimensional systems proceeds in the same way as for two-dimensional systems, the only difference being that the notation becomes more involved. In this section we define the general realization problem and sketch the realization algorithm.

The identification problem for commuting descriptor systems is defined as follows.

Given \(N_1\times N_2 \cdots \times N_n\) measurements \(y[k_1,\cdots,k_n]\in\real^q\) generated by the unknown descriptor system of order \(m\)

\begin{equation} \begin{aligned} E_1 x[k_1+1,\cdots,k_n] = & A_1 x[k_1,\cdots,k_n] \\ \vdots & \\ E_n x[k_1,\cdots,k_n+1] = & A_n x[k_1,\cdots,k_n] \\ y[k_1,\cdots,k_n] = & C x[k_1,\cdots,k_n] \end{aligned} \end{equation}

Determine

  • the order of the unknown system,
  • the system matrices \(A_1,\cdots,A_n,E_1,\cdots,E_n \in \real^{m\times m}\), output matrix \(C\in \real^{q\times m}\) and generalized state \(x\) up to a similarity transformation.

The matrices \(E_i\) can be either singular or regular. In the special case where every matrix \(E_i\) is regular, the system can be transformed to

\begin{equation} \begin{aligned} x[k_1+1,\cdots,k_n] = & E_1^{-1}A_1 x[k_1,\cdots,k_n] \\ \vdots & \\ x[k_1,\cdots,k_n+1] = & E_n^{-1}A_n x[k_1,\cdots,k_n] \\ y[k_1,\cdots,k_n] = & C x[k_1,\cdots,k_n] \end{aligned} \end{equation}

Mode-n past and future matrices

We extend the concept of past, future, left and right matrices to mode-\(n\) past and future matrices and define

\begin{equation} \past{\Gamma_{0|\mu_1,\cdots,0|\mu_i,\cdots,0|\mu_n}}_i = {\Gamma_{0|\mu_1,\cdots,\past{0|\mu_i},\cdots,0|\mu_n}} \end{equation}

\begin{equation} \future{\Gamma_{0|\mu_1,\cdots,0|\mu_i,\cdots,0|\mu_n}}^i = {\Gamma_{0|\mu_1,\cdots,\future{0|\mu_i},\cdots,0|\mu_n}} \end{equation}

The first matrix is called the mode-\(i\) past matrix, whereas the second matrix is called the mode-\(i\) future matrix. Both matrices can be obtained by removing specific rows from the recursive observability matrix.
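Assuming the recursive observability matrix is stored as a tensor with one axis per mode plus a trailing state axis (an assumed storage layout, not fixed by the thesis), removing the relevant block rows amounts to dropping one slice along the chosen mode; a minimal sketch:

```python
import numpy as np

def mode_past(Gamma, i):
    """Mode-i past matrix: drop the last block row along mode i."""
    return np.delete(Gamma, Gamma.shape[i] - 1, axis=i)

def mode_future(Gamma, i):
    """Mode-i future matrix: drop the first block row along mode i."""
    return np.delete(Gamma, 0, axis=i)

# Example: a hypothetical observability tensor with mu_1 = 2, mu_2 = 3
# block rows and state dimension m = 4; flatten back to matrix form
# with .reshape(-1, m) before solving the least squares problems.
Gamma = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
print(mode_past(Gamma, 0).shape, mode_future(Gamma, 1).shape)
```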

Relation for commuting descriptor system

When working with descriptor systems, the relation between the past, future, left and right matrices is slightly different. In this section we briefly mention these relations, as they are used when estimating the system matrices. To demonstrate the relation, we first look at the effect of post-multiplying the observability matrix by one of the system matrices. Recall that

\begin{equation} \Ai_p^k A_i = \Ai_{p+1}^{k+1}~\text{and}~\Ai_p^k E_i = \Ai_{p+1}^{k}. \end{equation}

Applying this property to the matrix \(\Gamma_{k_1,\cdots,\past{0|\mu_i},\cdots,k_n}\) yields

\begin{multline} \Gamma_{{k_1,\cdots,\past{0|\mu_i},\cdots,k_n}} A_i = \begin{bmatrix} C \An{1}_{\mu_1}^{k_1}\cdots\Ai^0_{\mu_i}\cdots\An{n}_{\mu_n}^{k_n} \\ C \An{1}_{\mu_1}^{k_1}\cdots\Ai^1_{\mu_i}\cdots\An{n}_{\mu_n}^{k_n} \\ \vdots \\ C \An{1}_{\mu_1}^{k_1}\cdots\Ai^{\mu-1}_{\mu_i}\cdots\An{n}_{\mu_n}^{k_n} \end{bmatrix} A_i = \\ \begin{bmatrix} C \An{1}_{\mu_1}^{k_1}\cdots\Ai^1_{\mu_i+1}\cdots\An{n}_{\mu_n}^{k_n} \\ C \An{1}_{\mu_1}^{k_1}\cdots\Ai^2_{\mu_i+1}\cdots\An{n}_{\mu_n}^{k_n} \\ \vdots \\ C \An{1}_{\mu_1}^{k_1}\cdots\Ai^{\mu_i}_{\mu_i+1}\cdots\An{n}_{\mu_n}^{k_n} \end{bmatrix} = \Gamma_{{k_1,\cdots,\future{0|\mu_i},\cdots,k_n}} E_i\\ \end{multline}
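For the regular special case where the descriptor matrices are identity (a simplification assumed here for illustration), the shift relation above reduces to the past matrix times the system matrix equalling the future matrix, which can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
m, mu, nu = 3, 4, 4
# Commuting pair (A, B) from a shared eigenbasis; E = F = I.
T = rng.standard_normal((m, m))
A = T @ np.diag(rng.standard_normal(m)) @ np.linalg.inv(T)
B = T @ np.diag(rng.standard_normal(m)) @ np.linalg.inv(T)
C = rng.standard_normal((1, m))

# Recursive observability matrix: block row (a, b) equals C A^a B^b.
rows = [C @ np.linalg.matrix_power(A, a) @ np.linalg.matrix_power(B, b)
        for a in range(mu) for b in range(nu)]
Gamma = np.vstack(rows)            # (mu * nu) x m

past = Gamma[:-nu]                 # drop the last block along mode 1
future = Gamma[nu:]                # drop the first block along mode 1
# Gamma_past A = Gamma_future E with E = I.
assert np.allclose(past @ A, future)
```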

Realization algorithm

A single identification algorithm is briefly described to calculate a system realization as described in Definition~\ref{def:ident-nS}.

The outline of the algorithm is as follows:

  • Create a rank deficient recursive Hankel matrix \(Y_{0|\mu_1-1,\cdots,0|\mu_n-1}\) such that

    \begin{multline} \rank{Y_{0|\mu_1-1,\cdots,0|\mu_n-1}} = \rank{Y_{0|\mu_1-2,\cdots,0|\mu_n-1}} = \cdots = \\ \rank{Y_{0|\mu_1-1,\cdots,0|\mu_n-2}} \end{multline}

  • Factorize the recursive Hankel matrix as the product of an extended recursive observability matrix and an extended recursive state sequence matrix.

  • Calculate the mode-\(n\) past and future matrices.

  • Estimate the system matrices via least squares.

  • Determine an output matrix and initial state.


In the first step of the algorithm a rank deficient recursive Hankel matrix \(Y_{0|\mu_1-1,\cdots,0|\mu_n-1}\) is created and factorized as the product of an extended recursive observability matrix and extended recursive state sequence matrix. These calculations are done by computing the singular value decomposition of the recursive Hankel matrix

\begin{equation} Y_{0|\mu_1-1,\cdots,0|\mu_n-1} = U \Sigma V^T = \begin{bmatrix} U_0 & U_1 \end{bmatrix} \begin{bmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_0^T \\ V_1^T \end{bmatrix} = U_0 \Sigma_0 V_0^T. \end{equation}

We define two new matrices, which are the estimates of the extended recursive observability and extended recursive state sequence matrices

\begin{equation} \hat{\robs}_{0|\mu_1-1,\cdots,0|\mu_n-1} = U_0 \Sigma_0^{1/2}, \qquad \hat{\rX}_{0|\mu_1-1,\cdots,0|\mu_n-1} = \Sigma_0^{1/2} V_0^T . \end{equation}
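A sketch of this truncated SVD factorization step, using a random rank-deficient matrix as a stand-in for the recursive Hankel matrix; the tolerance used to determine the numerical rank is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3
# Rank-m stand-in for the recursive Hankel matrix.
Y = rng.standard_normal((10, m)) @ rng.standard_normal((m, 12))

U, s, Vt = np.linalg.svd(Y)
tol = max(Y.shape) * np.finfo(float).eps * s[0]   # assumed tolerance
m_hat = int(np.sum(s > tol))                      # numerical rank = order
U0, S0, V0t = U[:, :m_hat], s[:m_hat], Vt[:m_hat]

Gamma_hat = U0 * np.sqrt(S0)          # estimated observability matrix
X_hat = np.sqrt(S0)[:, None] * V0t    # estimated state sequence matrix
assert np.allclose(Gamma_hat @ X_hat, Y)
```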

Both mode-\(i\) past and future matrices are related via

\begin{equation} \past{\Gamma_{0|\mu_1-1,\cdots,0|\mu_n-1}}_i A_i = \future{\Gamma_{0|\mu_1-1,\cdots,0|\mu_n-1}}^i E_i. \end{equation}

This relation can be rewritten as

\begin{equation}\label{eqn:left-rigth-nullspace} \begin{bmatrix} \past{\Gamma_{0|\mu_1-1,\cdots,0|\mu_n-1}}_i & \future{\Gamma_{0|\mu_1-1,\cdots,0|\mu_n-1}}^i \end{bmatrix} \begin{bmatrix} A_i \\ -E_i \end{bmatrix} = \Xi_{i} \begin{bmatrix} A_i \\ -E_i \end{bmatrix} = 0. \end{equation}
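The homogeneous system above can be solved from the right null space of \(\Xi_i\); below is a sketch under the assumption that the nullity is exactly \(m\), verified on a hypothetical regular one-mode example where the true descriptor matrix is the identity:

```python
import numpy as np

def solve_pencil(past, future):
    """Estimate the pencil (A_i, E_i) from the homogeneous system
    [past  future] [A_i; -E_i] = 0 via the right null space of Xi."""
    m = past.shape[1]
    Xi = np.hstack([past, future])
    _, _, Vt = np.linalg.svd(Xi)
    N = Vt[-m:].T           # basis for the m-dimensional right null space
    return N[:m], -N[m:]    # A_i and E_i, up to a right transformation

# Regular one-mode check: past A = future, so E recovered must satisfy
# A_hat E_hat^{-1} = A.
rng = np.random.default_rng(4)
m = 3
A = rng.standard_normal((m, m))
C = rng.standard_normal((1, m))
Gamma = np.vstack([C @ np.linalg.matrix_power(A, a) for a in range(m + 2)])
past, future = Gamma[:-1], Gamma[1:]

A_hat, E_hat = solve_pencil(past, future)
assert np.allclose(past @ A_hat, future @ E_hat)
assert np.allclose(A_hat @ np.linalg.inv(E_hat), A)
```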

The rest of the algorithm is identical to the two-dimensional case. We do not prove the partial realization condition explicitly in this thesis.

Conclusion

In total, three algorithms were presented in this chapter to calculate a system realization for two-dimensional state space models and descriptor systems, together with a single algorithm to calculate a system realization for higher order systems.

The first algorithm, which identifies two-dimensional state space models, is a generalization of the algorithm of Kung, with the exception that two shifts are carried out, one shift for every dimension. To find a realization for descriptor systems, two algorithms were presented: one algorithm that relies on the calculation of the gap and one algorithm that skips this step.

The extension to higher dimensional system identification is straightforward, and the algorithm was only briefly sketched in this chapter.