Due to the continuous nature of the noise, we can suppose that the dimensionality of $\Omega$ is infinite even if the size of
Finally, we define the space of the possible decisions
Because the noise is a random process, the law that decides the transition from the events'
\begin{equation}
The introduction of a probability distribution to describe the transition from the space of the events to the space of the observables allows us to define decision procedures that can be of two types: parametric and non-parametric.
We talk about parametric decision procedures when the transition probabilities $p(x \mid M_i)$, $i=0,
We talk about //non-parametric decisions// when the different
An example of parametric decision between the events $M_0$ and $M_1$ could be:
Once we have the observables'
The decision rules allow us to go from the space of the observables $\Omega$ to the space of the decisions $D$.
If the space of the events is continuous
For example, if $M$ is the space of the possible values of a parameter, to estimate this value we can use decision theory methodologies.
The definition of decision rules determines a partition of the observables'
</figure>
The above scheme (figure {{ref>
Let's suppose we have received the vector $\vec{X}$; the decision rule is:
__The radar detection procedures are particular cases of the decision problems__ and we are going to analyze
From what we have said so far, we have to decide between two events:
Note that we can also associate a cost to the correct decisions.
Since we operate in an uncertainty
We recall that to establish
The space $\Omega_1$ is defined such that the result of (11) is minimum. We are going to analyze this problem in the unidimensional case for simplicity:
Let's consider the generic
<figure generic_function>
\begin{equation}
l(\vec{X}) \geq \eta \Rightarrow M_1
\end{equation}
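As a minimal numerical sketch of this decision rule (the Gaussian densities, the threshold $\eta$ and all the names below are illustrative assumptions, not taken from this page):
<code python>
# Sketch of the likelihood ratio test l(X) >= eta => M1.
# The two Gaussian hypotheses and the threshold eta are assumed for the demo.
import numpy as np
from scipy.stats import norm

def lrt_decide(x, eta=1.0, mu=1.0, sigma=1.0):
    """Decide M1 when the likelihood ratio p(x|M1)/p(x|M0) exceeds eta."""
    l = norm.pdf(x, loc=mu, scale=sigma) / norm.pdf(x, loc=0.0, scale=sigma)
    return l >= eta          # True -> decide M1, False -> decide M0

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)                   # samples drawn under M0
print("false alarm rate:", lrt_decide(x).mean())   # ~0.31 for eta=1, mu=1
</code>
Raising $\eta$ shrinks the decision region of $M_1$ and therefore lowers the false alarm rate.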
\end{equation}
Since the main difficulty in applying the Bayes method consists in defining the values of the costs $L_{ij}$ and in the knowledge
\end{equation}
and we can obtain (24). The probability error criterion
Alternatively,
\begin{equation}
\begin{array}
p(\vec{X} \mid M_0) \ p_0
\end{array}
\end{equation}
We obtain that the cost is minimum if the decision region for each hypothesis
Therefore, the MAP decisor consists of: the calculation of the likelihood function $p(\vec{X} \mid M_k)$ for each hypothesis, multiplied by the a priori probability of that hypothesis
<figure map_reciever>
\end{equation}
where $\eta$ is a parameter that depends on the costs and on the a priori probabilities. Since we often do not know such values, in the hypothesis
If we assume $\eta = 1$ (maximum likelihood)
\begin{equation}
The Bayes criterion, MAP and Neyman-Pearson are equivalent
In general, the most used criterion is the Neyman-Pearson
This problem is a typical
We need to identify the regions $\Omega_0$ and $\Omega_1$. The problem is to find the threshold value $X_T$ that delimits the two decision regions.
If we adopt the maximum likelihood criterion (for which $\lambda = 1$), the threshold value $X_T$ is the point $X_T=\mu/2$ that is at the intersection
\end{equation}
where the function
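A quick numerical check of the claim above that the maximum likelihood threshold $X_T = \mu/2$ lies at the intersection of the two densities ($\mu$ and $\sigma$ are illustrative values):
<code python>
# Check that the ML threshold X_T = mu/2 is where the two densities cross.
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 1.0                      # assumed values for the demo
x = np.linspace(-4.0, 6.0, 100_001)
p0 = norm.pdf(x, 0.0, sigma)              # density under M0 (noise only)
p1 = norm.pdf(x, mu, sigma)               # density under M1 (signal + noise)
print(x[np.argmin(np.abs(p0 - p1))], mu / 2.0)   # both ~1.0
</code>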
//
Let's suppose that we have a target which produces an echo with a constant amplitude $A$ (the received signal is a sine with amplitude $A$) and that the RMS voltage of the Gaussian noise is $\sigma$. We suppose we take a decision on a single pulse received at the output of an envelope detector.
Using the theory illustrated in the previous section we can say that:
a) If only noise is present ($M_0$)
\begin{equation}
b) If the target is present ($M_1$)
\begin{equation}
The decisor will decide
\begin{equation}
The value $T$ can be calculated once we choose the decision criterion.
If the decision criterion is the Neyman-Pearson
\begin{equation}
Note that the calculation indicated in the previous equations is optimal in the Neyman-Pearson sense, since it gives us the maximum $P_d$.
Since the value of $T$ does not depend on $A$, the procedure is optimal even if $A$ is unknown.
So, in the case of a fixed target when the SNR is unknown (remember that $SNR = \frac{A^2}{2\sigma^2}$),
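As a hedged sketch of this procedure: with a Rayleigh-distributed noise envelope the threshold follows in closed form from the chosen $P_{fa}$, and $P_d$ then follows from the Rice distribution of signal plus noise (the numerical values are illustrative assumptions):
<code python>
# Neyman-Pearson threshold and detection probability for a single pulse
# at the envelope detector output (Rayleigh noise, Rice signal + noise).
import numpy as np
from scipy.stats import rice

sigma = 1.0                 # RMS noise voltage (assumed)
A = 3.0                     # echo amplitude (assumed); SNR = A^2 / (2 sigma^2)
P_fa = 1e-4                 # chosen false alarm probability

# Rayleigh tail: P_fa = exp(-T^2 / (2 sigma^2))  =>  threshold T
T = sigma * np.sqrt(2.0 * np.log(1.0 / P_fa))

# Rice tail gives P_d; note that T does not depend on A, as stated above
P_d = rice.sf(T, b=A / sigma, scale=sigma)
print(f"T = {T:.3f}, P_d = {P_d:.3f}")
</code>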
====== Neyman-Pearson (single pulse + SW2 target) ======
//
The Neyman-Pearson criterion can be successfully applied
We assume $x(t)$ is the amplitude of the received voltage, the sum of the interesting signal $s(t)$ and the noise $n(t)$; $\sigma^2$ is the variance of the noise process and $\sigma^2_s$ the power of the signal.
Since $\frac{1}{\sigma^2} - \frac{1}{s^2} > 0$ and the log function is monotonically increasing, the relation (56) can be expressed by taking the natural log; so, we can write the quadratic detector:
\begin{equation}
(Obviously
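A minimal sketch of the quadratic detector just described (number of samples, noise power and threshold are illustrative assumptions):
<code python>
# Quadratic detector sketch: compare the energy of the samples with a threshold.
import numpy as np

def quadratic_detector(z, T):
    """Decide M1 when the energy sum of |z_i|^2 exceeds the threshold T."""
    return np.sum(np.abs(z) ** 2) >= T

rng = np.random.default_rng(1)
n, sigma = 8, 1.0                 # assumed sample count and RMS noise
z = (rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)) / np.sqrt(2)
print(quadratic_detector(z, T=12.0))   # usually False under noise only
</code>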
The decision threshold $T$ must be chosen such that:
\begin{equation}
\Phi_x(\omega) = E[e^{j \omega x}] = \int_{-\infty}^{\infty} f_x(x) e^{j \omega x} \,dx = exp \left( j \omega \left< x \right> - \frac{\omega^2 \sigma^2}{2} \right)
\end{equation}
\end{equation}
If we multiply the vector $\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$ by the matrix $P = \frac{\sigma}{\sqrt{2}} \begin{bmatrix} \alpha & \beta \\ -\alpha & \beta \end{bmatrix}$, where $\alpha = \sqrt{1-\rho} \ \text{,}\ \beta=\sqrt{1+\rho}$,
\begin{equation}
In that way we can demonstrate
\begin{equation}
\end{equation}
this means that the two random variables are independent. For Gaussian random variables, independence
The cross-correlation coefficient $\rho$ gives an idea of the dependence
If instead $\rho=0$ we can have the situation shown in the figure {{ref>
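A quick numerical check of the transformation defined by the matrix $P$ above: applied to independent, zero-mean, unit-variance Gaussians, it produces a pair with variance $\sigma^2$ and correlation coefficient $\rho$ (the values of $\rho$ and $\sigma$ are illustrative):
<code python>
# Check that x = P y (alpha = sqrt(1-rho), beta = sqrt(1+rho)) yields
# the covariance matrix sigma^2 * [[1, rho], [rho, 1]].
import numpy as np

rho, sigma = 0.6, 2.0
alpha, beta = np.sqrt(1.0 - rho), np.sqrt(1.0 + rho)
P = (sigma / np.sqrt(2.0)) * np.array([[alpha, beta],
                                       [-alpha, beta]])

rng = np.random.default_rng(2)
Y = rng.standard_normal((2, 200_000))   # independent, zero mean, unit variance
X = P @ Y
print(np.cov(X))                        # approx sigma^2 * [[1, rho], [rho, 1]]
</code>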
The characteristic function
\begin{equation}
<figure scattering_diagrams>
</figure>
Let's consider a random vector $\vec{X} = [x_1,x_2,...,x_n]^T$ where $x_i$ are random variables.
The vector $\vec{X}$ is described by the joint density function
$\vec{Y}$ is the vector with independent Gaussian components with zero mean and variance equal to one, which means that the density is a negative exponential (except for the normalization constant) of the quadratic form $\vec{Y}^T \vec{Y} = \sum_{i=1}^{n} y_i^2$;
We can repeat the procedure used to obtain equations (70) and (71); the resulting density function is, except for a normalization constant, the exponential of the quadratic form $\vec{X}^T \boldsymbol{\vec{Q}} \vec{X}$, where $\boldsymbol{\vec{Q}} = \boldsymbol{P^{-T}} \boldsymbol{P^{-1}}$.
The covariance
\begin{equation}
\boldsymbol{\vec{M}_{\vec{x}}} = E \left[\vec{X} \vec{X}^T \right] = \boldsymbol{\vec{P}} \boldsymbol{\vec{M}_y} \boldsymbol{\vec{P}^T} = \boldsymbol{\vec{P}} \boldsymbol{\vec{P}^T}
\end{equation}
is **gaussian**.
From that definition it results that the characteristic function of a Gaussian random vector is expressed (if the mean vector $\left< \vec{X} \right>$ is null) as:
\begin{equation}
\end{equation}
where $\boldsymbol{\vec{M}}$ is the covariance
\begin{equation}
\end{equation}
If we antitransform
\begin{equation}
\begin{equation}
\boldsymbol{\vec{\hat{M}}} = \frac{1}{v} \sum_{i=1}^{v} \vec{X_i} \vec{X_i}^T
\end{equation}
where $v$ is the number of observations of the vector $\vec{X}$.
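A short sketch of this estimator (the true covariance matrix and the number of observations $v$ are illustrative assumptions): averaging the outer products $\vec{X_i} \vec{X_i}^T$ approaches the true matrix as $v$ grows.
<code python>
# Sample covariance estimate M_hat = (1/v) * sum_i X_i X_i^T, zero-mean vectors.
import numpy as np

rng = np.random.default_rng(3)
M_true = np.array([[2.0, 0.8],
                   [0.8, 1.0]])                  # assumed covariance
v = 50_000                                       # number of observations
X = rng.multivariate_normal([0.0, 0.0], M_true, size=v)
M_hat = (X.T @ X) / v        # same as averaging the outer products X_i X_i^T
print(M_hat)                 # close to M_true
</code>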
The Gaussian multivariate can also be obtained by generalizing the procedure that allows
We can start from $n$ independent variables $y_i$, $i=1,...,n$ (vector $\vec{Y}$) that have a density function
\begin{equation}
f_{\vec{Y}}(\vec{Y}) = \frac{1}{ (2 \pi)^{n/2} \ |\boldsymbol{\vec{\Lambda}}|^{1/2} } \ exp \left\{ -\frac{1}{2} \vec{Y}^T \boldsymbol{\vec{\Lambda}^{-1}} \vec{Y} \right\}
\end{equation}
where $\boldsymbol{\vec{\Lambda}}$ is a diagonal matrix that has the variances of the random variables $y_i$ as elements, that is $\lambda_{ii}=\text{var}(y_i)$, $i=1,...,n$.
If we apply the linear transformation
where $\boldsymbol{\vec{U}}$ is a unitary orthonormal matrix, using the fundamental
\begin{equation}
f_X(\vec{X}) = f_Y(\vec{Y} = \boldsymbol{\vec{U}} \vec{X}) = \\
= A \ exp \left\{ -\frac{1}{2} \left( \boldsymbol{\vec{U}} \vec{X} \right)^T \boldsymbol{\vec{\Lambda}^{-1}} \left( \boldsymbol{\vec{U}} \vec{X} \right) \right\} = \\
= A \ exp \left\{ -\frac{1}{2} \vec{X}^T \ \boldsymbol{\vec{U}^T} \ \boldsymbol{\vec{\Lambda}^{-1}} \ \boldsymbol{\vec{U}} \ \vec{X} \right\} = \\
= A \ exp \left\{ -\frac{1}{2} \vec{X}^T \boldsymbol{\vec{M}^{-1}} \vec{X} \right\}
\end{equation}
where $\boldsymbol{\vec{M}^{-1}}$ is the inverse of the covariance matrix $\boldsymbol{\vec{M}}$ of $\vec{X}$, which can be spectrally decomposed as $\boldsymbol{\vec{M}} = \boldsymbol{\vec{U}} \ \boldsymbol{\vec{\Lambda}} \ \boldsymbol{\vec{U}^T}$.
The probability density of the vector $\vec{X}$ is:
So, the Gaussian multivariate distribution of $\vec{X}$ is expressed
\begin{equation}
All we have done is necessary to extend the Neyman-Pearson criterion to the case when we have multiple
Since normally
We indicate with $\vec{Z}$ the vector such that its $n$ components are samples of the complex envelope
In radar applications this signal can be modelled as a stationary, narrow-band process.
where $\vec{X}$ and $\vec{Y}$ are two real Gaussian vectors, and instead of $\vec{Z}$ we can consider the real vector with double dimension:
\begin{equation}
\end{equation}
The covariance matrix of $\vec{Z}$, if the hypothesis
\begin{equation}
\boldsymbol{\vec{M}} = E \left[ \begin{bmatrix} \vec{X} \\ \vec{Y} \end{bmatrix} \begin{bmatrix} \vec{X}^T & \vec{Y}^T \end{bmatrix} \right] = \begin{bmatrix} \boldsymbol{\vec{V}} & \boldsymbol{\vec{W}} \\ -\boldsymbol{\vec{W}} & \boldsymbol{\vec{V}} \end{bmatrix}
\end{equation}
The $\boldsymbol{\vec{M}}$ matrix contains blocks that are equal two by two, so the information on the $\vec{Z}$ process is related only to the knowledge of $2n^2$ parameters and not $4n^2$, because we only need the $\boldsymbol{\vec{V}}$ and $\boldsymbol{\vec{W}}$ matrices. $\boldsymbol{\vec{V}}$ is the covariance matrix of $\vec{X}$ and $\boldsymbol{\vec{W}}$ is the mutual covariance matrix of $\vec{X}$ and $\vec{Y}$.
Note that the relations
\begin{equation}
\boldsymbol{\vec{V^T}} = \boldsymbol{\vec{V}} \ \ \ \text{and} \ \ \ \boldsymbol{\vec{W^T}} = -\boldsymbol{\vec{W}}
\end{equation}
Equation (99) extends the concept of the Gaussian multivariate to the case of complex random vectors.
In the case of a complex process the matrix $\boldsymbol{\vec{M}}$ is Hermitian and so:
\begin{equation}
\begin{equation}
f_{\vec{Z}}(\vec{Z}) = \frac{1}{ (2 \pi)^N \Delta } exp \left( -\frac{1}{2} \left( \vec{Z} - \left< \vec{Z} \right> \right)^{T} \ \boldsymbol{\vec{M}^{-1}} \ \left( \vec{Z} - \left< \vec{Z} \right> \right)^{*} \right)
\end{equation}
\begin{equation}
\vec{Z} = \boldsymbol{\vec{A}} \vec{Z_0}
\end{equation}
where $\boldsymbol{\vec{M}}$ is the covariance matrix of the process $\vec{Z}$.
This formula is useful when we have to work with energies (for example to evaluate
**//Proof of (82) //** //(in case of a random vector with zero mean)//:
and we obtain:
$\Phi_{X}\left(\vec{\Omega}\right) = E \left[ e^{j \vec{\Omega^T} \vec{X}} \right] = exp \left( - \frac{1}{2} \vec{\Omega}^T \boldsymbol{\vec{M}} \vec{\Omega} \right)$
that is equal to (82).
====== Coherent detector and Discrete-Time optimal processor ======
//
so if we use a coherent radar we have I and Q samples taken at multiple instants of the PRT.
These types of samples can be considered
\begin{equation}
where $N$ is the number of samples (or pulses in the dwell time). The samples contained in the vector $\vec{Z}$ have a useful signal component and a noise component.
Let's suppose that the useful signal is represented by (111) and that the noise, produced for example by external factors (**clutter**), is additive Gaussian with zero mean
and with covariance matrix $\boldsymbol{\vec{M}}$.
Note that we don't make the hypothesis
Anyway, in the case of noise caused by clutter it is right to say that the distribution is Gaussian, at least for some kinds of clutter.
The vector of the observables $\vec{Z}$ is
where the random vector $\vec{Z}$ is the sum of the clutter noise and the useful signal.
If the useful signal is statistically independent from the additive noise and if it's a process with zero mean, the probability density functions of the vector $\vec{Z}$ in the cases of absence and presence of the target are n-dimensional Gaussian.
a) In the absence of target we have:
where we suppose that the vector containing the means of the noise is null.
b) In cases in which the target is deterministic (that is, known in addition to the carrier
\begin{equation}
\begin{equation}
\Lambda(\vec{Z}) = - \vec{s}^T \boldsymbol{\vec{M}^{-1}} \vec{s}^{*} + \sum_{ij}{m_{ij} s_i {z_j}^{*}} + \sum_{ij}{m_{ij} z_i {s_j}^{*}}
\end{equation}
where $m_{ij}$ are the elements of the matrix $\boldsymbol{\vec{M}^{-1}}$. Since $\boldsymbol{\vec{M}^{-1}}$ is Hermitian (because $\boldsymbol{\vec{M}}$ is Hermitian) we have $m_{ij} = m_{ji}^{*}$, and $\vec{s}^T \boldsymbol{\vec{M}^{-1}} \vec{Z}^{*}$ is the complex conjugate of $\vec{Z}^T \ \boldsymbol{\vec{M}^{-1}} \ \vec{s}^{*}$,
\begin{equation}
where $\vec{k} = \boldsymbol{\vec{M}^{-1}} \vec{s}^{*}$,
Note that $\vec{s}$ is the vector associated with the received signal in the absence of noise. In the case of white noise, $\vec{k} = \vec{s}^{*}$ (up to a constant factor).
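A sketch of the computation of $\vec{k}$ and of the resulting test statistic; the clutter covariance model, the Doppler value and the threshold are illustrative assumptions.
<code python>
# Coherent test statistic with k = M^{-1} s* in coloured Gaussian clutter.
import numpy as np

n, rho = 4, 0.7
# assumed clutter covariance (exponential correlation) and target steering vector
M = np.array([[rho ** abs(i - j) for j in range(n)] for i in range(n)],
             dtype=complex)
s = np.exp(2j * np.pi * 0.2 * np.arange(n))    # Doppler x PRT = 0.2 (assumed)

k = np.linalg.solve(M, np.conj(s))             # k = M^{-1} s*

rng = np.random.default_rng(4)
L = np.linalg.cholesky(M)                      # M = L L^H, used to colour noise
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
Z = s + L @ w                                  # received vector under M_1
stat = np.real(Z @ k)                          # Re{ Z^T M^{-1} s* }
print(stat >= 2.0)                             # compare with a threshold (assumed)
</code>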
If the vector $\vec{s}$ is considered as the impulse
\begin{equation}
Note that it is necessary to know the target parameters $A$, $f_D$, $\Phi$. If the density functions are not Gaussian, the likelihood ratio may not be easy to write.
===== Maximization of the Signal / Noise ratio =====
//Finding the maximum Signal / Noise ratio using an optimal linear processor for a known deterministic signal.
Application of the optimal filtering for an unfluctuating moving target.
Optimal
==== Optimal filter ====
In the general case, we prefer to operate with a suboptimal decision criterion based on the search for the maximum signal/
These samples must be processed in a linear manner (by using a FIR filter) in such a way that at a certain instant (//instant of decision//
Let $\vec{h}$ be the vector of coefficients of the FIR filter and $v$ the filter output at the instant of decision, that is (when the sequence of $n$ received samples is aligned with the sequence of coefficients) equal to
where $\boldsymbol{\vec{M}}$ is the covariance matrix of the noise, which has zero mean.
The power of the noise at the output of the filter defined by the vector $\vec{h}$ can be expressed
\begin{equation}
\end{equation}
where $\vec{s}$ is the vector of the samples of the useful signal. The indicated quantities are obtained by sampling in time (and so in range) the radar signal at the point where we expect the (point) target
\begin{equation}
__Note__:
In general, the useful signal can be a random process. In this case, it can be described by the vector of the expected values and by the covariance matrix $\boldsymbol{\vec{M_s}}$.
If the vector of the expected values is null, the signal/
\begin{equation}
\end{equation}
In case the useful signal is
Knowing
where $\mu$ is an irrelevant constant.
Note that (135) is the same as (122), a result already obtained by applying the Neyman-Pearson criterion in the case in which the statistics are Gaussian.
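A numerical sketch comparing the output $S/N$ of the filter of (135) with a plain matched filter in coloured noise (the covariance model and the Doppler value are illustrative assumptions):
<code python>
# Output S/N of the linear processor v = h^T z for useful signal s and noise
# with real covariance M:  SNR(h) = |h^T s|^2 / E|h^T noise|^2.
# The filter h = M^{-1} s* of (135) maximizes it.
import numpy as np

def out_snr(h, s, M):
    signal = np.abs(h @ s) ** 2
    noise = np.real(h @ M @ np.conj(h))   # E|h^T w|^2 when E[w w^H] = M
    return signal / noise

n, rho = 4, 0.9
M = np.array([[rho ** abs(i - j) for j in range(n)] for i in range(n)])
s = np.exp(2j * np.pi * 0.5 * np.arange(n))    # Doppler at half PRF (assumed)

h_opt = np.linalg.solve(M, np.conj(s))         # h = mu * M^{-1} s*, with mu = 1
print(out_snr(h_opt, s, M))                    # best achievable S/N
print(out_snr(np.conj(s), s, M))               # plain matched filter: lower
</code>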
where $\rho$ is the correlation coefficient and $\sigma^2$ represents the power of the noise.
$\rho = \rho(\tau)$, by definition, represents the correlation coefficient between two consecutive pulses, one observed at time $t=0$ and the other at time $t=T=PRT$ (see figure {{ref>
<figure prt_pulses>
\begin{equation}
S(f) = \frac{1}{\sqrt{2 \pi} \sigma_f} e^{-\frac{f^2}{2 \sigma_f^2}}
\end{equation}
Equation (148) is utilized, in general, in the meteorological
If we antitransform (148) we have the expression of the correlation coefficient $\frac{R(\tau)}{R(0)}$, to which we will refer as $\rho(\tau)$
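The antitransform of the Gaussian spectrum (148) is itself Gaussian in $\tau$, giving the standard closed form $\rho(\tau) = e^{-2 \pi^2 \sigma_f^2 \tau^2}$; a small sketch with illustrative values of $\sigma_f$ and PRT:
<code python>
# Correlation coefficient from the Gaussian clutter spectrum (148):
# rho(tau) = exp(-2 pi^2 sigma_f^2 tau^2).  Values below are illustrative.
import numpy as np

def rho(tau, sigma_f):
    return np.exp(-2.0 * (np.pi * sigma_f * tau) ** 2)

sigma_f = 40.0     # clutter spectral width in Hz (assumed)
PRT = 1e-3         # pulse repetition time in s (assumed)
print(rho(PRT, sigma_f))   # correlation between two consecutive pulses, ~0.97
</code>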
Since the white thermal noise contribution is uncorrelated
\end{equation}
using (135) the optimal filter
\begin{equation}
\end{equation}
knowing
\begin{equation}
To see the gain introduced by the optimal linear filter, we observe that before the processing the signal/
\begin{equation}
\left( \frac{S}{N} \right)_{1}
\end{equation}
\begin{equation}
\left( \frac{S}{N} \right)_{OUT}
\end{equation}
We observe that the improvement obtained is lower compared to the case when we have only white noise; this is caused by the contribution of a coloured
The lower the coefficient $\rho$, the bigger the improvement: if $\rho = 0$ we have an improvement equal to 2 (in general equal to $n$).
If $\rho \rightarrow 1$, the spectrum of the noise and the spectrum of the useful signal tend to overlap and the improvement tends to zero. Note that the spectral situation in figure {{ref>
For example, consider the case where the Doppler frequency of the useful signal is equal to half of the PRF value (see figure {{ref>
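A sketch of this behaviour for $n=2$ pulses: with the optimal filter the improvement over a single pulse is $\sigma^2 \, \vec{s}^H \boldsymbol{\vec{M}^{-1}} \vec{s} / A^2$, evaluated below for a target on top of the clutter spectrum and for a target at half the PRF (all values illustrative):
<code python>
# Improvement of the optimal two-pulse filter over a single pulse,
# with M = sigma^2 * [[1, rho], [rho, 1]].  Doppler values are illustrative.
import numpy as np

def improvement(rho, fD_T, A=1.0, sigma=1.0):
    M = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])
    s = A * np.exp(2j * np.pi * fD_T * np.arange(2))
    snr_out = np.real(np.conj(s) @ np.linalg.solve(M, s))
    return snr_out / (A ** 2 / sigma ** 2)

for rho in (0.0, 0.5, 0.9):
    # fD = 0: target on top of the clutter; fD = PRF/2: farthest from it
    print(rho, improvement(rho, 0.0), improvement(rho, 0.5))
# fD = 0 gives 2/(1+rho), degrading as rho grows; fD = PRF/2 gives 2/(1-rho)
</code>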
To perform the test it is necessary to calculate $w$ and consequently to know the initial phase $\Phi_0$ associated with the target signal.
If the base band representation of the useful signal is
\end{equation}
where $\Phi_i$ is the phase difference between sample $i$ and the first sample (with $i=0$), and __it is related to the Doppler
In particular we have:
==== Optimal detection with unknown initial phase ====
the phase $\Phi_0$, which is independent from the Doppler frequency (see (170)), is not known a priori.
We are going to analyze the optimal test to perform when the initial phase $\Phi_0$ is unknown.
which depends on $\Phi_0$, because $\vec{k}$ depends on $\Phi_0$.
Note that the joint probability density of $\rho$ and $\Phi$, given the event $M_0$ (which represents the absence of useful signal), can be written as
\begin{equation}
When the phase information is not known, the optimum test consists of __linear__ signal processing that maximizes the S/N ratio (coherent integration),
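A sketch of the resulting processing chain: the same linear filter $\vec{k}$ as in the coherent case, followed by the extraction of the envelope so that the unknown $\Phi_0$ drops out (white noise and all numerical values are illustrative assumptions):
<code python>
# Incoherent test sketch: linear filtering with k = M^{-1} s*, then an
# envelope (modulus), which removes the unknown initial phase Phi_0.
import numpy as np

rng = np.random.default_rng(5)
n = 8
s = np.exp(2j * np.pi * 0.25 * np.arange(n))   # assumed Doppler steering vector
M = np.eye(n)                                  # white noise for simplicity
k = np.linalg.solve(M, np.conj(s))

phi0 = rng.uniform(0.0, 2.0 * np.pi)           # unknown initial phase
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
Z = np.exp(1j * phi0) * s + w                  # received vector under M_1

stat = np.abs(Z @ k)                           # |Z^T k| does not depend on phi0
print(stat >= 0.5 * n)                         # compare with a threshold (assumed)
</code>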
Not knowing the phase factor $\Phi_0$ results in a loss of sensitivity of the receiver
The figure {{ref>
<figure reciever_sensitivity_phi_known_unknow>
</figure>