radar:radardetection
Due to the continuous nature of the noise, we can suppose that the dimensionality of $\Omega$ is infinite even if the size of //
Finally, we define the space of the possible decisions //
Since the noise is a random process, the law that decides the transition from the event'
\begin{equation}
The introduction of a probability distribution to describe the transition from the space of the events to the space of the observables allows us to define decision procedures that can be of two types: parametric and non-parametric.
We talk about parametric decision procedures when the probabilities of transition $p(x \mid M_i)$, $i=0,
We talk about //non-parametric decisions// when the different
An example of parametric decision between the events $M_0$ and $M_1$ could be:
Once we have the observables'
The decision rules allow us to go from the space of the observables $\Omega$ to the space of the decisions $D$.
If the space of the events is continuous
For example, if $M$ is the space of the possible values of a parameter, we can use decision-theory methodologies to estimate this value.
The definition of decision rules determines a partition of the observable'
</figure>
The above scheme (figure {{ref>
Let's suppose we have received the vector $\vec{X}$; the decision rule is:
__The radar detection procedures are particular cases of the decision problems__ and we are going to analyze
From what we have said so far, we have to decide considering two events:
Note that we can also associate a cost with the correct decisions.
Since we operate in an uncertainty
We recall that to establish
The space $\Omega_1$ is defined such that the result of (11) is minimum. We will analyze this problem in the one-dimensional case for simplicity:
Let's consider the generic
<figure generic_function>
\begin{equation}
l(\vec{X}) \geq \eta \Rightarrow M_1 \text{
\end{equation}
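As a concrete illustration of this likelihood-ratio test, here is a minimal Python sketch; the Gaussian likelihoods and the values of $\mu$, $\sigma$ and $\eta$ are assumptions made only for the example.
<code python>
# Likelihood-ratio test l(X) >= eta => M1, for two assumed Gaussian
# hypotheses: mean 0 under M0, mean mu under M1, common variance sigma^2.
import numpy as np
from scipy.stats import norm

mu, sigma, eta = 1.0, 1.0, 1.0          # assumed values, illustration only

def likelihood_ratio(x):
    """l(x) = p(x | M1) / p(x | M0)."""
    return norm.pdf(x, loc=mu, scale=sigma) / norm.pdf(x, loc=0.0, scale=sigma)

def decide(x):
    """Decide M1 when the likelihood ratio reaches the threshold eta."""
    return "M1" if likelihood_ratio(x) >= eta else "M0"

rng = np.random.default_rng(0)
x = rng.normal(loc=mu, scale=sigma)     # simulate one observation under M1
print(decide(x), likelihood_ratio(x))
</code>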
\end{equation}
Since the main difficulty in the application of the Bayes method consists in defining the values of the costs $L_{ij}$ and in the knowledge
\end{equation}
and we can obtain (24). The probability-of-error criterion
Alternatively,
\begin{equation}
p(\vec{X} \mid M_0)\, p_0 = p(\vec{X})\, p(M_0 \mid \vec{X})
\end{equation}
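A quick numeric check of this Bayes-rule identity and of the MAP rule that follows from it; the Gaussian likelihoods and the priors $p_0$, $p_1$ are assumed values for illustration.
<code python>
# Bayes-rule identity p(x|M_i) p_i = p(x) p(M_i|x), hence the posterior
# p(M_i|x) = p(x|M_i) p_i / p(x). Gaussian likelihoods and priors assumed.
from scipy.stats import norm

p0, p1 = 0.7, 0.3                        # assumed a priori probabilities
x = 0.8                                  # an observed value

lik0 = norm.pdf(x, loc=0.0, scale=1.0)   # p(x | M0)
lik1 = norm.pdf(x, loc=1.0, scale=1.0)   # p(x | M1)
px = lik0 * p0 + lik1 * p1               # total probability p(x)

post0, post1 = lik0 * p0 / px, lik1 * p1 / px
print(post0 + post1)                     # posteriors sum to 1
# MAP decision: largest p(x|M_k) p_k, since p(x) is common to both.
print("M1" if lik1 * p1 > lik0 * p0 else "M0")
</code>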
We obtain that the cost is minimum if the decision region for each hypothesis
Therefore, the MAP decisor consists of the calculation of the likelihood function $p(\vec{X} \mid M_k)$ for each hypothesis, multiplied by the a priori probability of that hypothesis
<figure map_reciever>
\end{equation}
where $\eta$ is a parameter that depends on the costs and on the a priori probabilities. Since we often don't know such values, in the hypothesis
If we assume $\eta = 1$ (maximum
\begin{equation}
The Bayes, MAP and Neyman-Pearson criteria are equivalent
In general, the most used criterion is the Neyman-Pearson'
This problem is a typical
We need to identify the regions $\Omega_0$ and $\Omega_1$. The problem is to find the threshold value $X_T$ that delimits the two decision regions.
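One way to locate such a threshold numerically is to search for the point where the two weighted likelihoods are equal; the sketch below assumes unit-variance Gaussian likelihoods centred in $0$ ($M_0$) and $\mu$ ($M_1$).
<code python>
# Numerical search for the threshold X_T where the two weighted likelihoods
# cross, assuming unit-variance Gaussians centred in 0 (M0) and mu (M1).
from scipy.optimize import brentq
from scipy.stats import norm

mu, lam = 2.0, 1.0    # assumed mean and threshold ratio lambda

# Root of p(x|M1) - lam * p(x|M0) = 0, searched between 0 and mu.
x_t = brentq(lambda x: norm.pdf(x, mu, 1.0) - lam * norm.pdf(x, 0.0, 1.0),
             0.0, mu)
print(x_t)            # with lam = 1 this gives mu/2, as stated below
</code>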
If we adopt the maximum likelihood criterion (for which $\lambda = 1$), the threshold value $X_T$ is the point $X_T=\mu/2$ that lies at the intersection
\end{equation}
where the function
//
Let's suppose that we have a target which produces an echo with a constant amplitude $A$ (the received signal is a sine with amplitude $A$) and that the RMS voltage of the Gaussian noise is $\sigma$. We suppose we take a decision on a single pulse received at the output of an envelope
Using the theory illustrated in the previous section we can say that:
a) If only noise is present ($M_0$
\begin{equation}
b) If the target is present ($M_1$
\begin{equation}
The decisor will decide for the hypothesis
\begin{equation}
Note that the calculation indicated in the previous equations is optimal according to the Neyman-Pearson criterion, so it gives us the maximum $P_d$.
Since the value of $T$ does not depend on $A$, the procedure is optimal even if $A$ is unknown.
So, in the case of a fixed target when the SNR is unknown (remember that $SNR = \frac{A^2}{2\sigma^2}$),
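A numeric sketch of this single-pulse case: under $M_0$ the envelope is Rayleigh, so the threshold follows from $P_{fa}$ alone, while under $M_1$ the envelope is Rice and $P_d$ is a Marcum Q function, computable through the noncentral chi-square distribution. The values of $A$, $\sigma$ and $P_{fa}$ are assumptions.
<code python>
# Single-pulse envelope detection with assumed values: Rayleigh envelope
# under M0 gives P_fa = exp(-T^2 / (2 sigma^2)); under M1 the normalized
# squared envelope is noncentral chi-square (Rice envelope), so P_d is the
# Marcum Q function Q1(A/sigma, T/sigma).
import numpy as np
from scipy.stats import ncx2

sigma, A, pfa = 1.0, 3.0, 1e-4   # noise RMS, echo amplitude, desired P_fa

# Threshold from P_fa alone: it does not depend on A (optimal for unknown A).
T = sigma * np.sqrt(2.0 * np.log(1.0 / pfa))

pd = ncx2.sf((T / sigma) ** 2, df=2, nc=(A / sigma) ** 2)
print(T, pd)   # here SNR = A^2 / (2 sigma^2) = 4.5
</code>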
====== Neyman-Pearson (single pulse + SW2 target) ======
//
The Neyman-Pearson criterion can be successfully
We assume $x(t)$ is the amplitude of the received voltage, the sum of the signal of interest $s(t)$ and the noise $n(t)$; $\sigma^2$ is the variance of the noise process and $\sigma^2_s$ the power of the signal.
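Ahead of the derivation, a minimal sketch of the quadratic detector this section arrives at: under both hypotheses the samples are zero-mean Gaussian and only the variance changes, so a sum-of-squares statistic separates the two cases. All numeric values are assumed.
<code python>
# Quadratic (energy) detector sketch: zero-mean Gaussian samples with
# variance sigma^2 under M0 and sigma^2 + sigma_s^2 under M1; the test
# statistic is the sum of squares. All values are assumed.
import numpy as np

sigma2, sigma_s2, n = 1.0, 4.0, 8     # noise power, signal power, samples
rng = np.random.default_rng(1)

def statistic(x):
    """Energy of the observation vector."""
    return float(np.sum(x ** 2))

x0 = rng.normal(0.0, np.sqrt(sigma2), n)             # M0: noise only
x1 = rng.normal(0.0, np.sqrt(sigma2 + sigma_s2), n)  # M1: target + noise
print(statistic(x0), statistic(x1))   # M1 is larger on average
</code>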
Since $\frac{1}{\sigma^2} - \frac{1}{s^2} > 0$ and the log function is monotonically increasing, the relation (56) can be expressed
\begin{equation}
(Obviously
The decision threshold $T$ must be chosen such that:
\begin{equation}
\Phi_x(\omega) = E[e^{j \omega x}] = \int_{-\infty}^{\infty} f_x(x) e^{j \omega x} \, dx = \exp(j \omega
\end{equation}
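A Monte Carlo check of this characteristic function against the closed form $\exp(j\omega\mu - \omega^2\sigma^2/2)$ of the Gaussian; $\mu$, $\sigma$ and $\omega$ are assumed values.
<code python>
# Monte Carlo check of the Gaussian characteristic function against
# Phi_x(omega) = exp(j omega mu - omega^2 sigma^2 / 2); values assumed.
import numpy as np

mu, sigma, omega = 0.5, 2.0, 0.7
rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, 200_000)

empirical = np.mean(np.exp(1j * omega * x))
closed_form = np.exp(1j * omega * mu - 0.5 * omega ** 2 * sigma ** 2)
print(empirical, closed_form)   # the two values agree closely
</code>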
\end{equation}
If we multiply the vector $\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$ by the matrix $P = \frac{\sigma}{\sqrt{2}} \begin{bmatrix} \alpha & \beta \\ -\alpha & \beta \end{bmatrix}$, where $\alpha = \sqrt{1-\rho}$ and $\beta=\sqrt{1+\rho}$,
\begin{equation}
In that way we can demonstrate
\begin{equation}
\end{equation}
this means that the two random variables are independent. For the Gaussian random variable,
The cross-correlation coefficient $\rho$ gives an idea of the dependence
If instead $\rho=0$ we can have the situation shown in the figure {{ref>
The characteristic function
\begin{equation}
<figure scattering_diagrams>
{{ :
</figure>
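A numerical sketch of the transformation matrix $P$ introduced above: applied to independent unit-variance Gaussian variables it produces a pair with variance $\sigma^2$ and cross-correlation coefficient $\rho$; the values of $\sigma$ and $\rho$ are assumed.
<code python>
# The matrix P applied to independent unit-variance Gaussians y1, y2
# produces a correlated pair: the covariance of X = P Y is P P^T, which
# equals sigma^2 [[1, rho], [rho, 1]]. sigma and rho are assumed values.
import numpy as np

sigma, rho, n = 1.5, 0.6, 100_000
alpha, beta = np.sqrt(1.0 - rho), np.sqrt(1.0 + rho)
P = (sigma / np.sqrt(2.0)) * np.array([[alpha, beta],
                                       [-alpha, beta]])

rng = np.random.default_rng(3)
Y = rng.standard_normal((2, n))   # independent, zero mean, unit variance
X = P @ Y                         # correlated pair

print(np.cov(X))                  # ~ sigma^2 * [[1, rho], [rho, 1]]
print(P @ P.T)                    # theoretical covariance
</code>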
$\vec{Y}$ is the vector with independent Gaussian components with zero mean and variance equal to one, which means that the density is of negative-exponential type (except for the normalization constant) and the quadratic form is $\vec{Y}^T \vec{Y} = \sum_{i=1}^{n} y_i^2$;
We can repeat the procedure used to obtain equations (70) and (71); the resulting density function is, except for a normalization constant, the exponential of the quadratic form $\vec{X}^T \boldsymbol{\vec{Q}} \vec{X}$, where $\boldsymbol{\vec{Q}} = \boldsymbol{P^{-T}} \boldsymbol{P^{-1}}$.
The covariance matrix of $\vec{X}$ (remember that the components of $\vec{Y}$ are uncorrelated
\begin{equation}
\boldsymbol{\vec{M}_{\vec{x}}} = E \left[\vec{X} \vec{X}^T \right] = \boldsymbol{\vec{P}} \boldsymbol{\vec{M}_y} \boldsymbol{\vec{P}^T} = \boldsymbol{\vec{P}} \boldsymbol{\vec{P}^T}
\end{equation}
If we antitransform
\begin{equation}
where $v$ is the number of observations of the vector $\vec{X}$.
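As a cross-check of the multivariate Gaussian density, the explicit quadratic-form formula can be compared with scipy's implementation in the zero-mean case; the covariance matrix and the evaluation point are assumed values.
<code python>
# Explicit multivariate Gaussian density (zero mean, covariance M) versus
# scipy's implementation. M and the evaluation point x are assumed values.
import numpy as np
from scipy.stats import multivariate_normal

M = np.array([[2.0, 0.6],
              [0.6, 1.0]])        # an assumed covariance matrix
x = np.array([0.4, -1.2])

n = len(x)
quad = x @ np.linalg.inv(M) @ x   # quadratic form x^T M^-1 x
manual = np.exp(-0.5 * quad) / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(M))
print(manual, multivariate_normal(mean=np.zeros(n), cov=M).pdf(x))
</code>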
The multivariate Gaussian can also be obtained by generalizing the procedure that allows
We can start from $n$ independent variables $y_i$, $i=1,...,n$ (vector $\vec{Y}$) that have a density function as
where $\vec{\Lambda}$ is a diagonal matrix that has the variances of the random variables $y_i$ as elements, that is $\lambda_{ii}=\text{var}(y_i)$, $i=1,...,n$.
If we apply the linear transformation
\end{equation}
The covariance matrix of $\vec{Z}$, if the hypothesis
\begin{equation}
\boldsymbol{\vec{M}} = E \left[ \begin{bmatrix} \vec{X} \\ \vec{Y} \end{bmatrix}
\end{equation}
====== Coherent detector and Discrete-Time optimal processor ======
//
and with covariance matrix $\boldsymbol{\vec{M}}$.
Note that we don't make the hypothesis
However, in the case of noise caused by clutter it is correct to say that the distribution is Gaussian, at least for some kinds of clutter.
The vector of the observables $\vec{Z}$ is
Note that $\vec{s}$ is the vector associated with the received signal in the absence of noise. In the case of white noise (except for a constant factor), $\vec{k} = \vec{s}^{*}$.
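A toy sketch of this white-noise case, where the weights reduce to $\vec{k} = \vec{s}^{*}$ and the statistic is the inner product of $\vec{k}$ with the observation vector; the signal vector and the noise level are assumptions.
<code python>
# White-noise matched weights k = conj(s): the statistic k^T z is much
# larger when the (assumed) signal s is present than with noise alone.
import numpy as np

rng = np.random.default_rng(4)
n = 16
s = np.exp(1j * 2.0 * np.pi * 0.1 * np.arange(n))   # assumed signal vector
k = np.conj(s)                                      # matched weights

def noise():
    return 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

z1 = s + noise()                  # M1: signal plus noise
z0 = noise()                      # M0: noise only

print(abs(k @ z1), abs(k @ z0))   # target case gives a much larger output
</code>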
If the vector $\vec{s}$ is considered as the impulse response of a discrete-time FIR filter, there is a filtering operation that provides an output value at a precise instant. The filter described by the vector $\vec{s}^{*}$ assumes the form of a discrete-time matched filter (matched, in the case of white noise, to the signal waveform coming from the target). In the continuous
\begin{equation}
In the case where the useful signal is "
Knowing
Equation (148) is utilized, in general, in the meteorological
If we antitransform (148) we obtain the expression of the correlation coefficient $\frac{R(\tau)}{R(0)}$, to which we will refer as $\rho(\tau)$
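Assuming (148) is the Gaussian-shaped spectrum commonly used for weather clutter, a numerical antitransform confirms that $\rho(\tau)$ is again Gaussian, $\rho(\tau) = \exp(-2\pi^2\sigma_f^2\tau^2)$; the spectral width $\sigma_f$ and the lag $\tau$ below are assumed values.
<code python>
# Numerical antitransform of a Gaussian-shaped clutter spectrum
# S(f) = exp(-f^2 / (2 sigma_f^2)): the correlation coefficient comes out
# as rho(tau) = exp(-2 pi^2 sigma_f^2 tau^2). sigma_f and tau are assumed.
import numpy as np

sigma_f, tau = 40.0, 0.004        # spectral width [Hz], lag [s]
f = np.linspace(-10.0 * sigma_f, 10.0 * sigma_f, 20_001)
df = f[1] - f[0]
S = np.exp(-f ** 2 / (2.0 * sigma_f ** 2))   # Gaussian power spectrum

R_tau = np.sum(S * np.exp(1j * 2.0 * np.pi * f * tau)) * df
R_0 = np.sum(S) * df
print(abs(R_tau / R_0), np.exp(-2.0 * np.pi ** 2 * sigma_f ** 2 * tau ** 2))
</code>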
Since the white thermal noise contribution is uncorrelated
\end{equation}
using (135) the optimal filter
\begin{equation}
\end{equation}
knowing
\begin{equation}
We observe that the improvement obtained is lower compared to the case when we have only white noise; this is caused by the contribution of a coloured
The lower the coefficient $\rho$, the bigger the improvement; if $\rho = 0$ we have an improvement equal to 2 (in general equal to $n$).
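A hypothetical two-pulse illustration of this dependence on $\rho$, using the quadratic-form gain of the optimal filter with an assumed zero-Doppler steering vector; the normalization of the improvement factor may differ from the one used in the text.
<code python>
# Gain s^T M^-1 s of the optimal filter for two pulses versus the clutter
# correlation rho, with an assumed zero-Doppler steering vector s = [1, 1]:
# it equals 2 (= n) at rho = 0 and decreases as rho grows.
import numpy as np

s = np.array([1.0, 1.0])
for rho in (0.0, 0.5, 0.9, 0.99):
    M = np.array([[1.0, rho],
                  [rho, 1.0]])
    gain = s @ np.linalg.inv(M) @ s   # = 2 / (1 + rho) for this s
    print(rho, gain)
</code>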
If $\rho \rightarrow 1$, the spectrum of the noise and the spectrum of the useful signal tend to overlap and the improvement tends to zero. Note that the spectral situation in figure {{ref>
Not knowing the phase factor $\Phi_0$ results in a loss of receiver sensitivity that can be quantified by calculating the probability of correct detection in both cases (with and without knowledge of $\Phi_0$).
The figure {{ref>
<figure reciever_sensitivity_phi_known_unknow>
{{ :
</figure>