radar:radardetection — revision of 2018/06/07 14:23 by dicristofaro
Due to the continuous nature of the noise, we can suppose that the dimensionality of $\Omega$ is infinite even if the sizes of //**$M$**// and //**$S$**// are not.
  
Finally, we define the space of the possible decisions //**$D$**//. Note that //**$D$**// and //**$M$**// must have the same dimensions, since the goal of the decision process is to extract the information contained in //**$M$**//.
  
Since the noise is a random process, the law that governs the transition from the space of the events to the space of the observables has a probabilistic behaviour. In particular, if $\vec{X}$ is the vector of the received observables, we define:
  
\begin{equation}
The introduction of the probability distribution to describe the transition from the space of the events to the space of the observables allows us to define decision procedures of two types: parametric and non-parametric.
We talk about parametric decision procedures when the transition probabilities $p(x \mid M_i)$, $i=0,...,m-1$, differ only in the values of one or more parameters. In this case the decision process coincides with finding the unknown parameters.
We talk about //non-parametric decisions// when the different hypotheses imply different behaviours of the //transition probabilities// $p(\vec{X} \mid M_i)$, $i=0,...,m-1$.
An example of parametric decision between the events $M_0$ and $M_1$ could be:
  
Once we have the observables' vector $\vec{X}$ we have to make a decision by interpreting it.
The decision rules allow us to go from the space of the observables $\Omega$ to the space of the decisions $D$.
If the space of the events is continuous, the decision problem becomes an estimation problem.
For example, if $M$ is the space of the possible values of a parameter, to estimate this value we can use decision theory methodologies.
The definition of the decision rules determines a partition of the observables' space. This partition is realized with rules that operate in the following way: let's consider the case when the space of the events $M$ has a cardinality equal to two.
  
  
</figure>
  
The above scheme (figure {{ref>binary_decision_problem}}) considers the particular case of __binary decision on a single observation__, that is, the case when the space of the events $M$ is made of only two elements, $M_0$ and $M_1$; in the figure we omitted for simplicity the space of the signals $S$ because it is supposed to be in a one-to-one (biunivocal) correspondence with the space of the events.
  
Let's suppose we have received the vector $\vec{X}$; the decision rule is:
  
  
__The radar detection procedures are particular cases of the decision problems__ and we are going to analyze them; the scheme in figure {{ref>radar_detection_decision_theory}} represents a radar detection scenario.
From what we have said so far, we have to decide between two events:
  
Note that we can also associate a cost to the correct decisions.
  
Since we operate under uncertainty, we have to refer to the average cost $E(L)$ (the mean of the cost, also called the risk), which in the binary case is defined as:
  
  
  
  
Recall that to establish the decision rules we have to find $\Omega_1$, knowing the weight values $L_{ij}$ and the functions $p(\vec{X} \mid M_i)$.
  
The space $\Omega_1$ is defined such that the result of (11) is minimum. We are going to analyze this problem in the unidimensional case for simplicity:
let's consider the generic function $f(x)$ in figure {{ref>generic_function}}.
  
<figure generic_function>
  
\begin{equation}
l(\vec{X}) \geq \eta \Rightarrow M_1 \text{ hypothesis is true}
\end{equation}
  
\end{equation}
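A minimal numeric sketch of this threshold rule (the Gaussian forms of $p(\vec{X} \mid M_0)$ and $p(\vec{X} \mid M_1)$, and the values of $\mu$, $\sigma$ and $\eta$ below, are illustrative assumptions, not taken from the text):

```python
import math

def gaussian_pdf(x, mean, sigma):
    """Density of N(mean, sigma^2) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio_test(x, eta=1.0, mu=1.0, sigma=1.0):
    """Decide M1 if l(x) = p(x|M_1)/p(x|M_0) >= eta, else M0.
    Assumes p(x|M_0) ~ N(0, sigma^2) and p(x|M_1) ~ N(mu, sigma^2)."""
    l = gaussian_pdf(x, mu, sigma) / gaussian_pdf(x, 0.0, sigma)
    return "M1" if l >= eta else "M0"

print(likelihood_ratio_test(0.9))  # x above mu/2 -> "M1"
print(likelihood_ratio_test(0.1))  # x below mu/2 -> "M0"
```

With $\eta = 1$ and equal variances, the rule reduces to comparing $x$ with $\mu/2$, as discussed later in the text.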
  
Since the main difficulty in applying the Bayes method consists in defining the cost values $L_{ij}$ and in knowing the a priori probabilities $p_0$ and $p_1$, this method is in general not used for radar detection and other methods are preferred.
  
  
\end{equation}
  
and we can obtain (24). The probability error criterion consists in comparing the likelihood ratio with the ratio $p_1 / p_0$.
  
Alternatively, we can derive (24) from (14) by imposing $L_{01} - L_{00} = L_{10} - L_{11}$, which means that the probability error criterion is a particular case of the Bayes criterion.
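To make the role of the costs $L_{ij}$ and of the a priori probabilities concrete, here is a minimal numeric sketch (all cost and probability values are hypothetical) that evaluates the binary risk $E(L) = \sum_i \sum_j L_{ij}\, p_j\, P(D_i \mid M_j)$:

```python
# L[i][j]      = cost of deciding D_i when M_j is true
# prior[j]     = a priori probability p_j of hypothesis M_j
# P_decide[i][j] = probability of deciding D_i given M_j
#                  (fixed by the decision regions Omega_0, Omega_1)

def bayes_risk(L, prior, P_decide):
    """Average cost (risk) E(L) for a binary decision problem."""
    return sum(L[i][j] * prior[j] * P_decide[i][j]
               for i in range(2) for j in range(2))

# Hypothetical values: correct decisions cost 0, a false alarm costs 1,
# a missed detection costs 10.
L = [[0, 10],
     [1,  0]]
prior = [0.9, 0.1]            # p_0, p_1
P_decide = [[0.95, 0.2],      # P(D_0|M_0), P(D_0|M_1)
            [0.05, 0.8]]      # P(D_1|M_0), P(D_1|M_1)

risk = bayes_risk(L, prior, P_decide)
print(risk)  # -> 0.245
```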
  
  
We obtain that the cost is minimum if the decision region for each hypothesis $M_k$, $k=1,2,...,m$, is:
  
  
  
  
Therefore, the MAP decisor works as follows: compute the likelihood function $p(\vec{X} \mid M_k)$ for each hypothesis, multiply it by the a priori probability of that hypothesis, and finally choose the maximum.
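A minimal sketch of this MAP rule (the likelihood and prior values below are placeholders):

```python
def map_decide(likelihoods, priors):
    """MAP decisor: choose the index k that maximizes p(X|M_k) * p_k."""
    posteriors = [lk * p for lk, p in zip(likelihoods, priors)]
    return max(range(len(posteriors)), key=posteriors.__getitem__)

# Hypothetical likelihoods p(X|M_k) for m = 3 hypotheses and priors p_k:
# products are 0.10, 0.15, 0.02, so hypothesis M_1 (index 1) wins.
print(map_decide([0.20, 0.50, 0.10], [0.5, 0.3, 0.2]))  # -> 1
```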
  
<figure map_reciever>
\end{equation}
  
where $\eta$ is a parameter that depends on the costs and on the a priori probabilities. Since we often do not know such values, under the hypothesis of maximum uncertainty we can assume $\eta = 1$ (Maximum Likelihood (ML) criterion).
  
If we assume $\eta = 1$ (maximum uncertainty) we can rewrite the decision rule (29) as:
  
\begin{equation}
  
  
The Bayes, MAP and Neyman-Pearson criteria are equivalent from the algorithmic point of view and are implemented by comparing the likelihood ratio with a suitably chosen threshold; the difference between them is how this threshold value is defined.
  
In general the most used criterion is Neyman-Pearson's.
  
  
This problem is a typical case of a deterministic signal, as we suppose that $\mu$ is constant.
We need to identify the regions $\Omega_0$ and $\Omega_1$. The problem is to find the threshold value $X_T$ that delimits the two decision regions.
  
If we adopt the maximum likelihood criterion (for which $\lambda = 1$) the threshold value is the point $X_T=\mu/2$, which is at the intersection of the two curves defined by $p(x \mid M_0)$ and $p(x \mid M_1)$.
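We can verify numerically that the two equal-variance gaussian curves intersect at $X_T = \mu/2$ (the values of $\mu$ and $\sigma$ below are arbitrary):

```python
import math

def gauss(x, mean, sigma):
    """Density of N(mean, sigma^2) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 2.0, 0.7
x_t = mu / 2
# At x = mu/2 the curves p(x|M_0) = N(0, sigma^2) and p(x|M_1) = N(mu, sigma^2)
# take the same value, so the ML rule switches decision exactly there.
assert abs(gauss(x_t, 0.0, sigma) - gauss(x_t, mu, sigma)) < 1e-12
print("threshold:", x_t)
```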
  
  
\end{equation}
  
where the function $\Phi(x)$ is $\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^2/2} \,dt$.
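In code, $\Phi(x)$ (the standard normal CDF) can be computed from the error function, since $\Phi(x) = \frac{1}{2}\left(1 + \operatorname{erf}(x/\sqrt{2})\right)$:

```python
import math

def Phi(x):
    """Standard normal CDF: (1/sqrt(2*pi)) * integral of exp(-t^2/2), -inf..x."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(Phi(0.0))               # -> 0.5
print(round(Phi(1.6449), 3))  # -> 0.95 (the familiar 95% quantile)
```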
  
  
//Application of Neyman-Pearson's criterion for detection of a fixed target using a single pulse.//
  
Let's suppose that we have a target which produces an echo with a constant amplitude $A$ (the received signal is a sine with amplitude $A$) and that the RMS voltage of the gaussian noise is $\sigma$. We suppose we take a decision on a single pulse received at the output of an envelope detector.
Using the theory illustrated in the previous section we can say that:
  
a) If only noise is present ($M_0$ hypothesis) the probability density function of the noise envelope is a Rayleigh:
  
\begin{equation}
p(x \mid M_0) = \frac{x}{\sigma^2} \, e^{-\frac{x^2}{2\sigma^2}}
\end{equation}
  
  
b) If the target is present ($M_1$ hypothesis) the probability density associated with the envelope of the received signal is of Rice type:
  
\begin{equation}
p(x \mid M_1) = \frac{x}{\sigma^2} \, e^{-\frac{x^2+A^2}{2\sigma^2}} \, I_0\!\left(\frac{A x}{\sigma^2}\right)
\end{equation}
  
  
The decisor will decide for the hypothesis that the target is present (event $M_1$) if we have:
  
\begin{equation}
Note that the calculation indicated in the previous equations is optimal in the sense of the Neyman-Pearson criterion, that is, it gives us the maximum $P_d$.
Since the value of $T$ does not depend on $A$, the procedure is optimal even if $A$ is unknown.
So, in the case of a fixed target with unknown SNR (remember that $SNR = \frac{A^2}{2\sigma^2}$), the Neyman-Pearson procedure is optimal only if the RMS voltage of the noise $\sigma$ is known.
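Since the Rayleigh tail gives $P_{fa} = e^{-T^2/(2\sigma^2)}$, the threshold follows directly from the required false-alarm probability; a quick numeric sketch (the $P_{fa}$ value below is illustrative):

```python
import math

def np_threshold(sigma, pfa):
    """Threshold T with P(envelope > T | noise only) = pfa,
    inverting the Rayleigh tail P_fa = exp(-T^2 / (2 sigma^2))."""
    return sigma * math.sqrt(2.0 * math.log(1.0 / pfa))

sigma = 1.0
T = np_threshold(sigma, 1e-6)          # T ~ 5.26 sigma for P_fa = 1e-6
# Check the inversion: plugging T back returns the requested P_fa.
pfa_back = math.exp(-T ** 2 / (2 * sigma ** 2))
print(round(T, 3), pfa_back)
```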
  
====== Neyman-Pearson (single pulse + SW2 target) ======
//Application of Neyman-Pearson's criterion for detection of a fluctuating target modeled as Swerling 2 using a single pulse.//
  
The Neyman-Pearson criterion can be successfully applied also to the case of a fluctuating target (type SW2), operating on a single pulse and with non-coherent detection.
We assume that $x(t)$ is the amplitude of the received voltage, the sum of the signal of interest $s(t)$ and the noise $n(t)$; $\sigma^2$ is the variance of the noise process and $\sigma^2_s$ the power of the signal.
  
  
  
Since $\frac{1}{\sigma^2} - \frac{1}{s^2} > 0$ and the log function is monotonically increasing, the relation (56) can be expressed by taking the natural log; so we can write the quadratic detector:
  
\begin{equation}
  
  
(Obviously it is possible to take the square root of the last equation and use a linear detector, if we decide on the single pulse).
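For a single pulse on an exponentially fluctuating (SW1/SW2) target with a square-law detector, a classical closed form links the two probabilities as $P_d = P_{fa}^{1/(1+SNR)}$; a quick numeric sketch (the SNR and $P_{fa}$ values are illustrative):

```python
def pd_swerling2_single_pulse(pfa, snr):
    """Single-pulse detection probability for an exponentially fluctuating
    (SW1/SW2) target with a square-law detector:
    P_d = P_fa ** (1 / (1 + SNR)), with SNR as a linear ratio."""
    return pfa ** (1.0 / (1.0 + snr))

snr_db = 13.0
snr = 10.0 ** (snr_db / 10.0)          # ~ 19.95 in linear units
pd = pd_swerling2_single_pulse(1e-6, snr)
print(round(pd, 3))                    # -> 0.517
```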
  
The decision threshold $T$ must be chosen such that:
\end{equation}
  
If we multiply the vector $\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$ by the matrix $P = \frac{\sigma}{\sqrt{2}} \begin{bmatrix} \alpha & \beta  \\ -\alpha & \beta \end{bmatrix}$, where $\alpha = \sqrt{1-\rho} \ \text{,}\ \beta=\sqrt{1+\rho}$, it's easy to verify that the pair of gaussian random variables $(x_1,x_2)$ has a correlation coefficient equal to $\rho$ and that every variable has a variance equal to $\sigma^2$. So we have
  
\begin{equation}
  
  
In this way we can demonstrate that $f(x_i)$, $i=1,2$, is gaussian. Then, from (72), if the cross-correlation coefficient $\rho$ between the two random variables $x_1$ and $x_2$ is null, then
  
\begin{equation}
f(x_1,x_2) = f(x_1) \, f(x_2)
\end{equation}
  
this means that the two random variables are independent. For gaussian random variables, independence and uncorrelatedness coincide.
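The construction above can be checked numerically: with $\alpha=\sqrt{1-\rho}$ and $\beta=\sqrt{1+\rho}$, the product $\boldsymbol{P}\boldsymbol{P}^T$ reproduces the covariance matrix $\sigma^2\begin{bmatrix}1 & \rho \\ \rho & 1\end{bmatrix}$ (the values of $\rho$ and $\sigma$ below are arbitrary):

```python
import math

rho, sigma = 0.6, 2.0
a = math.sqrt(1.0 - rho)   # alpha
b = math.sqrt(1.0 + rho)   # beta
c = sigma / math.sqrt(2.0)
P = [[ c * a, c * b],
     [-c * a, c * b]]

# The covariance of x = P y, with y made of iid N(0,1) components, is P P^T.
cov = [[sum(P[i][k] * P[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]

assert abs(cov[0][0] - sigma ** 2) < 1e-12        # var(x1) = sigma^2
assert abs(cov[1][1] - sigma ** 2) < 1e-12        # var(x2) = sigma^2
assert abs(cov[0][1] - rho * sigma ** 2) < 1e-12  # cov(x1,x2) = rho sigma^2
print(cov)
```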
  
The cross-correlation coefficient $\rho$ gives an idea of the dependence between the two variables. In particular, if $x_1$ and $x_2$ represent, for example, a measurement process, and $\left| \rho \right| \cong 1$, the pairs $(x_1,x_2)$ tend to cluster along a straight line as indicated in figure {{ref>scattering_diagrams}}a.
If instead $\rho=0$ we can have the situation shown in figure {{ref>scattering_diagrams}}b.
  
The characteristic function associated with the bivariate gaussian is:
  
\begin{equation}
<figure scattering_diagrams>
{{ :media:scattering_diagrams.png?nolink |}}
<caption>Scattering diagrams associated with the pairs of random variables: a) correlated; b) uncorrelated</caption>
</figure>
  
$\vec{Y}$ is the vector with independent gaussian components with zero mean and variance equal to one, which means that the density is a negative exponential (up to the normalization constant) and the quadratic form is $\vec{Y}^T \vec{Y} = \sum_{i=1}^{n} y_i^2$;
  
$\boldsymbol{\vec{P}}$ is a coefficient matrix, obtained knowing that $\vec{Y} = \vec{P}^{-1} \vec{X}$ \\
$\vec{Y}^T \vec{Y} = \vec{X}^T \boldsymbol{\vec{P}^{-T}} \boldsymbol{\vec{P}^{-1}} \vec{X} = \vec{X}^T \boldsymbol{Q} \vec{X} $
  
We can repeat the procedure used to obtain equations (70) and (71); the resulting density function is, up to a normalization constant, the exponential of the quadratic form $\vec{X}^T \boldsymbol{\vec{Q}} \vec{X}$, where $\boldsymbol{\vec{Q}} = \boldsymbol{P^{-T}} \boldsymbol{P^{-1}}$.
  
The covariance matrix of $\vec{X}$ (remember that the components of $\vec{Y}$ are uncorrelated) is:
\begin{equation}
\boldsymbol{\vec{M}_{\vec{x}}} = E \left[\vec{X} \vec{X}^T \right] = \boldsymbol{\vec{P}} \boldsymbol{\vec{M}_y} \boldsymbol{\vec{P}^T} = \boldsymbol{\vec{P}} \boldsymbol{\vec{P}^T}
\end{equation}
  
If we take the inverse Fourier transform of equation (84) we obtain the density function of the multivariate gaussian.
  
\begin{equation}
p(\vec{X}) = \frac{1}{(2\pi)^{v/2} \left| \boldsymbol{\vec{M}_{\vec{x}}} \right|^{1/2}} \, e^{-\frac{1}{2} \vec{X}^T \boldsymbol{\vec{M}_{\vec{x}}}^{-1} \vec{X}}
\end{equation}
where $v$ is the number of observations in the vector $\vec{X}$.
  
The multivariate gaussian can also be obtained by generalizing to the //n//-dimensional case the procedure that allowed us to define the bivariate gaussian.
  
We can start from $n$ independent variables $y_i$, $i=1,...,n$ (vector $\vec{Y}$), that have a density function of the form
\end{equation}
  
The covariance matrix of $\vec{Z}$, if the hypothesis of a narrow-band stationary process (see above) with zero mean holds, is equal to
  
\begin{equation}
\boldsymbol{\vec{M}_{\vec{z}}} = E \left[ \vec{Z} \vec{Z}^{*T} \right]
\end{equation}
====== Coherent detector and Discrete-Time optimal processor ======
  
//Application of Neyman-Pearson criterion on a coherent radar detector.//
  
  
and with covariance matrix $\boldsymbol{\vec{M}}$.
  
Note that we do not make the hypothesis that the noise is white: in the radar field there is not only the contribution of the white noise, but there can also be noise caused by clutter echoes (__clutter__: echoes caused by ground, sea, rain, animals/insects, chaff, atmospheric turbulence, etc.) that have some correlation between them; this correlation can be expressed in the covariance matrix $\boldsymbol{\vec{M}}$ by assuming that the matrix is not diagonal.
Anyway, in the case of noise caused by clutter it is right to say that the distribution is Gaussian, at least for some kinds of clutter.
The vector of the observables $\vec{Z}$ is
Note that $\vec{s}$ is the vector associated with the received signal in absence of noise. In the case of white noise (except for a constant factor), $\vec{k} = \vec{s}^{*}$.
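In the white-noise case the operation $\vec{k} = \vec{s}^{*}$ amounts to correlating the received samples with the conjugate of the expected signal; a minimal discrete sketch (the waveform below is a made-up example):

```python
def matched_filter_output(z, s):
    """Discrete matched filter for white noise: y = sum_i conj(s_i) * z_i.
    z: received complex samples, s: expected signal samples."""
    return sum(si.conjugate() * zi for si, zi in zip(s, z))

# Hypothetical expected signal and two received vectors:
s = [1 + 0j, 0 + 1j, -1 + 0j]
z_signal = s                        # signal present, no noise
z_other = [0 + 1j, 1 + 0j, 0 - 1j]  # a different waveform

print(abs(matched_filter_output(z_signal, s)))  # -> 3.0 (coherent sum, |s|^2)
print(abs(matched_filter_output(z_other, s)))   # -> 1.0 (mismatched waveform)
```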
  
If the vector $\vec{s}$ is considered as the impulse response of a discrete FIR filter, there is a filtering operation that provides an output value at a precise instant. The filter described by the vector $\vec{s}^{*}$ takes the form of a discrete-time filter matched to the signal waveform coming from the target (in the case of white noise). In the continuous case, if $s(t)$ is the waveform coming from the target, the filter matched to this waveform (remember that this filter allows us to obtain the maximum signal/noise ratio at the output) is expressed as
  
\begin{equation}
h(t) = s^{*}(t_0 - t)
\end{equation}
In the case where the useful signal is "white" (this corresponds to the standard definition of the improvement factor), $\boldsymbol{\vec{M_s}} = \boldsymbol{\vec{I}}$, solving the optimal filtering problem is equivalent to finding the eigenvalues of the covariance matrix of the noise (clutter + thermal noise) $\boldsymbol{\vec{M}}$, and in particular to finding the vector of optimal coefficients $\vec{h_0}$, which is the eigenvector of $\boldsymbol{\vec{M}}$ associated with the minimum eigenvalue.
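As a small numeric check (with a hypothetical 2×2 clutter covariance): for $\boldsymbol{M}=\begin{bmatrix}1 & \rho \\ \rho & 1\end{bmatrix}$ the eigenvalues are $1\pm\rho$, and the eigenvector associated with the minimum eigenvalue $1-\rho$ is proportional to $[1,\,-1]$:

```python
import math

rho = 0.9
M = [[1.0, rho],
     [rho, 1.0]]

lam_min = 1.0 - rho
h0 = [1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]  # unit-norm candidate h_0

# Verify M h0 = lam_min * h0, i.e. h0 is the minimum-eigenvalue eigenvector.
Mh = [sum(M[i][j] * h0[j] for j in range(2)) for i in range(2)]
assert all(abs(Mh[i] - lam_min * h0[i]) < 1e-12 for i in range(2))
print(lam_min)
```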
  
Knowing that, we can state the following theorem:
  
  
  
  
Equation (148) is generally utilized in meteorological radars to represent the spectrum of rain phenomena.
If we take the inverse transform of (148) we obtain the expression of the correlation coefficient $\frac{R(\tau)}{R(0)}$, to which we will refer as $\rho(\tau)$
  
  
  
Since the white thermal noise contribution is uncorrelated with the clutter contribution (produced for example by atmospheric agents), it can be added in power. Consequently the covariance matrix $\boldsymbol{\vec{M}}$ of the overall noise process ($\sigma_n^2$ for the noise, $\sigma_c^2$ for the clutter and $\rho_c$ as correlation coefficient) is:
  
  
\end{equation}
  
Using (135), the optimal filter coefficients are
  
 \begin{equation} \begin{equation}
\end{equation}
  
Knowing the vector of the observed samples $\vec{z}^T = \begin{bmatrix} z_1 \ & \ z_2 \end{bmatrix}$, the output of the filter is
  
\begin{equation}
y = \vec{h_0}^T \vec{z}
\end{equation}
We observe that the improvement obtained is lower compared to the case when we have only white noise; this is caused by the contribution of the coloured noise.
The lower the coefficient $\rho$, the bigger the improvement: if $\rho = 0$ we have an improvement equal to 2 (in general equal to $n$).
If $\rho \rightarrow 1$, the spectrum of the noise and the spectrum of the useful signal tend to overlap and the improvement tends to zero. Note the spectral situation in figure {{ref>spectrum_noise_signal_doppler}}, where we suppose that the useful signal has a doppler frequency $f_D$ such that the spectral line of the signal does not overlap with the spectrum of the noise. In this case, the output gain improvement of the optimal linear processor can be very high, due to the fact that the spectra do not overlap.
Not knowing the phase factor $\Phi_0$ results in a loss of sensitivity of the receiver, which can be quantified by calculating the probability of correct detection in both cases ($\Phi_0$ known and unknown).
  
Figure {{ref>reciever_sensitivity_phi_known_unknow}} represents the probability of detection in the two cases, $\Phi_0$ known (continuous curve) and $\Phi_0$ unknown (dotted curve), and we can see that for the same SNR, if $\Phi_0$ is unknown, the probability of detection is lower. The variation of the signal/noise ratio $\Delta SNR$ between the two cases, for the same probability of detection (and the same $p_{fa}$), indicates the loss of sensitivity of the receiver.
  
  
<figure reciever_sensitivity_phi_known_unknow>
{{ :media:reciever_sensitivity_phi_known_unknow.png?nolink |}}
<caption>Receiver sensitivity in the two cases: $\Phi_0$ known (continuous curve) and $\Phi_0$ unknown (dotted curve)[(cite:RadarBook)]</caption>
</figure>
  