In mathematical statistics, the characteristic function of any probability distribution on the real line is given by the following formula, where X is any random variable with the distribution in question.

$\phi_{X}(t) = E(e^{itX})$

Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function calculated from the data. If a random variable admits a probability density function, then the characteristic function is the inverse Fourier transform of the probability density function. It thus provides the basis for an alternative route to analytical results, compared with working directly with probability density functions or cumulative distribution functions.
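Concretely, when X has a density f$_{X}$, the two functions form a Fourier transform pair (a standard fact, recorded here for reference):

$\phi_{X}(t) = \int_{-\infty}^{\infty} e^{itx} f_{X}(x)\, dx$ and $f_{X}(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi_{X}(t)\, dt$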

There are particularly simple results for the characteristic functions of distributions defined by weighted sums of random variables.
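For example, if X$_{1}$, ..., X$_{n}$ are independent and S = a$_{1}$X$_{1}$ + ... + a$_{n}$X$_{n}$ is a weighted sum with real weights, then

$\phi_{S}(t) = \prod_{i=1}^{n} \phi_{X_{i}}(a_{i}t)$

so the characteristic function of S is available whenever those of the individual X$_{i}$ are.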


There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.

A characteristic function is simply the Fourier transform of the distribution, expressed in probabilistic language. It serves as an important tool for analyzing random phenomena.

Let X be a random variable and let i = $\sqrt{-1}$ be the imaginary unit. The function $\phi_{X}$ from R to C defined by

$\phi_{X}$(t) = E[exp(i tX)]

is called the characteristic function of X.

$\phi_{X}$(t) exists for any t. This is proved below.

$\phi_{X}$(t) = E[exp(i tX)]

= E[cos(tX) + i sin(tX)]

= E[cos(tX)] + i E[sin(tX)]

The sine and cosine functions take values in the interval [-1, 1] and are therefore bounded, so the last two expected values are well defined. Hence $\phi_{X}$(t) exists for every t.
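As a quick numerical illustration (a minimal sketch, not part of the original argument, assuming NumPy is available), the two expected values above can be estimated by sample means and compared with the known characteristic function e$^{-t^{2}/2}$ of the standard normal distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # samples of X ~ N(0, 1)

t = 1.5
# phi_X(t) = E[cos(tX)] + i E[sin(tX)], estimated by sample means
phi_hat = np.mean(np.cos(t * x)) + 1j * np.mean(np.sin(t * x))

# For the standard normal, phi_X(t) = exp(-t^2 / 2) exactly
phi_exact = np.exp(-t**2 / 2)
print(phi_hat, phi_exact)  # the estimate should be close to the exact value
```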
Characteristic functions are useful for learning a great deal about the random variables they correspond to. We start
with some properties which follow directly from the definition:


Let X, Y and {X$_{n}$}$_{n \in N}$ be random variables.

1) $\phi_{X}$(0) = 1 and |$\phi_{X}$(t)| $\leq$ 1, for all t

2) For all $\epsilon$ > 0 there exists a $\delta$ > 0 such that |$\phi$(t) - $\phi$(s)| $\leq$ $\epsilon$ whenever |t - s| $\leq$ $\delta$. Therefore $\phi$ is uniformly continuous.

3) The characteristic function of a + bX is e$^{iat}$ $\phi_{X}$(bt).

4) The characteristic function of -X is the complex conjugate $\bar{\phi}_{X}$(t).

5) The characteristic function of the convolution F * G of two distributions is the product $\phi_{F}$(t) $\phi_{G}$(t).

6) A characteristic function $\phi$ is real valued if and only if the distribution of the corresponding random variable X is symmetric about zero.
In mathematical form, this statement is equivalent to

P[X > z] = P[X < -z] for all z $\geq$ 0.

7) If X and Y are independent then $\phi_{X + Y}$ = $\phi_{X}$$\phi_{Y}$

8) If X$_{n}$ tends to X in distribution, then $\phi_{X_{n}}$(t) tends to $\phi_{X}$(t) for each t $\in$ R.

9) The tail behavior of the characteristic function determines the smoothness of the corresponding density function.
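As a worked check of several of these properties, take X standard normal, with $\phi_{X}(t) = e^{-t^{2}/2}$. Then $\phi_{X}(0) = 1$ and $|\phi_{X}(t)| \leq 1$ (property 1), $\phi_{X}$ is real valued and the N(0, 1) distribution is symmetric about zero (property 6), and if Y is an independent copy of X then $\phi_{X+Y}(t) = e^{-t^{2}}$, which is the characteristic function of N(0, 2) (property 7).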
Characteristic functions can be utilized as part of procedures for fitting probability distributions to samples of data. A case where this provides a practicable option compared with the alternatives is fitting the stable distributions, since closed-form expressions for the density are not available, which makes implementation of maximum likelihood estimation difficult.

Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function calculated from the data.
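A minimal sketch of this idea (illustrative only; the function names and the squared-distance objective below are choices made here, not a prescribed method) computes the empirical characteristic function $\hat{\phi}_{n}(t) = \frac{1}{n}\sum_{j=1}^{n} e^{itX_{j}}$ on a grid of t values and scores a candidate parametric characteristic function against it:

```python
import numpy as np

def ecf(data, t):
    """Empirical characteristic function: (1/n) * sum_j exp(i t X_j)."""
    return np.mean(np.exp(1j * np.outer(t, data)), axis=1)

def cf_distance(data, model_cf, t_grid):
    """Squared distance between a model CF and the empirical CF on a grid."""
    return np.sum(np.abs(model_cf(t_grid) - ecf(data, t_grid)) ** 2)

# Example: score a symmetric alpha-stable CF, exp(-|c t|^alpha), against
# Cauchy data (the Cauchy distribution is stable with alpha = 1, c = 1).
data = np.random.default_rng(1).standard_cauchy(5000)
t_grid = np.linspace(-5, 5, 101)
stable_cf = lambda t, alpha=1.0, c=1.0: np.exp(-np.abs(c * t) ** alpha)
print(cf_distance(data, stable_cf, t_grid))  # small when parameters match
```

Minimizing such a distance over the parameters gives a characteristic-function analogue of least-squares fitting.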
The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; note that some authors instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function. Characteristic functions are particularly useful for dealing with linear functions of independent random variables.
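For instance, for X ~ N($\mu$, $\sigma^{2}$) we have $\phi_{X}(t) = e^{i\mu t - \sigma^{2}t^{2}/2}$, so $\log \phi_{X}(t) = \mu(it) + \frac{\sigma^{2}}{2!}(it)^{2}$; reading off the coefficients of (it)$^{n}$/n! gives the cumulants $\kappa_{1} = \mu$, $\kappa_{2} = \sigma^{2}$, and $\kappa_{n} = 0$ for n $\geq$ 3.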
Consider X = $\begin{bmatrix} X_{1}\\ X_{2}\\ X_{3}\\ \vdots \\ X_{n}\end{bmatrix}$ and t = $\begin{bmatrix} t_{1}\\ t_{2}\\ t_{3}\\ \vdots \\ t_{n}\end{bmatrix}$, t real, be n $\times$ 1 column vectors.

Then the characteristic function of X is defined as

$\phi_{X}$(t) = E(e$^{it'X}$)

= E(e$^{i(t_{1}X_{1} + t_{2}X_{2} + t_{3}X_{3} + \cdots + t_{n}X_{n})}$)

We may also write it as

$\phi_{X_{1}, X_{2}, X_{3}, \ldots, X_{n}}(t_{1}, t_{2}, t_{3}, \ldots, t_{n})$

This is also equivalent to $\phi_{X}(t_{1}, t_{2}, t_{3}, \ldots, t_{n})$.
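In particular, if the components X$_{1}$, ..., X$_{n}$ are independent, the joint characteristic function factorizes as $\phi_{X}(t_{1}, \ldots, t_{n}) = \prod_{j=1}^{n} \phi_{X_{j}}(t_{j})$.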
Uniqueness theorem of characteristic function

The characteristic function uniquely determines the distribution. A necessary and sufficient condition for two distributions with pdfs f$_{1}$(.) and f$_{2}$(.) to be identical is that their characteristic functions $\phi_{1}$(t) and $\phi_{2}$(t) are identical.
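As an application, if X ~ N($\mu_{1}$, $\sigma_{1}^{2}$) and Y ~ N($\mu_{2}$, $\sigma_{2}^{2}$) are independent, then $\phi_{X+Y}(t) = e^{i(\mu_{1}+\mu_{2})t - (\sigma_{1}^{2}+\sigma_{2}^{2})t^{2}/2}$, which is the characteristic function of N($\mu_{1}+\mu_{2}$, $\sigma_{1}^{2}+\sigma_{2}^{2}$); by the uniqueness theorem, X + Y has exactly that distribution.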

The closely related moment generating function is M$_{X}$(t) = E(e$^{tX}$); where both exist, the two are connected by M$_{X}$(t) = $\phi_{X}$(-it).