**Single Exponential Smoothing**

**Simple or Single Exponential Smoothing** can be thought of as a Moving Average which assigns more weight to the more recent observations than to the earlier ones, thereby reducing the lag, but it is less effective when the series has a trend.

➡ To start a **Single Exponential Smoothing** procedure, you start out by averaging the first 10 observations of the **Time Series**, a kind of Moving Average technique to kick-start the *Exponential Smoothing*.

➡ You will need to calculate the Weighting Multiplier or Smoothing Constant **Alpha (α)** before you can use the formula to calculate the smoothed values for the **Time Series**:

**Alpha (α) = 2/(initial average period + 1)**

Meaning, if the initial average made use of 8 periods, then **Alpha (α)** is given by

**α = 2/(8 + 1) ≈ 0.22**

The closer **Alpha (α)** is to one **(1)**, the quicker the decay of the weight on older observations. The choice of **Alpha (α)** is up to you, but a 10-day initial average is most suitable for a series that is not too large.
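As a quick sketch of the rule above (plain Python, using the periods mentioned in the text):

```python
def smoothing_constant(n_periods):
    """Alpha = 2 / (N + 1), where N is the initial-average period."""
    return 2 / (n_periods + 1)

print(round(smoothing_constant(8), 2))    # 8-period start -> 0.22
print(round(smoothing_constant(10), 2))   # 10-day start -> 0.18
```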
⬇⬇

Given a series **Y(t)** taken at time **(t)**, which begins at time **(t=0)**, the smoothed series is usually denoted by **S(t)** and represented by the mathematical model

**S(t) = αY(t-1) + (1 − α)S(t-1)**

where **0 < α < 1** and **t = 0, 1, ...**
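The recursion can be sketched in a few lines of plain Python. The 10-observation seed follows the kick-start described earlier; the function name itself is an illustrative choice, not a standard one:

```python
def single_exponential_smoothing(y, alpha, n_init=10):
    """Smooth the series y with S(t) = alpha*Y(t-1) + (1 - alpha)*S(t-1).

    The first smoothed value is seeded with the average of the
    first n_init observations, as described above.
    """
    if not 0 < alpha < 1:
        raise ValueError("alpha must lie strictly between 0 and 1")
    s = sum(y[:n_init]) / n_init             # kick-start: simple average
    smoothed = [s]
    for y_prev in y[n_init:]:                # each step uses the previous observation
        s = alpha * y_prev + (1 - alpha) * s
        smoothed.append(s)
    return smoothed
```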

The **Single Exponential Smoothing** procedure is much different from the **Moving Average** in that, when we take a 10-day period to start the smoothing process, the first calculated average will be exactly the same as the **Simple Moving Average** result, but the second smoothed value will take into consideration all the previous values (making it 11), while the Moving Average will only take into consideration the latest 10 observations in the series, forgetting the first.
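The contrast can be seen in a short sketch; the sample data and the 3-period start (standing in for the 10-day one, to keep the output small) are illustrative assumptions:

```python
def moving_average(y, n):
    """Simple moving average: only the latest n observations ever count."""
    return [sum(y[i - n:i]) / n for i in range(n, len(y) + 1)]

def exp_smoothed(y, alpha, n):
    """Exponential smoothing seeded with the first n-period average:
    older observations fade but never drop out entirely."""
    s = sum(y[:n]) / n
    out = [s]
    for y_prev in y[n:]:
        s = alpha * y_prev + (1 - alpha) * s
        out.append(s)
    return out

data = [10.0, 12.0, 11.0, 15.0, 14.0]
# The very first values agree, as described above; later values diverge.
print(moving_average(data, 3)[0], exp_smoothed(data, 0.5, 3)[0])  # 11.0 11.0
```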
⬇⬇

**Reasons Why It Is Called Exponential**

Looking at the Single Exponential Smoothing model

**S(t) = αY(t-1) + (1 − α)S(t-1)**

We could expand the model by substituting for S(t-1):

**S(t) = αY(t-1) + (1 − α)[αY(t-2) + (1 − α)S(t-2)]**

⬇

**S(t) = αY(t-1) + α(1 − α)Y(t-2) + (1 − α)²S(t-2)**

⬇

Substituting for **S(t-2)**:

**S(t) = αY(t-1) + α(1 − α)Y(t-2) + (1 − α)²[αY(t-3) + (1 − α)S(t-3)]**

⬇

**S(t) = αY(t-1) + α(1 − α)Y(t-2) + α(1 − α)²Y(t-3) + (1 − α)³S(t-3)**

We could go on expanding the series for **S(t-3), S(t-4)**, and so on. The weights assigned to previous observations are in general proportional to the terms of the geometric progression {1, (1 − α), (1 − α)², (1 − α)³, ...}, and a geometric progression is the discrete version of an exponential function.
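The geometric pattern of the weights can be made visible numerically (α = 0.3 is an arbitrary choice for illustration):

```python
alpha = 0.3
# After full expansion, the weight on Y(t-1-k) is alpha * (1 - alpha)**k
weights = [alpha * (1 - alpha) ** k for k in range(5)]
print([round(w, 4) for w in weights])   # [0.3, 0.21, 0.147, 0.1029, 0.072]
# Constant successive ratios (1 - alpha) are the mark of a geometric progression
print([round(weights[k + 1] / weights[k], 2) for k in range(4)])  # [0.7, 0.7, 0.7, 0.7]
```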

**Optimization: Knowing The Right Choice Of Alpha (α)**

When we use this model, we need to choose the **Alpha (α)** that minimizes the MSE, which might be hectic if calculated manually, but if done through statistical software we could apply the proven trial-and-error method: an iterative procedure beginning with a range of **Alpha (α)** between 0.1 and 0.9, picking the one which minimizes the MSE.
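The iterative search can be sketched directly. The sample data and the one-step-ahead definition of the error (forecast each observation with the current smoothed value, then update) are illustrative assumptions:

```python
def mse_for_alpha(y, alpha):
    """One-step-ahead mean squared error of the smoothing recursion,
    seeded with the first observation."""
    s = y[0]
    sq_errors = []
    for obs in y[1:]:
        sq_errors.append((obs - s) ** 2)     # error of forecasting obs with s
        s = alpha * obs + (1 - alpha) * s
    return sum(sq_errors) / len(sq_errors)

data = [3.0, 5.0, 9.0, 20.0, 12.0, 17.0, 22.0, 23.0, 51.0, 41.0]
grid = [a / 10 for a in range(1, 10)]        # 0.1, 0.2, ..., 0.9
best = min(grid, key=lambda a: mse_for_alpha(data, a))
print(best, round(mse_for_alpha(data, best), 2))
```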
Other methods like the *Marquardt procedure* can be used to obtain a better choice for **Alpha (α)**.

**Forecasting Using The Single Exponential Smoothing Or Exponential Moving Average**

The forecast model for the Single Exponential Smoothing or Exponential Moving Average is given by

**S(t+1) = αY(t) + (1 − α)S(t)**

where **0 < α < 1** and **t > 0**

We can rewrite it as

**S(t+1) = S(t) + αe(t)**

where **e(t) = Y(t) − S(t)** is the error term, the difference between the actual observation and its smoothed value at time t.
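The equivalence of the two forms is easy to confirm numerically (the values here are arbitrary):

```python
alpha, y_t, s_t = 0.2, 25.0, 22.0
direct = alpha * y_t + (1 - alpha) * s_t       # S(t+1) = alpha*Y(t) + (1 - alpha)*S(t)
e_t = y_t - s_t                                # e(t): actual minus smoothed
error_form = s_t + alpha * e_t                 # S(t+1) = S(t) + alpha*e(t)
print(round(direct, 2), round(error_form, 2))  # 22.6 22.6
```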

⬇

**Bootstrapping**

Using the model to forecast can become a problem if there are no further data points. Suppose we set alpha (**α**) to 0.1. Given the last observation and its smoothed value, say 20 and 20.35, we can compute the next value using the forecast model:

**S(t+1) = 0.1(20) + (1 − 0.1)(20.35) = 20.315 ≈ 20.32**

This process is called Bootstrapping. Because each new forecast keeps reusing the last known observation, it will give you a series that gradually decreases toward that observation, while normal forecasting, which takes in each actual observation as it arrives, will give you a more accurate forecast.
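The worked example above can be reproduced in a couple of lines (α = 0.1, last observation 20, last smoothed value 20.35, as in the text):

```python
alpha, last_obs, last_smooth = 0.1, 20.0, 20.35
forecast = alpha * last_obs + (1 - alpha) * last_smooth   # 0.1*20 + 0.9*20.35
print(round(forecast, 3))    # 20.315, i.e. about 20.32
# Each further bootstrapped step keeps reusing the last known observation,
# so the forecasts decrease gradually toward it:
forecast2 = alpha * last_obs + (1 - alpha) * forecast
print(round(forecast2, 4))   # about 20.28
```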

⏩

**Brown's Linear Exponential Smoothing (LES) or Brown's Double Exponential Smoothing**
