# LSTM - Echo Sequence Prediction Problem (Vanilla LSTM)

This is a simple implementation of a basic LSTM (also called the vanilla LSTM) in Keras. It is a two-layer model: a single LSTM layer followed by a final Dense layer. The model is trained on the **Echo Sequence Prediction** problem.

For other blogs on LSTMs: Sequence Models, LSTM, GRU

### Inputs to LSTM:

The LSTM takes a 3-dimensional input with the following dims:

- Samples (rows in data)
- Time Steps (lag variables or past observations)
- Features (columns in the data)

The first dimension does not need to be defined explicitly in the model, while the other two (time steps and features) need to be provided:

```
model.add(LSTM(25, input_shape=(length, n_features)))
```

**Reshaping**

The input data needs to be reshaped into three dimensions.

```
# for just one training example (num_samples=1)
X = encoded.reshape((1, length, n_features))
```
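For instance, a minimal NumPy sketch of this reshape, with hypothetical sizes for `length` and `n_features`:

```python
import numpy as np

# hypothetical sizes: 5 time steps, 10 features
length, n_features = 5, 10

# a one-hot encoded sequence has shape (time steps, features)
encoded = np.eye(n_features)[np.random.randint(0, n_features, length)]
print(encoded.shape)  # (5, 10)

# add the leading samples dimension expected by the LSTM
X = encoded.reshape((1, length, n_features))
print(X.shape)  # (1, 5, 10)
```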

### Echo Sequence Prediction - Problem statement

The echo sequence prediction problem is a contrived task for demonstrating the memory capability of the vanilla LSTM. Given a sequence of random integers as input, the model must output the value at a specific time step that is not revealed to it.

For example, given the input sequence of random integers [5, 3, 2] with the second value as the chosen time step, the expected output is 3. Technically, this is a sequence classification problem; it is formulated as a many-to-one prediction problem, with multiple input time steps and one output time step at the end of the sequence.
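A minimal plain-Python sketch of this mapping (the index here is illustrative; the model itself is never told it):

```python
# the hidden target index: fixed across examples, but never given to the model
sequence = [5, 3, 2]
out_index = 1  # second value (0-based)

# the target is simply the value at that time step
y = sequence[out_index]
print(y)  # 3
```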

## Code - Github Link

### Creating Data

The functions for generating sequential data and one-hot encoding it are provided in the code. The following snippet generates the data:

```
# generate one example for an lstm
def generate_example(length, n_features, out_index):
    # generate sequence
    sequence = generate_sequence(length, n_features)
    # one hot encode
    encoded = one_hot_encode(sequence, n_features)
    # reshape sequence to be 3D
    X = encoded.reshape((1, length, n_features))
    # select output
    y = encoded[out_index].reshape(1, n_features)
    return X, y

X, y = generate_example(25, 100, 2)
print(X.shape)
print(y.shape)
```

Once sequences are generated and encoded, they are reshaped into three-dimensional arrays as mentioned above: the number of samples is 1, the number of time steps is the sequence length (length), and the third dimension (features) is n_features.

### Running Model

```
from keras.models import Sequential
from keras.layers import LSTM, Dense

# define model
length = 5
n_features = 10
out_index = 2
model = Sequential()
model.add(LSTM(25, input_shape=(length, n_features)))
model.add(Dense(n_features, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
print(model.summary())

# fit model on a new random example each iteration
for i in range(10000):
    X, y = generate_example(length, n_features, out_index)
    model.fit(X, y, epochs=1, verbose=2)
```
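After training, the model's predictions come back as softmax vectors over the n_features classes, so they need to be decoded back to integers before comparing with the input sequence. A sketch of such a decode helper (the name `one_hot_decode` is assumed here, not part of the snippet above):

```python
from numpy import array, argmax

def one_hot_decode(encoded_seq):
    # map each one-hot (or softmax) vector back to its integer class
    return [int(argmax(vector)) for vector in encoded_seq]

# hypothetical softmax output for one example over 3 classes
yhat = array([[0.1, 0.7, 0.2]])
print(one_hot_decode(yhat))  # [1]
```

Applied to the model above, `one_hot_decode(model.predict(X))` would recover the predicted integer, which can then be checked against the value at `out_index` in the input sequence.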

Reference: Jason Brownlee’s awesome work at www.machinelearningmastery.com. Read his blog regularly if you want to gain expertise in machine learning.