- Sequence models: focus on time series (there are others): stock prices, weather, ...
- At the end, we want to model sunspot activity cycles, which are important to NASA and other space agencies.
- We use RNNs on time series data.
📙 Notebook: introduction to time series. + explaining video. → How to create synthetic time series data + plot them.
- Time series are everywhere: stock prices, weather forecasts, historical trends (Moore's law), ...
- Univariate TS and Multivariate TS.
- Types of things we can do with ML over TS:
- Anything that has a time factor can be analysed as a TS.
- Prediction / forecasting (e.g. births & deaths in Japan -> predict future needs for retirement, immigration, impacts, ...).
- Imputation: project back into the past.
- Fill holes in the data.
- Anomaly detection (e.g. website attacks).
- Spot patterns (e.g. speech recognition).
- Common patterns in TS:
- Trend: a specific direction that the series is moving in.
- Seasonality: patterns repeat at predictable intervals (e.g. active users of a website).
- Combination of both trend and seasonality.
- Stationary TS.
- Autocorrelated TS: the time series is linearly related to a lagged version of itself. There is no trend and no seasonality.
- Multiple autocorrelations.
- A series may be trend + seasonality + autocorrelation + noise (see the sketch below).
- Non-stationary TS.
In this case, we train only on the later (more recent) data to predict the future, not on the whole series.
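A minimal sketch of generating such a synthetic series (trend + seasonality + noise), following the helper functions used in the course notebook:

import numpy as np
import matplotlib.pyplot as plt

def trend(time, slope=0.0):
    return slope * time

def seasonal_pattern(season_time):
    # An arbitrary pattern that repeats once per period
    return np.where(season_time < 0.4,
                    np.cos(season_time * 2 * np.pi),
                    1 / np.exp(3 * season_time))

def seasonality(time, period, amplitude=1, phase=0):
    season_time = ((time + phase) % period) / period
    return amplitude * seasonal_pattern(season_time)

def noise(time, noise_level=1, seed=None):
    rnd = np.random.RandomState(seed)
    return rnd.randn(len(time)) * noise_level

time = np.arange(4 * 365 + 1, dtype="float32")
series = 10 + trend(time, 0.05) + seasonality(time, period=365, amplitude=40) + noise(time, noise_level=5, seed=42)

plt.plot(time, series)
plt.show()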
- Fixed partitioning (what this course focuses on): splitting the TS data into a training period, a validation period, and a test period (sketch below).
- If the TS is seasonal, we want each period to contain a whole number of seasons.
- We can split + train + test to get a model, and then re-train on the data that also contains the test period so that the model is optimized. In that case, the test set effectively comes from the future.
- Roll-forward partitioning: we start with a short training period and gradually increase it (1 day at a time, or 1 week at a time). At each iteration, we train the model on the training period and use it to forecast the following day/week in the validation period. This amounts to doing fixed partitioning a number of times.
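A minimal sketch of fixed partitioning (the split_time value is arbitrary; time and series come from the synthetic data above):

split_time = 1000  # end of the training period
time_train, x_train = time[:split_time], series[:split_time]
time_valid, x_valid = time[split_time:], series[split_time:]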
For evaluating models:
import numpy as np

errors = forecasts - actual

# Mean squared error (squaring gets rid of negative values)
# e.g. used when large errors are potentially dangerous
mse = np.square(errors).mean()
# Get back to the same scale as the errors
rmse = np.sqrt(mse)

# Mean absolute error (his favorite)
# This doesn't penalize large errors as much as MSE does;
# used when the loss is proportional to the size of the error
mae = np.abs(errors).mean()

# Mean absolute percentage error:
# gives an idea of the size of the errors compared to the values
mape = np.abs(errors / x_valid).mean()
# MAE with TF
tf.keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()
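The naive_forecast above simply predicts that the next value equals the current one; a sketch (reusing the split_time split from earlier):

# Naive forecast: the series shifted by one step, aligned with x_valid = series[split_time:]
naive_forecast = series[split_time - 1:-1]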
📙 Notebook: Forecasting. + explaining video.
Moving average: a simple forecasting method. Calculate the average of the series values within a fixed "averaging window".
- This eliminates a lot of noise but doesn't anticipate trend or seasonality.
- Depending on the "averaging window", it can give worse results than a naive forecast.
def moving_average_forecast(series, window_size):
    """Forecasts the mean of the last few values.
    If window_size=1, then this is equivalent to naive forecast"""
    forecast = []
    for time in range(len(series) - window_size):
        forecast.append(series[time:time + window_size].mean())
    return np.array(forecast)
Differencing: remove the trend and seasonality from the TS. Instead of studying the series itself, we study the difference between each point and the point one period earlier (e.g. series(t) - series(t - 365)).
The forecasts above still contain noise (because when we add back the values from the past period, we also add back their noise). We can remove that past noise by applying a moving average to the past values as well (sketch below).
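A sketch of differencing with a one-year period, a moving average over the differenced series, and then adding back the (optionally smoothed) values from one year earlier. The index arithmetic roughly follows the course notebook and assumes daily data with the split_time split from earlier:

# Difference with the value 365 days earlier
diff_series = (series[365:] - series[:-365])

# Moving average of the differenced series, aligned with the validation period
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]

# Add back the value from one year earlier
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg

# Add back a *smoothed* version of the past values instead, to remove their noise
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg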
Keep in mind, before reaching for Deep Learning: sometimes simple approaches just work fine!
- We need to split our TS data into features and labels so that we can use it in ML algorithms.
- In this case: features = a window of values in the TS, label = the next value.
- We choose a window size, take that many values as features, and train the model to predict the next value.
- Ex: 30 days of values as features and the next value as the label.
- Over time, the ML model is trained to map those 30 features to the single label.
📙 Notebook: Preparing features and labels.
👉 Video explains how to split to features and labels from dataset.
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    dataset = tf.data.Dataset.from_tensor_slices(series)
    # Windows of window_size + 1 values (features + label), shifted by 1 step each time
    dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
    dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
    # Shuffle the windows and split each one into (features, label)
    dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
    dataset = dataset.batch(batch_size).prefetch(1)
    return dataset
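A quick usage sketch to see what windowed_dataset produces (the toy series and sizes are arbitrary):

dataset = windowed_dataset(np.arange(10), window_size=4, batch_size=2, shuffle_buffer=10)
for x, y in dataset:
    print(x.numpy(), "->", y.numpy())
# Each row of x is a window of 4 consecutive values; the matching y is the value that follows it.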
Sequence bias is when the order of things can impact the selection of things. It's OK to shuffle the training windows, and doing so helps avoid it!
📙 Notebook: Single layer NN + video explains it.
# Simple linear regression (1-layer NN)
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
l0 = tf.keras.layers.Dense(1, input_shape=[window_size])
model = tf.keras.models.Sequential([l0])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-6, momentum=0.9))
model.fit(dataset, epochs=100, verbose=0)
print("Layer weights {}".format(l0.get_weights()))

forecast = []

for time in range(len(series) - window_size):
    forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
    # np.newaxis: reshape the window to the input shape expected by the model

forecast = forecast[split_time - window_size:]
results = np.array(forecast)[:, 0, 0]
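A short usage sketch to measure the forecast quality on the validation period (reusing x_valid and results from above):

print(tf.keras.metrics.mean_absolute_error(x_valid, results).numpy())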
📙 Notebook: DNN with TS + video explains it.
# A way to choose an optimal learning rate:
# slowly increase the learning rate each epoch and record the loss
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss="mse", optimizer=optimizer)
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule], verbose=0)
# Plot the loss against the learning rate used at each epoch
lrs = 1e-8 * (10 ** (np.arange(100) / 20))
plt.semilogx(lrs, history.history["loss"])
plt.axis([1e-8, 1e-3, 0, 300])
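After reading the plot, pick a learning rate near the bottom of the stable region (the exact value, e.g. somewhere around 1e-5, depends on your plot) and re-train with it:

optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9)
model.compile(loss="mse", optimizer=optimizer)
history = model.fit(dataset, epochs=500, verbose=0)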
📙 Notebook: DNN with synthetic TS.
- An RNN is a NN containing recurrent layers.
- The difference from a DNN is that the input shape is 3-dimensional: batch_size x #time_steps x #dims_of_the_input_at_each_time_step.
- The same memory cell is re-used at every time step (in this course).
- Suppose a window size of 30 time steps and a batch size of 4: the input shape will be 4x30x1, and the memory cell input at each step will be a 4x1 matrix.
- If the memory cell comprises 3 neurons, then the output matrix at each step will be 4x3, and the full output of the layer will be 4x30x3 (see the shape-check sketch after the code below).
- In a simple RNN, the state passed to the next time step is just a copy of the output (e.g. H_0 is just a copy of Y_0).
- Below figure: the layer takes a sequence as input and also outputs a sequence.
- Sometimes we only want to input a sequence without outputting one at every step. This is called a sequence-to-vector RNN, i.e. we ignore all of the outputs except the last one. In tf.keras, this is the default setting!
# Check the figure below as an illustration
model = tf.keras.models.Sequential([
    tf.keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1]),
    # input_shape:
    #   TF assumes the 1st dim is the batch size -> it can be any size -> no need to define it
    #   None -> number of time steps; None means the RNN can handle sequences of any length
    #   1 -> univariate TS
    tf.keras.layers.SimpleRNN(20),
    # if `return_sequences=True` were also set here -> sequence-to-sequence RNN
    tf.keras.layers.Dense(1),
])
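A small sketch to verify the shapes discussed above (batch of 4 windows, 30 time steps, univariate, 3-neuron memory cell):

x = np.random.rand(4, 30, 1).astype("float32")
seq_layer = tf.keras.layers.SimpleRNN(3, return_sequences=True)
vec_layer = tf.keras.layers.SimpleRNN(3)
print(seq_layer(x).shape)  # (4, 30, 3): sequence-to-sequence
print(vec_layer(x).shape)  # (4, 3): sequence-to-vector (only the last output is kept)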
model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           # expand from 2 dims to 3: batch_size x #time_steps x series_dim
                           input_shape=[None]),  # can use sequences of any length
    tf.keras.layers.SimpleRNN(40, return_sequences=True),
    tf.keras.layers.SimpleRNN(40),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 100.0)
    # the default activation in RNNs is tanh -> outputs in (-1, 1) -> scale up to around (-100, 100)
])
- Huber loss function (wiki): less sensitive to outliers. We use it here because our data in this case is a little bit noisy! (compile sketch below)
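A minimal sketch of compiling with the Huber loss (the learning rate here is just a placeholder; pick it with the learning-rate schedule trick above):

model.compile(loss=tf.keras.losses.Huber(),
              optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),
              metrics=["mae"])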
📙 Notebook: Simple RNN with TS data + video explains it.
📙 Notebook: LSTM with TS data + video explains it.
# Clear internal variables
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)

model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),
    # LSTM layers (bidirectional, stacked)
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 100.0)
])
📙 Notebook: LSTM with synthetic TS.
- We are going to predict the sunspot activity cycles (download dataset).
- Combine CNN + LSTM (model sketch below).
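Roughly the shape of the CNN + LSTM model used for the sunspots (the filter/unit sizes and the final scaling factor are tuning choices, and the Conv1D front end assumes the windowed dataset yields 3-D windows of shape batch x time x 1):

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal",
                           activation="relu", input_shape=[None, 1]),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 400)  # sunspot counts can reach into the hundreds
])
model.compile(loss=tf.keras.losses.Huber(),
              optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),
              metrics=["mae"])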
👉 Andrew's video on Optimization Algo: Mini-batch gradient descent.
📙 Notebook: Sunspot dataset with CNN+LSTM. + video explains it.
📙 Notebook: Sunspot dataset with DNN only + explaining video.
👉 Video explains train & tune the model (how to choose suitable values for sizes)