from tsfast.datasets.core import *
dls = create_dls_test(prediction=True)
model = SimpleRNN(1, 1)
Create Learner Models
Create Learners from different kinds of models, with fitting parameters and regularizations.
source
get_inp_out_size
get_inp_out_size (dls)
Returns the input and output sizes of a timeseries `DataLoaders`.
test_eq(get_inp_out_size(dls), (2, 1))
RNN Learner
The Learners include model-specific optimizations. Removing the first `n_skip` samples from the loss function, which cover the transient time at the start of each sequence, greatly improves training stability.
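The effect of skipping transient samples can be illustrated with a toy loss wrapper. This is a sketch of the idea using nested lists instead of tensors, not the library's actual loss handling:

```python
def skip_n_loss(loss_func, n_skip):
    # Wrap a sequence loss so the first n_skip time steps of every
    # sample are excluded, ignoring the transient at sequence start.
    def _inner(pred, target):
        return loss_func([p[n_skip:] for p in pred],
                         [t[n_skip:] for t in target])
    return _inner

def l1(pred, target):
    # Mean absolute error over all remaining time steps.
    diffs = [abs(p - t) for ps, ts in zip(pred, target)
                        for p, t in zip(ps, ts)]
    return sum(diffs) / len(diffs)

pred   = [[10.0, 1.0, 1.0, 1.0]]   # large transient error at t=0
target = [[ 0.0, 1.0, 1.0, 1.0]]
print(l1(pred, target))                   # → 2.5 (transient dominates)
print(skip_n_loss(l1, 1)(pred, target))   # → 0.0 (transient excluded)
```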
source
RNNLearner
RNNLearner (dls, loss_func=L1Loss(), metrics=[<function fun_rmse at
0x145fb9630>], n_skip=0, num_layers=1, hidden_size=100,
stateful=False, opt_func=<function Adam>, cbs=None,
linear_layers=0, return_state=False, hidden_p=0.0,
input_p=0.0, weight_p=0.0, rnn_type='gru',
ret_full_hidden=False, normalization='', **kwargs)
RNNLearner(dls, rnn_type='gru').fit(1, 1e-4)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.026982    0.025624    0.226313  00:01
RNNLearner(dls, rnn_type='gru', stateful=True).fit(1, 1e-4)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.024248    0.022891    0.213916  00:01
TCN Learner
Performs better on multi-input data. Higher beta values yield much smoother predictions. Much faster than RNNs at prediction time.
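The speed advantage comes from the TCN processing the whole sequence with causal convolutions in parallel, while its receptive field grows exponentially with depth. A small sketch of that arithmetic, assuming kernel size `k` and dilation doubling per layer (the common TCN layout, not necessarily this library's exact configuration):

```python
def tcn_receptive_field(num_layers, kernel_size=2):
    # With dilation doubling each layer (1, 2, 4, ...), layer i adds
    # (kernel_size - 1) * 2**i steps of causal context to the output.
    return 1 + sum((kernel_size - 1) * 2**i for i in range(num_layers))

for n in (3, 6, 9):
    print(n, tcn_receptive_field(n))  # → 3 8 / 6 64 / 9 512
```

So a few extra layers cover long sequences without the step-by-step recurrence an RNN needs.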
source
TCNLearner
TCNLearner (dls, num_layers=3, hidden_size=100, loss_func=L1Loss(),
metrics=[<function fun_rmse at 0x145fb9630>], n_skip=None,
opt_func=<function Adam>, cbs=None, hl_depth=1, hl_width=10,
act=<class 'torch.nn.modules.activation.Mish'>, bn=False,
stateful=False, **kwargs)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.112962    0.010284    0.143394  00:00
CRNN Learner
source
CRNNLearner
CRNNLearner (dls, loss_func=L1Loss(), metrics=[<function fun_rmse at
0x145fb9630>], n_skip=0, opt_func=<function Adam>, cbs=None,
num_ft=10, num_cnn_layers=4, num_rnn_layers=2, hs_cnn=10,
hs_rnn=10, hidden_p=0, input_p=0, weight_p=0,
rnn_type='gru', stateful=False, **kwargs)
CRNNLearner(dls, rnn_type='gru').fit(1, 3e-2)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.016305    0.004414    0.093948  00:02
Autoregressive Learner
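An autoregressive model feeds its own previous prediction back as the next input, running in closed loop. A toy rollout sketch with a hypothetical one-step model (not this library's API):

```python
def rollout(step_fn, y0, n_steps):
    # Closed-loop simulation: the model's previous output becomes
    # the next input, as in autoregressive prediction.
    ys = [y0]
    for _ in range(n_steps):
        ys.append(step_fn(ys[-1]))
    return ys

# Toy one-step model: stable first-order system y[t+1] = 0.5 * y[t].
print(rollout(lambda y: 0.5 * y, 8.0, 3))  # → [8.0, 4.0, 2.0, 1.0]
```

Because errors are fed back into the model, training tricks such as the `alpha`/`beta` regularization and transient skipping matter even more in this setting.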
source
AR_TCNLearner
AR_TCNLearner (dls, hl_depth=3, alpha=1, beta=1, early_stop=0,
metrics=None, n_skip=None, opt_func=<function Adam>,
hl_width=10, act=<class
'torch.nn.modules.activation.Mish'>, bn=False,
stateful=False, **kwargs)
AR_TCNLearner(dls).fit(1)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.024635    0.021864    0.210655  00:00
source
AR_RNNLearner
AR_RNNLearner (dls, alpha=0, beta=0, early_stop=0, metrics=None,
n_skip=0, opt_func=<function Adam>, num_layers=1,
hidden_size=100, linear_layers=0, return_state=False,
hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru',
ret_full_hidden=False, stateful=False, normalization='',
**kwargs)
AR_RNNLearner(dls).fit(1)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.011750    0.004110    0.090632  00:01