```python
dls = create_dls_test(prediction=True)
init_sz = 50
```
Dual RNN Models
```python
dls.show_batch(max_n=1)
```
Model
Diag_RNN
Diag_RNN (input_size, output_size, output_layer=1, hidden_size=100, rnn_layer=1, linear_layer=1, stateful=False, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, normalization='', **kwargs)
*Base class for all neural network modules.*

*Your models should also subclass this class.*

*Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:*

```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```

*Submodules assigned in this way will be registered, and will also have their parameters converted when you call `to()`, etc.*

*Note: as per the example above, an `__init__()` call to the parent class must be made before assignment on the child.*

*`training` (bool): whether this module is in training or evaluation mode.*
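`Diag_RNN` is not exercised directly in this section, so here is a minimal forward-pass sketch. The batch-first input layout `(batch, seq_len, input_size)` and the sequence-to-sequence output shape are assumptions inferred from the examples further below, not a documented contract:

```python
import torch

# Sizes mirror the (2, 50, 2) instantiations used later in this section.
diag = Diag_RNN(input_size=2, output_size=50, output_layer=2, hidden_size=100)
x = torch.randn(8, 120, 2)  # 8 sequences, 120 steps, 2 channels (assumed layout)
out = diag(x)               # expected shape: (8, 120, 50) if sequence-to-sequence
```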
Diag_RNN_raw
Diag_RNN_raw (input_size, output_size, output_layer=1, hidden_size=100, rnn_layer=1, linear_layer=1, stateful=False)
*Inherits the `nn.Module` base-class docstring; see `Diag_RNN` above.*
DiagLSTM
DiagLSTM (input_size, output_size, output_layer=1, hidden_size=100, rnn_layer=1, linear_layer=1, **kwargs)
*Inherits the `nn.Module` base-class docstring; see `Diag_RNN` above.*
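`DiagLSTM` has no example below; by analogy with the `Diag_TCN` and `Diag_RNN_raw` cells later in this section, it should plug into `FranSys` as a diagnosis module in the same way. A hedged sketch (the `(2, 50, 2)` sizes mirror those cells and are assumptions, not requirements):

```python
diag_lstm = DiagLSTM(2, 50, 2)
model = FranSys(1, 1, init_sz=init_sz, linear_layer=1, rnn_layer=2,
                hidden_size=50, diag_model=diag_lstm)
lrn = Learner(dls, model, loss_func=nn.MSELoss())
lrn.fit(1, lr=3e-3)
```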
Diag_TCN
Diag_TCN (input_size, output_size, output_layer, hl_width, mlp_layers=0, hl_depth=1, act=<class 'torch.nn.modules.activation.Mish'>, bn=False, stateful=False, **kwargs)
*Inherits the `nn.Module` base-class docstring; see `Diag_RNN` above.*
ARProg_Init
ARProg_Init (n_u, n_y, init_sz, n_x=0, hidden_size=100, rnn_layer=1, diag_model=None, linear_layer=1, final_layer=0, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
*Inherits the `nn.Module` base-class docstring; see `Diag_RNN` above.*
```python
model = ARProg_Init(1, 1, init_sz=init_sz, rnn_layer=1, hidden_size=50)
lrn = Learner(dls, model, loss_func=SkipNLoss(mse, init_sz))
# lrn.fit(1, lr=3e-3)
lrn.fit_flat_cos(1, 3e-3, pct_start=0.2)
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.054181 | 0.059382 | 00:01 |
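`SkipNLoss(mse, init_sz)` wraps the base loss so that the first `init_sz` timesteps are excluded from the objective: during that warm-up window the model is still inferring its internal state from data, so its outputs there should not be penalized. A conceptual sketch of such a wrapper (illustrative only; `skip_n_loss` is a made-up name, not the library function):

```python
def skip_n_loss(loss_fn, n_skip):
    # Evaluate loss_fn only on timesteps after the warm-up window,
    # assuming batch-first tensors of shape (batch, seq_len, channels).
    def _loss(pred, targ):
        return loss_fn(pred[:, n_skip:], targ[:, n_skip:])
    return _loss
```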
FranSys
FranSys
FranSys (n_u, n_y, init_sz, n_x=0, hidden_size=100, rnn_layer=1, diag_model=None, linear_layer=1, init_diag_only=False, final_layer=0, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
*Inherits the `nn.Module` base-class docstring; see `Diag_RNN` above.*
```python
model = FranSys(1, 1, init_sz=init_sz, linear_layer=1, rnn_layer=2, hidden_size=50)
lrn = Learner(dls, model, loss_func=nn.MSELoss())
lrn.fit(1, lr=3e-3)
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.041958 | 0.032479 | 00:02 |
RNN without linear layer as diagnosis module
```python
# TCN as diagnosis module
diag_tcn = Diag_TCN(2, 50, 2, hl_depth=6, hl_width=20, mlp_layers=3)
model = FranSys(1, 1, init_sz=init_sz, linear_layer=1, rnn_layer=2,
                hidden_size=50, diag_model=diag_tcn)
lrn = Learner(dls, model, loss_func=nn.MSELoss())
lrn.add_cb(TbpttResetCB())
lrn.fit(1, lr=3e-3)
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.037339 | 0.029248 | 00:01 |
```python
# RNN without a linear output layer as diagnosis module
diag_rnn = Diag_RNN_raw(2, 50, 2, stateful=False)
model = FranSys(1, 1, init_sz=init_sz, linear_layer=1, rnn_layer=2,
                hidden_size=50, diag_model=diag_rnn)
lrn = Learner(dls, model, loss_func=nn.MSELoss())
lrn.add_cb(TbpttResetCB())
lrn.fit(1, lr=3e-3)
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.036428 | 0.023112 | 00:02 |
Fast variant with diagnosis on the first `init_sz` steps only
```python
model = FranSys(1, 1, init_sz=init_sz, linear_layer=1, rnn_layer=2,
                hidden_size=50, init_diag_only=True)
lrn = Learner(dls, model, loss_func=nn.MSELoss(), opt_func=ranger)
lrn.add_cb(TbpttResetCB())
lrn.fit(1, lr=3e-3)
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.075652 | 0.069702 | 00:01 |
Callbacks
FranSysCallback
FranSysCallback (modules, p_state_sync=10000000.0, p_diag_loss=0.0, p_osp_sync=0, p_osp_loss=0, p_tar_loss=0, sync_type='mse', targ_loss_func=<function mae>, osp_n_skip=None, FranSys_model=None, detach=False, **kwargs)
`Callback` that regularizes the output of the FranSys model.
| | Type | Default | Details |
|---|---|---|---|
| modules | | | |
| p_state_sync | float | 10000000.0 | scaling factor for the regularization of hidden-state deviation between the diagnosis and prognosis modules |
| p_diag_loss | float | 0.0 | scaling factor for the loss on the diagnosis hidden state passed through the final layer |
| p_osp_sync | int | 0 | scaling factor for the regularization of hidden-state deviation between one-step-prediction and diagnosis hidden states |
| p_osp_loss | int | 0 | scaling factor for the one-step-prediction loss of the prognosis module |
| p_tar_loss | int | 0 | scaling factor for temporal activation regularization of the combined diagnosis and prognosis hidden state over the target sequence length |
| sync_type | str | mse | |
| targ_loss_func | function | mae | |
| osp_n_skip | NoneType | None | number of elements to skip before the one-step-prediction loss is applied; defaults to model.init_sz |
| FranSys_model | NoneType | None | |
| detach | bool | False | |
| kwargs | VAR_KEYWORD | | |
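Conceptually, the `p_state_sync` term penalizes the distance between the hidden state produced by the diagnosis module and the corresponding hidden state of the prognosis module, so the prognosis module can take over seamlessly after the warm-up window. The sketch below illustrates the idea for `sync_type='mse'`; it is an illustration of the mechanism, not the callback's actual implementation, and `state_sync_penalty` is a made-up name for this example:

```python
import torch.nn.functional as F

def state_sync_penalty(h_diag, h_prog, p_state_sync=1e7):
    # Pull the prognosis hidden state toward the diagnosis hidden state,
    # scaled by p_state_sync (the callback's default is 1e7).
    return p_state_sync * F.mse_loss(h_prog, h_diag)
```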
```python
model = FranSys(1, 1, init_sz=init_sz, linear_layer=1, rnn_layer=2, hidden_size=50)
cb = FranSysCallback([model.rnn_diagnosis, model.rnn_prognosis],
                     p_state_sync=1e-1,
                     p_diag_loss=0.0,
                     p_osp_sync=0,
                     p_osp_loss=0.1,
                     sync_type='cos_pow')
lrn = Learner(dls, model, loss_func=nn.MSELoss(), cbs=cb, opt_func=ranger)
lrn.fit(1, lr=3e-3)
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.164724 | 0.062946 | 00:02 |
FranSysCallback_variable_init
FranSysCallback_variable_init (init_sz_min, init_sz_max, **kwargs)
`Callback` that reports progress after every epoch to the ray tune logger.
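A minimal usage sketch, assuming the callback is attached like any other fastai callback and varies the diagnosis window length between the given bounds during training (the bounds 20 and 80 are illustration values, not defaults):

```python
cb = FranSysCallback_variable_init(init_sz_min=20, init_sz_max=80)
lrn = Learner(dls, model, loss_func=nn.MSELoss(), cbs=cb)
lrn.fit(1, lr=3e-3)
```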
Learner
FranSysLearner
FranSysLearner (dls, init_sz, attach_output=False, loss_func=L1Loss(), metrics=[<function fun_rmse>], opt_func=<function Adam>, lr=0.003, cbs=[], n_x=0, hidden_size=100, rnn_layer=1, diag_model=None, linear_layer=1, init_diag_only=False, final_layer=0, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
```python
lrn = FranSysLearner(dls, init_sz=50)
lrn.fit(1, lr=3e-3)
```
| epoch | train_loss | valid_loss | fun_rmse | time |
|---|---|---|---|---|
| 0 | 0.168541 | 0.126720 | 0.159639 | 00:01 |
```python
dls = create_dls_test(prediction=False)
lrn = FranSysLearner(dls, init_sz=50, attach_output=True)
lrn.fit(1, lr=3e-3)
```
| epoch | train_loss | valid_loss | fun_rmse | time |
|---|---|---|---|---|
| 0 | 0.179930 | 0.183227 | 0.232352 | 00:01 |