Methods defined here:
- __init__(self, *args, **kwargs)
- calculate_loss(self, predicted_val, true_val)
- display_network1(self)
- display_network2(self)
- Provides a fancier display of the network graph
- forward_propagate_one_input_sample_with_partial_deriv_calc(self, sample_index, input_vals_for_ind_vars)
- If you want to look at how the information flows in the DAG when you don't have to worry about
estimating the partial derivatives, see the method gen_gt_dataset(). As you will notice in the
implementation code for that method, there is nothing much to pushing the input values through
the nodes and the arcs of a computational graph if we are not concerned about estimating the
partial derivatives.
    On the other hand, if you want to see how one might also estimate the partial derivatives
    during the forward flow of information in a computational graph, the forward_propagate...()
    method presented here is the one to examine. We first split the expression that the node
    variable depends on into its constituent parts on the basis of the '+' and '-' operators and
    subsequently, for each part, we estimate the partial derivative of the node variable with
    respect to the variables and the learnable parameters in that part.
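    The idea can be illustrated with a minimal sketch. The expression string, the
    variable/parameter names, and the step size below are illustrative assumptions, not
    the module's own code, and the part-by-part splitting is elided — each partial is
    simply estimated by perturbing one name at a time:

    ```python
    # Hedged sketch: estimate partials of a node expression by finite
    # differences during forward evaluation.  Expression and names are
    # made up for illustration; eval() is fine only for trusted toy input.

    def eval_node(expr, vals):
        """Evaluate a node expression like 'ab*xa + bc*xb' for given values."""
        return eval(expr, {}, vals)

    def partials_by_finite_diff(expr, vals, h=1e-6):
        """Estimate d(expr)/d(name) for every variable and parameter in expr."""
        base = eval_node(expr, vals)
        grads = {}
        for name in vals:
            if name not in expr:
                continue
            bumped = dict(vals)
            bumped[name] += h          # perturb one name at a time
            grads[name] = (eval_node(expr, bumped) - base) / h
        return grads

    vals = {'ab': 1.0, 'xa': 2.0, 'bc': 3.0, 'xb': 4.0}
    grads = partials_by_finite_diff('ab*xa + bc*xb', vals)
    # grads['xa'] is approximately ab = 1.0, grads['bc'] approximately xb = 4.0
    ```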
- gen_gt_dataset(self, vals_for_learnable_params={})
    This method illustrates that it is trivial to forward-propagate the information through
    the computational graph if you are not concerned about estimating the partial derivatives
    at the same time. This method is used to generate 'dataset_size' input/output pairs for
    the computational graph for given values of the learnable parameters.
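    To make the point concrete, here is a hedged sketch of such ground-truth generation
    for a tiny two-node graph. The graph (xz = ab*xa + bc*xb, xw = cd*xz), the names, and
    the dataset layout are all assumptions for illustration, not the module's actual code:

    ```python
    import random

    # Illustrative sketch: push random inputs through a fixed two-node
    # graph and record (inputs, output) pairs as the ground-truth dataset.

    def gen_gt_dataset(dataset_size, params):
        dataset = {}
        for i in range(dataset_size):
            xa, xb = random.random(), random.random()
            xz = params['ab'] * xa + params['bc'] * xb   # interior node
            xw = params['cd'] * xz                       # output node
            dataset[i] = ((xa, xb), xw)
        return dataset

    data = gen_gt_dataset(5, {'ab': 1.0, 'bc': 2.0, 'cd': 0.5})
    ```

    Note how nothing beyond plain arithmetic is needed once the derivative bookkeeping
    is dropped.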
- parse_expressions(self)
- This method creates a DAG from a set of expressions that involve variables and learnable
parameters. The expressions are based on the assumption that a symbolic name that starts
with the letter 'x' is a variable, with all other symbolic names being learnable parameters.
The computational graph is represented by two dictionaries, 'depends_on' and 'leads_to'.
    To illustrate the meaning of the dictionaries, "depends_on['xz']" would be set to a
    list of all the variables whose outgoing arcs terminate in the node 'xz'. It is best
    read as "node 'xz' depends on ....", where the dots stand for the list of nodes that
    is the value of "depends_on['xz']". The 'leads_to' dictionary has the opposite
    meaning: "leads_to['xz']" is set to the list of nodes at the ends of all the arcs
    that emanate from 'xz'.
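    A minimal sketch of how such expressions could be turned into the two dictionaries is
    given below. The regex, the sample expressions, and the helper name are assumptions
    for illustration, not the module's own parser:

    ```python
    import re

    # Hedged sketch: build 'depends_on' and 'leads_to' from expressions
    # such as 'xz=ab*xa+bc*xb'.  A name starting with 'x' is a variable;
    # every other name is treated as a learnable parameter.

    def parse_expressions(expressions):
        depends_on, leads_to = {}, {}
        for exp in expressions:
            lhs, rhs = exp.replace(' ', '').split('=')
            names = re.findall(r'[a-zA-Z]\w*', rhs)
            variables = [n for n in names if n.startswith('x')]
            for var in variables:
                depends_on.setdefault(lhs, []).append(var)  # lhs depends on var
                leads_to.setdefault(var, []).append(lhs)    # var leads to lhs
        return depends_on, leads_to

    dep, led = parse_expressions(['xz=ab*xa+bc*xb', 'xw=cd*xz'])
    # dep['xz'] -> ['xa', 'xb'];  led['xz'] -> ['xw']
    ```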
- plot_loss(self)
- train_on_all_data(self)
- The purpose of this method is to call forward_propagate_one_input_sample_with_partial_deriv_calc()
repeatedly on all input/output ground-truth training data pairs generated by the method
gen_gt_dataset(). The call to the forward_propagate...() method returns the predicted value
    at the output nodes from the supplied values at the input nodes. The train_on_all_data()
    method calculates the error associated with the predicted value. The call to
    forward_propagate...() also returns the partial derivatives, estimated by using the
    finite-difference method, in the computational graph. Using these partial derivatives,
    the train_on_all_data() method backpropagates the loss to the interior nodes of the
    computational graph and updates the values of the learnable parameters.
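    The loop just described can be sketched as follows for a one-node graph
    y = ab*xa + bc*xb. The graph, the learning rate, the dataset, and the squared-error
    loss are illustrative assumptions, with finite differences standing in for the
    gradient estimates:

    ```python
    # Hedged sketch of the training loop: forward pass, loss, finite-
    # difference partials, and a gradient-descent update per sample.

    def forward(xa, xb, p):
        return p['ab'] * xa + p['bc'] * xb

    def train_on_all_data(dataset, p, lr=0.1, h=1e-6):
        for (xa, xb), y_true in dataset:
            y_pred = forward(xa, xb, p)
            grads = {}
            for k in p:
                bumped = dict(p)
                bumped[k] += h                               # perturb one parameter
                dy_dk = (forward(xa, xb, bumped) - y_pred) / h
                grads[k] = 2 * (y_pred - y_true) * dy_dk     # chain rule for squared error
            for k in p:
                p[k] -= lr * grads[k]                        # gradient-descent update
        return p

    # ground truth consistent with ab=1.5, bc=-0.5
    data = [((1.0, 0.0), 1.5), ((0.0, 1.0), -0.5), ((1.0, 1.0), 1.0)]
    params = {'ab': 0.0, 'bc': 0.0}
    for _ in range(200):
        train_on_all_data(data, params)
    # params drifts toward ab = 1.5, bc = -0.5
    ```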
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)