# comparison_report_template.html
Detailed analysis of the optimal student model
Comparison of Best Student Models by Metric
Top features by importance in the {{ model.model_type }} model. Feature importance represents the relative contribution of each feature to the model's predictions.
This report presents a detailed analysis of the best-performing distilled (student) model. The {{ model.model_type }} model was trained via knowledge distillation with a temperature of {{ model.temperature }} and an alpha of {{ model.alpha }}, achieving strong performance across multiple metrics while closely matching the teacher model's probability distribution.
Model Type: {{ model.model_type }}
Temperature: {{ model.temperature }}
Alpha: {{ model.alpha }}
Accuracy: {{ "%.3f"|format(metrics.accuracy.value) }} {% if metrics.accuracy.retention is defined %}({{ "%.1f"|format(metrics.accuracy.retention) }}% of teacher){% endif %}
F1 Score: {{ "%.3f"|format(metrics.f1.value) if 'f1' in metrics else 'N/A' }} {% if 'f1' in metrics and metrics.f1.retention is defined %}({{ "%.1f"|format(metrics.f1.retention) }}% of teacher){% endif %}
AUC-ROC: {{ "%.3f"|format(metrics.auc_roc.value) if 'auc_roc' in metrics else 'N/A' }} {% if 'auc_roc' in metrics and metrics.auc_roc.retention is defined %}({{ "%.1f"|format(metrics.auc_roc.retention) }}% of teacher){% endif %}
KL Divergence: {{ "%.3f"|format(metrics.kl_divergence.value) if 'kl_divergence' in metrics else 'N/A' }}
KS Statistic: {{ "%.3f"|format(metrics.ks_statistic.value) if 'ks_statistic' in metrics else 'N/A' }}
R² Score: {{ "%.3f"|format(metrics.r2_score.value) if 'r2_score' in metrics else 'N/A' }}
A straight diagonal line would indicate identical distributions. {% if 'r2_score' in metrics %}The R² score of {{ "%.3f"|format(metrics.r2_score.value) }} quantifies how closely the student's distribution tracks the teacher's.{% endif %}
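As a sketch of how the distribution-match metrics above (KL divergence, KS statistic, R²) could be computed, assuming teacher and student predicted probabilities are available as arrays (function and variable names here are illustrative, not taken from the report's actual pipeline):

```python
import numpy as np

def distribution_match_metrics(teacher_probs, student_probs, eps=1e-10):
    """Compare teacher and student predicted probabilities.

    KL divergence and KS statistic are lower-is-better;
    an R^2 close to 1 means the student tracks the teacher closely.
    """
    t = np.clip(np.asarray(teacher_probs, dtype=float), eps, 1.0 - eps)
    s = np.clip(np.asarray(student_probs, dtype=float), eps, 1.0 - eps)

    # Mean pointwise binary KL divergence KL(teacher || student).
    kl = float(np.mean(t * np.log(t / s) + (1 - t) * np.log((1 - t) / (1 - s))))

    # Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs.
    grid = np.sort(np.concatenate([t, s]))
    cdf_t = np.searchsorted(np.sort(t), grid, side="right") / len(t)
    cdf_s = np.searchsorted(np.sort(s), grid, side="right") / len(s)
    ks = float(np.max(np.abs(cdf_t - cdf_s)))

    # R^2 of student probabilities regressed against teacher probabilities.
    ss_res = np.sum((t - s) ** 2)
    ss_tot = np.sum((t - np.mean(t)) ** 2)
    r2 = float(1.0 - ss_res / ss_tot)

    return {"kl_divergence": kl, "ks_statistic": ks, "r2_score": r2}
```

When student and teacher probabilities coincide exactly, KL and KS are 0 and R² is 1, which is the "straight diagonal line" case described above.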
Metric | Teacher Model | Student Model | Difference | Retention % |
---|---|---|---|---|
{{ metric.display_name }}{% if metric_name in ['kl_divergence', 'ks_statistic'] %} (lower is better){% endif %} | {{ "%.3f"|format(metric.teacher_value) if metric.teacher_value is not none else 'N/A' }} | {{ "%.3f"|format(metric.value) }} | {% if metric.difference is defined and metric.teacher_value is not none %}{% if metric_name in ['kl_divergence', 'ks_statistic'] %}{{ "+%.3f"|format(metric.difference) if metric.difference > 0 else "%.3f"|format(metric.difference) }}{% else %}{{ "%.3f"|format(metric.difference) }}{% endif %}{% else %}N/A{% endif %} | {% if metric.retention is defined %}{{ "%.1f"|format(metric.retention) }}%{% else %}N/A{% endif %} |
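The Retention % column above can be read as the fraction of the teacher's score the student keeps. A minimal sketch of that calculation (the function name and N/A convention are illustrative assumptions, not the report's actual code):

```python
def retention_pct(student_value, teacher_value, minimize=False):
    """Retention %: student score as a percentage of the teacher's.

    Only meaningful for higher-is-better metrics; lower-is-better
    metrics (e.g. KL divergence, KS statistic) return None, which
    the table renders as N/A.
    """
    if minimize or teacher_value is None or teacher_value == 0:
        return None
    return 100.0 * student_value / teacher_value
```

For example, a student accuracy of 0.85 against a teacher accuracy of 0.90 retains roughly 94.4% of the teacher's performance.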
Comparison of Best Student Models by Metric
This report presents the best student model for each evaluation metric and compares its performance against the teacher model, listing the optimal hyperparameters (model type, temperature, and alpha) found for each metric.
All models were trained with knowledge distillation, transferring what a complex teacher model has learned to simpler, more efficient student models.
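The temperature and alpha columns in the table below control a standard distillation objective. A minimal sketch of that objective in the usual Hinton-style formulation (assuming raw logits as inputs; this illustrates the technique, not necessarily the exact training code behind this report):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha blends the soft (teacher-matching) and hard (true-label) terms."""
    soft_t = softmax(teacher_logits, T)
    soft_s = softmax(student_logits, T)
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across T.
    soft_loss = np.mean(
        np.sum(soft_t * (np.log(soft_t) - np.log(soft_s)), axis=-1)
    ) * T**2
    # Ordinary cross-entropy of the student against the true labels.
    hard_probs = softmax(student_logits, 1.0)
    hard_loss = np.mean(-np.log(hard_probs[np.arange(len(labels)), labels]))
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

A higher alpha weights the teacher's soft targets more heavily; a higher temperature exposes more of the teacher's relative class preferences ("dark knowledge") to the student.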
Metric | Best Model Type | Temp | Alpha | Teacher Value | Student Value | Difference | Retention % |
---|---|---|---|---|---|---|---|
{{ model.display_name }}{% if model.minimize %} (lower is better){% endif %} | {{ model.model_type }} | {{ model.temperature }} | {{ model.alpha }} | {{ "%.3f"|format(model.teacher_value) if model.teacher_value is not none else 'N/A' }} | {{ "%.3f"|format(model.value) }} | {% if model.difference is defined and model.teacher_value is not none %}{% if model.minimize %}{{ "+%.3f"|format(model.difference) if model.difference > 0 else "%.3f"|format(model.difference) }}{% else %}{{ "%.3f"|format(model.difference) }}{% endif %}{% else %}N/A{% endif %} | {% if model.retention is defined and model.retention is not none %}{{ "%.1f"|format(model.retention) }}%{% else %}N/A{% endif %} |
Based on the overall performance, we recommend: