Introduction
You just hit the run button on your machine learning model and everything works, *whew*. Before you go turning all the knobs and dials, it's important to set up some kind of system for collecting data on your experiments (you should know how important that is!).
With only two lines of Python code, you can do this for any machine learning architecture. To adapt it to your needs, simply replace my hyperparameters and metrics with yours. Okay, enough beating around the bush; here it is:
from datetime import date

with open('path/to/logging.txt', 'a+') as file:
    file.write('Date:{0} | HYPERPARAMS: LearningRate {1}, DropoutRate {2}, Epochs {3} | METRICS: TrainLoss {4}, TrainAcc {5}, TestLoss {6}, TestAcc {7}\n'.format(date.today(), _LEARNING_RATE, _DROPOUT_RATE, _NUM_EPOCHS, round(train_loss, 3), round(train_acc, 3), round(test_loss, 3), round(test_acc, 3)))
Breaking it down:
"with open('path/to/logging.txt', 'a+') as file" opens a text file called "logging.txt" in append mode, so that each time this code block runs it adds to our logger file instead of writing over it. Into that file we write a formatted string containing the date of the experiment, the hyperparameters of the model used (no, the random seed is not a tunable hyperparameter), and the metrics we want to track on our training and test sets. Finally, I round the metrics for readability.
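If you end up logging from several training scripts, the same two lines can be wrapped in a small helper function. This is just a sketch of that idea; the function name `log_experiment` and the example values are my own choices, not part of the original snippet:

```python
from datetime import date


def log_experiment(path, lr, dropout, epochs,
                   train_loss, train_acc, test_loss, test_acc):
    """Append one experiment record to a plain-text log file."""
    with open(path, 'a+') as f:  # append mode, so earlier runs are preserved
        f.write(
            'Date:{0} | HYPERPARAMS: LearningRate {1}, DropoutRate {2}, '
            'Epochs {3} | METRICS: TrainLoss {4}, TrainAcc {5}, '
            'TestLoss {6}, TestAcc {7}\n'.format(
                date.today(), lr, dropout, epochs,
                round(train_loss, 3), round(train_acc, 3),
                round(test_loss, 3), round(test_acc, 3)))


# hypothetical usage after a training run
log_experiment('logging.txt', 0.001, 0.5, 10, 0.4321, 0.912, 0.5012, 0.887)
```

Each call appends a single line, so one file accumulates the full history of your runs and stays easy to grep or paste into a spreadsheet.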
Conclusion
You should keep track of the models you run. If you don’t, you will have no recorded history of what happens when you turn certain model knobs. This is an easy, simple way to do it.