Week 3 : Coding period


What did I do this week?

  • I fine-tuned my DenseNet201 and InceptionV3 models until their accuracy stopped improving. I used class weighting for this and achieved around 85% accuracy with both models.
  • For both models, I calculated the following performance metrics:
    • Accuracy
    • Loss
    • Confusion matrix
    • Precision
    • Recall
    • F1 score
    • ROC curve

      I will now document all these metrics to present them to my mentor.
  • I applied dynamic-range and float16 quantization to DenseNet201 and InceptionV3 and evaluated the resulting accuracy. Float16 quantization caused only a minimal change in accuracy, while dynamic-range quantization dropped it considerably.
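The class weighting mentioned above can be sketched with plain NumPy: each class receives the inverse-frequency weight n_samples / (n_classes * count_c), and the resulting dict is what Keras accepts via `model.fit(..., class_weight=...)`. This is a minimal illustration with made-up labels, not my actual training pipeline:

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency weights: n_samples / (n_classes * count_c)."""
    classes, counts = np.unique(labels, return_counts=True)
    n_samples = labels.shape[0]
    return {int(c): n_samples / (len(classes) * count)
            for c, count in zip(classes, counts)}

# Illustrative imbalanced binary dataset: 9 negatives, 3 positives.
labels = np.array([0] * 9 + [1] * 3)
weights = balanced_class_weights(labels)
print(weights)  # the minority class gets the larger weight
# In Keras: model.fit(..., class_weight=weights)
```

Because the minority class is weighted up, misclassifying its samples costs more during training, which discourages the model from simply predicting the majority class.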
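All of the metrics listed above can be computed with scikit-learn from a model's predictions on a held-out set. The toy labels and probabilities below are invented for illustration; `roc_curve` would additionally give the points needed to plot the ROC curve:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support, roc_auc_score)

# Illustrative ground truth and predicted probabilities (not real model output).
y_true = np.array([0, 0, 1, 1, 1, 0])
y_prob = np.array([0.2, 0.6, 0.8, 0.9, 0.3, 0.1])
y_pred = (y_prob >= 0.5).astype(int)  # threshold probabilities at 0.5

acc = accuracy_score(y_true, y_pred)
cm = confusion_matrix(y_true, y_pred)          # rows: true class, cols: predicted
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
auc = roc_auc_score(y_true, y_prob)            # area under the ROC curve

print(f"accuracy={acc:.3f} precision={prec:.3f} "
      f"recall={rec:.3f} f1={f1:.3f} auc={auc:.3f}")
print(cm)
```

Note that AUC is computed from the raw probabilities, while the other metrics depend on the chosen decision threshold.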
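In TensorFlow Lite, both schemes go through `tf.lite.TFLiteConverter` with `converter.optimizations = [tf.lite.Optimize.DEFAULT]`; adding `converter.target_spec.supported_types = [tf.float16]` selects float16 quantization, while omitting it gives dynamic-range (8-bit weight) quantization. The accuracy gap I observed is consistent with the numeric precision of the two formats, which a small NumPy experiment can illustrate (the tensor here is random, standing in for real model weights):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=10_000).astype(np.float32)  # stand-in weight tensor

# Float16: halves storage while keeping ~3 significant decimal digits.
w16 = weights.astype(np.float16)
fp16_err = np.max(np.abs(weights - w16.astype(np.float32)))

# Dynamic-range style: 8-bit affine quantization of the same tensor.
scale = (weights.max() - weights.min()) / 255.0
q = np.round((weights - weights.min()) / scale).astype(np.uint8)
deq = q.astype(np.float32) * scale + weights.min()
int8_err = np.max(np.abs(weights - deq))

print(f"max float16 error: {fp16_err:.5f}")
print(f"max 8-bit error:   {int8_err:.5f}")
```

Float16 reproduces the weights an order of magnitude more closely than 8-bit quantization, which matches why it barely moved the models' accuracy.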
Plans for the upcoming week

I tried implementing int8 quantization and model pruning but ran into a number of errors. This week, I will fix those errors and get both modules running.

Earlier, when I experimented with pruning, I ran into many out-of-memory (OOM) errors, so my mentor has graciously provided me with access to a JupyterHub instance hosted on the PLHI server. The best part is that my data will be persistent, so I will not have to re-run the complete notebook every time I need to run a single module. I will therefore migrate my work from the Kaggle notebook to the PLHI server.

I have also been evaluating my system architecture and looking for simpler ways to build this system. Currently, the design calls for emulating a Raspberry Pi with QEMU, with the emulator running inside a Docker container; the emulated Pi would host the Flask application that tests the images. Sounds complex? It is. I have a few ideas in mind for simplifying this architecture for testing, and I will discuss them with my mentor in our upcoming meeting.


That's all for this week folks. See you next week!
Happy coding! :)
