Week 12 : Coding Period

This week I finally got my models to produce good results. It turns out I had been evaluating them on the wrong data structure. Yes, the data format I had been working with for ages did not prove useful in the end, but it was a building block for the generators that I finally created. After switching to them, I re-evaluated my original model and voilà! I got an AUC-ROC score of 0.81. Since the generator approach proved useful, I immediately parallelized my operations.
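For context, the generator idea can be sketched roughly like this. This is an illustrative, self-contained version, not the actual project code: the function names are mine, and the AUC helper is a simple rank-based formula (assuming distinct scores) rather than the library routine I used.

```python
import numpy as np

def batch_generator(features, labels, batch_size=32, shuffle=True, seed=0):
    """Yield (X, y) batches so the full 45GB dataset never has to sit in memory at once."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(features))
    if shuffle:
        rng.shuffle(idx)
    for start in range(0, len(idx), batch_size):
        sel = idx[start:start + batch_size]
        yield features[sel], labels[sel]

def auc_roc(y_true, y_score):
    """Rank-based AUC-ROC (Mann-Whitney U / (n_pos * n_neg)); assumes distinct scores."""
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

In a real pipeline the generator would read batches from disk instead of slicing an in-memory array, but the shape of the interface is the same.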

I was running a total of 6 notebooks: 2 on the PLHI server, 2 locally, 1 on Kaggle, and 1 on Colab.

The PLHI server did not have the complete dataset, so I exported a subset there and re-trained my pruning model.

My local system had the data but lacked processing power, so I ran model evaluations locally. Each Int8 model took over 24 hours to evaluate.

The Kaggle notebook had both data and processing power, but the data was not in the correct structure to feed to the models, so I tested my inference scripts there.

Colab lacked the data but had processing power, and it supports the tensorflow-model-optimization library, so I ran my model quantization scripts there after uploading a small subset of the data.

The main problem was that the dataset is 45GB in size, so it took a long time to download or upload anywhere. Even on my local system, I had to clear a lot of my personal data to make space for it.

Along with the notebooks I was running, I was also testing my custom scripts locally. I finally finished these processes, pushed my code, and submitted my model results.

While working on these tasks, I also set up my Raspberry Pi under QEMU and began working on it. There were 2 main blockers here - network and space. At first the emulator was not getting internet access, so I built a bridge interface from my host Ethernet to the guest (RPi) Ethernet. This took quite some time to troubleshoot, but in the end I was able to give my Pi internet access. A few tweaks later, I also got it to connect to my Wi-Fi network.
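For anyone stuck on the same thing, the bridging step looked roughly like the following sketch. Interface names (eth0, br0, tap0) are placeholders for my actual setup, and the exact QEMU invocation depends on the machine type you emulate:

```shell
# Create a bridge and enslave the host Ethernet interface to it
sudo ip link add name br0 type bridge
sudo ip link set eth0 master br0
sudo ip link set br0 up

# Create a tap device for QEMU and attach it to the same bridge
sudo ip tuntap add dev tap0 mode tap user "$USER"
sudo ip link set tap0 master br0
sudo ip link set tap0 up

# Point the guest's NIC at the tap device so it shares the host's network
qemu-system-arm ... \
    -net nic -net tap,ifname=tap0,script=no,downscript=no
```

The bridge effectively puts the emulated Pi on the same network segment as the host, so it can get an address from the same DHCP server.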

The next blocker was the size of the image. After emulating it, I was not left with enough space on the partition to install TensorFlow or other packages, so I had to resize the Raspbian image. This meant tinkering with the partitions using fdisk, and in the end I was able to allocate more space to the image.
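The resizing can be reproduced with something like the sketch below. Paths and partition numbers are illustrative; on Raspbian the root filesystem is usually the second partition:

```shell
# Grow the raw image file first
qemu-img resize raspbian.img +4G

# Recreate the root partition to span the new space: in fdisk, delete
# partition 2 and create it again with the SAME start sector, then write
fdisk raspbian.img

# Map the image's partitions to loop devices and grow the ext4 filesystem
sudo losetup -fP raspbian.img        # creates e.g. /dev/loop0p1, /dev/loop0p2
sudo e2fsck -f /dev/loop0p2
sudo resize2fs /dev/loop0p2
sudo losetup -d /dev/loop0
```

Keeping the partition's original start sector is the critical part; only the end moves, so the filesystem data stays intact.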

Right now, I am trying to solve the MemoryError that I get every time I try to install TensorFlow on the Pi. I also get an error saying that grpcio is not supported. Hopefully this will be resolved soon, because the scripts are ready to run.
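Two workarounds I am looking into, in case anyone hits the same errors: pip's in-memory caching can exhaust the Pi's small RAM, and piwheels serves prebuilt ARM wheels for packages like grpcio that otherwise fail to build, so something like this may help:

```shell
# Disable pip's caching to avoid MemoryError on low-RAM devices
pip install --no-cache-dir tensorflow

# Pull a prebuilt ARM wheel for grpcio from piwheels instead of compiling it
pip install --no-cache-dir --extra-index-url https://www.piwheels.org/simple grpcio
```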

Once this last task is done, I can say that I have completed my project.

Happy Coding :)

