May and June of this year made up exam season, which I spent researching and writing the final exams of my undergraduate degree! Due to COVID, the exams were not held in person, and instead took the form of a series of take-home essays for each module. Though I've written plenty of scientific essays throughout my schooling years, completing 12 fully-referenced essays within the 28-day exam period was still a challenging endeavour! Despite the hard work, it was an enjoyable experience consolidating my learning over the year and delving into the recent literature to build up my writing. I'm elated to report having scored High 1st (88%) marks in several essays, including one on "Experimental strategies for obtaining rare antimicrobial compounds" and another discussing the in-vitro experiments used to identify components essential for protein import and translocation into the endoplasmic reticulum.
One of the many diagrams I'd created for use in my exam essays - this one describes the general workflow of using synbio approaches to biosynthesise useful compounds.
Following exam submissions, I got started on my final-year research/dissertation project! As I have mentioned before on this blog, my assigned project was mainly based in deep learning, a field I had minimal exposure to at the time. The project period was just about a month, so it was a whirlwind of attempting to learn enough about the field, generating sufficient data, and eventually writing a (hopefully) engaging and well-researched report. Under the wonderful supervision of Dr. Munehiro Asally, I trained a StarDist 2D network on progressively increasing amounts of self-generated cell segmentation data, before testing and analysing its performance against that of a human annotator (me) and of existing pre-trained StarDist models.
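The "progressively increasing amounts of data" experiment is essentially a learning curve: retrain at each dataset size, then score each model on the same held-out test set. The sketch below shows only the shape of that loop - `train_and_score` is a hypothetical stand-in that simulates diminishing returns, whereas the real project trained StarDist 2D on annotated microscopy frames and measured real test-set scores:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_and_score(n_images):
    """Hypothetical stand-in for: train a segmentation model on n_images
    annotated frames, then return its mean overlap score on a fixed
    held-out test set. Here it merely simulates diminishing returns
    (plus a little noise) as the training set grows."""
    return 0.9 * (1 - np.exp(-n_images / 20)) + rng.normal(0, 0.01)

# Learning curve: one freshly "trained" model per dataset size.
subset_sizes = [5, 10, 20, 40, 80]
scores = {n: train_and_score(n) for n in subset_sizes}
for n, s in scores.items():
    print(f"{n:>3} training images -> mean score {s:.2f}")
```

The key design point is that every model is evaluated on the same test set, so the only variable changing between runs is how much annotated data the model saw.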
Some screenshots from my report, which hopefully give some visual explanation to what I was up to for that month!
Figure explaining the benefit of using the StarDist object detection model rather than the typical bounding boxes used in most object detection programmes.
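To give a rough code-level feel for what that figure shows: StarDist describes each object as a star-convex polygon - the distances from a centre point to the object boundary along a fixed set of rays - rather than as a rectangular box. The toy sketch below is my own simplified illustration (not StarDist's actual implementation), marching along each ray of a binary disc mask until it leaves the object:

```python
import numpy as np

def star_distances(mask, center, n_rays=8, step=0.25, max_r=100.0):
    """Distance from `center` to the object boundary along n_rays
    equally spaced directions (a simplified view of the star-convex
    representation used by StarDist)."""
    cy, cx = center
    dists = []
    for k in range(n_rays):
        theta = 2 * np.pi * k / n_rays
        r = 0.0
        while r < max_r:
            # Step outward along the ray until we exit the mask.
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if (y < 0 or y >= mask.shape[0] or x < 0 or x >= mask.shape[1]
                    or not mask[y, x]):
                break
            r += step
        dists.append(r)
    return np.array(dists)

# A filled disc of radius 5: every ray should measure roughly 5 pixels.
yy, xx = np.mgrid[0:21, 0:21]
disc = (yy - 10) ** 2 + (xx - 10) ** 2 <= 25
radii = star_distances(disc, (10, 10))
```

Because the polygon hugs the cell outline instead of enclosing it in a box, touching cells can still be separated cleanly - which matters a lot for crowded bacterial images.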
A simple example illustrating how model performance is evaluated - basically, the more overlap between the model's prediction and the human-annotated "answer," the better.
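In code, that overlap score is typically Intersection over Union (IoU): the area where prediction and annotation agree, divided by the area covered by either. A minimal sketch on toy masks (a hypothetical example, not the exact evaluation pipeline from my report):

```python
import numpy as np

# Toy 2D masks: True = cell pixel, False = background.
pred = np.zeros((6, 6), dtype=bool)
pred[1:4, 1:4] = True           # model's predicted cell
truth = np.zeros((6, 6), dtype=bool)
truth[2:5, 2:5] = True          # human-annotated "answer"

# Intersection over Union: perfect overlap gives 1.0, none gives 0.0.
iou = np.logical_and(pred, truth).sum() / np.logical_or(pred, truth).sum()
print(round(iou, 3))            # -> 0.286
```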
Examples of cell segmentation predictions produced by the models I had trained.
Having worked with microscopes in previous internships, it was exciting to learn first-hand their applications for bacterial cell imaging, compared to the fruit fly brains I was more used to! The most satisfying part of the process for me was the successful creation of multiple trained models, which members of the Asally lab can now download and use to hopefully speed up and semi-automate the microscopy cell segmentation tasks in their own projects. The overall experience was an excellent way to learn “on the job,” as the Asally lab members kindly provided real data from their projects for me to train and test my deep-learning models with. Despite the relatively short project period of just under a month, I felt it was a great way to round off my last-ever piece of undergraduate work, as it gave me valuable insight and initial hands-on experience with computational approaches to bioimage analysis - an extremely important aspect of basically any biological field that involves visual data. Even though I aim to pursue further research based more in Neuroscience than Microbiology, the knowledge and skills I gained during this project have definite transferable applications to other experiments I may undertake.