Tag Archives: caffe

Notes on Transfer Learning in Caffe

This is a very straightforward practical approach:

https://github.com/NVIDIA/DIGITS/tree/master/examples/fine-tuning

One trick I've learned from somewhere (can't find the link, unfortunately), which is a break from the above tutorial, is to simply reduce the base learning rate by an order of magnitude when transferring, while simultaneously setting the "new" layers to have their lr_mult an order of magnitude higher than the rest of the network.

so, to initially train:

  1. base learning rate = 0.01
  2. each layer's lr_mult = 1

to transfer learn:

  1. base learning rate = 0.001
  2. each existing layer's lr_mult = 1
  3. new layers' lr_mult = 10

This saves a lot of prototxt editing and still allows small adjustments to the existing layers, while focusing the bulk of the learning on the new layers and resisting overfitting on small datasets (a sketch of the configuration follows below).
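As a concrete sketch (not from the original tutorial), the transfer-learning run might look roughly like the following in Caffe's prototxt files. The layer name "fc8_new" and the class count are hypothetical placeholders; the point is simply that base_lr drops by 10x in the solver while the freshly initialized layer gets a 10x lr_mult:

  # solver.prototxt for the transfer run: base rate dropped by an order of magnitude
  base_lr: 0.001

  # train_val.prototxt: pre-trained layers keep lr_mult: 1, the replacement layer gets 10x
  layer {
    name: "fc8_new"                        # hypothetical name for the re-initialized layer
    type: "InnerProduct"
    bottom: "fc7"
    top: "fc8_new"
    param { lr_mult: 10 decay_mult: 1 }    # weights learn 10x faster than the base rate
    param { lr_mult: 20 decay_mult: 0 }    # biases conventionally get 2x the weight rate
    inner_product_param {
      num_output: 2                        # number of classes in the new task
      weight_filler { type: "xavier" }
      bias_filler { type: "constant" value: 0 }
    }
  }

The effective rate for the new layer is then 0.001 x 10 = 0.01 (the same as the original training rate), while the pre-trained layers creep along at 0.001.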

This is pretty informative:

https://cs231n.github.io/transfer-learning/

And, as far as I remember, this is the original transfer learning paper:

http://www.datascienceassn.org/sites/default/files/How%20Transferable%20are%20Features%20in%20Deep%20Neural%20Networks.pdf

On Stain Normalization in Deep Learning

Just wanted to take a moment and share some quick stain-normalization experimental results. We have an in-house trained nuclei segmentation model which works fairly well when the test images have stain presentation properties similar to the training data, but when new datasets arrive that are notably different, we tend to see decreased classifier performance.

Here we look at one of these images and ways of improving classifier robustness.

Continue reading On Stain Normalization in Deep Learning

Real time Data Augmentation using Nvidia Digits + Python Layer

One of the common ways of increasing the size of a training set is to augment the original data with a set of modified patches. These modifications often include (a) rotations, (b) mirroring, (c) lighting adjustment, (d) affine transformations (shearing, etc.), (e) magnification changes, and (f) the addition of noise. This blog post discusses how to do the most trivial modification, rotation, in real time using a Python layer through Nvidia Digits. Given this code, it should be easy to add other desired augmentations.
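For reference, wiring such a layer into the network only requires a Python layer declaration in the train prototxt; the module and class names below are hypothetical stand-ins for whatever file actually implements the rotation (and Caffe must be compiled with WITH_PYTHON_LAYER=1):

  layer {
    name: "augment_rotate"
    type: "Python"
    bottom: "data"
    top: "data"                    # rotate the incoming batch in place
    python_param {
      module: "rotation_layer"     # hypothetical .py file on the PYTHONPATH
      layer: "RotationLayer"       # hypothetical class providing setup/reshape/forward
    }
    include { phase: TRAIN }       # augment only during training
  }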

Continue reading Real time Data Augmentation using Nvidia Digits + Python Layer

Revised Deep Learning approach using Matlab + Caffe + Python

Our publication, "Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases," showed how to use deep learning to address many common digital pathology tasks. Since then, many improvements have been made, both in the field and in my implementation of them. In this blog post, I re-address the nuclei segmentation use case using the latest and greatest approaches.

Continue reading Revised Deep Learning approach using Matlab + Caffe + Python

Efficient pixel-wise deep learning on large images

This blog post is based on the net surgery example provided by Caffe. It takes the concept and expands it into a working example that produces pixel-wise output images, generating output in ~2 seconds (simple approach) or ~35 seconds (advanced approach) for a 2,000 x 2,000 image, an improvement over the ~15 hours of a naive pixel-wise approach.
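The heart of the simple approach, following the net surgery example, is to replace the fully connected layers of the classification network with equivalent convolutions in the deploy prototxt (and then transplant the reshaped weights), so the network can slide over an arbitrarily large input and emit a spatial map of predictions. A rough sketch, assuming CaffeNet-style layer names and a 6 x 6 pool5 output:

  # original classification head
  layer {
    name: "fc6"
    type: "InnerProduct"
    bottom: "pool5"
    top: "fc6"
    inner_product_param { num_output: 4096 }
  }

  # fully-convolutional replacement: same weights, reshaped into a 6x6 kernel
  layer {
    name: "fc6-conv"
    type: "Convolution"
    bottom: "pool5"
    top: "fc6-conv"
    convolution_param { num_output: 4096 kernel_size: 6 }
  }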

Continue reading Efficient pixel-wise deep learning on large images

Use Case 6: Invasive Ductal Carcinoma (IDC) Segmentation

This blog post explains how to train a deep learning Invasive Ductal Carcinoma (IDC) classifier in accordance with our paper "Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases".

Continue reading Use Case 6: Invasive Ductal Carcinoma (IDC) Segmentation