1 21-BioImage


Previous: 20-ImageBasics.html

21-BioImage/XKCD_artifacts.png

1.1 Biological imaging

Biological imaging may refer to any imaging technique used in biology.

https://en.wikipedia.org/wiki/Biological_imaging
https://en.wikipedia.org/wiki/Bioimage_informatics
https://en.wikipedia.org/wiki/Automated_tissue_image_analysis
https://en.wikipedia.org/wiki/Digital_pathology
https://en.wikipedia.org/wiki/Image_segmentation
https://en.wikipedia.org/wiki/Medical_imaging
https://en.wikipedia.org/wiki/Computer-aided_diagnosis

1.1.1 Medical imaging

https://en.wikipedia.org/wiki/Medical_imaging

1.1.2 Bioimage informatics

https://en.wikipedia.org/wiki/Bioimage_informatics

21-BioImage/pasted_image.png
Fluorescent image of a cell in telophase.
Multiple dyes were imaged and are shown in different colours.

21-BioImage/pasted_image006.png
Microscopic view of a histologic specimen of human lung tissue,
stained with hematoxylin and eosin.

Overview of a typical process:
21-BioImage/process.png

1.2 Types of problem

1.2.1 Subcellular Location Analysis

1.2.2 Automated tissue image analysis

1.2.3 High-Content Screening

1.2.4 Segmentation (or labeling)

21-BioImage/pasted_image008.png
Volume segmentation of a 3D-rendered CT scan of the thorax:
The anterior thoracic wall, the airways and the pulmonary vessels anterior to the root of the lung have been digitally removed in order to visualize thoracic contents:
* blue: pulmonary arteries
* red: pulmonary veins (and also the abdominal wall)
* yellow: the mediastinum
* violet: the diaphragm

1.2.5 Tracking

21-BioImage/tracking.jpg

1.2.6 Registration

Segment, then register:
21-BioImage/pasted_image013.png

21-BioImage/pasted_image014.png
21-BioImage/pasted_image015.png

1.2.7 Classification/diagnosis

Imaging methods:
21-BioImage/pasted_image011.png

Breast cancer image (e.g., a classification target)
21-BioImage/pasted_image012.png

1.2.8 Reconstruction

Taking sequential microscope image slices and reconstructing a 3D view of the tissue.
21-BioImage/pasted_image010.png
21-BioImage/pasted_image009.png

+++++++++++ Cahoot-20-2

1.3 Common bioimage informatics methods and their applications

Many of these methods are general,
but are particularly common in bioimage analysis.

1.3.1 Overview

21-BioImage/bio0.png
A conceptual pipeline.
The specimen is imaged using any of today’s microscopes,
modeled by the input image f(v) passing through the blocks of
PSF (properties of the microscope, described by convolution with h(v)) and
A/D conversion (analog-to-digital conversion, effects of sampling and digitization together with uncertainty introduced by various sources of noise),
producing a digital image gn.
That digital image is then restored either via a
de-noising followed by de-convolution, or via
joint de-noising/de-convolution,
producing a digital image fn.
Various options are possible: the image could then go through
registration/mosaicing processing, producing rn,
segmentation/tracing/tracking, producing sn, and a
data analysis/modeling/simulations block, with the output yn.
At the input/output of each block,
one can join the pathway to either skip a block,
or send feedback to previous block(s) in the system.

21-BioImage/b01.png
Stages in image analysis illustrated using algal image:
* (a) detail from image;
* (b) same detail after application of 5 × 5 moving median filter;
* (c) histogram, on a square root scale, of pixel values after filtering, with an arrow to indicate the threshold;
* (d) result of thresholding image at pixel value 120, to produce a binary image;
* (e) result of applying morphological opening to (d);
* (f) separated objects in (e) counted.

1.3.2 Bioimage formats

21-BioImage/bio1.png
* Images viewed as matrices.
* The overview is not meant to be exhaustive, but reflects some of the more frequently used modes of image acquisition in biological and medical imaging, where the number of dimensions is typically one to five, with each dimension corresponding to an independent physical parameter:
* three to space,
* one (usually denoted t) to time, and
* one to wavelength, or color, or more generally to any spectral parameter (we denote this dimension s here).
* In other words, images are discrete functions, I (x, y, z, t, s),
* with each set of coordinates yielding the value of a unique sample
* (indicated by the small squares, the number of which is obviously arbitrary here).
* Note that the dimensionality of an image (indicated in the top row) is given by the number of coordinates that are varied during acquisition.
* To avoid confusion in characterizing an image, it is advisable to add adjectives indicating which dimensions were scanned, rather than mentioning just dimensionality.
* For example, a 4D image may either be a spatially 2D multi-spectral time-lapse image, or a spatially 3D time-lapse image.

1.3.3 Major terms

21-BioImage/bio2.png
Image processing
* takes an image as input and produces a modified version of it
* (in the case shown, the object contours are enhanced using an operation known as edge detection, described in more detail elsewhere in this booklet).

Image analysis
* concerns the extraction of object features from an image.

Computer graphics
* is the inverse of image analysis:
* it produces an image from given primitives, which could be numbers (the case shown), or parameterized shapes, or mathematical functions.

Computer vision
* aims at producing a high-level interpretation of what is contained in an image; this is also known as image understanding.

Visualization
* transforms higher-dimensional image data into a more primitive representation to facilitate exploring the data.

1.3.4 Intensity transformation

21-BioImage/bio3.png

Check out in class:
https://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py
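The idea behind histogram equalization (what the linked scikit-image example demonstrates) can be sketched in plain numpy; the low-contrast image below is synthetic:

```python
import numpy as np

def equalize_hist(image, nbins=256):
    """Histogram equalization: map each pixel through the image's
    normalized cumulative histogram, spreading intensities over [0, 1]."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    cdf = np.cumsum(hist) / image.size               # normalized CDF in [0, 1]
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    return np.interp(image.ravel(), bin_centers, cdf).reshape(image.shape)

# A low-contrast image: values squeezed into [100, 140)
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64)).astype(float)
eq = equalize_hist(img)
# The output uses (nearly) the full [0, 1] intensity range.
```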

1.3.5 Local image filtering

  1. Linear filtering operations
  2. Nonlinear filtering operations

1.3.5.1 Linear

https://en.wikipedia.org/wiki/Kernel_(image_processing)
21-BioImage/bio4.png

Local filters
http://scipy-lectures.org/advanced/image_processing/index.html#image-filtering
http://scipy-lectures.org/packages/scikit-image/#local-filters
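A minimal example of linear filtering by convolution, here a 3 × 3 mean (box) kernel applied with scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

# A 3x3 mean (box) kernel: each output pixel is the average of its neighborhood.
kernel = np.ones((3, 3)) / 9.0

img = np.zeros((7, 7))
img[3, 3] = 9.0                       # a single bright pixel
smoothed = ndimage.convolve(img, kernel, mode="constant", cval=0.0)
# The bright pixel's intensity is spread over its 3x3 neighborhood:
# every pixel adjacent to (3, 3) now has value 1.0, and the total
# intensity (sum) is preserved.
```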

1.3.5.2 Non-linear

Code to show:
https://gitlab.com/bio-data/computer-vision/bio_images/morphology.py

The basic morphological operators are erosion, dilation, opening and closing.
https://en.wikipedia.org/wiki/Mathematical_morphology

https://en.wikipedia.org/wiki/Erosion_(morphology)
The erosion of a point is the minimum of the points in its neighborhood,
with that neighborhood defined by the structuring element.
In this way it is similar to many other kinds of image filters,
like the median filter and the Gaussian filter.

Erosion = minimum filter.
Replace the value of a pixel by the minimal value covered by the structuring element.

Suppose:
A is the following 13 x 13 matrix (all ones except a single 0), and
B is the following 3 x 3 matrix:

A:
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1

B:
1 1 1
1 1 1
1 1 1

Assuming that the origin of B is at its center,
for each pixel in A, superimpose the origin of B;
if B is completely contained within the foreground of A,
then the pixel is retained,
else it is deleted.

Therefore the Erosion of A by B is given by this 13 x 13 matrix:
0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 0 1 1 1 1 0
0 1 1 1 1 0 0 0 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0

This means that a pixel's value is retained only when B,
centered on that pixel, fits completely inside the foreground of A;
otherwise the pixel is deleted, i.e., eroded.
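The worked example above can be reproduced with scipy.ndimage (note that binary_erosion treats pixels outside the image as background by default, which is why the border erodes away):

```python
import numpy as np
from scipy import ndimage

A = np.ones((13, 13), dtype=int)
A[1, 6] = 0                      # the single 0 in row 2 of A
B = np.ones((3, 3), dtype=int)   # the structuring element

eroded = ndimage.binary_erosion(A, structure=B).astype(int)
# The border rows/columns become 0 (B sticks out of the image there),
# and the single background pixel grows into a 3x3 block of zeros.
```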

https://en.wikipedia.org/wiki/Dilation_(morphology)
Dilation = maximum filter.
Replace the value of central pixel by the maximal value covered by the structuring element.

Suppose:
A is the following 11 x 11 matrix, and
B is the following 3 x 3 matrix:

A:
0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 1 1 1 0
0 1 1 1 1 0 0 1 1 1 0
0 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 0
0 1 1 0 0 0 1 1 1 1 0
0 1 1 0 0 0 1 1 1 1 0
0 1 1 0 0 0 1 1 1 1 0
0 1 1 1 1 1 1 1 0 0 0
0 1 1 1 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0

B:
1 1 1
1 1 1
1 1 1

For each pixel in A that has a value of 1, superimpose B,
with the center of B aligned with the corresponding pixel in A.
Each pixel of every superimposed B is included in the dilation of A by B.
The dilation of A by B is given by this 11 x 11 matrix:
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 0 0
1 1 1 1 1 1 1 1 1 0 0
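This worked example, too, can be reproduced with scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

A = np.array([
    [0,0,0,0,0,0,0,0,0,0,0],
    [0,1,1,1,1,0,0,1,1,1,0],
    [0,1,1,1,1,0,0,1,1,1,0],
    [0,1,1,1,1,1,1,1,1,1,0],
    [0,1,1,1,1,1,1,1,1,1,0],
    [0,1,1,0,0,0,1,1,1,1,0],
    [0,1,1,0,0,0,1,1,1,1,0],
    [0,1,1,0,0,0,1,1,1,1,0],
    [0,1,1,1,1,1,1,1,0,0,0],
    [0,1,1,1,1,1,1,1,0,0,0],
    [0,0,0,0,0,0,0,0,0,0,0],
])
B = np.ones((3, 3), dtype=int)

dilated = ndimage.binary_dilation(A, structure=B).astype(int)
# Only pixels whose entire 3x3 neighborhood is background stay 0:
# the center of the 3x3 hole, and the 2x2 corner region.
```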

https://en.wikipedia.org/wiki/Opening_(morphology)
Opening - the dilation of the erosion

https://en.wikipedia.org/wiki/Closing_(morphology)
Closing - the erosion of the dilation
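A small sketch of why opening is useful: it removes specks smaller than the structuring element while preserving larger shapes.

```python
import numpy as np
from scipy import ndimage

img = np.zeros((9, 9), dtype=int)
img[2:7, 2:7] = 1        # a 5x5 foreground square
img[0, 0] = 1            # a single-pixel speck of noise

opened = ndimage.binary_opening(img, structure=np.ones((3, 3))).astype(int)
# Opening (erosion followed by dilation) removes the speck but keeps
# the large square, since the 3x3 structuring element fits inside it.
```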

See ndimage api for these methods:
http://scipy-lectures.org/intro/scipy.html#mathematical-morphology
http://scipy-lectures.org/advanced/image_processing/index.html#mathematical-morphology
http://scipy-lectures.org/packages/scikit-image/#mathematical-morphology

21-BioImage/bio5.png

21-BioImage/b00.png
Turbinate image:
(a) as printed,
(b) after enhancement.

+++++++++++ Cahoot-20-3

1.3.6 Geometric transformations

  1. Coordinate transformation.
    This concerns the mapping of input pixel positions to output pixel positions (and vice versa).
    Depending on the complexity of the problem, one commonly uses a rigid,
    affine, or curved transformation.

  2. Image re-sampling.
    Image re-sampling concerns computing output pixel values from
    the input pixel values and the coordinate transformation.
    This is also known as image interpolation, for which many methods exist.
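Both steps appear explicitly in scipy.ndimage.map_coordinates: we supply the coordinate transformation ourselves and choose the re-sampling (interpolation) order. Here, a half-pixel horizontal shift with linear interpolation:

```python
import numpy as np
from scipy import ndimage

img = np.arange(16, dtype=float).reshape(4, 4)

# Coordinate transformation: sample the image at positions shifted by
# half a pixel in x; re-sampling via linear interpolation (order=1).
rows, cols = np.mgrid[0:4, 0:4].astype(float)
coords = np.array([rows, cols + 0.5])
shifted = ndimage.map_coordinates(img, coords, order=1, mode="nearest")
# Each output pixel is the average of two horizontal neighbors, e.g.
# shifted[0, 0] interpolates between img[0, 0] and img[0, 1].
```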

21-BioImage/bio6.png
Geometrical image transformation by:

Coordinate transformation

Image re-sampling

1.3.7 Image restoration

21-BioImage/bio7.png

1.3.8 Co-localization analysis

An interesting question in many biological studies is:
to what degree are two or more molecular objects (typically proteins) active at the same locations in the specimen?
21-BioImage/bio8.png

1.3.9 Neuron tracing and quantification

Another biological image analysis problem,
which occurs for example when studying molecular mechanisms
such as those involved in neurite outgrowth and differentiation,
is the length measurement of elongated image structures.
For practical reasons, many neuronal morphology studies are performed using 2D imaging.
This often results in ambiguous images:
at many places it is unclear whether neurites are branching or crossing.
Tracing such structures and building neuritic trees for morphological analysis
therefore requires input from human experts to resolve ambiguities.

21-BioImage/bio9.png

1.3.10 Particle detection and tracking, and Cell tracking

Particle tracking methods consist of two stages:
1. the detection of individual particles in each time frame, and
2. the linking of particles detected in successive frames.
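The linking stage can be sketched as an optimal assignment between detections in consecutive frames; the particle coordinates below are made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Hypothetical particle positions detected in two successive frames
frame1 = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
frame2 = np.array([[5.2, 4.9], [0.1, 0.2], [8.8, 1.3]])

cost = cdist(frame1, frame2)                 # pairwise distances
row, col = linear_sum_assignment(cost)       # minimize total displacement
links = list(zip(row, col))                  # particle i in frame1 -> col[i] in frame2
```

Real trackers add gates for maximum displacement and handle particles appearing/disappearing between frames.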

21-BioImage/bio10.png
Challenges in particle and cell tracking.

Reminder:
21-BioImage/bio2.png

1.3.11 Visualization

Several visualization methods exist, which vary in efficiency:

1.3.11.1 Ray casting (volume rendering)

1.3.11.2 Surface rendering

21-BioImage/bio11.png

1.4 General vision methods: edges and gradients

1.4.1 Low-level feature detecting algorithms

https://en.wikipedia.org/wiki/Feature_detection_(computer_vision)
21-BioImage/pasted_image021.png

1.4.1.1 Edges

https://en.wikipedia.org/wiki/Edge_detection
21-BioImage/pasted_image029.png

Canny:
https://en.wikipedia.org/wiki/Canny_edge_detection

Example problem:
https://en.wikipedia.org/wiki/Sobel_operator
21-BioImage/pasted_image018.png

How to detect edges?
Edge detection: zoom in 1 row
21-BioImage/pasted_image019.png
Of one row:
intensity,
derivative, and
smoothed derivative

Do this with rows, then columns, and you get edges.

Sobel computes an x-derivative approximation and a y-derivative approximation
(the convolution kernels above).
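A sketch of the Sobel x/y derivative approximations on a synthetic step edge, using scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

# A vertical step edge: dark left half, bright right half
img = np.zeros((8, 8))
img[:, 4:] = 1.0

dx = ndimage.sobel(img, axis=1)   # x-derivative approximation
dy = ndimage.sobel(img, axis=0)   # y-derivative approximation
magnitude = np.hypot(dx, dy)      # gradient magnitude

# The magnitude peaks along the step (columns 3-4) and is zero
# in the flat regions; dy is zero everywhere since rows are identical.
```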

+++++++++++ Cahoot-20-4

http://scipy-lectures.org/advanced/image_processing/index.html#edge-detection
(sobel)
21-BioImage/pasted_image020.png

1.4.1.2 Gradients

21-BioImage/pasted_image022.png

1.4.1.3 Histogram of oriented gradients (HOG)

21-BioImage/pasted_image024.png
21-BioImage/pasted_image025.png

Feature histogram
21-BioImage/pasted_image026.png

21-BioImage/pasted_image028.png

HOGgles
21-BioImage/pasted_image027.png
Middle row displays roughly what the computer with HOG might see.
Top row is HOG.
Doing HOG first helps to scan/detect objects (a form of feature extraction)

1.5 Labeling and segmentation

What is segmentation on biological images?
21-BioImage/segmentation_fault.jpg

How do you label all of the cell or nucleus data in these images with the same algorithm?
Below are images from your next assignment:
../Content.html
(watch video in class)

21-BioImage/0a849e0eb15faa8a6d7329c3dd66aabe9a294cccb52ed30a90c8ca99092ae732.png 21-BioImage/0e132f71c8b4875c3c2dd7a22997468a3e842b46aa9bd47cf7b0e8b7d63f0925.png
21-BioImage/0ed3555a4bd48046d3b63d8baf03a5aa97e523aa483aaa07459e7afa39fb96c6.png 21-BioImage/1ef68e93964c2d9230100c1347c328f6385a7bc027879dc3d4c055e6fe80cb3c.png
21-BioImage/0f1f896d9ae5a04752d3239c690402c022db4d72c0d2c087d73380896f72c466.png 21-BioImage/1cdbfee1951356e7b0a215073828695fe1ead5f8b1add119b6645d2fdc8d844e.png
21-BioImage/3c4c675825f7509877bc10497f498c9a2e3433bf922bd870914a2eb21a54fd26.png 21-BioImage/1d9eacb3161f1e2b45550389ecf7c535c7199c6b44b1c6a46303f7b965e508f1.png

1.5.1 Segmentation theory

1.5.1.1 Segmentation by histogram thresholding

https://en.wikipedia.org/wiki/Otsu's_method
In Otsu’s method, we exhaustively search for the threshold that minimizes the intra-class variance (the variance within each class), defined as a weighted sum of the variances of the two classes.
The algorithm assumes that the image contains two classes of pixels following a bi-modal histogram (foreground pixels and background pixels). It then calculates the optimum threshold separating the two classes so that their combined spread (intra-class variance) is minimal, or equivalently (because the sum of pairwise squared distances is constant) so that their inter-class variance is maximal.
21-BioImage/otsu_lung.jpg
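The exhaustive search can be sketched in a few lines of numpy (in practice, use e.g. skimage.filters.threshold_otsu); the bimodal image here is synthetic:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Exhaustively search for the threshold maximizing inter-class variance."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w0 = np.cumsum(hist)                      # background weight per threshold
    w1 = w0[-1] - w0                          # foreground weight
    m = np.cumsum(hist * centers)
    mu0 = m / np.maximum(w0, 1)               # background mean
    mu1 = (m[-1] - m) / np.maximum(w1, 1)     # foreground mean
    inter_class = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
    return centers[np.argmax(inter_class)]

# Synthetic bimodal image: background around 50, foreground around 200
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
t = otsu_threshold(img)
# The threshold falls in the valley between the two modes.
```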

https://en.wikipedia.org/wiki/Balanced_histogram_thresholding
This method weighs the histogram, checks which of the two sides is heavier,
and removes weight from the heavier side until it becomes the lighter one.
It repeats the same operation until the edges of the weighing scale meet.
21-BioImage/spyder.jpeg 21-BioImage/spyder_thresh.jpeg
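A direct transcription of this weighing procedure (a sketch following the pseudocode on the linked Wikipedia page; the histogram below is synthetic):

```python
import numpy as np

def balanced_histogram_threshold(hist):
    """Move in from both ends, trimming the heavier side, until the ends meet."""
    i_s, i_e = 0, len(hist) - 1                   # ends of the weighing interval
    i_m = (i_s + i_e) // 2                        # center (the eventual threshold)
    w_l = float(np.sum(hist[i_s:i_m + 1]))        # weight left of center
    w_r = float(np.sum(hist[i_m + 1:i_e + 1]))    # weight right of center
    while i_s <= i_e:
        if w_r > w_l:                             # right side is heavier
            w_r -= hist[i_e]
            i_e -= 1
            if (i_s + i_e) // 2 < i_m:            # center shifted left
                w_r += hist[i_m]
                w_l -= hist[i_m]
                i_m -= 1
        else:                                     # left side is heavier (or equal)
            w_l -= hist[i_s]
            i_s += 1
            if (i_s + i_e) // 2 > i_m:            # center shifted right
                w_l += hist[i_m + 1]
                w_r -= hist[i_m + 1]
                i_m += 1
    return i_m

hist = np.array([0, 0, 10, 0, 0, 0, 0, 0, 10, 0])   # two peaks, bins 2 and 8
t = balanced_histogram_threshold(hist)
```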

1.5.1.2 Segmentation with clustering algorithms

Recall k-means
1. Pick K cluster centers, either randomly or based on some heuristic method, for example K-means++
2. Assign each pixel in the image to the cluster that minimizes the distance between the pixel and the cluster center
3. Re-compute the cluster centers by averaging all of the pixels in the cluster
4. Repeat steps 2 and 3 until convergence is attained (i.e. no pixels change clusters)
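The four steps above, run on 1D pixel intensities (a minimal numpy sketch; in practice one would use e.g. sklearn.cluster.KMeans on whatever pixel features are chosen):

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False)   # step 1: pick centers
    for _ in range(iters):
        # step 2: assign each pixel to the nearest center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # step 3: recompute each center as its cluster's mean
        new_centers = np.array([pixels[labels == j].mean() for j in range(k)])
        if np.allclose(new_centers, centers):             # step 4: convergence
            break
        centers = new_centers
    return labels, centers

# Pixels drawn from two intensity populations (e.g. background vs cells)
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(40, 5, 500), rng.normal(180, 5, 500)])
labels, centers = kmeans_pixels(pixels, k=2)
```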

Original image
21-BioImage/polar.jpg

k-means on that image’s X-features (pixel values)
21-BioImage/polar_kmeans.png
* Recall/consider?
* What is distance?
* How were colors chosen?

1.5.1.3 Segmentation via edge detection

https://en.wikipedia.org/wiki/Edge_detection

1.5.1.4 Many many more segmentation methods

https://en.wikipedia.org/wiki/Image_segmentation

1.5.2 Python demos to go over in class

1.5.2.1 scipy from scipy-lectures

May be more out-of-date on some functions
http://scipy-lectures.org/intro/scipy.html#connected-components-and-measurements-on-images
http://scipy-lectures.org/advanced/image_processing/index.html#segmentation
(based on threshold and more)
http://scipy-lectures.org/advanced/image_processing/index.html#measuring-obj

1.5.2.2 scipy docs

https://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#segmentation-and-labeling
(no easy images for in-class)
https://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#object-measurements
(no easy images for in-class)

1.5.2.3 scikit-image from scipy-lectures

May be more out-of-date on some functions:
http://scipy-lectures.org/packages/scikit-image/#image-segmentation
In class:
http://scipy-lectures.org/packages/scikit-image/auto_examples/plot_labels.html#sphx-glr-packages-scikit-image-auto-examples-plot-labels-py
http://scipy-lectures.org/packages/scikit-image/#measuring-regions-properties (for labeling)
http://scipy-lectures.org/packages/scikit-image/#data-visualization-and-interaction

1.5.2.4 scikit-image docs

https://scikit-image.org/docs/stable/user_guide/tutorial_segmentation.html
https://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_regional_maxima.html#sphx-glr-auto-examples-color-exposure-plot-regional-maxima-py
https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_template.html#sphx-glr-auto-examples-features-detection-plot-template-py
(template matching can be useful for consistent objects, but not all)
https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_blob.html#sphx-glr-auto-examples-features-detection-plot-blob-py
https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_peak_local_max.html#sphx-glr-auto-examples-segmentation-plot-peak-local-max-py
https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_label.html#sphx-glr-auto-examples-segmentation-plot-label-py
(simple, good in class)
https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_random_walker_segmentation.html#sphx-glr-auto-examples-segmentation-plot-random-walker-segmentation-py
https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_watershed.html#sphx-glr-auto-examples-segmentation-plot-watershed-py
https://scikit-image.org/docs/stable/auto_examples/applications/plot_coins_segmentation.html#sphx-glr-auto-examples-applications-plot-coins-segmentation-py
(good comparison of methods, overview; do in Spyder)

1.5.3 How to judge accuracy of a labeling

https://en.wikipedia.org/wiki/Jaccard_index
Recall the rand index we covered previously!

The Jaccard index, also known as Intersection over Union (IoU)
or the Jaccard similarity coefficient
(originally given the French name coefficient de communauté by Paul Jaccard),
is a statistic used for gauging the similarity and diversity of sample sets.
The Jaccard coefficient measures similarity between finite sample sets,
and is defined as the size of the intersection
divided by the size of the union of the sample sets:

$J(A,B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}$

If A and B are both empty,
we define J(A,B) = 1

\(0\leq J(A,B)\leq 1\)
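As a sketch, the Jaccard index of two binary segmentation masks in numpy:

```python
import numpy as np

def jaccard(a, b):
    """IoU of two boolean masks: |A n B| / |A u B| (defined as 1.0 if both empty)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0
    return np.logical_and(a, b).sum() / union

# Two overlapping 2x2 squares on a 4x4 grid:
# intersection = 1 pixel, union = 7 pixels -> J = 1/7
a = np.zeros((4, 4), bool); a[0:2, 0:2] = True
b = np.zeros((4, 4), bool); b[1:3, 1:3] = True
```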

Union (left) and intersection (right):
21-BioImage/Union_of_sets_A_and_B.png 21-BioImage/Intersection_of_sets_A_and_B.png
21-BioImage/Intersection_over_Union_object_detection_bounding_boxes.jpg
21-BioImage/Intersection_over_Union_poor_good_and_excellent_score.png
21-BioImage/Intersection_over_Union_visual_equation.png

1.6 Intro on nuclei from the Data Science Bowl

https://kaggle.com/competitions/data-science-bowl-2018/stage1/teaching-notebook-for-total-imaging-newbies.py
https://kaggle.com/competitions/data-science-bowl-2018/stage1/basic-skimage-nuclei.py