Match Feature Points#

Note

Wish List: Still needs additional work to finish proper creation of this example.

Synopsis#

Match feature points.

Results#

Note

Help Wanted: Implementation of Results for Sphinx examples containing this message. Reconfiguration of CMakeLists.txt may be necessary. See Write An Example (https://itk.org/ITKExamples/Documentation/Contribute/WriteANewExample.html).

Code#

C++#

#include "itkBlockMatchingImageFilter.h"
#include "itkImage.h"
#include "itkPoint.h"
#include "itkPointSet.h"

using ImageType = itk::Image<unsigned char, 2>;

static void
CreateImage(ImageType::Pointer image, const unsigned int x);

int
main(int /*argc*/, char * /*argv*/[])
{
  // Create input images
  auto fixedImage = ImageType::New();
  CreateImage(fixedImage, 40);

  auto movingImage = ImageType::New();
  CreateImage(movingImage, 50);

  //  using BlockMatchingImageFilterType = itk::BlockMatchingImageFilter<ImageType, ImageType, PointSetType>;
  using BlockMatchingImageFilterType = itk::BlockMatchingImageFilter<ImageType>;
  auto blockMatchingImageFilter = BlockMatchingImageFilterType::New();

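  // Optionally adjust the matching window before running the filter:
  // BlockRadius sets the half-size of the block compared around each feature
  // point, and SearchRadius sets how far that block is shifted in the moving
  // image. The values below are illustrative only (a sketch); the filter
  // provides defaults when these are not set.
  ImageType::SizeType blockRadius;
  blockRadius.Fill(2);
  blockMatchingImageFilter->SetBlockRadius(blockRadius);

  ImageType::SizeType searchRadius;
  searchRadius.Fill(5);
  blockMatchingImageFilter->SetSearchRadius(searchRadius);
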
  // Generate feature points
  //  using PointSetType = itk::PointSet< float, 2>;
  using PointSetType = BlockMatchingImageFilterType::FeaturePointsType;
  using PointType = PointSetType::PointType;
  using PointsContainerPointer = PointSetType::PointsContainerPointer;

  auto                   pointSet = PointSetType::New();
  PointsContainerPointer points = pointSet->GetPoints();

  PointType p0, p1, p2, p3;

  p0[0] = 40.0;
  p0[1] = 40.0;
  p1[0] = 40.0;
  p1[1] = 60.0;
  p2[0] = 60.0;
  p2[1] = 40.0;
  p3[0] = 60.0;
  p3[1] = 60.0;

  points->InsertElement(0, p0);
  points->InsertElement(1, p1);
  points->InsertElement(2, p2);
  points->InsertElement(3, p3);

  blockMatchingImageFilter->SetFixedImage(fixedImage);
  blockMatchingImageFilter->SetMovingImage(movingImage);
  blockMatchingImageFilter->SetFeaturePoints(pointSet);
  blockMatchingImageFilter->UpdateLargestPossibleRegion();

  BlockMatchingImageFilterType::DisplacementsType * displacements =
    blockMatchingImageFilter->GetDisplacements();

  std::cout << "There are " << displacements->GetNumberOfPoints() << " displacements." << std::endl;

  return EXIT_SUCCESS;
}

static void
CreateImage(ImageType::Pointer image, const unsigned int x)
{
  // Allocate empty image
  itk::Index<2> start;
  start.Fill(0);
  itk::Size<2> size;
  size.Fill(100);
  ImageType::RegionType region(start, size);
  image->SetRegions(region);
  image->Allocate();
  image->FillBuffer(0);

  // Make a white square
  for (unsigned int r = x; r < x + 20; ++r)
  {
    for (unsigned int c = 40; c < 60; ++c)
    {
      ImageType::IndexType pixelIndex;
      pixelIndex[0] = r;
      pixelIndex[1] = c;
      image->SetPixel(pixelIndex, 255);
    }
  }
}

Classes demonstrated#

template <typename TFixedImage,
          typename TMovingImage = TFixedImage,
          typename TFeatures = PointSet<Matrix<SpacePrecisionType, TFixedImage::ImageDimension, TFixedImage::ImageDimension>, TFixedImage::ImageDimension>,
          class TDisplacements = PointSet<Vector<typename TFeatures::PointType::ValueType, TFeatures::PointDimension>, TFeatures::PointDimension>,
          class TSimilarities = PointSet<SpacePrecisionType, TDisplacements::PointDimension>>
class BlockMatchingImageFilter : public itk::MeshToMeshFilter<TFeatures, TDisplacements>

Computes displacements of given points from a fixed image in a floating image.

BlockMatchingImageFilter takes a fixed image, a moving image, and a PointSet of feature points as inputs. Physical coordinates of the feature points are stored as point coordinates. Points of the input point set must have unique identifiers within the range 0..N-1, where N is the number of points. Pixels (pointData) of the input point set are not used. Additionally, by default, feature points are expected to lie at least (SearchRadius + BlockRadius) voxels from a boundary. This is usually achieved by using an appropriate mask during selection of feature points. If you are unsure whether the feature points satisfy the above condition, set the CheckBoundary flag to true, which turns on boundary checks. The default output (0) is a PointSet with displacements stored as vectors. The additional output (1) is a PointSet containing similarities. Similarities are needed to compute displacements and are always computed. The number of points in the output PointSet is equal to the number of points in the input PointSet.
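As a minimal sketch of retrieving both outputs, assuming the filter instance blockMatchingImageFilter from the code above has already been updated; DisplacementsType and GetDisplacements() appear in the example, while SimilaritiesType and GetSimilarities() are the analogous alias and accessor for the similarities output:

BlockMatchingImageFilterType::DisplacementsType * displacements =
  blockMatchingImageFilter->GetDisplacements();
BlockMatchingImageFilterType::SimilaritiesType * similarities =
  blockMatchingImageFilter->GetSimilarities();

// One displacement vector and one similarity value per input feature point.
std::cout << displacements->GetNumberOfPoints() << " displacements, "
          << similarities->GetNumberOfPoints() << " similarities." << std::endl;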

The filter is templated over the fixed image, the moving image, the input PointSet, the output displacements PointSet, and the output similarities PointSet.
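In practice, the concrete types are most easily obtained from the filter's nested type aliases rather than by spelling out all five template arguments. A minimal sketch for 2-D unsigned char images (FeaturePointsType and DisplacementsType are used in the example above; SimilaritiesType is assumed to be the analogous alias for the similarities output):

using FilterType = itk::BlockMatchingImageFilter<itk::Image<unsigned char, 2>>;
using FeaturePointsType = FilterType::FeaturePointsType; // input feature points
using DisplacementsType = FilterType::DisplacementsType; // output (0): displacement vectors
using SimilaritiesType = FilterType::SimilaritiesType;   // output (1): similarity values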

This filter is intended to be used in the process of Physics-Based Non-Rigid Registration. It computes displacements for selected points based on similarity [M. Bierling, Displacement estimation by hierarchical block matching, Proc. SPIE Vis. Comm. and Image Proc., vol. 1001, pp. 942-951, 1988.].

Author

Andriy Kot, Center for Real-Time Computing, Old Dominion University, Norfolk, VA

See

MaskFeaturePointSelectionFilter


See itk::BlockMatchingImageFilter for additional documentation.