CGAL 5.0.2 - Classification
CGAL::Classification::TensorFlow::Neural_network_classifier< ActivationFunction > Class Template Reference

#include <CGAL/Classification/TensorFlow/Neural_network_classifier.h>

Definition

template<typename ActivationFunction = tensorflow::ops::Relu>
class CGAL::Classification::TensorFlow::Neural_network_classifier< ActivationFunction >

Classifier based on the TensorFlow version of the neural network algorithm.

This class provides an interface to a feature-based neural network: a set of features is used as an input layer followed by a user-specified number of hidden layers with a user-specified activation function. The output layer is a softmax layer providing, for each label, the probability that an input item belongs to it.

Warning
This feature is still experimental: it may not be stable and is likely to undergo substantial changes in future releases of CGAL. The API changes will be announced in the release notes.
Note
This class requires the TensorFlow library.
Template Parameters
ActivationFunction: Chosen activation function for the hidden layers. Relu is used by default. The following functions can be used (please refer to the TensorFlow documentation for more information):
  • tensorflow::ops::Elu
  • tensorflow::ops::Relu6
  • tensorflow::ops::Relu
  • tensorflow::ops::Selu
  • tensorflow::ops::Sigmoid
  • tensorflow::ops::Tanh
Is Model Of:
CGAL::Classification::Classifier
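
The following is a minimal sketch of how the classifier might be instantiated, both with the default activation function (Relu) and with an explicitly chosen one. The label names are illustrative, and the feature set is assumed to be filled separately (for example with one of the feature generators of the Classification package):

  #include <CGAL/Classification/TensorFlow/Neural_network_classifier.h>
  #include <CGAL/Classification/Label_set.h>
  #include <CGAL/Classification/Feature_set.h>

  namespace Classification = CGAL::Classification;

  int main()
  {
    Classification::Label_set labels;
    labels.add ("ground");
    labels.add ("vegetation");
    labels.add ("roof");

    Classification::Feature_set features;
    // ... fill the feature set here, e.g. with a feature generator (omitted)

    // Classifier using the default activation function (Relu)
    Classification::TensorFlow::Neural_network_classifier<>
      classifier (labels, features);

    // Classifier using the hyperbolic tangent instead, selected
    // through the template parameter
    Classification::TensorFlow::Neural_network_classifier<tensorflow::ops::Tanh>
      tanh_classifier (labels, features);

    return 0;
  }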

Constructor

 Neural_network_classifier (const Label_set &labels, const Feature_set &features)
 Instantiates the classifier using the sets of labels and features.
 

Training

template<typename LabelIndexRange >
void train (const LabelIndexRange &ground_truth, bool restart_from_scratch=true, std::size_t number_of_iterations=5000, float learning_rate=0.01, std::size_t batch_size=1000, const std::vector< std::size_t > &hidden_layers=std::vector< std::size_t >())
 Runs the training algorithm.
 

Input/Output

void save_configuration (std::ostream &output)
 Saves the current configuration in the stream output.
 
bool load_configuration (std::istream &input, bool verbose=false)
 Loads a configuration from the stream input.
 

Member Function Documentation

◆ load_configuration()

template<typename ActivationFunction = tensorflow::ops::Relu>
bool CGAL::Classification::TensorFlow::Neural_network_classifier< ActivationFunction >::load_configuration (std::istream & input, bool verbose = false)

Loads a configuration from the stream input.

The input file should be in the XML format written by the save_configuration() method. The feature set of the classifier should contain the exact same features in the exact same order as the ones present when the file was generated using save_configuration().
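
The following is a minimal sketch of loading a previously saved configuration, with verbose output enabled; the file name is hypothetical, and classifier is assumed to have been constructed with the same feature set, in the same order, as when the file was written (this requires <fstream> and <iostream>):

  std::ifstream input ("classifier_config.xml");
  if (!classifier.load_configuration (input, true)) // verbose = true
    std::cerr << "Error: could not load the classifier configuration" << std::endl;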

◆ save_configuration()

template<typename ActivationFunction = tensorflow::ops::Relu>
void CGAL::Classification::TensorFlow::Neural_network_classifier< ActivationFunction >::save_configuration ( std::ostream &  output)

Saves the current configuration in the stream output.

This makes it possible to easily save and recover a specific classification configuration, that is to say:

  • The statistics of features (mean and standard deviation)
  • The number of hidden layers and their respective sizes
  • The weights and biases of the neurons

The output file is written in an XML format that is readable by the load_configuration() method.
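
The following is a minimal sketch of saving a trained classifier to an XML file; the file name is hypothetical, and classifier is assumed to have been constructed and trained beforehand (this requires <fstream>):

  std::ofstream output ("classifier_config.xml");
  classifier.save_configuration (output);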

◆ train()

template<typename ActivationFunction = tensorflow::ops::Relu>
template<typename LabelIndexRange >
void CGAL::Classification::TensorFlow::Neural_network_classifier< ActivationFunction >::train (const LabelIndexRange & ground_truth,
    bool restart_from_scratch = true,
    std::size_t number_of_iterations = 5000,
    float learning_rate = 0.01,
    std::size_t batch_size = 1000,
    const std::vector< std::size_t > & hidden_layers = std::vector< std::size_t >())

Runs the training algorithm.

From the provided ground truth, this algorithm constructs a neural network and applies an Adam optimizer to set up the weights and biases that produce the most accurate result with respect to this ground truth.

Precondition
At least one ground truth item should be assigned to each label.
Parameters
  ground_truth: vector of label indices. It should contain, for each input item and in the same order as the input set, the index of the corresponding label in the Label_set provided in the constructor. Input items that do not have ground truth information should be given the value -1.
  restart_from_scratch: should be set to false if the user wants to continue adjusting the weights and biases based on previous training, and to true if the neural network should be re-created from scratch (discarding all previous training results).
  number_of_iterations: number of times the optimizer is called.
  learning_rate: describes the rate at which the optimizer changes the weights and biases.
  batch_size: size of the random subset of inliers used for optimizing at each iteration.
  hidden_layers: vector containing the consecutive sizes (numbers of neurons) of the hidden layers of the network. If no vector is given, the following default is used: 2 hidden layers, where the first layer has as many neurons as the number of features and the second layer has a number of neurons equal to the average of the number of features and the number of labels.
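
The following is a minimal sketch of a training call; classifier is assumed to have been constructed as in the instantiation example above, and the ground truth values and hidden layer sizes are illustrative:

  // One label index per input item, in the same order as the input set;
  // -1 marks items without ground truth information
  std::vector<int> ground_truth;
  // ... fill ground_truth, e.g. from a labeled training set (omitted)

  // Two hidden layers of 20 and 10 neurons instead of the default layout
  std::vector<std::size_t> hidden_layers = { 20, 10 };

  classifier.train (ground_truth,
                    true,   // restart_from_scratch
                    5000,   // number_of_iterations
                    0.01f,  // learning_rate
                    1000,   // batch_size
                    hidden_layers);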