CGAL 4.14.1 - Classification
#include <CGAL/Classification/TensorFlow/Neural_network_classifier.h>
Classifier based on the TensorFlow version of the neural network algorithm.
This class provides an interface to a feature-based neural network: a set of features is used as an input layer followed by a user-specified number of hidden layers with a user-specified activation function. The output layer is a softmax layer providing, for each label, the probability that an input item belongs to it.
| ActivationFunction | Activation function used for the hidden layers. Relu is used by default; please refer to the TensorFlow documentation for the other available functions. |
Constructor

Neural_network_classifier (const Label_set &labels, const Feature_set &features)
    Instantiates the classifier using the sets of labels and features.
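As a sketch of how the constructor fits into the broader Classification API (the label names and the surrounding setup are illustrative, and this assumes CGAL was built with TensorFlow support):

```cpp
// Sketch only: the feature set would normally be filled from an input
// point set, e.g. with a Point_set_feature_generator (not shown here).
#include <CGAL/Classification/Label_set.h>
#include <CGAL/Classification/Feature_set.h>
#include <CGAL/Classification/TensorFlow/Neural_network_classifier.h>

namespace Classification = CGAL::Classification;

Classification::Label_set labels;
labels.add ("ground");      // illustrative label names
labels.add ("vegetation");
labels.add ("building");

Classification::Feature_set features;
// ... fill `features` from the input data ...

// The default template argument uses the Relu activation function
Classification::TensorFlow::Neural_network_classifier<>
  classifier (labels, features);
```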
Training

template <typename LabelIndexRange>
void train (const LabelIndexRange &ground_truth, bool restart_from_scratch = true, std::size_t number_of_iterations = 5000, float learning_rate = 0.01, std::size_t batch_size = 1000, const std::vector<std::size_t> &hidden_layers = std::vector<std::size_t>())
    Runs the training algorithm.
Input/Output

void save_configuration (std::ostream &output)
    Saves the current configuration in the stream output.
bool load_configuration (std::istream &input, bool verbose = false)
    Loads a configuration from the stream input.
bool CGAL::Classification::TensorFlow::Neural_network_classifier<ActivationFunction>::load_configuration (std::istream &input, bool verbose = false)
Loads a configuration from the stream input.
The input file should be in the XML format written by the save_configuration() method. The feature set of the classifier should contain the exact same features in the exact same order as the ones present when the file was generated using save_configuration().
void CGAL::Classification::TensorFlow::Neural_network_classifier<ActivationFunction>::save_configuration (std::ostream &output)
Saves the current configuration in the stream output.
This makes it easy to save a specific classification configuration and recover it later.
The output file is written in an XML format that is readable by the load_configuration() method.
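A round trip between the two methods can be sketched as follows (the file name is illustrative, and `classifier` is assumed to have been constructed, and trained, as shown above; on reload, its feature set must contain the exact same features in the exact same order):

```cpp
// Sketch: save a trained configuration to an XML file, then reload it.
#include <fstream>
#include <iostream>

std::ofstream out ("classifier_config.xml"); // illustrative file name
classifier.save_configuration (out);
out.close();

// Later (possibly in another program with an identical feature set):
std::ifstream in ("classifier_config.xml");
if (!classifier.load_configuration (in, true /* verbose */))
  std::cerr << "Failed to load configuration" << std::endl;
```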
template <typename LabelIndexRange>
void CGAL::Classification::TensorFlow::Neural_network_classifier<ActivationFunction>::train (const LabelIndexRange &ground_truth, bool restart_from_scratch = true, std::size_t number_of_iterations = 5000, float learning_rate = 0.01, std::size_t batch_size = 1000, const std::vector<std::size_t> &hidden_layers = std::vector<std::size_t>())
Runs the training algorithm.
From the provided ground truth, this algorithm constructs a neural network and applies an Adam optimizer to find the weights and biases that produce the most accurate results with respect to that ground truth.
| ground_truth | vector of label indices. It should contain, for each input item and in the same order as the input set, the index of the corresponding label in the Label_set provided in the constructor. Input items that do not have ground truth information should be given the value -1. |
| restart_from_scratch | should be set to false to continue adjusting the weights and biases from previous training results, and to true to re-create the neural network from scratch (discarding all previous training results). |
| number_of_iterations | number of times the optimizer is called. |
| learning_rate | rate at which the optimizer changes the weights and biases. |
| batch_size | size of the random subset of inliers used for optimization at each iteration. |
| hidden_layers | vector containing the consecutive sizes (numbers of neurons) of the hidden layers of the network. If no vector is given, 2 hidden layers are used: the first layer has as many neurons as there are features; the second layer has a number of neurons equal to the average of the number of features and the number of labels. |
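A call with an explicit architecture can be sketched as follows (`classifier` and the size of the input set, here `nb_points`, are assumptions carried over from the setup above; the layer sizes are illustrative):

```cpp
// Sketch: training with two custom hidden layers.
#include <vector>
#include <cstddef>

std::vector<int> ground_truth (nb_points, -1); // -1 = no ground truth
// ... fill ground_truth with label indices for the labeled items ...

// Two hidden layers of 20 and 10 neurons (illustrative sizes)
std::vector<std::size_t> hidden_layers = { 20, 10 };

classifier.train (ground_truth,
                  true,    // restart_from_scratch
                  5000,    // number_of_iterations
                  0.01f,   // learning_rate
                  1000,    // batch_size
                  hidden_layers);
```

Omitting the last argument instead falls back to the default two-layer architecture described above.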