iemodel trainAndevaluate model

The iemodel trainAndevaluate model command trains a new model, or retrains an existing one, and evaluates it. To retrain an existing model you must overwrite it with the newly trained model by setting the --u argument to true.

This command uses your training options file and, optionally, writes the evaluation results to an output file.


iemodel trainAndevaluate model --f trainingOptionsFile --u trueOrFalse --o outputFileName --c categoryCount --d trueOrFalse
--f trainingOptionsFile (required)
Specifies the name and location of the training options file used to train the model. Directory paths you specify here are relative to the location where you are running the Administration Utility.

--u trueOrFalse (optional)
Specifies whether to overwrite the existing trained model (if one exists).
true: Overwrites the existing model.
false: Does not overwrite the existing model.

--o outputFileName (optional)
Specifies the name and location of the output file that stores the evaluation results.

--c categoryCount (optional)
Specifies the number of categories in the model; must be a numeric value.
Note: Applicable only to a Text Classification model.

--d trueOrFalse (optional)
Specifies whether to display a table with entity-wise detailed analysis; the value must be true or false:
true: Detailed evaluation results are displayed.
false: Detailed evaluation results are not displayed.
The default is false.

When detailed analysis is requested, the Model Evaluation Results table and the Confusion Matrix, described below, display the counts per entity.

Note: If the command is run without this argument, or with the value false, the Model Evaluation Results table and the Confusion Matrix are not displayed; only the Model Evaluation Statistics are displayed.


Model Evaluation Statistics
Executing this command displays these evaluation statistics in a tabular format:
  • Precision: A measure of exactness; the proportion of identified tuples that are correct.
  • Recall: A measure of completeness; the fraction of relevant instances that are retrieved.
  • F1 Measure: A measure of the accuracy of a test that takes both precision and recall into account. It is the harmonic mean of precision and recall, reaching its best value at 1 and its worst at 0.
  • Accuracy: A measure of the overall correctness of the results; the closeness of the measured value to the known value.
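As an informal illustration (not part of the Administration Utility), the four statistics above can be computed from true/false positive and negative counts; the formulas below are the standard definitions:

```python
def evaluation_statistics(tp, fp, fn, tn):
    """Compute the standard evaluation statistics from confusion counts."""
    precision = tp / (tp + fp)          # exactness: correct among predicted positives
    recall = tp / (tp + fn)             # completeness: correct among actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    accuracy = (tp + tn) / (tp + fp + fn + tn)          # overall correctness
    return precision, recall, f1, accuracy

p, r, f1, acc = evaluation_statistics(tp=8, fp=2, fn=2, tn=8)
# precision = 0.8, recall = 0.8, f1 = 0.8, accuracy = 0.8
```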
Model Evaluation Results
If the command is run with the argument --d true, the match counts of all the entities are displayed in a tabular format. The columns of the table are:
Input Count
The number of occurrences of the entity in the input data.
Mismatch Count
The number of times the entity match failed.
Match Count
The number of times the entity match succeeded.
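The column descriptions suggest that each input occurrence either matches or mismatches, so Mismatch Count = Input Count − Match Count. The sketch below illustrates that relationship with hypothetical per-entity counts (the entity names and numbers are invented, not Administration Utility output):

```python
# Hypothetical per-entity results, illustrating the three table columns.
results = {
    "PERSON": {"input": 10, "match": 8},
    "PLACE":  {"input": 7,  "match": 5},
}

for entity, c in results.items():
    mismatch = c["input"] - c["match"]   # occurrences where the entity match failed
    print(f"{entity}: input={c['input']} match={c['match']} mismatch={mismatch}")
```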
Confusion Matrix
The Confusion Matrix visualizes how a classification model performs. Each column represents the instances in a predicted class, while each row represents the instances in an actual class; a row total is therefore the number of occurrences of the entity in the actual class, and a column total is the number of occurrences of the entity in the predicted class. The terms associated with the confusion matrix are:
True Positive: The number of entity occurrences predicted as positive that are actually positive.
True Negative: The number of entity occurrences predicted as negative that are actually negative.
False Positive: The number of entity occurrences predicted as positive but actually negative.
False Negative: The number of entity occurrences predicted as negative but actually positive.
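To make the four terms concrete, the following sketch (illustrative only; the class labels and counts are hypothetical) derives them for one class from a small confusion matrix whose rows are actual classes and columns are predicted classes:

```python
# Rows = actual class, columns = predicted class (hypothetical 3-class matrix).
labels = ["PERSON", "PLACE", "ORG"]
matrix = [
    [5, 1, 0],   # actual PERSON
    [2, 6, 1],   # actual PLACE
    [0, 1, 4],   # actual ORG
]

def class_counts(matrix, i):
    """TP, FP, FN, TN for class index i, from a square confusion matrix."""
    total = sum(sum(row) for row in matrix)
    tp = matrix[i][i]                         # predicted i, actually i
    fp = sum(row[i] for row in matrix) - tp   # predicted i, actually another class
    fn = sum(matrix[i]) - tp                  # actually i, predicted another class
    tn = total - tp - fp - fn                 # neither predicted nor actually i
    return tp, fp, fn, tn

tp, fp, fn, tn = class_counts(matrix, labels.index("PERSON"))
# tp=5, fp=2, fn=1, tn=12
```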


This example:
  • Uses a training options file called "ModelTrainingFile" that is located in "C:\Spectrum\IEModels"
  • Overwrites any existing model of the same name
  • Stores the output of the evaluation in a file called "MyModelTestOutput"
  • Specifies a category count of 4
  • Specifies that a detailed analysis of the evaluation is required
iemodel trainAndevaluate model --f C:\Spectrum\IEModels\ModelTrainingFile --u true --o C:\Spectrum\IEModels\MyModelTestOutput --c 4 --d true