Performance Evaluation Metrics

Manidhar Kodurupaka
3 min read · Apr 22, 2021

In object tracking, the main aim of performance evaluation metrics is to measure how well a tracker estimates the position of the target object. Because trackers are difficult to compare directly, the literature typically reports the percentage of time the target is tracked correctly, and the metrics developed for positional trackers are evaluated against the real (ground-truth) trajectories.

Performance evaluation metrics in machine learning

Evaluation metrics measure and estimate the performance of a classification model, whose output is typically a probability between 0 and 1. The simplest way to think about performance is accuracy: the percentage of correct predictions on the test data, obtained by dividing the number of correct predictions by the total number of predictions.
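As a quick illustration (a minimal sketch with made-up labels, not data from this article), accuracy can be computed directly from the true and predicted labels:

```python
# Hypothetical labels for a binary classifier (assumed for illustration).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions

# Accuracy = number of correct predictions / total number of predictions.
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(f"Accuracy: {accuracy:.2%}")  # 6 of 8 correct -> 75.00%
```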

Evaluation metrics are used to measure the quality of a statistical or machine learning model. There are many different kinds of evaluation metrics available to test a model, including classification accuracy, logarithmic loss, the confusion matrix, and others.

Types of Predictive Models

When we talk about predictive models, we mean either a regression model or a classification model, and the evaluation metrics used for each are different.

In classification problems, a model can produce two different kinds of output:

  1. Class Output
  2. Probability Output

Here we go,

  1. Class output: Some algorithms produce a class output directly. For instance, in a binary classification problem the output is either 0 or 1. There are algorithms that convert class outputs to probabilities, but these conversions are not well accepted by the statistics community.
  2. Probability output: Many algorithms, such as Logistic Regression, Gradient Boosting, and AdaBoost, give probability outputs. Converting a probability output to a class output is as simple as applying a threshold to the probability (see the sketch after this list).
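
Here is a minimal sketch of that conversion (the probabilities and the 0.5 cutoff are assumptions for illustration, not fixed rules):

```python
# Convert probability outputs to class outputs by applying a threshold.
probabilities = [0.91, 0.12, 0.58, 0.43, 0.77]  # example scores (assumed)
threshold = 0.5                                 # common default, not a fixed rule

classes = [1 if p >= threshold else 0 for p in probabilities]
print(classes)  # [1, 0, 1, 0, 1]
```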

In regression problems, the output we get is always continuous in nature and requires no further treatment.

Model prediction

1. Confusion Matrix:

The confusion matrix is one of the easiest and most intuitive metrics for judging the correctness and accuracy of a model. It is used for classification problems, where the output can be of two or more classes.

(Image: model evaluation metrics)
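
As a sketch (the labels below are invented for illustration), a 2x2 confusion matrix for a binary classifier simply counts how often each actual class is predicted as each class:

```python
# Build a 2x2 confusion matrix for a binary classification problem.
# Rows = actual class, columns = predicted class (labels assumed for this sketch).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

matrix = [[0, 0], [0, 0]]
for actual, predicted in zip(y_true, y_pred):
    matrix[actual][predicted] += 1

print("            pred 0  pred 1")
print(f"actual 0:   {matrix[0][0]:>6}  {matrix[0][1]:>6}")
print(f"actual 1:   {matrix[1][0]:>6}  {matrix[1][1]:>6}")
```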

Before going deeper into the confusion matrix, let's say we are solving a classification problem; once that is clear, the same ideas carry over to other classifiers. When I started learning about performance evaluation and the confusion matrix, the sheer number of related terms (precision, recall, accuracy, F1-score, true and false positives and negatives) created a lot of confusion in my mind about what each one means.
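
To keep those terms straight, here is a minimal sketch (the counts are assumed, continuing the hypothetical example above) showing how accuracy, precision, recall, and F1-score all come from the four cells of the confusion matrix:

```python
# Counts taken from a hypothetical binary confusion matrix (assumed values).
tp, fp = 4, 1   # true positives, false positives
fn, tn = 1, 4   # false negatives, true negatives

accuracy  = (tp + tn) / (tp + tn + fp + fn)        # overall correctness
precision = tp / (tp + fp)                         # of predicted positives, how many are right
recall    = tp / (tp + fn)                         # of actual positives, how many are found
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
# accuracy=0.80 precision=0.80 recall=0.80 f1=0.80
```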
