
## When to use multi-label or multi-target regression?

Multi-target regression is the term used when there are multiple dependent variables. If the target variables are categorical, the problem is called multi-label or multi-target classification; if the target variables are numeric, multi-target (or multi-output) regression is the name commonly used.
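As a minimal sketch of the numeric case, the following fits one regressor per target column using scikit-learn's `MultiOutputRegressor` (the synthetic data and the choice of `Ridge` as base estimator are illustrative assumptions, not prescribed by the text):

```python
# Sketch: multi-target (multi-output) regression with scikit-learn.
# Both target columns are numeric, so this is multi-target regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # 5 independent variables
W = rng.normal(size=(5, 2))
Y = X @ W + rng.normal(scale=0.1, size=(200, 2))   # 2 dependent variables

# MultiOutputRegressor fits one Ridge model per target column.
model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X, Y)
pred = model.predict(X)
print(pred.shape)  # (200, 2): one prediction per target
```

Some estimators (e.g. `sklearn.tree.DecisionTreeRegressor`) accept a 2-D target directly, so the wrapper is only needed for single-output estimators.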

### What does it mean to have multiple regression models?

A multiple regression model is one that attempts to predict a dependent variable based on the values of two or more independent variables.
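A quick sketch of such a model, using plain NumPy least squares (the data and true coefficients here are made up for illustration):

```python
# Sketch: multiple regression -- one dependent variable predicted from
# several independent variables, solved by ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))  # 3 independent variables
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] \
    + rng.normal(scale=0.05, size=100)

# Add an intercept column and solve with lstsq.
A = np.hstack([np.ones((100, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # intercept near 0, slopes near [2.0, -1.0, 0.5]
```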

#### Can a classification model support multiple target variables?

Machine learning classifiers usually support a single target variable. In a regression model the target is real-valued, whereas in a classification model the target is binary or multivalued. For classification models, a problem with multiple target variables is called multi-label classification.
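A minimal sketch of the multi-label case, assuming scikit-learn is available; `MultiOutputClassifier` fits one classifier per label column, and the dataset is synthetic:

```python
# Sketch: multi-label classification -- each sample carries several
# binary target variables at once.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=100, n_features=10,
                                      n_classes=3, random_state=0)

# One logistic-regression classifier is fitted per label column.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X).shape)  # (100, 3): three binary labels per sample
```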

**What do you call problem with multiple target variables?**

For classification models, a problem with multiple target variables is called multi-label classification. In the realm of regression models, as a beginner, I found the nomenclature a bit confusing.

**How are targets set in a performance management system?**

The quality of the final results that the system produces depends directly on the targets that are set at the beginning of the cycle. In other words, the better the quality of the target setting, the better the quality of the whole performance management system; the opposite is true as well.

## How are decision trees used in multi target regression?

Multi-target regression (MTR) can be approached using clustering and decision trees. For the rest of the discussion, we shall focus on a single method: decision trees and ensembles of decision trees for MTR. We first need to look at Predictive Clustering Trees (PCTs), the foundation on which decision trees for MTR are built.
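The single-tree flavor of this idea can be sketched with scikit-learn's `DecisionTreeRegressor`, which natively accepts a 2-D target; note this is only an analogue of the approach, not a full PCT implementation:

```python
# Sketch: a single decision tree fitted to two numeric targets at once.
# DecisionTreeRegressor handles multi-output targets natively.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(150, 4))
Y = np.column_stack([X[:, 0] + X[:, 1], 2.0 * X[:, 2]])  # 2 targets

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, Y)
print(tree.predict(X[:5]).shape)  # (5, 2): both targets from one tree
```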

### How to choose the best regression model for a cluster?

Given that each node of a tree corresponds to a cluster, the decision tree algorithm is adapted so that, at each node, it selects the test that maximizes the distance between the resulting clusters in its subnodes. Given a cluster C and a set of candidate tests, the best test T is the one that maximizes the distance between the resulting sub-clusters C1 and C2.
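A toy version of this split-scoring idea can be written in a few lines. The scoring function below is an assumption for illustration: it measures separation as the size-weighted squared distance between sub-cluster centroids, which is one common proxy for the variance-reduction heuristic PCTs actually use:

```python
# Toy sketch of the PCT split heuristic: score a candidate test by how
# far apart it pushes the target centroids of sub-clusters C1 and C2.
import numpy as np

def split_score(Y, mask):
    """Score a binary test; mask selects the rows of C that go to C1."""
    C1, C2 = Y[mask], Y[~mask]
    if len(C1) == 0 or len(C2) == 0:
        return -np.inf  # degenerate split: everything on one side
    d = np.sum((C1.mean(axis=0) - C2.mean(axis=0)) ** 2)
    return len(C1) * len(C2) / len(Y) * d  # size-weighted centroid distance

rng = np.random.default_rng(3)
X = rng.uniform(size=(100, 2))
# Both targets depend only on the first feature.
Y = np.column_stack([(X[:, 0] > 0.5).astype(float), 3.0 * X[:, 0]])

# A test on the informative feature separates the targets far better
# than a test on the irrelevant one.
good = split_score(Y, X[:, 0] > 0.5)
bad = split_score(Y, X[:, 1] > 0.5)
print(good > bad)  # True
```

A tree builder would evaluate `split_score` for every candidate feature/threshold pair at a node and recurse on the winning split.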