Decision trees are a supervised learning algorithm used mainly for classification and regression.
They have a tree-like structure in which the internal nodes are "tests" on attributes and the branches are the outcomes of those tests. The leaf nodes are the class labels, i.e., the output of the learner. Given below is the basic structure of a decision tree.
Given below is an example of a decision tree used to decide whether to walk or take the bus. "Walk" and "Bus" are the class labels in this example. The attributes used by the model are weather, time and hunger.
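To make this concrete, here is a minimal sketch of training such a tree with scikit-learn. The tiny dataset and the numeric encodings of weather, time and hunger are invented purely for illustration; they are not part of the original example.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data (invented for illustration): each row is [weather, time, hunger]
# weather: 0 = rainy, 1 = sunny; time: minutes available; hunger: 0 = no, 1 = yes
X = [
    [1, 30, 0],
    [1, 10, 0],
    [0, 30, 1],
    [0, 15, 0],
    [1, 25, 1],
    [0, 40, 0],
]
y = ["Walk", "Bus", "Bus", "Bus", "Walk", "Walk"]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)

# Print the learned tests (internal nodes) and class labels (leaves)
print(export_text(clf, feature_names=["weather", "time", "hunger"]))

# Predict for a new day: sunny, 20 minutes available, not hungry
print(clf.predict([[1, 20, 0]]))
```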
As you can see in the above example, we can clearly trace the decision-making process. This is a major advantage of decision trees: they are transparent models.
Before learning a model from data with a given learning algorithm, the learner makes a few assumptions about the problem. These assumptions are called the inductive bias, and they can be thought of as a property of the algorithm itself.
For example, in the case of decision trees, the depth of the tree is part of the inductive bias. If the depth is too low, the model generalises too much, i.e., it underfits. Conversely, if the depth is too high, the model generalises too little, and when testing on a new example we may effectively end up matching a particular example used to train the model. This may give us incorrect results.
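As a rough sketch of this effect, the snippet below compares a very shallow tree with an unrestricted one on synthetic data (generated here only for illustration): the shallow tree tends to score poorly everywhere (underfitting), while the unrestricted tree scores much higher on the training set than on unseen data (overfitting).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data, used only to illustrate the effect of tree depth
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, None):  # very shallow vs. unrestricted depth
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    clf.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train acc={clf.score(X_train, y_train):.2f}, "
          f"test acc={clf.score(X_test, y_test):.2f}")
```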
In machine learning, hyperparameters control the learning process, in contrast to parameters, which are obtained by training the model on the data. Hyperparameters are independent of the data.
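For instance, in scikit-learn a hyperparameter such as max_depth is supplied before fitting, while the learned parameters (the actual splits of the tree) exist only after fit is called; the tiny dataset below is a placeholder used only to show this distinction.

```python
from sklearn.tree import DecisionTreeClassifier

# Hyperparameter: chosen before training, independent of the data
clf = DecisionTreeClassifier(max_depth=3)
print(clf.get_params()["max_depth"])        # 3

# Parameters: the tree structure is only available after fitting on data
clf.fit([[0], [1], [2], [3]], [0, 0, 1, 1])  # placeholder toy data
print(clf.tree_.node_count)                  # number of learned nodes
```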
Hyperparameters are set manually before the model is trained, and they are generally chosen using the inductive bias. After we set our hyperparameters, we train on the data and get a trained model. We then use the trained model on separate data to "validate" it, and based on the validation results we tune our hyperparameters as required. A basic flowchart is given below.
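The loop in the flowchart can be sketched roughly as follows, using a held-out validation split to pick the tree depth; the dataset and the list of candidate depths are placeholders chosen only for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder data; in practice this would be your own dataset
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Split off a separate validation set used only for tuning
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

best_depth, best_score = None, -1.0
for depth in (1, 2, 3, 5, 8, None):        # candidate hyperparameter values
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    clf.fit(X_train, y_train)              # train on the training data
    score = clf.score(X_val, y_val)        # "validate" on separate data
    if score > best_score:
        best_depth, best_score = depth, score

print(f"chosen max_depth={best_depth} (validation accuracy {best_score:.2f})")
```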