
Some thoughts on emotional adjustment --
A metaphor from machine learning

Since I changed programs, I often get caught up in roller-coaster mood swings. Today, it suddenly occurred to me that mood swings can actually be thought of as a kind of 「underfitting-overfitting problem」 in machine learning, and that looking at them from a 「bias-variance tradeoff」 point of view may help you control your emotions better.


The 「underfitting-overfitting problem」 is a common problem in machine learning. Take linear regression as an example: given a data set, we hope to find a representative equation for the data. The yellow squares in the figure below are the data, and the blue line is the fitting curve that the algorithm "learns".

[Figure: the same data fitted three ways, from left to right: underfitting, a good fit, and overfitting]

If you look only at the yellow points, you can clearly see that they are arranged in a wide U shape: a slow decline followed by slow growth. The blue curve in the middle of the three graphs fits this trend very well.


In the leftmost figure, the fitting curve learned by the algorithm is a straight line, which neither shows the declining trend in the data nor properly represents the individual data points. This situation is called 「underfitting」.


In the rightmost figure, although the fitting curve perfectly represents the given data (it passes through almost every point), it fails to capture the most important U-shaped structure of the dataset, and its ability to generalize to new data is poor: if new data were added, even data following the same wide U-shaped trend, it would likely fall far from the fitting curve. This situation is called 「overfitting」.
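To make the contrast concrete, here is a minimal numpy sketch (my own toy illustration, not the data from the original figure) that fits polynomials of degree 1, 2, and 12 to noisy U-shaped data and compares the error on the training points with the error on fresh points drawn from the same trend:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy U-shaped data, a stand-in for the yellow points in the figure.
x = np.linspace(-3, 3, 20)
y = x**2 + rng.normal(0.0, 1.0, size=x.size)

# Fresh points from the same wide U-shaped trend, never seen during fitting.
x_new = np.linspace(-2.5, 2.5, 10)
y_new = x_new**2 + rng.normal(0.0, 1.0, size=x_new.size)

errors = {}
for degree in (1, 2, 12):  # underfit, good fit, overfit
    coeffs = np.polyfit(x, y, degree)
    train = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    errors[degree] = (train, test)
    print(f"degree {degree:2d}: train MSE {train:6.2f}, new-data MSE {test:6.2f}")
```

The straight line (degree 1) has high error everywhere; the degree-12 curve drives the training error toward zero yet does noticeably worse on the new points, which is exactly the generalization failure described above.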

I think 「underfitting」 and 「overfitting」 can be seen as analogous to two unhealthy states in a person's emotional model. Under 「underfitting」 emotions, people become numb to their own situation, escape, shut out information from the outside world, and stop wanting to better themselves. In this mood, people tend to lose their enthusiasm for life and retreat into their own world. Under 「overfitting」 emotions, by contrast, people overreact to the outside world and are easily disturbed and made restless by the slightest interference. In this mood, people are nervous and busy every day, only to find after a while that they have been running around aimlessly.


Naturally, we can borrow the methods machine learning uses for the 「underfitting-overfitting problem」 and make corresponding adjustments to these two emotional states.


Usually, the methods to solve the problem of 「underfitting」 are as follows:


(1) add more features to describe the data;

(2) increase the number of parameters in the model.
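As a sketch of these two remedies (again my own toy example, assuming U-shaped data like the figure's): starting from a plain linear model, adding an x² feature, which also adds one parameter to fit, lets ordinary least squares capture the U shape:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 30)
y = x**2 + rng.normal(0.0, 0.5, size=x.size)

# Underfitting model: intercept and a linear feature only.
X_lin = np.column_stack([np.ones_like(x), x])
w_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
mse_lin = np.mean((X_lin @ w_lin - y) ** 2)

# Remedy: add a feature (x**2), which also adds a parameter to learn.
X_quad = np.column_stack([np.ones_like(x), x, x**2])
w_quad, *_ = np.linalg.lstsq(X_quad, y, rcond=None)
mse_quad = np.mean((X_quad @ w_quad - y) ** 2)

print(f"linear MSE {mse_lin:.2f} -> with x^2 feature {mse_quad:.2f}")
```

The richer description of the data drops the fitting error dramatically without changing the learning algorithm at all.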


Following this line of thinking, when emotionally underfitting we should try to develop new interests, reach out to different circles, and not let a single factor dominate our inner world. In short, we have to enrich our lives.


Usually, the methods to solve the problem of 「overfitting」 are as follows:


(1) enlarge the training set;

(2) reduce the number of parameters in the model;

(3) regularize the model.
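Remedy (3) can be sketched with ridge regression, one standard form of regularization, which penalizes large coefficients (a toy example with hypothetical data; I keep x in [-1, 1] so the polynomial features stay well conditioned):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 15)
y = x**2 + rng.normal(0.0, 0.5, size=x.size)

# A deliberately over-flexible model: degree-10 polynomial features.
X = np.vander(x, 11)

def ridge(X, y, lam):
    """Closed-form ridge regression: minimizes ||Xw - y||^2 + lam * ||w||^2."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

w_ols = ridge(X, y, 0.0)  # no penalty: free to chase the noise
w_reg = ridge(X, y, 1.0)  # penalized: coefficients are shrunk

print("coefficient norm without penalty:", np.linalg.norm(w_ols))
print("coefficient norm with penalty:   ", np.linalg.norm(w_reg))
```

The penalty shrinks the wild high-order coefficients, so the curve stays smooth even though the model nominally has enough parameters to thread through every point.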


Following this line of thinking, when emotionally overfitting we should focus on the thing at hand rather than on the outside world's influence on us. As the saying goes, "think about chopping wood when chopping wood, think about boiling water when boiling water, think about cooking when cooking." In short, become calmer, more focused, and more disciplined.


This kind of thinking can be carried further. For example: in emotional 「underfitting」, how do we choose the new feature that is most needed? In emotional 「overfitting」, is there some kind of emotional cross-validation [2], and how would we do it? These questions themselves have nothing to do with machine learning, but machine learning provides a ready-made way to think about them. That is why I often think everyone should learn a little data science, such as statistics and machine learning: they are not only tools for artificial intelligence, but also whetstones for thinking about the small problems around us.
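On the cross-validation question, the mechanical version is at least easy to sketch (a toy k-fold example on hypothetical U-shaped data; what its emotional analogue would be, I leave open as in the text). Each candidate model is scored only on data it never trained on:

```python
import numpy as np

def kfold_mse(x, y, degree, k=5, seed=0):
    """Average held-out MSE of a degree-`degree` polynomial fit over k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        held_out = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coeffs, x[held_out]) - y[held_out]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 40)
y = x**2 + rng.normal(0.0, 1.0, size=x.size)

# Compare an underfitting, a reasonable, and an over-flexible model.
scores = {d: kfold_mse(x, y, d) for d in (1, 2, 10)}
print(scores)
```

The held-out score punishes the straight line for missing the U shape, without rewarding the over-flexible model for memorizing noise, which is exactly why it is the standard tool for choosing model complexity.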

