Supervised Learning: What if we use the raw sigmoid function as part of our cost function to optimize the problem of logistic regression?
Abstract

A common question, once a machine learning student grasps the concept of linear regression, is whether the mean squared error can simply be reused with the sigmoid function to solve the optimization problem of logistic regression. The idea for this post came up during my recent flight from Bogota to Los Angeles, during which I started to think about how I could demonstrate, in a simple way, the non-convexity of this cost function.

Development

Let's begin with the question: what if we use the raw sigmoid function as part of our cost function to optimize the problem of logistic regression? To make the demonstration intuitive and to further simplify, a dataset was created on purpose in which the presence of a malignant tumor is determined by the tumor size. For example:

Tumor Size, Malign
1.0, 0
1.1, 0
1.2, 0
1.3, 0
1.4, 0
1.5, 0
...

Now, let's define our hypothesis function, the sigmoid applied to a linear model:

h_θ(x) = 1 / (1 + e^{-(θ₀ + θ₁x)})

Then, let's put the dataset into a 2D plot in which the Y-...
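The non-convexity this post sets out to demonstrate can also be checked numerically. Below is a minimal sketch, not the post's own derivation: the dataset values, the malignancy threshold of 2.5, and the choice of parameter slice are all assumptions for illustration. It builds a labeled dataset like the one above, defines the mean-squared-error cost with the sigmoid hypothesis, and inspects discrete second differences of the cost along one parameter direction; a convex function would keep all of them non-negative.

```python
import numpy as np

# Hypothetical dataset in the spirit of the example above:
# tumors up to 2.5 are benign (0), larger ones malignant (1).
x = np.arange(1.0, 5.1, 0.1)
y = (x > 2.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse_cost(theta0, theta1):
    """Mean squared error with the sigmoid hypothesis
    h(x) = sigmoid(theta0 + theta1 * x)."""
    h = sigmoid(theta0 + theta1 * x)
    return np.mean((h - y) ** 2) / 2.0

# Evaluate the cost along a 1-D slice (theta0 held fixed) and look at
# the discrete second differences: for a convex function they would
# all be >= 0, so any negative value certifies non-convexity.
thetas = np.linspace(-10.0, 10.0, 401)
costs = np.array([mse_cost(-5.0, t) for t in thetas])
second_diff = np.diff(costs, n=2)

print("min second difference:", second_diff.min())  # negative -> not convex
```

The cost curve along this slice has a high plateau (all predictions near 0), a sigmoid-shaped descent, and a low plateau near zero cost; the curvature changes sign along the descent, which is exactly what the negative second differences pick up.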