Ha, it's English time. Let's spend a few minutes learning a simple machine learning example through a short passage.

Introduction

  • What is machine learning? You design methods that let the machine learn and improve by itself.
  • As a way of leading into machine learning methods, this passage introduces three methods for finding the optimal k and b of a linear regression (y = k*x + b).
  • The data used is generated by ourselves.
  1. Self-sufficient data generation
  2. Random Chosen Method
  3. Supervised Direction Method
  4. Gradient Descent Method
  5. Conclusion

Self-sufficient Data Generation

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import random

# produce the data ourselves
age_with_fares = pd.DataFrame({"Fare": [263.0, 247.5208, 146.5208, 153.4625, 135.6333, 247.5208, 164.8667, 134.5, 135.6333, 153.4625, 134.5, 263.0, 211.5, 263.0, 151.55, 153.4625, 227.525, 211.3375, 211.3375],
                               "Age": [23.0, 24.0, 58.0, 58.0, 35.0, 50.0, 31.0, 40.0, 36.0, 38.0, 41.0, 24.0, 27.0, 64.0, 25.0, 40.0, 38.0, 29.0, 43.0]})
sub_fare = age_with_fares['Fare']
sub_age = age_with_fares['Age']

# show our data
plt.scatter(sub_age, sub_fare)
plt.show()

def func(age, k, b): return k * age + b
def loss(y, yhat): return np.mean(np.abs(y - yhat))
# here we use the mean absolute error (L1) as the loss; mean squared error (L2) and other losses are also possible
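
For reference, here is a minimal sketch of the mean-squared-error (L2) loss mentioned in the comment above; it is only an illustration and is not used in the rest of this article.

# a sketch of the L2 (mean squared error) alternative; the article keeps the L1 loss defined above
def loss_l2(y, yhat): return np.mean((y - yhat) ** 2)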

Random Chosen Method

min_error_rate = float('inf')
loop_times = 10000
losses = []

def step(): return random.random() * 2 - 1
# random.random() gives a number in (0, 1); *2 maps it to (0, 2); -1 maps it to (-1, 1)
# random draws plus looping are where the "learning" comes from in this method

while loop_times > 0:
    k_hat = random.random() * 20 - 10
    b_hat = random.random() * 20 - 10
    estimated_fares = func(sub_age, k_hat, b_hat)
    error_rate = loss(y=sub_fare, yhat=estimated_fares)
    if error_rate < min_error_rate:  # the self-supervision mechanism lives here
        min_error_rate = error_rate
        losses.append(error_rate)
        best_k = k_hat
        best_b = b_hat
    loop_times -= 1

plt.scatter(sub_age, sub_fare)
plt.plot(sub_age, func(sub_age, best_k, best_b), c='r')
plt.show()

show the loss change

plt.plot(range(len(losses)), losses)
plt.show()

Explanation

  • We can see that the loss sometimes decreases quickly and sometimes slowly, but overall it keeps going down.
  • One shortcoming of this method: the Random Chosen method is inefficient, because it has to call the random function an enormous number of times.
  • Even after it finds a better pair of parameters, the next draw is completely independent and may well be worse.
  • An improved method is shown in the next part.

Supervised Direction Method

change_directions = [
    (+1, -1),  # k increases, b decreases
    (+1, +1),
    (-1, -1),
    (-1, +1)
]

min_error_rate = float('inf')
loop_times = 10000
losses = []
best_direction = random.choice(change_directions)

# size of each change (step length)
def step(): return random.random() * 2 - 1
# random.random() gives a number in (0, 1); *2 maps it to (0, 2); -1 maps it to (-1, 1)
# change_directions already flips the sign, so the *2 - 1 could be dropped here,
# but keeping it gives a wider choice of step sizes

k_hat = random.random() * 20 - 10
b_hat = random.random() * 20 - 10
best_k, best_b = k_hat, b_hat

while loop_times > 0:
    k_delta_direction, b_delta_direction = best_direction or random.choice(change_directions)
    k_delta = k_delta_direction * step()
    b_delta = b_delta_direction * step()
    new_k = best_k + k_delta
    new_b = best_b + b_delta
    estimated_fares = func(sub_age, new_k, new_b)
    error_rate = loss(y=sub_fare, yhat=estimated_fares)
    if error_rate < min_error_rate:  # the supervision mechanism: keep only improvements
        min_error_rate = error_rate
        best_k, best_b = new_k, new_b
        best_direction = (k_delta_direction, b_delta_direction)
        losses.append(min_error_rate)
    else:
        # the new direction must not equal the old one
        best_direction = random.choice(list(set(change_directions) - {(k_delta_direction, b_delta_direction)}))
    loop_times -= 1

print("f(age) = {} * age + {}, with error rate: {}".format(best_k, best_b, error_rate))
plt.scatter(sub_age, sub_fare)
plt.plot(sub_age, func(sub_age, best_k, best_b), c='r')
plt.show()

show the loss change

plt.plot(range(len(losses)), losses)
plt.show()

Explanation

  • The Supervised Direction method (2nd method) is better than the Random Chosen method (1st method).
  • The 2nd method introduces a supervision mechanism, which makes the changes to the parameters k and b more efficient.
  • But the 2nd method cannot refine the parameters at a smaller magnitude, because its step size never shrinks (a possible fix is sketched below).
  • Besides, the 2nd method cannot locate the extreme value, so it cannot find the optimal parameters effectively.
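
As one illustration of the "smaller magnitude" problem, the step() above always draws from (-1, 1), so updates late in the search are as coarse as the first ones. A step that shrinks with the iteration index (my own sketch, not part of the original notebook) would allow finer and finer adjustments:

# hypothetical shrinking step: the same random draw as step(), scaled down as the iteration index i grows
def decaying_step(i, total=10000):
    return (random.random() * 2 - 1) * (1 - i / total)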

Gradient Descent Method

min_error_rate = float('inf')
loop_times = 10000
losses = []
learning_rate = 1e-1

k_hat = random.random() * 20 - 10
b_hat = random.random() * 20 - 10

# partial derivative of the L1 loss with respect to k
def derivate_k(y, yhat, x):
    abs_values = [1 if (y_i - yhat_i) > 0 else -1 for y_i, yhat_i in zip(y, yhat)]
    return np.mean([a * -x_i for a, x_i in zip(abs_values, x)])

# partial derivative of the L1 loss with respect to b
def derivate_b(y, yhat):
    abs_values = [1 if (y_i - yhat_i) > 0 else -1 for y_i, yhat_i in zip(y, yhat)]
    return np.mean([a * -1 for a in abs_values])

while loop_times > 0:
    # move k and b against the gradient of the loss
    k_delta = -1 * learning_rate * derivate_k(sub_fare, func(sub_age, k_hat, b_hat), sub_age)
    b_delta = -1 * learning_rate * derivate_b(sub_fare, func(sub_age, k_hat, b_hat))
    k_hat += k_delta
    b_hat += b_delta
    estimated_fares = func(sub_age, k_hat, b_hat)
    error_rate = loss(y=sub_fare, yhat=estimated_fares)
    losses.append(error_rate)
    loop_times -= 1

print('f(age) = {} * age + {}, with error rate: {}'.format(k_hat, b_hat, error_rate))
plt.scatter(sub_age, sub_fare)
plt.plot(sub_age, func(sub_age, k_hat, b_hat), c='r')
plt.show()

show the loss change

plt.plot(range(len(losses)), losses)
plt.show()

Explanation

  • To fit an objective function to discrete data, we use a loss function to measure how good the fit is.
  • Getting the minimum loss then becomes an unconstrained extremum-finding problem.
  • Therefore we use gradient descent on the loss function.
  • The gradient is the direction of the largest directional derivative, i.e. the direction of steepest ascent, so we move against it.
  • When the gradient approaches 0, we are close to an extremum and the objective function fits well (a small numerical check is sketched below).
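
To make the last bullet concrete, here is a small sanity check (my own sketch, not from the original notebook): it compares the analytic derivative derivate_k defined above with a finite-difference estimate of the same L1 loss. The helper name numeric_grad_k is hypothetical.

# finite-difference estimate of d(loss)/dk at a chosen (k, b), to compare with derivate_k
def numeric_grad_k(k, b, eps=1e-6):
    up = loss(sub_fare, func(sub_age, k + eps, b))
    down = loss(sub_fare, func(sub_age, k - eps, b))
    return (up - down) / (2 * eps)

k0, b0 = 2.0, 5.0
print(numeric_grad_k(k0, b0))                                  # numerical estimate
print(derivate_k(sub_fare, func(sub_age, k0, b0), sub_age))    # analytic value used in the loop

The two printed values should agree closely, which is a quick way to convince ourselves that the hand-derived gradient used in the loop is correct.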

Conclusion

  • Machine learning is a process in which the machine learns and improves by itself, using methods designed by us.
  • A plain random search is usually inefficient, but once we add a supervision mechanism it becomes much more efficient.
  • Gradient descent is an efficient way to find the extreme value and the optimal parameters.

Serious question for this article:

Why do you use machine learning methods instead of creating a y = k*x + b formula?

  • In some scenarios a hand-written formula cannot meet the needs of reality, for example irrational factors in economic models.
  • When we have enough valid data, we can fit a regression or classification model with machine learning methods (a closed-form baseline for comparison is sketched below).
  • We can also evaluate our machine learning model on test data, which helps when applying the model in real life.
  • This is just an example, okay.
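
As a point of comparison (my own addition, not in the original notebook), a closed-form least-squares fit of the same data can be obtained with np.polyfit. Note that it minimizes the squared error (L2), while the methods above minimize the absolute error (L1), so the fitted k and b will not be identical.

# a minimal sketch: closed-form least-squares baseline for the same data
k_ls, b_ls = np.polyfit(sub_age, sub_fare, 1)
print('least squares: f(age) = {} * age + {}'.format(k_ls, b_ls))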

Reference for this article: Jupyter Notebook
