https://www.coursera.org/learn/machine-learning/exam/7pytE/linear-regression-with-multiple-variables

1.

Suppose m=4 students have taken some class, and the class had a midterm exam and a final exam. You have collected a dataset of their scores on the two exams, which is as follows:

midterm exam    (midterm exam)²    final exam
89              7921               96
72              5184               74
94              8836               87
69              4761               78

You'd like to use polynomial regression to predict a student's final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form hθ(x) = θ₀ + θ₁x₁ + θ₂x₂, where x₁ is the midterm score and x₂ is (midterm score)². Further, you plan to use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.

What is the normalized feature x₂⁽²⁾? (Hint: midterm = 72, final = 74 is training example 2.) Round your answer to two decimal places and enter it in the text box below.

Answer: -0.37

Mean: (7921 + 5184 + 8836 + 4761) / 4 = 6675.5

Max - min: 8836 - 4761 = 4075

Normalized x = (x - mean) / (max - min)

Training example 2: (5184 - 6675.5) / 4075 ≈ -0.37
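
A minimal Python sketch of the same computation (the variable names are mine, not from the course):

```python
# Mean normalization plus range scaling for the (midterm)^2 feature.
x2 = [7921, 5184, 8836, 4761]            # (midterm exam)^2 for the 4 students
mean = sum(x2) / len(x2)                 # 6675.5
value_range = max(x2) - min(x2)          # 8836 - 4761 = 4075
normalized = [(v - mean) / value_range for v in x2]
print(round(normalized[1], 2))           # training example 2 -> -0.37
```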

2.

You run gradient descent for 15 iterations with α=0.3 and compute J(θ) after each iteration. You find that the value of J(θ) increases over time. Based on this, which of the following conclusions seems most plausible?

A. α=0.3 is an effective choice of learning rate.

B. Rather than use the current value of α, it'd be more promising to try a larger value of α (say α=1.0).

C. Rather than use the current value of α, it'd be more promising to try a smaller value of α (say α=0.1).

Answer: C. Rather than use the current value of α, it'd be more promising to try a smaller value of α (say α=0.1).

If J(θ) increases with every iteration, α is too large: each update overshoots the minimum and the cost diverges instead of decreasing. A larger α speeds up descent only while gradient descent is still converging; here the fix is a smaller α.
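
A hypothetical one-dimensional illustration (J(θ) = θ² with gradient 2θ, not the course's cost function) of how an oversized learning rate makes J(θ) grow:

```python
# Run gradient descent on J(theta) = theta^2 and record J after each update.
def j_history(alpha, iters=15, theta=1.0):
    history = []
    for _ in range(iters):
        theta -= alpha * 2 * theta   # gradient descent step on J = theta^2
        history.append(theta ** 2)   # J(theta) after the update
    return history

print(j_history(1.2)[:4])  # J grows every iteration: the steps overshoot
print(j_history(0.1)[:4])  # J shrinks every iteration: stable learning rate
```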

3.

Suppose you have m=23 training examples with n=5 features (excluding the additional all-ones feature for the intercept term, which you should add). The normal equation is θ = (XᵀX)⁻¹Xᵀy. For the given values of m and n, what are the dimensions of θ, X, and y in this equation?

A. X is 23×6, y is 23×6, θ is 6×6

B. X is 23×5, y is 23×1, θ is 5×5

C. X is 23×6, y is 23×1, θ is 6×1

D. X is 23×5, y is 23×1, θ is 5×1

Answer: C. X is 23×6, y is 23×1, θ is 6×1

With the all-ones column added, X has n+1 = 6 columns (23×6), y has 1 column (23×1), and θ has n+1 = 6 rows (6×1).
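
A small NumPy sketch (random placeholder data) that confirms the shapes:

```python
import numpy as np

m, n = 23, 5
X = np.hstack([np.ones((m, 1)), np.random.rand(m, n)])  # all-ones column -> 23x6
y = np.random.rand(m, 1)                                # 23x1
theta = np.linalg.inv(X.T @ X) @ X.T @ y                # theta = (X^T X)^{-1} X^T y
print(X.shape, y.shape, theta.shape)                    # (23, 6) (23, 1) (6, 1)
```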

4.

Suppose you have a dataset with m=50 examples and n=15 features for each example. You want to use multivariate linear regression to fit the parameters θ to your data. Should you prefer gradient descent or the normal equation?

A. Gradient descent, since it will always converge to the optimal θ.

B. The normal equation, since it provides an efficient way to directly find the solution.

C. Gradient descent, since (XᵀX)⁻¹ will be very slow to compute in the normal equation.

D. The normal equation, since gradient descent might be unable to find the optimal θ.

Answer: B. The normal equation, since it provides an efficient way to directly find the solution.

Comparing gradient descent with the normal equation:

Gradient descent needs feature scaling to converge quickly; the normal equation is simple to use and needs no feature scaling.

The normal equation has a high time complexity (computing (XᵀX)⁻¹ is roughly O(n³)), so it suits problems with relatively few features.

Rule of thumb: when the number of features is below ~100,000, use the normal equation; above that, use gradient descent.
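
A minimal NumPy sketch of the normal-equation fit at this problem size (synthetic data; np.linalg.solve is used instead of an explicit inverse for numerical stability):

```python
import numpy as np

m, n = 50, 15
X = np.hstack([np.ones((m, 1)), np.random.rand(m, n)])  # 50x16 with intercept column
true_theta = np.arange(n + 1, dtype=float).reshape(-1, 1)
y = X @ true_theta                                      # noiseless synthetic targets

# Solve (X^T X) theta = X^T y directly -- no learning rate, no iterations.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(theta, true_theta))                   # True: exact solution found
```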
 

5.

Which of the following are reasons for using feature scaling?

A. It speeds up solving for θ using the normal equation.

B. It prevents the matrix XᵀX (used in the normal equation) from being non-invertible (singular/degenerate).

C. It speeds up gradient descent by making it require fewer iterations to get to a good solution.

D. It is necessary to prevent gradient descent from getting stuck in local optima.

Answer: C. It speeds up gradient descent by making it require fewer iterations to get to a good solution.

The previous question touched on the same point: the normal equation does not need feature scaling at all, which rules out A and B. Feature scaling reduces the number of iterations and so speeds up gradient descent, but it does not prevent gradient descent from getting stuck in local optima (for linear regression the cost function is convex, so there are no local optima in the first place), which rules out D.
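
A hypothetical demo (synthetic data and my own helper function, not from the course) showing that the same gradient-descent loop needs far fewer iterations once the features are mean-normalized and range-scaled:

```python
import numpy as np

def gd_iterations(X, y, alpha, tol=1e-6, max_iters=50000):
    """Batch gradient descent; returns how many iterations were needed."""
    m = len(y)
    theta = np.zeros(X.shape[1])
    for i in range(max_iters):
        grad = X.T @ (X @ theta - y) / m
        if np.linalg.norm(grad) < tol:
            return i
        theta -= alpha * grad
    return max_iters  # did not converge within the cap

rng = np.random.default_rng(0)
raw = np.column_stack([rng.uniform(0, 1, 200),       # small-scale feature
                       rng.uniform(0, 2000, 200)])   # large-scale feature
y = raw @ np.array([3.0, 0.01]) + 5

X_raw = np.column_stack([np.ones(200), raw])
scaled = (raw - raw.mean(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
X_scaled = np.column_stack([np.ones(200), scaled])

# Unscaled: alpha must be tiny to avoid divergence, and progress is glacial.
print(gd_iterations(X_raw, y, alpha=1e-7))     # hits the 50000-iteration cap
# Scaled: a much larger alpha is stable and converges quickly.
print(gd_iterations(X_scaled, y, alpha=0.3))   # converges in far fewer iterations
```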
