Question 1

Consider the problem of predicting how well a student does in her second year of college/university, given how well she did in her first year.

Specifically, let x be equal to the number of “A” grades (including A−, A, and A+ grades) that a student receives in their first year of college (freshman year). We would like to predict the value of y, which we define as the number of “A” grades they get in their second year (sophomore year).

Here each row is one training example. Recall that in linear regression, our hypothesis is hθ(x) = θ0 + θ1x, and we use m to denote the number of training examples.

x   y
5   4
3   4
0   1
4   3

For the training set given above (note that this training set may also be referenced in other questions in this quiz), what is the value of m? In the box below, please enter your answer (which should be a number between 0 and 10).

Answer:
4

Question 2

Consider the following training set of m=4 training examples:

x   y
1   0.5
2   1
4   2
0   0

Consider the linear regression model hθ(x) = θ0 + θ1x. What are the values of θ0 and θ1 that you would expect to obtain upon running gradient descent on this model? (Linear regression will be able to fit this data perfectly.)

    • θ0=0.5,θ1=0

    • θ0=0.5,θ1=0.5

    • θ0=1,θ1=0.5

    • θ0=0,θ1=0.5

    • θ0=1,θ1=1

Answer:
θ0=0,θ1=0.5

Since linear regression can fit this data perfectly, the optimal parameters satisfy J(θ0, θ1) = 0, i.e., y = hθ(x) = θ0 + θ1x for every training example. Using any two rows of the table, solve for θ0 and θ1.
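As a numerical check, here is a minimal sketch in Python (the course itself uses Octave/MATLAB; the learning rate and iteration count below are arbitrary choices for illustration, not values from the quiz). Batch gradient descent on this training set converges to θ0 ≈ 0, θ1 ≈ 0.5:

    # Batch gradient descent for h(x) = theta0 + theta1 * x on the Question 2 data.
    xs = [1.0, 2.0, 4.0, 0.0]
    ys = [0.5, 1.0, 2.0, 0.0]
    m = len(xs)

    theta0, theta1 = 0.0, 0.0
    alpha = 0.1                      # illustrative learning rate

    for _ in range(5000):            # illustrative iteration count
        # Errors h(x(i)) - y(i) for every training example.
        errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        # Simultaneous update of both parameters.
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

    print(theta0, theta1)            # approximately 0.0 and 0.5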

Question 3

Suppose we set θ0=−1,θ1=0.5. What is hθ(4)?

Answer:

Setting x = 4, we have hθ(x) = θ0 + θ1x = −1 + (0.5)(4) = 1.
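The same computation as a quick Python sketch (purely illustrative; the parameter values are the ones fixed by the question):

    theta0, theta1 = -1.0, 0.5
    h = lambda x: theta0 + theta1 * x   # h(x) = theta0 + theta1 * x
    print(h(4))                         # prints 1.0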

Question 4

Let f be some function so that f(θ0, θ1) outputs a number. For this problem, f is some arbitrary/unknown smooth function (not necessarily the cost function of linear regression, so f may have local optima). Suppose we use gradient descent to try to minimize f(θ0, θ1) as a function of θ0 and θ1. Which of the following statements are true? (Check all that apply.)

    • Even if the learning rate α is very large, every iteration of gradient descent will decrease the value of f(θ0, θ1).

    • If the learning rate is too small, then gradient descent may take a very long time to converge.

    • If θ0 and θ1 are initialized at a local minimum, then one iteration will not change their values.

    • If θ0 and θ1 are initialized so that θ0 = θ1, then by symmetry (because we do
      simultaneous updates to the two parameters), after one iteration of gradient
      descent, we will still have θ0 = θ1.

Answers:

    • True. If the learning rate is too small, then gradient descent may take a very long time to converge.
      Explanation: If the learning rate is small, gradient descent ends up taking an extremely small step on each iteration, and therefore can take a long time to converge.

    • True. If θ0 and θ1 are initialized at a local minimum, then one iteration will not change their values.
      Explanation: At a local minimum, the derivative (gradient) is zero, so gradient descent will not change the parameters.

    • False. Even if the learning rate α is very large, every iteration of gradient descent will decrease the value of f(θ0, θ1).
      Explanation: If the learning rate is too large, one step of gradient descent can vastly “overshoot” and actually increase the value of f(θ0, θ1).

    • False. If θ0 and θ1 are initialized so that θ0 = θ1, then by symmetry (because we do simultaneous updates to the two parameters), after one iteration of gradient descent, we will still have θ0 = θ1.
      Explanation: The updates to θ0 and θ1 are different (even though we’re doing simultaneous updates), so there’s no particular reason for them to remain equal after one iteration of gradient descent.

Other Options:

    • True. If the first few iterations of gradient descent cause f(θ0, θ1) to increase rather than decrease, then the most likely cause is that we have set the learning rate α to too large a value.
      Explanation: If α were small enough, gradient descent should always take a tiny step downhill and decrease f(θ0, θ1) at least a little bit. If gradient descent instead increases the objective value, that means α is too large (or you have a bug in your code!).

    • False. No matter how θ0 and θ1 are initialized, so long as the learning rate is sufficiently small, we can safely expect gradient descent to converge to the same solution.
      Explanation: This is not true; depending on the initial condition, gradient descent may end up at different local optima.

    • False. Setting the learning rate to be very small is not harmful, and can only speed up the convergence of gradient descent.
      Explanation: If the learning rate is small, gradient descent ends up taking an extremely small step on each iteration, so this would actually slow down (rather than speed up) the convergence of the algorithm.
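A small sketch can make these statements concrete. It assumes, purely for illustration, the smooth function f(θ0, θ1) = θ0² + 2θ1² (this function is not from the quiz): a tiny learning rate barely decreases f, a large one overshoots and increases it, a start at the minimum does not move, and a start with θ0 = θ1 does not stay that way because the two partial derivatives differ:

    # Assumed example function (not from the quiz): f(t0, t1) = t0^2 + 2*t1^2,
    # whose gradient is (2*t0, 4*t1).
    def f(t0, t1):
        return t0 ** 2 + 2 * t1 ** 2

    def step(t0, t1, alpha):
        # One gradient-descent step with simultaneous updates.
        return t0 - alpha * 2 * t0, t1 - alpha * 4 * t1

    print(f(1.0, 1.0))                  # 3.0 at the starting point
    print(f(*step(1.0, 1.0, 0.001)))    # ~2.98: tiny alpha, tiny decrease
    print(f(*step(1.0, 1.0, 1.0)))      # 19.0: large alpha overshoots, f increases
    print(step(0.0, 0.0, 0.1))          # (0.0, 0.0): at the minimum, nothing changes
    print(step(1.0, 1.0, 0.001))        # (0.998, 0.996): theta0 = theta1 does not persist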

Question 5

Suppose that for some linear regression problem (say, predicting housing prices as in the lecture), we have some training set, and for our training set we managed to find some θ0, θ1 such that J(θ0, θ1) = 0.

Which of the statements below must then be true? (Check all that apply.)

    • For this to be true, we must have y(i)=0 for every value of i=1,2,…,m.

    • Gradient descent is likely to get stuck at a local minimum and fail to find the global minimum.

    • For this to be true, we must have θ0=0 and θ1=0 so that hθ(x)=0

    • Our training set can be fit perfectly by a straight line, i.e.,
      all of our training examples lie perfectly on some straight line.

Answers:

    • False. For this to be true, we must have y(i)=0 for every value of i=1,2,…,m.
      Explanation: So long as all of our training examples lie on a straight line, we will be able to find θ0 and θ1 so that J(θ0, θ1) = 0. It is not necessary that y(i)=0 for all of our examples.

    • False. Gradient descent is likely to get stuck at a local minimum and fail to find the global minimum.
      Explanation: The squared-error cost function for linear regression has no local optima other than the global minimum, so gradient descent will not get stuck at a bad local minimum.

    • False. For this to be true, we must have θ0=0 and θ1=0 so that hθ(x)=0.
      Explanation: If J(θ0, θ1) = 0, that means the line defined by the equation y = θ0 + θ1x perfectly fits all of our data. There’s no particular reason to expect that the values of θ0 and θ1 that achieve this are both 0 (unless y(i)=0 for all of our training examples).

    • True. Our training set can be fit perfectly by a straight line, i.e., all of our training examples lie perfectly on some straight line.
      Explanation: If J(θ0, θ1) = 0, that means the line defined by the equation y = θ0 + θ1x perfectly fits all of our data.

Other Options:

    • False. We can perfectly predict the value of y even for new examples that we have not yet seen (e.g., we can perfectly predict prices of even new houses that we have not yet seen).
      Explanation: A perfect fit on the training set does not guarantee perfect predictions on examples we have not yet seen.

    • False. This is not possible: by the definition of J(θ0, θ1), it is not possible for there to exist θ0 and θ1 so that J(θ0, θ1) = 0.
      Explanation: If all of our training examples lie on a straight line, the θ0 and θ1 defining that line give J(θ0, θ1) = 0, so such values can certainly exist.

    • True. For these values of θ0 and θ1 that satisfy J(θ0, θ1) = 0, we have that hθ(x(i)) = y(i) for every training example (x(i), y(i)).
      Explanation: J(θ0, θ1) is a sum of the squared errors (hθ(x(i)) − y(i))², so J(θ0, θ1) = 0 implies hθ(x(i)) = y(i) for every training example.
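As a concrete check of what J(θ0, θ1) = 0 means, here is a brief Python sketch using the Question 2 data (where every example lies exactly on the line y = 0.5x); the cost is exactly 0 for θ0 = 0, θ1 = 0.5 and positive for any other line:

    # Squared-error cost J(theta0, theta1) for the Question 2 data.
    xs = [1.0, 2.0, 4.0, 0.0]
    ys = [0.5, 1.0, 2.0, 0.0]

    def J(theta0, theta1):
        m = len(xs)
        return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

    print(J(0.0, 0.5))   # 0.0   -- every example lies exactly on the line y = 0.5x
    print(J(0.5, 0.5))   # 0.125 -- any other line misses at least one example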
