Question 1

Consider the problem of predicting how well a student does in her second year of college/university, given how well she did in her first year.

Specifically, let x be equal to the number of “A” grades (including A−, A, and A+ grades) that a student receives in their first year of college (freshman year). We would like to predict the value of y, which we define as the number of “A” grades they get in their second year (sophomore year).

Here each row is one training example. Recall that in linear regression, our hypothesis is hθ(x) = θ0 + θ1x, and we use m to denote the number of training examples.

x    y
5    4
3    4
0    1
4    3
For the training set given above (note that this training set may also be referenced in other questions in this quiz), what is the value of m? In the box below, please enter your answer (which should be a number between 0 and 10).

Answer:
4

Question 2

Consider the following training set of m=4 training examples:

x    y
1    0.5
2    1
4    2
0    0

Consider the linear regression model hθ(x) = θ0 + θ1x. What are the values of θ0 and θ1 that you would expect to obtain upon running gradient descent on this model? (Linear regression will be able to fit this data perfectly.)

    • θ0=0.5,θ1=0

    • θ0=0.5,θ1=0.5

    • θ0=1,θ1=0.5

    • θ0=0,θ1=0.5

    • θ0=1,θ1=1

Answer:
θ0=0,θ1=0.5

Since linear regression fits this data perfectly, J(θ0, θ1) = 0 and y = hθ(x) = θ0 + θ1x for every training example. Using any two rows of the table, solve for θ0 and θ1.
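For a concrete check, here is a minimal sketch in plain Python (the variable names, learning rate, and iteration count are illustrative choices, not part of the quiz) that runs batch gradient descent on the four examples above and converges to θ0 ≈ 0 and θ1 ≈ 0.5:

```python
# Batch gradient descent for h(x) = theta0 + theta1 * x
# on the four training examples from the table above.
xs = [1, 2, 4, 0]
ys = [0.5, 1, 2, 0]
m = len(xs)

theta0, theta1 = 0.0, 0.0
alpha = 0.1  # learning rate (illustrative choice)

for _ in range(1000):
    errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
    # simultaneous update: compute both gradients before stepping
    grad0 = sum(errors) / m
    grad1 = sum(e * x for e, x in zip(errors, xs)) / m
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(theta0, theta1)  # converges to approximately 0.0 and 0.5
```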

Question 3

Suppose we set θ0=−1,θ1=0.5. What is hθ(4)?

Answer:

Setting x = 4, we have hθ(4) = θ0 + θ1x = −1 + (0.5)(4) = 1.
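The same computation as a trivial code sketch (the defaults just bake in the given θ0 and θ1):

```python
# hypothesis h(x) = theta0 + theta1 * x with the given parameters
h = lambda x, theta0=-1.0, theta1=0.5: theta0 + theta1 * x
print(h(4))  # 1.0
```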

Question 4

Let f be some function so that f(θ0, θ1) outputs a number. For this problem, f is some arbitrary/unknown smooth function (not necessarily the cost function of linear regression, so f may have local optima). Suppose we use gradient descent to try to minimize f(θ0, θ1) as a function of θ0 and θ1. Which of the following statements are true? (Check all that apply.)

    • Even if the learning rate α is very large, every iteration of gradient descent will decrease the value of f(θ0, θ1).

    • If the learning rate is too small, then gradient descent may take a very long time to converge.

    • If θ0 and θ1 are initialized at a local minimum, then one iteration will not change their values.

    • If θ0 and θ1 are initialized so that θ0 = θ1, then by symmetry (because we do simultaneous updates to the two parameters), after one iteration of gradient descent, we will still have θ0 = θ1.

Answers:

True: If the learning rate is too small, then gradient descent may take a very long time to converge.
Explanation: If the learning rate is small, gradient descent ends up taking an extremely small step on each iteration, and can therefore take a long time to converge.

True: If θ0 and θ1 are initialized at a local minimum, then one iteration will not change their values.
Explanation: At a local minimum, the derivative (gradient) is zero, so gradient descent will not change the parameters.

False: Even if the learning rate α is very large, every iteration of gradient descent will decrease the value of f(θ0, θ1).
Explanation: If the learning rate is too large, one step of gradient descent can vastly “overshoot” and actually increase the value of f(θ0, θ1).

False: If θ0 and θ1 are initialized so that θ0 = θ1, then by symmetry (because we do simultaneous updates to the two parameters), after one iteration of gradient descent, we will still have θ0 = θ1.
Explanation: The updates to θ0 and θ1 are different (even though we do simultaneous updates), so there is no particular reason for them to remain equal after one iteration of gradient descent.

Other Options:

True: If the first few iterations of gradient descent cause f(θ0, θ1) to increase rather than decrease, then the most likely cause is that we have set the learning rate α to too large a value.
Explanation: If α were small enough, gradient descent would always take a small step downhill and decrease f(θ0, θ1) at least a little bit. If gradient descent instead increases the objective value, that means α is too large (or you have a bug in your code!).

False: No matter how θ0 and θ1 are initialized, so long as the learning rate is sufficiently small, we can safely expect gradient descent to converge to the same solution.
Explanation: This is not true: depending on the initial condition, gradient descent may end up at different local optima.

False: Setting the learning rate to be very small is not harmful, and can only speed up the convergence of gradient descent.
Explanation: If the learning rate is small, gradient descent ends up taking an extremely small step on each iteration, so this would actually slow down (rather than speed up) the convergence of the algorithm.
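To see the learning-rate claims concretely, here is a small sketch on a toy smooth function f(θ0, θ1) = θ0² + 3θ1² (an arbitrary illustrative choice, not the linear regression cost): a small α shrinks f only slowly, a moderate α converges quickly, and a large α overshoots and makes f grow. It also illustrates the symmetry claim failing: the parameters start equal but their gradients differ, so they are unequal after one step.

```python
def grad(t0, t1):
    # gradient of f(t0, t1) = t0**2 + 3 * t1**2
    return 2 * t0, 6 * t1

def run(alpha, steps=20):
    t0, t1 = 1.0, 1.0          # note: initialized equal
    for _ in range(steps):
        g0, g1 = grad(t0, t1)
        # simultaneous update: both gradients computed before stepping
        t0, t1 = t0 - alpha * g0, t1 - alpha * g1
    return t0 ** 2 + 3 * t1 ** 2

print(run(0.01))  # alpha too small: f decreases, but slowly (~0.7 after 20 steps)
print(run(0.1))   # reasonable alpha: f is already near 0
print(run(0.5))   # alpha too large: the t1 direction overshoots and f blows up
```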

Question 5

Suppose that for some linear regression problem (say, predicting
housing prices as in the lecture), we have some training set, and for
our training set we managed to find some θ0, θ1 such that J(θ0, θ1) = 0.

Which of the statements below must then be true? (Check all that apply.)

    • For this to be true, we must have y(i)=0 for every value of i=1,2,…,m.

    • Gradient descent is likely to get stuck at a local minimum and fail to find the global minimum.

    • For this to be true, we must have θ0=0 and θ1=0 so that hθ(x)=0

    • Our training set can be fit perfectly by a straight line, i.e.,
      all of our training examples lie perfectly on some straight line.

Answers:

False: For this to be true, we must have y(i) = 0 for every value of i = 1, 2, …, m.
Explanation: So long as all of our training examples lie on a straight line, we will be able to find θ0 and θ1 so that J(θ0, θ1) = 0. It is not necessary that y(i) = 0 for all of our examples.

False: Gradient descent is likely to get stuck at a local minimum and fail to find the global minimum.
Explanation: The cost function J(θ0, θ1) for linear regression is convex (bowl-shaped), so it has no local optima other than the global minimum, and gradient descent cannot get stuck.

False: For this to be true, we must have θ0 = 0 and θ1 = 0 so that hθ(x) = 0.
Explanation: If J(θ0, θ1) = 0, that means the line defined by the equation y = θ0 + θ1x perfectly fits all of our data. There is no particular reason to expect that the values of θ0 and θ1 that achieve this are both 0 (unless y(i) = 0 for all of our training examples).

True: Our training set can be fit perfectly by a straight line, i.e., all of our training examples lie perfectly on some straight line.
Explanation: If J(θ0, θ1) = 0, that means the line defined by the equation y = θ0 + θ1x perfectly fits all of our data.

Other Options:

False: We can perfectly predict the value of y even for new examples that we have not yet seen (e.g., we can perfectly predict prices of even new houses that we have not yet seen).
Explanation: Zero training error only means the line fits the training examples; new examples need not lie on the same line.

False: This is not possible: by the definition of J(θ0, θ1), it is not possible for there to exist θ0 and θ1 so that J(θ0, θ1) = 0.
Explanation: It is possible: whenever all of the training examples lie on a straight line, there exist θ0 and θ1 with J(θ0, θ1) = 0.

True: For these values of θ0 and θ1 that satisfy J(θ0, θ1) = 0, we have that hθ(x(i)) = y(i) for every training example (x(i), y(i)).
Explanation: J(θ0, θ1) is a sum of the nonnegative terms (hθ(x(i)) − y(i))², so J(θ0, θ1) = 0 forces every term to be zero, i.e., hθ(x(i)) = y(i) for every training example.
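As a quick sanity check, here is a minimal sketch in plain Python (reusing the Question 2 examples, which lie exactly on y = 0.5x) showing that J(θ0, θ1) = 0 exactly when the line passes through every training example:

```python
def J(theta0, theta1, xs, ys):
    # squared-error cost for linear regression
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1, 2, 4, 0]
ys = [0.5, 1, 2, 0]          # all points lie on y = 0.5 * x
print(J(0, 0.5, xs, ys))     # 0.0: h(x(i)) = y(i) for every example
print(J(1, 0.5, xs, ys))     # 0.5: the line no longer passes through the points
```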
