deeplearning.net/data/mnist/mnist.pkl.gz

  The MNIST dataset consists of handwritten digit images and is divided into 60,000 examples
for the training set and 10,000 examples for testing. In many papers, as well as in this tutorial,
the official training set of 60,000 is further split into an actual training set of 50,000 examples and
10,000 validation examples (used for selecting hyper-parameters such as the learning rate and the
size of the model). All digit images have been size-normalized and centered in a fixed-size image of
28 x 28 pixels. In the original dataset each pixel of the image is represented by a value between 0
and 255, where 0 is black, 255 is white, and anything in between is a shade of grey.
  Here are some examples of MNIST digits:

[Figure: sample images of MNIST digits]
  For convenience we pickled the dataset to make it easier to use in Python. It is available for
download here. The pickled file represents a tuple of 3 lists: the training set, the validation
set, and the testing set. Each of the three lists is a pair formed from a list of images and a list
of class labels for each of the images. An image is represented as a NumPy 1-dimensional array
of 784 (28 x 28) float values between 0 and 1 (0 stands for black, 1 for white). The labels are
numbers between 0 and 9 indicating which digit the image represents. The code block below
shows how to load the dataset.

import cPickle, gzip, numpy
# Load the dataset
f = gzip.open('mnist.pkl.gz', 'rb')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
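The snippet above uses Python 2's cPickle. Under Python 3 the module is simply pickle, and a pickle file written by Python 2 (as mnist.pkl.gz was) must be loaded with encoding='latin1'. The following sketch shows the same round-trip pattern; it builds a small dummy dataset in place of the real mnist.pkl.gz so that it runs stand-alone:

```python
import gzip
import pickle

import numpy

# Build a tiny stand-in for mnist.pkl.gz: a (train, valid, test) tuple,
# each element a (images, labels) pair, images being flat 784-float rows.
# Sizes are the tutorial's 50,000 / 10,000 / 10,000 scaled down by 1000.
dummy = tuple(
    (numpy.zeros((n, 784), dtype="float64"), numpy.zeros(n, dtype="int64"))
    for n in (50000 // 1000, 10000 // 1000, 10000 // 1000)
)
with gzip.open("dummy_mnist.pkl.gz", "wb") as f:
    pickle.dump(dummy, f)

# Loading follows the same pattern as in the tutorial; for the real
# mnist.pkl.gz (pickled under Python 2) the encoding argument is required.
with gzip.open("dummy_mnist.pkl.gz", "rb") as f:
    train_set, valid_set, test_set = pickle.load(f, encoding="latin1")

train_x, train_y = train_set
print(train_x.shape, train_y.shape)  # each image is a flat 784-vector
```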

  When using the dataset, we usually divide it into minibatches (see Stochastic Gradient Descent).
We encourage you to store the dataset in shared variables and to access it based on the minibatch
index, given a fixed and known batch size. The reason behind shared variables is related to
using the GPU. There is a large overhead when copying data into GPU memory. If you
copied data on request (each minibatch individually, when needed), as the code will do if
you do not use shared variables, then due to this overhead the GPU code will not be much faster
than the CPU code (maybe even slower). If your data is in Theano shared variables,
though, you give Theano the possibility to copy the entire data to the GPU in a single call
when the shared variables are constructed. Afterwards the GPU can access any minibatch by
taking a slice from these shared variables, without needing to copy any information from
CPU memory, therefore bypassing the overhead. Because the datapoints and their labels
are usually of a different nature (labels are usually integers while datapoints are real numbers), we
suggest using different variables for labels and data. We also recommend using different variables
for the training set, validation set, and testing set to make the code more readable (resulting in 6
different shared variables).
  Since the data is now in one variable, and a minibatch is defined as a slice of that variable,
it is more natural to define a minibatch by indicating its index and its size. In our setup
the batch size stays constant throughout the execution of the code, so a function will
actually require only the index to identify which datapoints to work on. The code below shows
how to store your data and how to access a minibatch:

import numpy
import theano
import theano.tensor as T

def shared_dataset(data_xy):
    """ Function that loads the dataset into shared variables

    The reason we store our dataset in shared variables is to allow
    Theano to copy it into GPU memory (when the code is run on a GPU).
    Since copying data into the GPU is slow, copying a minibatch every
    time it is needed (the default behaviour if the data is not in a
    shared variable) would lead to a large decrease in performance.
    """
    data_x, data_y = data_xy
    shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX))
    shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX))
    # When storing data on the GPU it has to be stored as floats,
    # therefore we will store the labels as ``floatX`` as well
    # (``shared_y`` does exactly that). But during our computations
    # we need them as ints (we use the labels as an index, and if they
    # are floats it doesn't make sense), therefore instead of returning
    # ``shared_y`` we cast it to int. This little hack lets us get
    # around the issue.
    return shared_x, T.cast(shared_y, 'int32')

test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)

batch_size = 500  # size of the minibatch
# accessing the third minibatch of the training set
data = train_set_x[2 * batch_size: 3 * batch_size]
label = train_set_y[2 * batch_size: 3 * batch_size]
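With a fixed batch size, mapping a minibatch index to a slice is simply index * batch_size : (index + 1) * batch_size. The NumPy-only sketch below illustrates that indexing; the plain array here is a scaled-down stand-in for the shared variable (on the GPU the slice would be taken from the Theano shared variable instead):

```python
import numpy

batch_size = 500
# Scaled-down stand-in for train_set_x: 5,000 rows of 784 features each.
train_x = numpy.arange(5000 * 784, dtype="float32").reshape(5000, 784)

def minibatch(data, index):
    """Return minibatch `index`: rows [index * batch_size, (index + 1) * batch_size)."""
    return data[index * batch_size: (index + 1) * batch_size]

third = minibatch(train_x, 2)                 # the third minibatch: rows 1000..1499
n_batches = train_x.shape[0] // batch_size    # number of minibatches in the set
```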

   The data has to be stored as floats on the GPU (the right dtype for storing on the GPU is given by
theano.config.floatX). To get around this shortcoming for the labels, we store them as floats and
then cast them to int.

----------------------------------------------------------------------------------------------------------------
Note: If you are running your code on the GPU and the dataset you are using is too large to fit in memory,
the code will crash. In such a case you should not store the whole dataset in a shared variable. You can,
however, store a sufficiently small chunk of your data (several minibatches) in a shared variable and use
that during training. Once you have gone through the chunk, update the values it stores. This way you
minimize the number of data transfers between CPU memory and GPU memory.

----------------------------------------------------------------------------------------------------------------
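The chunking scheme described in the note above can be sketched framework-agnostically: keep one chunk of several minibatches in a buffer (standing in for the shared variable) and refill the buffer each time it is exhausted. load_chunk is a hypothetical loader introduced for illustration; in the Theano setting, each refill would be a single copy into the shared variable, so there is one CPU-to-GPU transfer per chunk rather than per minibatch:

```python
import numpy

batch_size = 500
batches_per_chunk = 4                    # a chunk holds several minibatches
chunk_rows = batch_size * batches_per_chunk

# Stand-in dataset living in CPU memory: 10,000 rows of 784 features.
full_data = numpy.random.rand(10000, 784).astype("float32")

def load_chunk(start):
    """Hypothetical loader: fetch one chunk of rows from CPU-side storage."""
    return full_data[start: start + chunk_rows]

processed = 0
start = 0
while start < full_data.shape[0]:
    buffer = load_chunk(start)           # one bulk transfer per chunk
    for i in range(buffer.shape[0] // batch_size):
        batch = buffer[i * batch_size: (i + 1) * batch_size]
        processed += batch.shape[0]      # train on `batch` here
    start += chunk_rows
```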
