deeplearning.net/data/mnist/mnist.pkl.gz

  The MNIST dataset consists of handwritten digit images and is divided into a training set of 60,000 examples and a test set of 10,000 examples. In many papers, as well as in this tutorial, the official training set of 60,000 examples is further split into an actual training set of 50,000 examples and 10,000 validation examples (used for selecting hyper-parameters such as the learning rate and the size of the model). All digit images have been size-normalized and centered in a fixed-size image of 28 x 28 pixels. In the original dataset each pixel is represented by a value between 0 and 255, where 0 is black, 255 is white and anything in between is a shade of grey.
  Here are some examples of MNIST digits:

[figure: sample MNIST digit images]
  For convenience we pickled the dataset to make it easier to use in Python. It is available for download at the link above (deeplearning.net/data/mnist/mnist.pkl.gz). The pickled file represents a tuple of three lists: the training set, the validation set and the test set. Each of the three lists is a pair formed from a list of images and a list of class labels for each of the images. An image is represented as a numpy 1-dimensional array of 784 (28 x 28) float values between 0 and 1 (0 stands for black, 1 for white). The labels are numbers between 0 and 9 indicating which digit the image represents. The code block below shows how to load the dataset.

import cPickle, gzip, numpy

# Load the dataset
f = gzip.open('mnist.pkl.gz', 'rb')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
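
  As a quick sanity check, each of the three sets can be unpacked into its images and labels. Below is a minimal sketch continuing from the snippet above; the variable names are illustrative, and the shapes in the comments follow from the 50,000 / 784 split described earlier:

# Each set is a pair (images, labels); unpack the training set.
train_x, train_y = train_set
print(train_x.shape)   # (50000, 784): one flattened 28 x 28 image per row
print(train_y.shape)   # (50000,): one integer label (0-9) per image
# A single image can be reshaped back to 28 x 28, e.g. for plotting.
first_image = train_x[0].reshape(28, 28)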

  When using the dataset, we usually divide it into minibatches (see Stochastic Gradient Descent). We encourage you to store the dataset in shared variables and to access it by minibatch index, given a fixed and known batch size. The reason for shared variables is related to using the GPU. There is a large overhead when copying data into GPU memory. If you copied data on request (each minibatch individually, when needed), as the code will do if you do not use shared variables, this overhead would make the GPU code not much faster than the CPU code (maybe even slower). If your data is in Theano shared variables, however, you give Theano the possibility to copy the entire data onto the GPU in a single call when the shared variables are constructed. Afterwards the GPU can access any minibatch by taking a slice of these shared variables, without needing to copy any information from CPU memory, thereby bypassing the overhead. Because the datapoints and their labels are usually of different nature (labels are usually integers while datapoints are real numbers), we suggest using different variables for the labels and the data. We also recommend using different variables for the training set, validation set and test set to make the code more readable (resulting in 6 different shared variables).
  Since the data is now in one variable, and a minibatch is defined as a slice of that variable, it is natural to define a minibatch by its index and its size. In our setup the batch size stays constant throughout the execution of the code, so a function actually requires only the index to identify which datapoints to work on. The code below shows how to store your data and how to access a minibatch:

import numpy
import theano
import theano.tensor as T

def shared_dataset(data_xy):
    """ Function that loads the dataset into shared variables

    The reason we store our dataset in shared variables is to allow
    Theano to copy it into the GPU memory (when code is run on GPU).
    Since copying data into the GPU is slow, copying a minibatch every
    time it is needed (the default behaviour if the data is not in a
    shared variable) would lead to a large decrease in performance.
    """
    data_x, data_y = data_xy
    shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX))
    shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX))
    # When storing data on the GPU it has to be stored as floats,
    # therefore we store the labels as ``floatX`` as well
    # (``shared_y`` does exactly that). But during our computations
    # we need them as ints (we use the labels as indices, and if they
    # were floats that would not make sense), therefore instead of
    # returning ``shared_y`` we have to cast it to int. This little
    # hack lets us get around the issue.
    return shared_x, T.cast(shared_y, 'int32')

test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)

batch_size = 500    # size of the minibatch

# accessing the third minibatch of the training set
data = train_set_x[2 * batch_size: 3 * batch_size]
label = train_set_y[2 * batch_size: 3 * batch_size]
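
  Note that these slices are symbolic: in practice a Theano function receives only the minibatch index and selects the corresponding slice through its givens argument. The following is a minimal sketch of that pattern, continuing from the code above; the names index, x and batch_mean are illustrative, and the mean-pixel output is just a placeholder for a real model's cost:

index = T.lscalar('index')   # index of a minibatch
x = T.matrix('x')            # symbolic variable for a minibatch of images

# Placeholder computation: the mean pixel value of the minibatch.
# A real model would put its cost or error expression here.
batch_mean = theano.function(
    inputs=[index],
    outputs=T.mean(x),
    givens={x: train_set_x[index * batch_size: (index + 1) * batch_size]}
)

print(batch_mean(2))   # works on the third minibatch; no extra CPU-GPU transfer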

  The data has to be stored as floats on the GPU (the right dtype for storing on the GPU is given by theano.config.floatX). To get around this shortcoming for the labels, we store them as floats and then cast them to int.

----------------------------------------------------------------------------------------------------------------
Note: If you are running your code on the GPU and the dataset you are using is too large to fit in GPU memory, the code will crash. In such a case you cannot store the whole dataset in a shared variable. You can, however, store a sufficiently small chunk of your data (several minibatches) in a shared variable and use that during training. Once you have gone through the chunk, update the values it stores (a sketch of this follows after the note). This way you minimize the number of data transfers between CPU memory and GPU memory.

----------------------------------------------------------------------------------------------------------------
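
  A rough sketch of that chunking strategy, assuming the full training images live in CPU memory as a numpy array; full_train_x below is dummy data standing in for a dataset too large for the GPU, and chunk_size is an illustrative choice:

import numpy
import theano

batch_size = 500
chunk_size = 20 * batch_size   # number of examples kept on the GPU at once

# Dummy stand-in for a dataset that is too large to fit on the GPU in one piece.
full_train_x = numpy.random.rand(100000, 784).astype(theano.config.floatX)

# Copy the first chunk to the GPU once, at construction time.
chunk_x = theano.shared(full_train_x[:chunk_size])

# ... train on the minibatches inside chunk_x ...

# When the chunk is exhausted, overwrite the GPU copy in place:
# one transfer for the whole chunk instead of one per minibatch.
next_start = chunk_size
chunk_x.set_value(full_train_x[next_start:next_start + chunk_size])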
