[DL Study Notes] From Artificial Neural Networks to Convolutional Neural Networks, Part 3: Building a CNN in TensorFlow to Classify the notMNIST Data (with some open issues)
3: Building a neural network with TensorFlow
Why TensorFlow? Because Google is the daddy here. Some say Caffe suits images better, or that MXNet is more efficient, and so on — but daddy is daddy; Android got huge for the same reason. In truth these frameworks are much alike once you know one; mostly the syntax differs. So let's start with TensorFlow.
For installing TensorFlow, see my other post. The TensorFlow material here largely follows Google's deep learning course on Udacity.
1: TensorFlow's computation graph
Writing TensorFlow code splits into two parts: first define the computation flow — the computation graph — then create a session so TensorFlow can allocate system resources and actually run it. An example:
```python
import tensorflow as tf                # import the tensorflow library
matrix1 = tf.constant([[3., 3.]])      # create a constant node (1x2)
matrix2 = tf.constant([[2.], [2.]])    # create a constant node (2x1)
product = tf.matmul(matrix1, matrix2)  # create a matrix-multiplication node
```
The code above doesn't compute any concrete values; it only builds the graph. The actual computation requires a session:
```python
sess = tf.Session()         # launch the default graph
# running this prints a bunch of runtime info
result = sess.run(product)  # run() executes the matmul node; product stands for that node's output
print(result)
sess.close()                # close the session when done
```
That's TensorFlow's basic execution model. Variables are defined with the Variable method:
```python
W1 = tf.Variable(tf.zeros((2, 2)), name="weights")
sess.run(tf.initialize_all_variables())  # variables must be initialized first
print(sess.run(W1))
```
Another example:
```python
state = tf.Variable(0, name="counter")
new_value = tf.add(state, tf.constant(1))  # add 1 to state
update = tf.assign(state, new_value)       # assign the incremented value back to state

with tf.Session() as sess:  # `with` saves the close() call and handles exceptions (try works too)
    sess.run(tf.initialize_all_variables())
    print(sess.run(state))  # print the counter's value
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))
```
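If everything is wired up correctly, this prints the initial value 0 followed by 1, 2 and 3.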
Why does daddy Google make it this roundabout — why not just compute directly? It's actually not roundabout at all: with this design, one and the same computation graph can be handed to different devices, or to distributed ones:
```python
with tf.Session() as sess:
    with tf.device("/gpu:1"):
        ...
```
The other benefit: Python itself is slow, so the design lets you describe the graph in Python and then hand the computation to something outside Python (e.g., the C++ backend).
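A minimal self-contained sketch of that idea (the device string is an assumption — "/gpu:1" requires a second GPU; "/cpu:0" always works):

```python
import tensorflow as tf

# build the graph, pinning these ops to a specific device
with tf.device("/cpu:0"):  # swap in "/gpu:1" on a multi-GPU machine
    a = tf.constant([[1., 2.], [3., 4.]])
    b = tf.constant([[1.], [1.]])
    y = tf.matmul(a, b)

# the same graph definition runs wherever the session places it
with tf.Session() as sess:
    print(sess.run(y))  # [[3.], [7.]]
```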
2: Building a convolutional neural network with TensorFlow
This section walks through Google's Udacity code that classifies the notMNIST data with a CNN; the code ships with the TensorFlow source under examples:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/udacity
<1>: Preparing the data (notMNIST)
The first part of the code loads the data:
```python
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range

# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline

url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None

def download_progress_hook(count, blockSize, totalSize):
  """A hook to report the progress of a download. This is mostly intended for users with
  slow internet connections. Reports every 5% change in download progress.
  """
  global last_percent_reported
  percent = int(count * blockSize * 100 / totalSize)

  if last_percent_reported != percent:
    if percent % 5 == 0:
      sys.stdout.write("%s%%" % percent)
      sys.stdout.flush()
    else:
      sys.stdout.write(".")
      sys.stdout.flush()

    last_percent_reported = percent

def maybe_download(filename, expected_bytes, force=False):
  """Download a file if not present, and make sure it's the right size."""
  if force or not os.path.exists(filename):
    print('Attempting to download:', filename)
    filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
    print('\nDownload Complete!')
  statinfo = os.stat(filename)
  if statinfo.st_size == expected_bytes:
    print('Found and verified', filename)
  else:
    raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
  return filename

train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
```
The code above downloads the dataset archives; the next step is extraction:
```python
num_classes = 10
np.random.seed(133)

def maybe_extract(filename, force=False):
  root = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
  if os.path.isdir(root) and not force:
    # You may override by setting force=True.
    print('%s already present - Skipping extraction of %s.' % (root, filename))
  else:
    print('Extracting data for %s. This may take a while. Please wait.' % root)
    tar = tarfile.open(filename)
    sys.stdout.flush()
    tar.extractall()
    tar.close()
  data_folders = [
    os.path.join(root, d) for d in sorted(os.listdir(root))
    if os.path.isdir(os.path.join(root, d))]
  if len(data_folders) != num_classes:
    raise Exception(
      'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
  print(data_folders)
  return data_folders

train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
```
After extraction, the directory holding the code will contain two new folders, notMNIST_large and notMNIST_small (the downloaded archives remain too); large is used for training and small for testing. Each holds 10 subfolders with 28×28 images of the letters A through J — the images are the samples, the letters the labels. The next step converts all this into pickle format, which Python handles more conveniently. To make sure everything fits into memory, each class becomes its own pickle file, and along the way the data is also mean-centred and normalized (each pixel goes from [0, 255] to roughly [-0.5, 0.5]). A few image files may be unreadable; those are simply skipped, no big deal:
```python
image_size = 28      # Pixel width and height.
pixel_depth = 255.0  # Number of levels per pixel.

def load_letter(folder, min_num_images):
  """Load the data for a single letter label."""
  image_files = os.listdir(folder)
  dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
                       dtype=np.float32)
  print(folder)
  num_images = 0
  for image in image_files:
    image_file = os.path.join(folder, image)  # join folder and file name into a path
    try:
      image_data = (ndimage.imread(image_file).astype(float) -
                    pixel_depth / 2) / pixel_depth  # mean-centre and normalize
      if image_data.shape != (image_size, image_size):
        raise Exception('Unexpected image shape: %s' % str(image_data.shape))
      dataset[num_images, :, :] = image_data
      num_images = num_images + 1
    except IOError as e:
      print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')

  dataset = dataset[0:num_images, :, :]
  if num_images < min_num_images:
    raise Exception('Many fewer images than expected: %d < %d' %
                    (num_images, min_num_images))

  print('Full dataset tensor:', dataset.shape)
  print('Mean:', np.mean(dataset))
  print('Standard deviation:', np.std(dataset))
  return dataset

def maybe_pickle(data_folders, min_num_images_per_class, force=False):
  dataset_names = []
  for folder in data_folders:  # here: notMNIST_large/A, notMNIST_large/B, and so on
    set_filename = folder + '.pickle'   # one pickle file per letter folder, A through J
    dataset_names.append(set_filename)  # collect the pickle file names
    if os.path.exists(set_filename) and not force:
      # You may override by setting force=True.
      print('%s already present - Skipping pickling.' % set_filename)
    else:
      print('Pickling %s.' % set_filename)
      dataset = load_letter(folder, min_num_images_per_class)
      try:
        with open(set_filename, 'wb') as f:
          pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
      except Exception as e:
        print('Unable to save data to', set_filename, ':', e)
  return dataset_names

train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
```
The code above packs each class into its own pickle file; these files can be reused by later programs, which is one reason for not reading the raw images directly every time. The next step merges the per-class pickles and splits the result into training, validation and test sets in one file. How much training data to use depends on your RAM; if you insist on more data than fits into memory, you will have to process it in chunks.
```python
def make_arrays(nb_rows, img_size):
  # used by merge_datasets: allocate an (nb_rows, img_size, img_size) image
  # array plus a label vector of length nb_rows
  if nb_rows:
    dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
    labels = np.ndarray(nb_rows, dtype=np.int32)
  else:
    dataset, labels = None, None
  return dataset, labels

def merge_datasets(pickle_files, train_size, valid_size=0):
  num_classes = len(pickle_files)
  valid_dataset, valid_labels = make_arrays(valid_size, image_size)
  train_dataset, train_labels = make_arrays(train_size, image_size)
  vsize_per_class = valid_size // num_classes
  tsize_per_class = train_size // num_classes

  start_v, start_t = 0, 0
  end_v, end_t = vsize_per_class, tsize_per_class
  end_l = vsize_per_class + tsize_per_class
  for label, pickle_file in enumerate(pickle_files):  # merge the 10 per-class pickles into one tensor
    try:
      with open(pickle_file, 'rb') as f:
        letter_set = pickle.load(f)
        # shuffle the data loaded from this pickle file
        np.random.shuffle(letter_set)
        if valid_dataset is not None:
          valid_letter = letter_set[:vsize_per_class, :, :]
          valid_dataset[start_v:end_v, :, :] = valid_letter
          valid_labels[start_v:end_v] = label
          start_v += vsize_per_class
          end_v += vsize_per_class

        train_letter = letter_set[vsize_per_class:end_l, :, :]
        train_dataset[start_t:end_t, :, :] = train_letter
        train_labels[start_t:end_t] = label
        start_t += tsize_per_class
        end_t += tsize_per_class
    except Exception as e:
      print('Unable to process data from', pickle_file, ':', e)
      raise

  return valid_dataset, valid_labels, train_dataset, train_labels

train_size = 200000
valid_size = 10000
test_size = 10000

valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
  train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)

print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
```
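Given the sizes set above, these prints should report (200000, 28, 28) with (200000,) labels for training, and (10000, 28, 28) with (10000,) labels for validation and testing.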
Finally the data is shuffled once more and saved, giving the final pickle file:
```python
def randomize(dataset, labels):
  permutation = np.random.permutation(labels.shape[0])
  shuffled_dataset = dataset[permutation, :, :]
  shuffled_labels = labels[permutation]
  return shuffled_dataset, shuffled_labels

train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)

pickle_file = 'notMNIST.pickle'

try:
  f = open(pickle_file, 'wb')
  save = {  # store everything in one dictionary
    'train_dataset': train_dataset,  # num x 28 x 28
    'train_labels': train_labels,    # num (one integer label per image; one-hot comes later)
    'valid_dataset': valid_dataset,
    'valid_labels': valid_labels,
    'test_dataset': test_dataset,
    'test_labels': test_labels,
  }
  pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
  f.close()
except Exception as e:
  print('Unable to save data to', pickle_file, ':', e)
  raise

statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
```
That completes the preprocessing. Now we read the pickle file back to get the data the convolutional network will use:
```python
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)
```

After running:

```
Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (18724, 28, 28) (18724,)
```
These are the raw shapes of the training, validation and test sets. For a plain fully connected network each image would be flattened into a width×height vector; for a convolutional network, though, the data needs the shape width×height×depth, and the labels have to become one-hot encodings. Hence:
```python
image_size = 28
num_labels = 10
num_channels = 1  # grayscale; this would be 3 for RGB data

import numpy as np

def reformat(dataset, labels):
  dataset = dataset.reshape(
    (-1, image_size, image_size, num_channels)).astype(np.float32)
  # -1 means "I can't be bothered to work out this dimension" -- numpy infers it
  # from the total size and the other dimensions (a Zhihu phrasing; brilliantly put)
  labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
  # this line is cryptic, so step by step:
  # labels[:, None] turns the (200000,) label vector into a (200000, 1) column,
  #   i.e. from [1, 2, 3, ...] to [[1], [2], [3], ...]
  # np.arange(num_labels) is the (10,) row [0, 1, ..., 9]
  # comparing the (10,) row with the (200000, 1) column broadcasts both to
  #   (200000, 10): each label is compared against every class index, giving
  #   True exactly at the label's position -- a one-hot row per image
  return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)

print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
The result:

```
Training set (200000, 28, 28, 1) (200000, 10)
Validation set (10000, 28, 28, 1) (10000, 10)
Test set (10000, 28, 28, 1) (10000, 10)
```
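The broadcasting trick is easier to see on a tiny standalone example (numbers made up for illustration):

```python
import numpy as np

labels = np.array([2, 0, 1])  # three images with class labels 2, 0, 1
one_hot = (np.arange(3) == labels[:, None]).astype(np.float32)
print(one_hot)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```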
Next, a helper method to measure prediction accuracy:
```python
def accuracy(predictions, labels):
  # np.argmax returns the index of the largest entry along axis 1, i.e. the
  # predicted class and the true class; count how often they coincide
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])
```
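For instance, with made-up numbers:

```python
preds = np.array([[0.1, 0.8, 0.1],   # predicted class 1
                  [0.7, 0.2, 0.1]])  # predicted class 0
labs  = np.array([[0., 1., 0.],      # true class 1 -> hit
                  [0., 0., 1.]])     # true class 2 -> miss
print(accuracy(preds, labs))         # 50.0
```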
<2>:Draw a graph
As explained above, computation in TensorFlow starts by building a graph. The graph here defines a convolutional network with two convolutional layers and one fully connected hidden layer (plus the output layer). Computing anything big needs a seriously expensive GPU, so the depth and the fully connected layer's size are kept modest.
```python
batch_size = 16  # number of images per SGD minibatch
patch_size = 5   # convolution window size
depth = 16       # convolution depth, i.e. the number of feature maps
num_hidden = 64  # size of the fully connected hidden layer

graph = tf.Graph()

with graph.as_default():

  # Input data.
  tf_train_dataset = tf.placeholder(
    tf.float32, shape=(batch_size, image_size, image_size, num_channels))
  # batch_size images take part in each training step
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  # Variables.
  layer1_weights = tf.Variable(tf.truncated_normal(
    [patch_size, patch_size, num_channels, depth], stddev=0.1))
  # random init of the first conv layer: 5x5 windows, num_channels input
  # channels, depth output feature maps
  layer1_biases = tf.Variable(tf.zeros([depth]))  # first conv layer biases start at 0
  layer2_weights = tf.Variable(tf.truncated_normal(
    [patch_size, patch_size, depth, depth], stddev=0.1))
  # random init of the second conv layer: 5x5 windows, depth in, depth out
  layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))  # second conv layer biases start at 1.0
  layer3_weights = tf.Variable(tf.truncated_normal(
    [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))
  # first fully connected layer; // 4 because the model below uses stride 2
  # twice, so each 28x28 image shrinks to 7x7 with `depth` (16) maps,
  # i.e. 7*7*16 values per image
  layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
  layer4_weights = tf.Variable(tf.truncated_normal(  # second fully connected (output) layer
    [num_hidden, num_labels], stddev=0.1))
  layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))

  # Model.
  def model(data):
    conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
    # [1, 2, 2, 1] is the stride, one entry per dimension of `data`;
    # SAME padding pads with zeros, which keeps the size arithmetic simple,
    # so it's what you'll see most of the time
    hidden = tf.nn.relu(conv + layer1_biases)
    conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
    hidden = tf.nn.relu(conv + layer2_biases)
    shape = hidden.get_shape().as_list()
    reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
    hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
    return tf.matmul(hidden, layer4_weights) + layer4_biases

  # Training computation.
  logits = model(tf_train_dataset)
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))

  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)  # gradient descent

  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
  test_prediction = tf.nn.softmax(model(tf_test_dataset))
```
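A quick shape walk-through of model() on one training minibatch, derived from the definitions above, confirms the // 4 in layer3_weights:

```python
# input:                  16 x 28 x 28 x 1   (batch_size, height, width, channels)
# conv1 (5x5, stride 2):  16 x 14 x 14 x 16
# conv2 (5x5, stride 2):  16 x 7  x 7  x 16
# reshape:                16 x 784           (784 = 7 * 7 * 16)
# fully connected + relu: 16 x 64
# logits:                 16 x 10
```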
Reading the graph code you might think computation is already happening, especially in the last few lines, but nothing has actually run yet; the real computation happens in a session, which comes next.
```python
num_steps = 1001
# batch_size = 16 (set when the graph was built)

with tf.Session(graph=graph) as session:
  # tf.global_variables_initializer().run()  # use this on newer versions of tf
  session.run(tf.initialize_all_variables())
  print('Initialized')
  for step in range(num_steps):
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # the modulo keeps the batch offset inside the dataset no matter how many steps we run
    batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 50 == 0):
      print('Minibatch loss at step %d: %f' % (step, l))
      print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
      print('Validation accuracy: %.1f%%' % accuracy(
        valid_prediction.eval(), valid_labels))
  print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
That's a simple convolutional network telling the notMNIST letters apart. I haven't added pooling layers yet, nor dropout to curb overfitting — to be continued; R-FCN first.
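Since pooling and dropout are still on the to-do list, here's a hedged sketch of how they might slot into model() — my own guess at a placement, not the course's official solution; the 2×2 max pooling and the keep_prob placeholder are assumptions. It would replace the model defined inside the graph above:

```python
keep_prob = tf.placeholder(tf.float32)  # e.g. 0.5 while training, 1.0 for eval

def model_with_pool_and_dropout(data):
  # stride-1 convolutions, with 2x2 max pooling doing the downsampling instead
  conv = tf.nn.conv2d(data, layer1_weights, [1, 1, 1, 1], padding='SAME')
  hidden = tf.nn.relu(conv + layer1_biases)
  pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
  conv = tf.nn.conv2d(pool, layer2_weights, [1, 1, 1, 1], padding='SAME')
  hidden = tf.nn.relu(conv + layer2_biases)
  pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
  shape = pool.get_shape().as_list()
  reshape = tf.reshape(pool, [shape[0], shape[1] * shape[2] * shape[3]])
  hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
  hidden = tf.nn.dropout(hidden, keep_prob)  # randomly drop hidden units while training
  return tf.matmul(hidden, layer4_weights) + layer4_biases
```

With stride-1 convolutions plus 2×2 pooling the spatial size still shrinks 28 → 14 → 7, so layer3_weights keeps its shape; you would add keep_prob: 0.5 to feed_dict when training and feed 1.0 when evaluating.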