Notes: CS231n Assignment 2 (Part 1)
The second assignment is quite difficult, but working (and, let's be honest, occasionally copying) through it pays off handsomely.
1. Fully-Connected Neural Nets
The first task is to refactor the earlier neural-network code so that fully-connected networks of arbitrary size can be built. The whole codebase is organized around a modular idea, sketched below (a toy example right after the skeleton shows how these pieces chain together):
```python
# Forward pass
def layer_forward(x, w):
    """ Receive inputs x and weights w """
    # Do the forward computation
    z = # intermediate value we need to cache for use during backprop
    # Do some more computations ...
    out = # the output

    cache = (x, w, z, out)  # Values we need to compute gradients
    return out, cache


# Backward pass
def layer_backward(dout, cache):
    """
    Receive derivative of loss with respect to outputs and cache,
    and compute derivative with respect to inputs.
    """
    # Unpack cache values
    x, w, z, out = cache

    # Use values in cache to compute derivatives
    dx = # Derivative of loss with respect to x
    dw = # Derivative of loss with respect to w

    return dx, dw
```
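To make the contract concrete, here is a toy example of my own (not assignment code): a hypothetical "square" layer that follows the same forward/backward interface, chained twice so you can see caches flow forward and gradients unwind in reverse.

```python
import numpy as np

# A toy layer following the same contract: forward returns (out, cache),
# backward consumes (dout, cache). Here the "layer" just squares its input.
def square_forward(x):
    out = x * x
    cache = x
    return out, cache

def square_backward(dout, cache):
    x = cache
    dx = dout * 2 * x
    return dx

# Chaining two such layers: caches flow forward, gradients flow backward in reverse.
x = np.array([1.0, 2.0, 3.0])
h, cache1 = square_forward(x)
out, cache2 = square_forward(h)   # out = x**4
dout = np.ones_like(out)          # pretend d(loss)/d(out) = 1
dh = square_backward(dout, cache2)
dx = square_backward(dh, cache1)  # should equal 4 * x**3
print(dx)                         # -> [  4.  32. 108.]
```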
Guided by this pattern, the assignment asks us to implement the following layers:
```python
def affine_forward(x, w, b):
    """
    Computes the forward pass for an affine (fully-connected) layer.

    x has shape (N, d_1, ..., d_k): the first dimension is the minibatch size
    and the remaining dimensions are the shape of each example (e.g. an image),
    so on the way in we flatten everything after the first dimension into a
    single vector.

    Inputs:
    - x: A numpy array containing input data, of shape (N, d_1, ..., d_k)
    - w: A numpy array of weights, of shape (D, M)
    - b: A numpy array of biases, of shape (M,)

    Returns a tuple of:
    - out: output, of shape (N, M)
    - cache: (x, w, b)
    """
    out = None
    N = x.shape[0]
    x_new = x.reshape(N, -1)      # flatten each example into a row vector
    out = np.dot(x_new, w) + b
    cache = (x, w, b)             # no need to cache out
    return out, cache


def affine_backward(dout, cache):
    """
    Computes the backward pass for an affine layer.

    Inputs:
    - dout: Upstream derivative, of shape (N, M)
    - cache: Tuple of (x, w, b) from affine_forward

    Returns a tuple of:
    - dx: Gradient with respect to x, of shape (N, d_1, ..., d_k)
    - dw: Gradient with respect to w, of shape (D, M)
    - db: Gradient with respect to b, of shape (M,)
    """
    x, w, b = cache
    dx, dw, db = None, None, None
    dx = np.dot(dout, w.T)                    # (N, D)
    dx = np.reshape(dx, x.shape)              # back to the original input shape
    x_new = x.reshape(x.shape[0], -1)         # (N, D)
    dw = np.dot(x_new.T, dout)                # (D, M)
    db = np.sum(dout, axis=0, keepdims=True)  # sum over the minibatch dimension
    return dx, dw, db


def relu_forward(x):
    """
    Computes the forward pass for a layer of rectified linear units (ReLUs).

    Input:
    - x: Inputs, of any shape

    Returns a tuple of:
    - out: Output, of the same shape as x
    - cache: x
    """
    out = None
    out = np.maximum(0, x)
    cache = x
    return out, cache


def relu_backward(dout, cache):
    """
    Computes the backward pass for a layer of rectified linear units (ReLUs).

    Input:
    - dout: Upstream derivatives, of any shape
    - cache: Input x, of same shape as dout
    """
    dx, x = None, cache
    #############################################################################
    # TODO: Implement the ReLU backward pass.                                   #
    #############################################################################
    dx = dout.copy()   # copy so the upstream gradient is not modified in place
    dx[x <= 0] = 0     # the gradient only flows where the input was positive
    #############################################################################
    #                             END OF YOUR CODE                              #
    #############################################################################
    return dx
```
The one point worth discussing is why the formula for db is db = np.sum(dout, axis=0, keepdims=True). At first glance it may look as if an averaging step is missing, but it is not: the 1/N factor from the minibatch loss is already folded into dout, and since b is broadcast to every row of the output, the chain rule simply sums dout over the batch dimension. The keepdims=True only affects the shape (db comes out as (1, M) rather than (M,), which matches the (1, M) biases used in the initialization further below), and the gradient-check code does not need any special handling for it.
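A quick numeric check (my own sketch, not part of the assignment code, using a made-up toy loss) confirms that summing dout over the batch dimension is the right gradient for b:

```python
import numpy as np

# Numerically check db for out = x.dot(w) + b.
# Toy loss = sum(out), so dout is a matrix of ones and db should be N for every entry.
np.random.seed(0)
N, D, M = 4, 5, 3
x = np.random.randn(N, D)
w = np.random.randn(D, M)
b = np.random.randn(M)

def toy_loss(b_):
    # toy scalar loss: just sum all outputs
    return np.sum(x.dot(w) + b_)

h = 1e-5
db_num = np.zeros_like(b)
for i in range(M):
    bp, bm = b.copy(), b.copy()
    bp[i] += h
    bm[i] -= h
    db_num[i] = (toy_loss(bp) - toy_loss(bm)) / (2 * h)

dout = np.ones((N, M))              # d(loss)/d(out) for loss = sum(out)
db_analytic = np.sum(dout, axis=0)
print(np.max(np.abs(db_num - db_analytic)))   # ~1e-10 or smaller
```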
With these two basic layers in place, we can build a "sandwich" layer. Since the affine-ReLU combination is so common, it is provided directly (a quick gradient check of the composite layer follows the code):
```python
def affine_relu_forward(x, w, b):
    """
    Convenience layer that performs an affine transform followed by a ReLU

    Inputs:
    - x: Input to the affine layer
    - w, b: Weights for the affine layer

    Returns a tuple of:
    - out: Output from the ReLU
    - cache: Object to give to the backward pass
    """
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    cache = (fc_cache, relu_cache)
    return out, cache


def affine_relu_backward(dout, cache):
    """
    Backward pass for the affine-relu convenience layer
    """
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db
```
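As a sanity check, the composite layer can be gradient-checked exactly like the individual layers. This is my own sketch; it assumes eval_numerical_gradient_array from the assignment's cs231n.gradient_check module is available, as in the assignment notebooks.

```python
import numpy as np
from cs231n.gradient_check import eval_numerical_gradient_array  # assignment helper (assumed available)

np.random.seed(231)
x = np.random.randn(2, 3, 4)    # (N, d_1, d_2) input; flattened to D = 12 inside affine_forward
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)

out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)

# Compare against numeric gradients; the differences should be around 1e-10 or smaller
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print(np.max(np.abs(dx - dx_num)))
print(np.max(np.abs(dw - dw_num)))
print(np.max(np.abs(db - db_num)))
```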
Next comes a network built on top of these layers (the simpler TwoLayerNet), which I will skip; let's go straight to the most powerful class so far, FullyConnectedNet. Code and comments first, with a small usage sketch after the class:
```python
class FullyConnectedNet(object):
    """
    A fully-connected neural network with an arbitrary number of hidden layers,
    ReLU nonlinearities, and a softmax loss function. This will also implement
    dropout and batch normalization as options. For a network with L layers,
    the architecture will be

    {affine - [batch norm] - relu - [dropout]} x (L - 1) - affine - softmax

    where batch normalization and dropout are optional, and the {...} block is
    repeated L - 1 times.

    Similar to the TwoLayerNet above, learnable parameters are stored in the
    self.params dictionary and will be learned using the Solver class.
    """

    def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,
                 dropout=0, use_batchnorm=False, reg=0.0,
                 weight_scale=1e-2, dtype=np.float32, seed=None):
        """
        Initialize a new FullyConnectedNet.

        Inputs:
        - hidden_dims: A list of integers giving the size of each hidden layer.
        - input_dim: An integer giving the size of the input.
        - num_classes: An integer giving the number of classes to classify.
        - dropout: Scalar between 0 and 1 giving dropout strength. If dropout=0 then
          the network should not use dropout at all.
        - use_batchnorm: Whether or not the network should use batch normalization.
        - reg: Scalar giving L2 regularization strength.
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - dtype: A numpy datatype object; all computations will be performed using
          this datatype. float32 is faster but less accurate, so you should use
          float64 for numeric gradient checking.
        - seed: If not None, then pass this random seed to the dropout layers. This
          will make the dropout layers deterministic so we can gradient check the
          model.
        """
        self.use_batchnorm = use_batchnorm
        self.use_dropout = dropout > 0
        self.reg = reg
        self.num_layers = 1 + len(hidden_dims)
        self.dtype = dtype
        self.params = {}

        ############################################################################
        # TODO: Initialize the parameters of the network, storing all values in    #
        # the self.params dictionary. Store weights and biases for the first layer #
        # in W1 and b1; for the second layer use W2 and b2, etc. Weights should be #
        # initialized from a normal distribution with standard deviation equal to  #
        # weight_scale and biases should be initialized to zero.                   #
        #                                                                          #
        # When using batch normalization, store scale and shift parameters for the #
        # first layer in gamma1 and beta1; for the second layer use gamma2 and     #
        # beta2, etc. Scale parameters should be initialized to one and shift      #
        # parameters should be initialized to zero.                                #
        ############################################################################
        # layers_dims holds the size of every layer; hidden_dims is already a list,
        # so the input and output sizes are prepended/appended as one-element lists.
        layers_dims = [input_dim] + hidden_dims + [num_classes]
        for i in xrange(self.num_layers):
            self.params['W' + str(i + 1)] = weight_scale * np.random.randn(layers_dims[i], layers_dims[i + 1])
            self.params['b' + str(i + 1)] = np.zeros((1, layers_dims[i + 1]))
            if self.use_batchnorm and i < len(hidden_dims):  # the last (output) layer has no batchnorm
                self.params['gamma' + str(i + 1)] = np.ones((1, layers_dims[i + 1]))
                self.params['beta' + str(i + 1)] = np.zeros((1, layers_dims[i + 1]))
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################
        # When using dropout we need to pass a dropout_param dictionary to each
        # dropout layer so that the layer knows the dropout probability and the mode
        # (train / test). You can pass the same dropout_param to each dropout layer.
        self.dropout_param = {}
        if self.use_dropout:
            self.dropout_param = {'mode': 'train', 'p': dropout}
            if seed is not None:
                self.dropout_param['seed'] = seed

        # With batch normalization we need to keep track of running means and
        # variances, so we need to pass a special bn_param object to each batch
        # normalization layer. You should pass self.bn_params[0] to the forward pass
        # of the first batch normalization layer, self.bn_params[1] to the forward
        # pass of the second batch normalization layer, etc.
        self.bn_params = []
        if self.use_batchnorm:
            self.bn_params = [{'mode': 'train'} for i in xrange(self.num_layers - 1)]

        # Cast all parameters to the correct datatype
        for k, v in self.params.iteritems():
            self.params[k] = v.astype(dtype)
    def loss(self, X, y=None):
        """
        Compute loss and gradient for the fully-connected net.

        Input / output: Same as TwoLayerNet above.
        """
        X = X.astype(self.dtype)
        mode = 'test' if y is None else 'train'

        # Set train/test mode for batchnorm params and dropout param since they
        # behave differently during training and testing.
        if self.dropout_param is not None:
            self.dropout_param['mode'] = mode
        if self.use_batchnorm:
            for bn_param in self.bn_params:
                bn_param['mode'] = mode

        scores = None
        ############################################################################
        # TODO: Implement the forward pass for the fully-connected net, computing  #
        # the class scores for X and storing them in the scores variable.          #
        #                                                                          #
        # When using dropout, you'll need to pass self.dropout_param to each       #
        # dropout forward pass.                                                    #
        #                                                                          #
        # When using batch normalization, you'll need to pass self.bn_params[0] to #
        # the forward pass for the first batch normalization layer, pass           #
        # self.bn_params[1] to the forward pass for the second batch normalization #
        # layer, etc.                                                              #
        ############################################################################
        h, cache1, cache2, cache3, cache4, bn, out = {}, {}, {}, {}, {}, {}, {}
        out[0] = X  # out[i] stores the input to layer i+1; by this convention X is out[0]

        # Forward pass: compute the scores
        for i in xrange(self.num_layers - 1):
            # Fetch the parameters of the current layer
            w, b = self.params['W' + str(i + 1)], self.params['b' + str(i + 1)]
            if self.use_batchnorm:
                gamma, beta = self.params['gamma' + str(i + 1)], self.params['beta' + str(i + 1)]
                h[i], cache1[i] = affine_forward(out[i], w, b)
                bn[i], cache2[i] = batchnorm_forward(h[i], gamma, beta, self.bn_params[i])
                out[i + 1], cache3[i] = relu_forward(bn[i])
                if self.use_dropout:
                    out[i + 1], cache4[i] = dropout_forward(out[i + 1], self.dropout_param)
            else:
                out[i + 1], cache3[i] = affine_relu_forward(out[i], w, b)
                if self.use_dropout:
                    out[i + 1], cache4[i] = dropout_forward(out[i + 1], self.dropout_param)

        W, b = self.params['W' + str(self.num_layers)], self.params['b' + str(self.num_layers)]
        scores, cache = affine_forward(out[self.num_layers - 1], W, b)  # the final, affine-only layer
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################
        # If test mode return early
        if mode == 'test':
            return scores

        loss, grads = 0.0, {}
        ############################################################################
        # TODO: Implement the backward pass for the fully-connected net. Store the #
        # loss in the loss variable and gradients in the grads dictionary. Compute #
        # data loss using softmax, and make sure that grads[k] holds the gradients #
        # for self.params[k]. Don't forget to add L2 regularization!               #
        #                                                                          #
        # When using batch normalization, you don't need to regularize the scale   #
        # and shift parameters.                                                    #
        #                                                                          #
        # NOTE: To ensure that your implementation matches ours and you pass the   #
        # automated tests, make sure that your L2 regularization includes a factor #
        # of 0.5 to simplify the expression for the gradient.                      #
        ############################################################################
        data_loss, dscores = softmax_loss(scores, y)
        reg_loss = 0
        for i in xrange(self.num_layers):
            reg_loss += 0.5 * self.reg * np.sum(self.params['W' + str(i + 1)] * self.params['W' + str(i + 1)])
        loss = data_loss + reg_loss

        # Backward pass: compute gradients
        dout, dbn, dh, ddrop = {}, {}, {}, {}
        t = self.num_layers - 1
        # cache here is the one returned by the final affine_forward above
        dout[t], grads['W' + str(t + 1)], grads['b' + str(t + 1)] = affine_backward(dscores, cache)
        for i in xrange(t):
            if self.use_batchnorm:
                if self.use_dropout:
                    dout[t - i] = dropout_backward(dout[t - i], cache4[t - 1 - i])
                dbn[t - 1 - i] = relu_backward(dout[t - i], cache3[t - 1 - i])
                dh[t - 1 - i], grads['gamma' + str(t - i)], grads['beta' + str(t - i)] = batchnorm_backward(
                    dbn[t - 1 - i], cache2[t - 1 - i])
                dout[t - 1 - i], grads['W' + str(t - i)], grads['b' + str(t - i)] = affine_backward(
                    dh[t - 1 - i], cache1[t - 1 - i])
            else:
                if self.use_dropout:
                    dout[t - i] = dropout_backward(dout[t - i], cache4[t - 1 - i])
                dout[t - 1 - i], grads['W' + str(t - i)], grads['b' + str(t - i)] = affine_relu_backward(
                    dout[t - i], cache3[t - 1 - i])

        # Add the regularization gradient contribution
        for i in xrange(self.num_layers):
            grads['W' + str(i + 1)] += self.reg * self.params['W' + str(i + 1)]
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################
        return loss, grads
```
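As a quick sanity check of the class (my own sketch; it assumes the layer functions and softmax_loss above are importable as in the assignment's fc_net.py), a freshly initialized net with reg=0 should give an initial softmax loss close to log(10) ≈ 2.3 for 10 classes:

```python
import numpy as np

np.random.seed(231)
N, D, C = 50, 3 * 32 * 32, 10
X = np.random.randn(N, D).astype(np.float64)
y = np.random.randint(C, size=N)

# Small two-hidden-layer net; use float64 when gradient checking
model = FullyConnectedNet([100, 50], input_dim=D, num_classes=C,
                          reg=0.0, weight_scale=1e-2, dtype=np.float64)

loss, grads = model.loss(X, y)
print(loss)                      # should be close to np.log(10) ~ 2.3 with reg=0
print(sorted(grads.keys()))      # W1, W2, W3, b1, b2, b3

# Test-time forward pass: with y=None, loss() just returns the scores
scores = model.loss(X)
print(scores.shape)              # (50, 10)
```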
Because the code above is high-level and does not need to care how backprop is implemented (that was done earlier), it is quite readable. But we are not done yet: we still need a Solver to actually optimize the network. A sketch of wiring model and solver together follows the class.
```python
import numpy as np

from cs231n import optim


class Solver(object):
    """
    A Solver encapsulates all the logic necessary for training classification
    models. The Solver performs stochastic gradient descent using different
    update rules defined in optim.py.

    The solver accepts both training and validation data and labels so it can
    periodically check classification accuracy on both training and validation
    data to watch out for overfitting.

    To train a model, you will first construct a Solver instance, passing the
    model, dataset, and various options (learning rate, batch size, etc) to the
    constructor. You will then call the train() method to run the optimization
    procedure and train the model.

    After the train() method returns, model.params will contain the parameters
    that performed best on the validation set over the course of training.
    In addition, the instance variable solver.loss_history will contain a list
    of all losses encountered during training and the instance variables
    solver.train_acc_history and solver.val_acc_history will be lists containing
    the accuracies of the model on the training and validation set at each epoch.

    Example usage might look something like this:

    data = {
      'X_train': # training data
      'y_train': # training labels
      'X_val': # validation data
      'y_val': # validation labels
    }
    model = MyAwesomeModel(hidden_size=100, reg=10)
    solver = Solver(model, data,
                    update_rule='sgd',
                    optim_config={
                      'learning_rate': 1e-3,
                    },
                    lr_decay=0.95,
                    num_epochs=10, batch_size=100,
                    print_every=100)
    solver.train()

    A Solver works on a model object that must conform to the following API:

    - model.params must be a dictionary mapping string parameter names to numpy
      arrays containing parameter values.

    - model.loss(X, y) must be a function that computes training-time loss and
      gradients, and test-time classification scores, with the following inputs
      and outputs:

      Inputs:
      - X: Array giving a minibatch of input data of shape (N, d_1, ..., d_k)
      - y: Array of labels, of shape (N,) giving labels for X where y[i] is the
        label for X[i].

      Returns:
      If y is None, run a test-time forward pass and return:
      - scores: Array of shape (N, C) giving classification scores for X where
        scores[i, c] gives the score of class c for X[i].

      If y is not None, run a training time forward and backward pass and return
      a tuple of:
      - loss: Scalar giving the loss
      - grads: Dictionary with the same keys as self.params mapping parameter
        names to gradients of the loss with respect to those parameters.
    """
    def __init__(self, model, data, **kwargs):
        """
        Construct a new Solver instance.

        Required arguments:
        - model: A model object conforming to the API described above
        - data: A dictionary of training and validation data with the following:
          'X_train': Array of shape (N_train, d_1, ..., d_k) giving training images
          'X_val': Array of shape (N_val, d_1, ..., d_k) giving validation images
          'y_train': Array of shape (N_train,) giving labels for training images
          'y_val': Array of shape (N_val,) giving labels for validation images

        Optional arguments:
        - update_rule: A string giving the name of an update rule in optim.py.
          Default is 'sgd'.
        - optim_config: A dictionary containing hyperparameters that will be
          passed to the chosen update rule. Each update rule requires different
          hyperparameters (see optim.py) but all update rules require a
          'learning_rate' parameter so that should always be present.
        - lr_decay: A scalar for learning rate decay; after each epoch the learning
          rate is multiplied by this value.
        - batch_size: Size of minibatches used to compute loss and gradient during
          training.
        - num_epochs: The number of epochs to run for during training.
        - print_every: Integer; training losses will be printed every print_every
          iterations.
        - verbose: Boolean; if set to false then no output will be printed during
          training.
        """
        self.model = model
        self.X_train = data['X_train']
        self.y_train = data['y_train']
        self.X_val = data['X_val']
        self.y_val = data['y_val']

        # Unpack keyword arguments
        self.update_rule = kwargs.pop('update_rule', 'sgd')
        self.optim_config = kwargs.pop('optim_config', {})
        self.lr_decay = kwargs.pop('lr_decay', 1.0)
        self.batch_size = kwargs.pop('batch_size', 100)
        self.num_epochs = kwargs.pop('num_epochs', 10)
        self.print_every = kwargs.pop('print_every', 10)
        self.verbose = kwargs.pop('verbose', True)

        # Throw an error if there are extra keyword arguments
        if len(kwargs) > 0:
            extra = ', '.join('"%s"' % k for k in kwargs.keys())
            raise ValueError('Unrecognized arguments %s' % extra)

        # Make sure the update rule exists, then replace the string
        # name with the actual function
        if not hasattr(optim, self.update_rule):
            raise ValueError('Invalid update_rule "%s"' % self.update_rule)
        self.update_rule = getattr(optim, self.update_rule)

        self._reset()
    def _reset(self):
        """
        Set up some book-keeping variables for optimization. Don't call this
        manually.
        """
        # Set up some variables for book-keeping
        self.epoch = 0
        self.best_val_acc = 0
        self.best_params = {}
        self.loss_history = []
        self.train_acc_history = []
        self.val_acc_history = []

        # Make a deep copy of the optim_config for each parameter
        self.optim_configs = {}
        for p in self.model.params:
            d = {k: v for k, v in self.optim_config.iteritems()}
            self.optim_configs[p] = d

    def _step(self):
        """
        Make a single gradient update. This is called by train() and should not
        be called manually.
        """
        # Make a minibatch of training data
        num_train = self.X_train.shape[0]
        batch_mask = np.random.choice(num_train, self.batch_size)
        X_batch = self.X_train[batch_mask]
        y_batch = self.y_train[batch_mask]

        # Compute loss and gradient
        loss, grads = self.model.loss(X_batch, y_batch)
        self.loss_history.append(loss)

        # Perform a parameter update; the update rule is a pluggable function
        # since there are many different update methods
        for p, w in self.model.params.iteritems():
            dw = grads[p]
            config = self.optim_configs[p]
            next_w, next_config = self.update_rule(w, dw, config)
            self.model.params[p] = next_w
            self.optim_configs[p] = next_config
    def check_accuracy(self, X, y, num_samples=None, batch_size=100):
        """
        Check accuracy of the model on the provided data.

        Inputs:
        - X: Array of data, of shape (N, d_1, ..., d_k)
        - y: Array of labels, of shape (N,)
        - num_samples: If not None, subsample the data and only test the model
          on num_samples datapoints.
        - batch_size: Split X and y into batches of this size to avoid using too
          much memory.

        Returns:
        - acc: Scalar giving the fraction of instances that were correctly
          classified by the model.
        """
        # Maybe subsample the data
        N = X.shape[0]
        if num_samples is not None and N > num_samples:
            mask = np.random.choice(N, num_samples)
            N = num_samples
            X = X[mask]
            y = y[mask]

        # Compute predictions in batches
        num_batches = N / batch_size
        if N % batch_size != 0:
            num_batches += 1
        y_pred = []
        for i in xrange(num_batches):
            start = i * batch_size
            end = (i + 1) * batch_size
            scores = self.model.loss(X[start:end])
            y_pred.append(np.argmax(scores, axis=1))
        y_pred = np.hstack(y_pred)
        acc = np.mean(y_pred == y)

        return acc
    def train(self):
        """
        Run optimization to train the model.
        """
        num_train = self.X_train.shape[0]
        iterations_per_epoch = max(num_train / self.batch_size, 1)
        num_iterations = self.num_epochs * iterations_per_epoch

        for t in xrange(num_iterations):
            self._step()

            # Maybe print training loss
            if self.verbose and t % self.print_every == 0:
                print '(Iteration %d / %d) loss: %f' % (
                    t + 1, num_iterations, self.loss_history[-1])

            # At the end of every epoch, increment the epoch counter and decay the
            # learning rate.
            epoch_end = (t + 1) % iterations_per_epoch == 0
            if epoch_end:
                self.epoch += 1
                for k in self.optim_configs:
                    self.optim_configs[k]['learning_rate'] *= self.lr_decay

            # Check train and val accuracy on the first iteration, the last
            # iteration, and at the end of each epoch.
            first_it = (t == 0)
            last_it = (t == num_iterations - 1)
            if first_it or last_it or epoch_end:
                train_acc = self.check_accuracy(self.X_train, self.y_train,
                                                num_samples=1000)
                val_acc = self.check_accuracy(self.X_val, self.y_val)
                self.train_acc_history.append(train_acc)
                self.val_acc_history.append(val_acc)

                if self.verbose:
                    print '(Epoch %d / %d) train acc: %f; val_acc: %f' % (
                        self.epoch, self.num_epochs, train_acc, val_acc)

                # Keep track of the best model
                if val_acc > self.best_val_acc:
                    self.best_val_acc = val_acc
                    self.best_params = {}
                    for k, v in self.model.params.iteritems():
                        self.best_params[k] = v.copy()

        # At the end of training swap the best params into the model
        self.model.params = self.best_params
```
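Wiring everything together might look like the following. This is my own sketch using a tiny random "dataset"; in the assignment the data dictionary comes from the CIFAR-10 loading utilities instead.

```python
import numpy as np

# Tiny random "dataset" just to show how model + Solver fit together
np.random.seed(0)
data = {
    'X_train': np.random.randn(500, 3 * 32 * 32),
    'y_train': np.random.randint(10, size=500),
    'X_val':   np.random.randn(100, 3 * 32 * 32),
    'y_val':   np.random.randint(10, size=100),
}

model = FullyConnectedNet([100, 100], weight_scale=5e-2, reg=1e-3)
solver = Solver(model, data,
                update_rule='sgd',
                optim_config={'learning_rate': 1e-3},
                lr_decay=0.95,
                num_epochs=5, batch_size=100,
                print_every=100)
solver.train()

# After training, model.params holds the parameters that did best on the validation set
print(solver.best_val_acc)
print(len(solver.loss_history))   # one entry per iteration
```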
At this point we have effectively built a framework for fully-connected deep networks. Let's recap what was done:
1. Implemented the forward and backward passes for the affine (fully-connected) and ReLU layers.
2. Wrote the "sandwich" convenience functions, which simply compose the layers above.
3. Wrote the FullyConnectedNet class: given the network's hyperparameters, it produces the corresponding model.
4. Wrote the Solver class: given a model and the data, it runs the actual optimization.
Points worth noting:
1. The forward pass needs to save intermediate values, so each forward function returns both out and cache.
2. Stacking many layers requires careful bookkeeping: for layer i, the input is out[i], the output is out[i+1], and its cached values live in cache[i].
3. The plain SGD update rule is still rather naive; other update rules are worth trying later.
With all of this code written, what next?
Here is a very useful trick to keep in mind.
When you have built a neural network and are about to run it on your dataset, don't jump straight to the full-sized, raw dataset. The best first step is to overfit a small subset to prove that your network can actually learn; at that stage, tune the hyperparameters boldly. Personally I'd suggest a smallish learning rate, more iterations, and a weight scale chosen case by case. A sketch of this sanity check follows.
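For example (my own sketch, reusing the data dictionary from the Solver example above; with the real CIFAR-10 data the hyperparameters below are just a starting point and usually need tuning):

```python
# Overfit a tiny subset: if the model and training loop are healthy, training
# accuracy on these 50 examples should climb toward 1.0 once the learning rate
# and weight scale are tuned, while validation accuracy stays low.
num_small = 50
small_data = {
    'X_train': data['X_train'][:num_small],
    'y_train': data['y_train'][:num_small],
    'X_val':   data['X_val'],
    'y_val':   data['y_val'],
}

model = FullyConnectedNet([100, 100], weight_scale=1e-2, reg=0.0)
solver = Solver(model, small_data,
                update_rule='sgd',
                optim_config={'learning_rate': 1e-3},  # start small, raise if the loss barely moves
                num_epochs=30, batch_size=25,
                print_every=10)
solver.train()
print(max(solver.train_acc_history))   # watch this approach 1.0 as the hyperparameters improve
```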
Summary: the second assignment covers a lot of ground; that's all for this part, to be continued.