Kaggle Competition | Getting Started with Handwritten Digit Recognition: KNN, CNN, and Dimensionality Reduction
Introduction
- Over the past while I have read the "watermelon book" and the "blue book" and gained a general understanding of the common machine learning algorithms, but I still lack hands-on practice. So I decided to use Kaggle to improve my practical skills and build up my data analysis ability.
- My plan is to start with simple datasets such as handwritten digit recognition, Titanic, and house price prediction, for which plenty of mature solutions already exist for reference; then move towards directions relevant to future work, such as computational advertising and click-through-rate prediction; and, when the time is right, join a live competition together with friends.
Dataset
Overview
MNIST ("Modified National Institute of Standards and Technology")是计算机视觉中最典型的数据集之一,其一共包含训练集
train.csv
,测试集test.csv
和提交示例sample_submission.csv
。csv
是一种数据格式,以逗号作为文件的分隔值。训练集
train.csv
包含40000
张28*28=784
的图片,图片像素值为0-255
,每张图片有对应的标签,其数据格式如下,可以看作是一个40000 * 785
的矩阵,第一列存放标签;测试集
test.csv
包含28000
张28*28=784
的图片,其不提供标签,矩阵维度为28000*784
。
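As a quick sanity check, the shapes can be inspected with pandas (a minimal sketch; the file paths assume the Kaggle kernel layout used in the code below):
import pandas as pd

# Paths assume the Kaggle "Digit Recognizer" kernel environment
train = pd.read_csv('/kaggle/input/digit-recognizer/train.csv')
test = pd.read_csv('/kaggle/input/digit-recognizer/test.csv')

print(train.shape)  # expected (42000, 785): label column + 784 pixel columns
print(test.shape)   # expected (28000, 784): pixel columns only
print(train['label'].value_counts().sort_index())  # roughly balanced digit classes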
Loading the dataset
Different solutions read the data in different ways; here is a short summary.
- csv
def loadTrainData():
    l = []
    with open('/kaggle/input/digit-recognizer/train.csv') as file:
        lines = csv.reader(file)
        for line in lines:
            l.append(line)
- open
# load csv files to numpy arrays
def load_data(data_dir):
    train_data = open(data_dir + "train.csv").read()
- numpy
def load_data(path):
    with np.load(path) as f:
        x_train, y_train = f['x_train'], f['y_train']
        x_test, y_test = f['x_test'], f['y_test']
    return (x_train, y_train), (x_test, y_test)
- pandas
train = pd.read_csv('../input/train.csv')
K-Nearest Neighbors (KNN)
- I will not repeat the theory of kNN here. Below is a concise implementation adapted from https://blog.csdn.net/u012162613/article/details/41929171; it binarizes the images and uses the L2 norm as the distance metric.
Implementation A
from numpy import *
import csv

# Load the training set
def loadTrainData():
    l = []
    with open('/kaggle/input/digit-recognizer/train.csv') as file:
        lines = csv.reader(file)
        for line in lines:
            l.append(line)
    l.remove(l[0])
    l = array(l)
    data, label = l[:, 1:], l[:, 0]
    label = label[:, newaxis]
    a = normalizing(toInt(data))
    b = toInt(label)
    return a, b

# Convert string entries to integers
def toInt(array):
    array = mat(array)
    m, n = shape(array)
    newArray = zeros((m, n))
    for i in range(m):
        for j in range(n):
            newArray[i, j] = int(array[i, j])
    return newArray

# Binarize the pixel values
def normalizing(array):
    m, n = shape(array)
    for i in range(m):
        for j in range(n):
            if array[i, j] != 0:
                array[i, j] = 1
    return array

# Load the test set
def loadTestData():
    l = []
    with open('/kaggle/input/digit-recognizer/test.csv') as file:
        lines = csv.reader(file)
        for line in lines:
            l.append(line)
    l.remove(l[0])
    l = array(l)
    data = l
    return normalizing(toInt(data))

# Load the sample submission (placeholder labels, not real ground truth)
def loadTestResult():
    l = []
    with open('/kaggle/input/digit-recognizer/sample_submission.csv') as file:
        lines = csv.reader(file)
        for line in lines:
            l.append(line)
    l.remove(l[0])
    l = array(l)
    label = l[:, 1]
    label = label[:, newaxis]
    return toInt(label)

# Save the predictions
def saveResult(result):
    with open('/kaggle/working/knn.csv', 'w', newline='') as myFile:
        myWriter = csv.writer(myFile)
        myWriter.writerow(['ImageId', 'Label'])
        for i, label in enumerate(result):
            tmp = [i + 1, int(label)]
            myWriter.writerow(tmp)

# kNN classification
def classify(inX, dataSet, labels, k):
    inX = mat(inX)
    dataSet = mat(dataSet)
    labels = mat(labels)
    dataSetSize = dataSet.shape[0]
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet
    spDiffMat = array(diffMat) ** 2
    spDistances = spDiffMat.sum(axis=1)
    # L2 distance
    distances = spDistances ** 0.5
    sortedDistIndicies = distances.argsort()
    classCount = {}
    for i in range(k):
        voteIlabel = labels[sortedDistIndicies[i], 0]
        classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
    sortedClassCount = sorted(classCount.items(), key=lambda item: item[1], reverse=True)
    return sortedClassCount[0][0]

# Main routine
def handwritingClassTest():
    trainData, trainLabel = loadTrainData()
    testData = loadTestData()
    # note: testLabel comes from sample_submission.csv, so the printed
    # "real answer" and the error rate below are not meaningful
    testLabel = loadTestResult()
    m, n = shape(testData)
    errorCount = 0
    resultList = []
    for i in range(m):
        classifierResult = classify(testData[i], trainData, trainLabel, 1)
        resultList.append(classifierResult)
        print("the classifier for %d came back with: %d, the real answer is: %d" % (i, classifierResult, testLabel[i]))
        if (classifierResult != testLabel[i]): errorCount += 1.0
    print("\nthe total number of errors is: %d" % errorCount)
    print("\nthe total error rate is: %f" % (errorCount / float(m)))
    saveResult(resultList)

handwritingClassTest()
- Results: k=5 gives 96.40% accuracy and k=1 gives 96.27%. PS: my intuition was that a smaller k should score higher here, but that is not what happened. I then switched to another implementation that vectorizes the distance computation with numpy and runs much faster than the code above.
Implementation B
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
import time
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# load csv files to numpy arrays
def load_data(data_dir):
    train_data = open(data_dir + "train.csv").read()
    train_data = train_data.split("\n")[1:-1]
    train_data = [i.split(",") for i in train_data]
    # print(len(train_data))
    X_train = np.array([[int(i[j]) for j in range(1, len(i))] for i in train_data])
    y_train = np.array([int(i[0]) for i in train_data])
    # print(X_train.shape, y_train.shape)
    test_data = open(data_dir + "test.csv").read()
    test_data = test_data.split("\n")[1:-1]
    test_data = [i.split(",") for i in test_data]
    # print(len(test_data))
    X_test = np.array([[int(i[j]) for j in range(0, len(i))] for i in test_data])
    # print(X_test.shape)
    return X_train, y_train, X_test

class simple_knn():
    "a simple kNN with L2 distance"
    def __init__(self):
        pass

    def train(self, X, y):
        self.X_train = X
        self.y_train = y

    def predict(self, X, k=1):
        dists = self.compute_distances(X)
        # print("computed distances")
        num_test = dists.shape[0]
        y_pred = np.zeros(num_test)
        for i in range(num_test):
            k_closest_y = []
            labels = self.y_train[np.argsort(dists[i, :])].flatten()
            # find k nearest labels
            k_closest_y = labels[:k]
            # out of these k nearest labels which one is most common
            # for 5NN [1, 1, 1, 2, 3] returns 1
            # break ties by selecting smaller label
            # for 5NN [1, 2, 1, 2, 3] return 1 even though 1 and 2 appeared twice.
            c = Counter(k_closest_y)
            y_pred[i] = c.most_common(1)[0][0]
        return(y_pred)

    def compute_distances(self, X):
        # vectorized L2 distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dot_pro = np.dot(X, self.X_train.T)
        sum_square_test = np.square(X).sum(axis=1)
        sum_square_train = np.square(self.X_train).sum(axis=1)
        dists = np.sqrt(-2 * dot_pro + sum_square_train + np.matrix(sum_square_test).T)
        return(dists)
# set up the classifier (these setup values are assumptions -- the original
# post omits them; adjust the data path, batch_size and k as needed)
X_train, y_train, X_test = load_data("/kaggle/input/digit-recognizer/")
batch_size = 2000
k = 1
classifier = simple_knn()
classifier.train(X_train, y_train)

# runs for 13 minutes
predictions = []
for i in range(int(len(X_test) / (2 * batch_size))):
    # predicts from i * batch_size to (i+1) * batch_size
    print("Computing batch " + str(i + 1) + "/" + str(int(len(X_test) / batch_size)) + "...")
    tic = time.time()
    predts = classifier.predict(X_test[i * batch_size:(i + 1) * batch_size], k)
    toc = time.time()
    predictions = predictions + list(predts)
    # print("Len of predictions: " + str(len(predictions)))
    print("Completed this batch in " + str(toc - tic) + " Secs.")
print("Completed predicting the test data.")

# runs for 13 minutes
# predict the second half of the test data
for i in range(int(len(X_test) / (2 * batch_size)), int(len(X_test) / batch_size)):
    # predicts from i * batch_size to (i+1) * batch_size
    print("Computing batch " + str(i + 1) + "/" + str(int(len(X_test) / batch_size)) + "...")
    tic = time.time()
    predts = classifier.predict(X_test[i * batch_size:(i + 1) * batch_size], k)
    toc = time.time()
    predictions = predictions + list(predts)
    # print("Len of predictions: " + str(len(predictions)))
    print("Completed this batch in " + str(toc - tic) + " Secs.")
print("Completed predicting the test data.")

out_file = open("predictions.csv", "w")
out_file.write("ImageId,Label\n")
for i in range(len(predictions)):
    out_file.write(str(i + 1) + "," + str(int(predictions[i])) + "\n")
out_file.close()
- Results: k=5 gives 96.90% and k=1 gives 97.11%. For the same k, implementation B is more accurate than implementation A, because B does not binarize the images and therefore keeps more of the pixel information.
Convolutional Neural Network (CNN)
- This part is implemented in PyTorch.
Data loading
- The transform converts a PIL image into a Tensor and normalizes it. See: https://zhuanlan.zhihu.com/p/27382990
# Construct the transform
import torch
import numpy as np
import pandas as pd
import torchvision.transforms as transforms
from PIL import Image

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])

# Get the device we're training on
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def get_digits(df):
    """Loads images as PyTorch tensors"""
    # Load the labels if they exist
    # (they won't for the testing data)
    labels = []
    start_inx = 0
    if 'label' in df.columns:
        labels = [v for v in df.label.values]
        start_inx = 1
    # Load the digit information
    digits = []
    for i in range(df.pixel0.size):
        digit = df.iloc[i].astype(float).values[start_inx:]
        digit = np.reshape(digit, (28, 28))
        digit = transform(digit).type('torch.FloatTensor')
        if len(labels) > 0:
            digits.append([digit, labels[i]])
        else:
            digits.append(digit)
    return digits
- SubsetRandomSampler splits the data into random subsets, and num_workers controls how many worker processes load the data. See: https://stackoverflow.com/questions/53998282/how-does-the-number-of-workers-parameter-in-pytorch-dataloader-actually-work
# Load the training data
# (assumes the training csv has been read into a DataFrame beforehand,
#  e.g. train = pd.read_csv("../input/digit-recognizer/train.csv"))
train_X = get_digits(train)

# Some configuration parameters
num_workers = 0   # number of subprocesses to use for data loading
batch_size = 64   # how many samples per batch to load
valid_size = 0.2  # percentage of training set to use as validation

# Obtain training indices that will be used for validation
num_train = len(train_X)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]

# Define samplers for obtaining training and validation batches
from torch.utils.data.sampler import SubsetRandomSampler
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)

# Construct the data loaders
train_loader = torch.utils.data.DataLoader(train_X, batch_size=batch_size,
                                           sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_X, batch_size=batch_size,
                                           sampler=valid_sampler, num_workers=num_workers)

# Test the size and shape of the output
dataiter = iter(train_loader)
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
Network model
- The network is split into cnn_layers and fc_layers; personally I find the fc_layers part somewhat over-engineered.
# Import the necessary modules
import torch.nn as nn

def calc_out(in_layers, stride, padding, kernel_size, pool_stride):
    """
    Helper function for computing the number of outputs from a
    conv layer
    """
    return int((1 + (in_layers - kernel_size + (2 * padding)) / stride) / pool_stride)

# define the CNN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Some helpful values
        inputs = [1, 32, 64, 64]
        kernel_size = [5, 5, 3]
        stride = [1, 1, 1]
        pool_stride = [2, 2, 2]
        # Layer lists
        layers = []
        self.out = 28
        self.depth = inputs[-1]
        for i in range(len(kernel_size)):
            # Get some variables
            padding = int(kernel_size[i] / 2)
            # Define the output from this layer
            self.out = calc_out(self.out, stride[i], padding,
                                kernel_size[i], pool_stride[i])
            # convolutional layer 1
            layers.append(nn.Conv2d(inputs[i], inputs[i], kernel_size[i],
                                    stride=stride[i], padding=padding))
            layers.append(nn.ReLU())
            # convolutional layer 2
            layers.append(nn.Conv2d(inputs[i], inputs[i+1], kernel_size[i],
                                    stride=stride[i], padding=padding))
            layers.append(nn.ReLU())
            # maxpool layer
            layers.append(nn.MaxPool2d(pool_stride[i], pool_stride[i]))
            layers.append(nn.Dropout(p=0.2))
        self.cnn_layers = nn.Sequential(*layers)
        print(self.depth * self.out * self.out)
        # Now for our fully connected layers
        layers2 = []
        layers2.append(nn.Dropout(p=0.2))
        layers2.append(nn.Linear(self.depth * self.out * self.out, 512))
        layers2.append(nn.Dropout(p=0.2))
        layers2.append(nn.Linear(512, 256))
        layers2.append(nn.Dropout(p=0.2))
        layers2.append(nn.Linear(256, 256))
        layers2.append(nn.Dropout(p=0.2))
        layers2.append(nn.Linear(256, 10))
        self.fc_layers = nn.Sequential(*layers2)
        self.fc_last = nn.Linear(self.depth * self.out * self.out, 10)

    def forward(self, x):
        x = self.cnn_layers(x)
        x = x.view(-1, self.depth * self.out * self.out)
        x = self.fc_layers(x)
        # x = self.fc_last(x)
        return x

# create a complete CNN
model = Net()
model
Model training
- Define the loss function and the optimizer; Adam is used here.
import torch.optim as optim
# specify loss function
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.Adam(model.parameters(), lr=0.0005)
- A fixed fraction of the training set is held out as a validation set, which serves as the evaluation criterion and guards against overfitting.
# number of epochs to train the model
n_epochs = 25  # you may increase this number to train a final model

valid_loss_min = np.Inf  # track change in validation loss

# Additional rotation transformation
# rand_rotate = transforms.Compose([
#     transforms.ToPILImage(),
#     transforms.RandomRotation(20),
#     transforms.ToTensor()
# ])

# Get the device
print(device)
model.to(device)

tLoss, vLoss = [], []
for epoch in range(n_epochs):
    # keep track of training and validation loss
    train_loss = 0.0
    valid_loss = 0.0

    #########
    # train #
    #########
    model.train()
    for data, target in train_loader:
        # move tensors to GPU if CUDA is available
        data = data.to(device)
        target = target.to(device)
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the batch loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update training loss
        train_loss += loss.item() * data.size(0)

    ############
    # validate #
    ############
    model.eval()
    for data, target in valid_loader:
        # move tensors to GPU if CUDA is available
        data = data.to(device)
        target = target.to(device)
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the batch loss
        loss = criterion(output, target)
        # update average validation loss
        valid_loss += loss.item() * data.size(0)

    # calculate average losses
    train_loss = train_loss / len(train_loader.dataset)
    valid_loss = valid_loss / len(valid_loader.dataset)
    tLoss.append(train_loss)
    vLoss.append(valid_loss)

    # print training/validation statistics
    print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
        epoch, train_loss, valid_loss))

    # save model if validation loss has decreased
    if valid_loss <= valid_loss_min:
        print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
            valid_loss_min, valid_loss))
        torch.save(model.state_dict(), 'model_cifar.pt')
        valid_loss_min = valid_loss
- Plot the loss curves
# Plot the resulting loss over time
plt.plot(tLoss, label='Training Loss')
plt.plot(vLoss, label='Validation Loss')
plt.legend();
Training results
Only the results on the validation set are shown here, presented as a confusion matrix.
Entry \((i,j)\) of the matrix counts the samples whose true label is \(i\) but were predicted as \(j\). Ideally there are no misclassifications, so only the diagonal would be non-zero. Our results do show a few misclassifications; for example, \((9,4)\) counts samples that are actually 9 but were predicted as 4, which is understandable because 4 and 9 look quite similar.
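Here is a minimal sketch of how such a confusion matrix could be computed on the validation set, assuming the model, valid_loader and device defined above; scikit-learn's confusion_matrix is used for convenience.
import torch
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Collect predictions and true labels over the validation set
model.eval()
all_preds, all_targets = [], []
with torch.no_grad():
    for data, target in valid_loader:
        data = data.to(device)
        output = model(data)
        _, pred = torch.max(output, 1)
        all_preds.extend(pred.cpu().numpy())
        all_targets.extend(target.numpy())

# Rows are the true digits, columns the predicted digits
cm = confusion_matrix(all_targets, all_preds, labels=list(range(10)))
print(cm)
plt.imshow(cm, cmap='Blues')
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.colorbar()
plt.show()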
Test results
- Load the saved weights
model.load_state_dict(torch.load('model_cifar.pt'));
- Load the test set
# Define the test data loader
test = pd.read_csv("../input/digit-recognizer/test.csv")
test_X = get_digits(test)
test_loader = torch.utils.data.DataLoader(test_X, batch_size=batch_size,
num_workers=num_workers)
- Predict and save the results
# Create storage objects
ImageId, Label = [], []

# Loop through the data and get the predictions
for data in test_loader:
    # Move tensors to GPU if CUDA is available
    data = data.to(device)
    # Make the predictions
    output = model(data)
    # Get the most likely predicted digit
    _, pred = torch.max(output, 1)
    for i in range(len(pred)):
        ImageId.append(len(ImageId) + 1)
        Label.append(pred[i].cpu().numpy())

sub = pd.DataFrame(data={'ImageId': ImageId, 'Label': Label})
sub.describe()

# Write to csv file ignoring index column
sub.to_csv("submission.csv", index=False)
- The final score is 98.90%, nearly two points higher than KNN. After replacing fc_layers with a single plain fully connected layer, the score rose to 99.21%.
Dimensionality Reduction
- In high-dimensional data an algorithm's performance can degrade badly (the curse of dimensionality). Dimensionality reduction projects the data from a high-dimensional space onto a low-dimensional one, which makes learning much easier.
Principal Component Analysis (PCA)
PCA is a linear transformation that projects the original features onto a subspace while preserving as much information as possible; the algorithm therefore looks for the directions (the principal components) that maximize the variance of the projected data.
Algorithm
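For reference, the standard PCA procedure that the implementation below follows can be summarized as:
\[
\begin{aligned}
&1.\ \text{Standardize the data } X \in \mathbb{R}^{n \times d}.\\
&2.\ \text{Compute the covariance matrix } \Sigma = \tfrac{1}{n-1} X^{\top} X.\\
&3.\ \text{Solve } \Sigma v_i = \lambda_i v_i \text{ and sort the eigenvalues } \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_d.\\
&4.\ \text{Keep the top } k \text{ eigenvectors } W = [v_1, \dots, v_k] \text{ and project } Z = X W.
\end{aligned}
\]
The explained-variance ratios \(\lambda_i / \sum_j \lambda_j\) computed in the code correspond to the sorted eigenvalues.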
Implementation
# Standardize the data
from sklearn.preprocessing import StandardScaler
X = train.values  # assumes the label column has been separated out of train beforehand
X_std = StandardScaler().fit_transform(X)

# Calculate eigenvectors and eigenvalues of the covariance matrix
mean_vec = np.mean(X_std, axis=0)
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)

# Create a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:, i]) for i in range(len(eig_vals))]

# Sort the (eigenvalue, eigenvector) pairs from high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True)

# Calculate the explained variance from the eigenvalues
tot = sum(eig_vals)
var_exp = [(i / tot) * 100 for i in sorted(eig_vals, reverse=True)]  # Individual explained variance
cum_var_exp = np.cumsum(var_exp)  # Cumulative explained variance
Visualization
- The individual explained variance (black) decreases as the component index grows, while the cumulative explained variance saturates. About 90% of the variance can be captured by the first ~200 components.
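- As a quick check of that figure, the number of components needed for 90% cumulative variance can be read off the cum_var_exp array computed above (a small sketch):
# cum_var_exp is in percent, so look for the first index reaching 90
n_components_90 = int(np.argmax(cum_var_exp >= 90)) + 1
print(n_components_90)  # number of leading components covering 90% of the variance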
Next, visualize the first 30 principal components found by PCA, i.e. the directions of largest variance (labelled "Eigenvalue" in the plot).
# Invoke sklearn's PCA method
from sklearn.decomposition import PCA
n_components = 30
pca = PCA(n_components=n_components).fit(train.values)

# Extract the PCA components (the "eigenvalue" images)
eigenvalues = pca.components_.reshape(n_components, 28, 28)

# Plot the first n_row * n_col components
n_row = 4
n_col = 7
plt.figure(figsize=(13, 12))
for i in list(range(n_row * n_col)):
    offset = 0
    plt.subplot(n_row, n_col, i + 1)
    plt.imshow(eigenvalues[i].reshape(28, 28), cmap='jet')
    title_text = 'Eigenvalue ' + str(i + 1)
    plt.title(title_text, size=6.5)
    plt.xticks(())
    plt.yticks(())
plt.show()
Running PCA with 5 components and visualizing the first 2 (code omitted), the data points separate into several clusters, each corresponding to one digit class.
Since PCA is an unsupervised method and no labels are provided here, we follow up with K-means clustering and visualize the resulting clusters.
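The K-means snippet below operates on a 5-component projection X_5d; a minimal sketch of how it could be produced, assuming X_std from the standardization step above:
from sklearn.decomposition import PCA

# Project the standardized data onto the first 5 principal components
X_5d = PCA(n_components=5).fit_transform(X_std)
print(X_5d.shape)  # (n_samples, 5)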
# KMeans clustering
from sklearn.cluster import KMeans

# Set up KMeans with 9 clusters (9 chosen sneakily ;) as hopefully we get back our 9 class labels)
kmeans = KMeans(n_clusters=9)

# Compute cluster centers and predict cluster indices
X_clustered = kmeans.fit_predict(X_5d)
Linear Discriminant Analysis (LDA)
Reference: https://www.cnblogs.com/pinard/p/6244265.html
Like PCA, LDA performs a linear dimensionality reduction, but it is supervised.
The algorithm proceeds as follows.
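As a reference sketch (standard multi-class LDA, stated here for context), the method seeks a projection \(w\) that maximizes the between-class scatter relative to the within-class scatter (the Fisher criterion):
\[
J(w) = \frac{w^{\top} S_b\, w}{w^{\top} S_w\, w},
\qquad
S_w = \sum_{c}\sum_{x_i \in c} (x_i - \mu_c)(x_i - \mu_c)^{\top},
\quad
S_b = \sum_{c} n_c\, (\mu_c - \mu)(\mu_c - \mu)^{\top}
\]
where \(\mu_c\) and \(n_c\) are the mean and size of class \(c\) and \(\mu\) is the overall mean; the projection directions are the leading eigenvectors of \(S_w^{-1} S_b\).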
Implementation and visualization
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

lda = LDA(n_components=5)
# Taking in as second argument the Target as labels
# (Target is assumed to hold the digit labels, e.g. Target = train['label'])
X_LDA_2D = lda.fit_transform(X_std, Target.values)
LDA vs PCA
t-SNE (t-Distributed Stochastic Neighbour Embedding)
- Unlike PCA and LDA, t-SNE is a non-linear, probability-based dimensionality reduction method.
- Instead of looking for directions that preserve the most information, t-SNE converts the Euclidean distances between points into conditional probabilities and applies a Student-t distribution in the low-dimensional space; these probabilities measure the similarity between data points (a formula sketch is given at the end of this section).
- Implementation and visualization
# Invoking the t-SNE method
from sklearn.manifold import TSNE

tsne = TSNE(n_components=2)
tsne_results = tsne.fit_transform(X_std)
- Compared with PCA and LDA, the data points are separated far more cleanly, since t-SNE better preserves the local (topological) structure of the data. Its drawback is that the optimization has many local minima when identifying clusters, which is why clusters of the same colour can end up split into two sub-groups.
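For reference, the similarities t-SNE works with are the conditional probabilities in the high-dimensional space and their Student-t counterparts in the embedding:
\[
p_{j|i} = \frac{\exp\!\left(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\!\left(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2\right)},
\qquad
q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}{\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}}
\]
where the \(y_i\) are the low-dimensional embeddings; the heavy-tailed Student-t form of \(q_{ij}\) gives the method its name, and the embedding is found by minimizing the KL divergence between the two distributions.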
References
- Kaggle入门,看这一篇就够了
- 大数据竞赛平台——Kaggle 入门
- Interactive Intro to Dimensionality Reduction
- MNIST: PyTorch Convolutional Neural Nets
- kNN from scratch in Python at 97.1%
- 线性判别分析LDA原理总结