[cnn] FashionMNIST: train a CNN, save the model, and load it to classify a given image
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as Data
from torchvision import datasets,transforms
import matplotlib.pyplot as plt
import numpy as np
input_size = 28  # image size: 28x28
num_class = 10   # number of classes
num_epochs = 3   # number of training epochs
batch_size = 64  # images per batch
train_dataset = datasets.FashionMNIST(
    root='data',
    train=True,
    transform=transforms.ToTensor(),
    download=True,
)
test_dataset = datasets.FashionMNIST(
    root='data',
    train=False,
    transform=transforms.ToTensor(),
    download=True,
)
train_loader = torch.utils.data.DataLoader(
    dataset=train_dataset,
    batch_size=batch_size,
    shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
    dataset=test_dataset,
    batch_size=batch_size,
    shuffle=True,
)
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(           # input: (1, 28, 28)
            nn.Conv2d(
                in_channels=1,
                out_channels=16,              # number of feature maps to produce
                kernel_size=5,                # convolution kernel size
                stride=1,                     # stride
                padding=2,                    # keeps the spatial size at 28x28
            ),                                # output: (16, 28, 28)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),      # 2x2 pooling, output: (16, 14, 14)
        )
        self.conv2 = nn.Sequential(           # input: (16, 14, 14)
            nn.Conv2d(16, 32, 5, 1, 2),       # output: (32, 14, 14)
            nn.ReLU(),
            nn.MaxPool2d(2),                  # output: (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # fully connected classifier

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)             # flatten to (batch_size, 32*7*7)
        output = self.out(x)
        return output, x
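The shapes claimed in the comments above can be verified with a dummy batch. This is a quick sanity-check sketch, not part of the original post; the layer definitions mirror `conv1`/`conv2` and the batch size of 4 is arbitrary.

```python
import torch
import torch.nn as nn

# Stand-alone copies of the two conv blocks, to check their output shapes.
conv1 = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(2),
)
conv2 = nn.Sequential(
    nn.Conv2d(16, 32, 5, 1, 2),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

x = torch.randn(4, 1, 28, 28)    # a dummy batch of 4 grayscale 28x28 images
h1 = conv1(x)                    # expected (4, 16, 14, 14)
h2 = conv2(h1)                   # expected (4, 32, 7, 7)
flat = h2.view(h2.size(0), -1)   # expected (4, 1568), matching Linear(32*7*7, 10)
print(h1.shape, h2.shape, flat.shape)
```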
def accuracy(predictions, labels):
    pred = torch.max(predictions.data, 1)[1]  # argmax over the class dimension
    rights = pred.eq(labels.data.view_as(pred)).sum()
    return rights, len(labels)                # (number correct, batch size)
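To illustrate what `accuracy` returns: `torch.max(..., 1)[1]` takes the argmax over the class dimension, so with the toy logits below the predictions are `[0, 1, 0]` and match 2 of the 3 labels. The numbers here are made up for the example.

```python
import torch

predictions = torch.tensor([[0.9, 0.1],
                            [0.2, 0.8],
                            [0.7, 0.3]])
labels = torch.tensor([0, 1, 1])

pred = torch.max(predictions.data, 1)[1]           # argmax per row -> [0, 1, 0]
rights = pred.eq(labels.data.view_as(pred)).sum()  # number of correct predictions
print(rights.item(), len(labels))                  # 2 3
```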
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
'cuda'
net = CNN().to(device)
criterion = nn.CrossEntropyLoss() #损失函数
#优化器
optimizer = optim.Adam(net.parameters(),lr = 0.001)
for epoch in range(num_epochs + 1):  # note: this runs num_epochs + 1 epochs; the log below is from a run that reached epoch 10
    # accumulate per-batch results for this epoch
    train_rights = []
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.to(device)
        target = target.to(device)
        net.train()
        output = net(data)[0]
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        right = accuracy(output, target)
        train_rights.append(right)
        if batch_idx % 100 == 0:
            net.eval()
            val_rights = []
            for (data, target) in test_loader:
                data = data.to(device)
                target = target.to(device)
                output = net(data)[0]
                right = accuracy(output, target)
                val_rights.append(right)
            # compute running accuracies
            train_r = (sum(i[0] for i in train_rights), sum(i[1] for i in train_rights))
            val_r = (sum(i[0] for i in val_rights), sum(i[1] for i in val_rights))
            print('epoch: {} [{}/{} ({:.0f}%)]\tloss: {:.2f}\ttrain acc: {:.2f}%\ttest acc: {:.2f}%'.format(
                epoch,
                batch_idx * batch_size,
                len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.data,
                100. * train_r[0].cpu().numpy() / train_r[1],
                100. * val_r[0].cpu().numpy() / val_r[1],
            ))
torch.save(net, 'cnn_test.pt')
epoch: 0 [0/60000 (0%)] loss: 2.31 train acc: 7.81% test acc: 19.02%
epoch: 0 [6400/60000 (11%)] loss: 0.70 train acc: 66.34% test acc: 74.87%
epoch: 0 [12800/60000 (21%)] loss: 0.39 train acc: 72.43% test acc: 81.34%
epoch: 0 [19200/60000 (32%)] loss: 0.49 train acc: 75.35% test acc: 81.99%
epoch: 0 [25600/60000 (43%)] loss: 0.49 train acc: 77.42% test acc: 84.12%
epoch: 0 [32000/60000 (53%)] loss: 0.31 train acc: 78.93% test acc: 84.10%
epoch: 0 [38400/60000 (64%)] loss: 0.35 train acc: 80.00% test acc: 84.13%
epoch: 0 [44800/60000 (75%)] loss: 0.34 train acc: 80.90% test acc: 85.27%
epoch: 0 [51200/60000 (85%)] loss: 0.31 train acc: 81.55% test acc: 85.85%
epoch: 0 [57600/60000 (96%)] loss: 0.48 train acc: 82.14% test acc: 86.35%
epoch: 1 [0/60000 (0%)] loss: 0.29 train acc: 89.06% test acc: 86.13%
epoch: 1 [6400/60000 (11%)] loss: 0.42 train acc: 87.19% test acc: 84.78%
epoch: 1 [12800/60000 (21%)] loss: 0.35 train acc: 86.99% test acc: 87.09%
epoch: 1 [19200/60000 (32%)] loss: 0.38 train acc: 87.33% test acc: 86.61%
epoch: 1 [25600/60000 (43%)] loss: 0.32 train acc: 87.55% test acc: 86.70%
epoch: 1 [32000/60000 (53%)] loss: 0.51 train acc: 87.72% test acc: 87.08%
epoch: 1 [38400/60000 (64%)] loss: 0.46 train acc: 87.88% test acc: 87.95%
epoch: 1 [44800/60000 (75%)] loss: 0.33 train acc: 87.94% test acc: 88.00%
epoch: 1 [51200/60000 (85%)] loss: 0.28 train acc: 88.04% test acc: 88.51%
epoch: 1 [57600/60000 (96%)] loss: 0.18 train acc: 88.15% test acc: 87.43%
epoch: 2 [0/60000 (0%)] loss: 0.17 train acc: 93.75% test acc: 87.79%
epoch: 2 [6400/60000 (11%)] loss: 0.29 train acc: 89.71% test acc: 87.70%
epoch: 2 [12800/60000 (21%)] loss: 0.27 train acc: 89.51% test acc: 88.32%
epoch: 2 [19200/60000 (32%)] loss: 0.20 train acc: 89.24% test acc: 88.77%
epoch: 2 [25600/60000 (43%)] loss: 0.23 train acc: 89.45% test acc: 88.24%
epoch: 2 [32000/60000 (53%)] loss: 0.21 train acc: 89.57% test acc: 88.22%
epoch: 2 [38400/60000 (64%)] loss: 0.30 train acc: 89.54% test acc: 88.10%
epoch: 2 [44800/60000 (75%)] loss: 0.19 train acc: 89.54% test acc: 88.92%
epoch: 2 [51200/60000 (85%)] loss: 0.45 train acc: 89.50% test acc: 88.96%
epoch: 2 [57600/60000 (96%)] loss: 0.20 train acc: 89.53% test acc: 89.55%
epoch: 3 [0/60000 (0%)] loss: 0.27 train acc: 93.75% test acc: 88.16%
epoch: 3 [6400/60000 (11%)] loss: 0.19 train acc: 90.24% test acc: 89.76%
epoch: 3 [12800/60000 (21%)] loss: 0.19 train acc: 90.10% test acc: 89.41%
epoch: 3 [19200/60000 (32%)] loss: 0.24 train acc: 90.32% test acc: 89.48%
epoch: 3 [25600/60000 (43%)] loss: 0.34 train acc: 90.42% test acc: 89.58%
epoch: 3 [32000/60000 (53%)] loss: 0.27 train acc: 90.30% test acc: 88.86%
epoch: 3 [38400/60000 (64%)] loss: 0.34 train acc: 90.28% test acc: 89.39%
epoch: 3 [44800/60000 (75%)] loss: 0.37 train acc: 90.36% test acc: 88.66%
epoch: 3 [51200/60000 (85%)] loss: 0.17 train acc: 90.36% test acc: 89.72%
epoch: 3 [57600/60000 (96%)] loss: 0.20 train acc: 90.41% test acc: 89.29%
epoch: 4 [0/60000 (0%)] loss: 0.15 train acc: 92.19% test acc: 89.55%
epoch: 4 [6400/60000 (11%)] loss: 0.30 train acc: 91.43% test acc: 89.89%
epoch: 4 [12800/60000 (21%)] loss: 0.15 train acc: 91.25% test acc: 89.62%
epoch: 4 [19200/60000 (32%)] loss: 0.20 train acc: 91.23% test acc: 89.95%
epoch: 4 [25600/60000 (43%)] loss: 0.16 train acc: 91.24% test acc: 89.70%
epoch: 4 [32000/60000 (53%)] loss: 0.21 train acc: 91.22% test acc: 89.95%
epoch: 4 [38400/60000 (64%)] loss: 0.33 train acc: 91.18% test acc: 90.42%
epoch: 4 [44800/60000 (75%)] loss: 0.19 train acc: 91.24% test acc: 89.69%
epoch: 4 [51200/60000 (85%)] loss: 0.26 train acc: 91.22% test acc: 90.35%
epoch: 4 [57600/60000 (96%)] loss: 0.28 train acc: 91.25% test acc: 88.77%
epoch: 5 [0/60000 (0%)] loss: 0.25 train acc: 93.75% test acc: 89.79%
epoch: 5 [6400/60000 (11%)] loss: 0.21 train acc: 91.21% test acc: 89.90%
epoch: 5 [12800/60000 (21%)] loss: 0.15 train acc: 91.51% test acc: 90.71%
epoch: 5 [19200/60000 (32%)] loss: 0.16 train acc: 91.77% test acc: 90.45%
epoch: 5 [25600/60000 (43%)] loss: 0.21 train acc: 91.84% test acc: 90.56%
epoch: 5 [32000/60000 (53%)] loss: 0.12 train acc: 91.86% test acc: 89.10%
epoch: 5 [38400/60000 (64%)] loss: 0.28 train acc: 91.82% test acc: 90.42%
epoch: 5 [44800/60000 (75%)] loss: 0.15 train acc: 91.88% test acc: 90.19%
epoch: 5 [51200/60000 (85%)] loss: 0.33 train acc: 91.87% test acc: 90.03%
epoch: 5 [57600/60000 (96%)] loss: 0.10 train acc: 91.80% test acc: 90.74%
epoch: 6 [0/60000 (0%)] loss: 0.15 train acc: 93.75% test acc: 90.36%
epoch: 6 [6400/60000 (11%)] loss: 0.31 train acc: 92.28% test acc: 90.85%
epoch: 6 [12800/60000 (21%)] loss: 0.23 train acc: 92.15% test acc: 90.68%
epoch: 6 [19200/60000 (32%)] loss: 0.15 train acc: 92.37% test acc: 90.71%
epoch: 6 [25600/60000 (43%)] loss: 0.31 train acc: 92.29% test acc: 91.02%
epoch: 6 [32000/60000 (53%)] loss: 0.21 train acc: 92.43% test acc: 90.57%
epoch: 6 [38400/60000 (64%)] loss: 0.25 train acc: 92.43% test acc: 90.51%
epoch: 6 [44800/60000 (75%)] loss: 0.21 train acc: 92.48% test acc: 90.56%
epoch: 6 [51200/60000 (85%)] loss: 0.07 train acc: 92.43% test acc: 91.04%
epoch: 6 [57600/60000 (96%)] loss: 0.14 train acc: 92.43% test acc: 90.68%
epoch: 7 [0/60000 (0%)] loss: 0.25 train acc: 89.06% test acc: 91.18%
epoch: 7 [6400/60000 (11%)] loss: 0.11 train acc: 92.51% test acc: 91.09%
epoch: 7 [12800/60000 (21%)] loss: 0.17 train acc: 92.98% test acc: 91.21%
epoch: 7 [19200/60000 (32%)] loss: 0.23 train acc: 93.06% test acc: 90.80%
epoch: 7 [25600/60000 (43%)] loss: 0.18 train acc: 92.95% test acc: 91.39%
epoch: 7 [32000/60000 (53%)] loss: 0.24 train acc: 93.01% test acc: 91.06%
epoch: 7 [38400/60000 (64%)] loss: 0.27 train acc: 92.94% test acc: 91.18%
epoch: 7 [44800/60000 (75%)] loss: 0.31 train acc: 92.77% test acc: 90.88%
epoch: 7 [51200/60000 (85%)] loss: 0.17 train acc: 92.73% test acc: 91.42%
epoch: 7 [57600/60000 (96%)] loss: 0.17 train acc: 92.75% test acc: 90.75%
epoch: 8 [0/60000 (0%)] loss: 0.15 train acc: 95.31% test acc: 91.15%
epoch: 8 [6400/60000 (11%)] loss: 0.18 train acc: 93.13% test acc: 91.42%
epoch: 8 [12800/60000 (21%)] loss: 0.12 train acc: 93.24% test acc: 91.31%
epoch: 8 [19200/60000 (32%)] loss: 0.27 train acc: 93.37% test acc: 91.25%
epoch: 8 [25600/60000 (43%)] loss: 0.17 train acc: 93.38% test acc: 91.52%
epoch: 8 [32000/60000 (53%)] loss: 0.19 train acc: 93.16% test acc: 91.51%
epoch: 8 [38400/60000 (64%)] loss: 0.26 train acc: 93.11% test acc: 91.34%
epoch: 8 [44800/60000 (75%)] loss: 0.44 train acc: 93.05% test acc: 91.35%
epoch: 8 [51200/60000 (85%)] loss: 0.31 train acc: 93.03% test acc: 91.23%
epoch: 8 [57600/60000 (96%)] loss: 0.22 train acc: 93.01% test acc: 90.74%
epoch: 9 [0/60000 (0%)] loss: 0.19 train acc: 93.75% test acc: 91.15%
epoch: 9 [6400/60000 (11%)] loss: 0.27 train acc: 93.64% test acc: 90.78%
epoch: 9 [12800/60000 (21%)] loss: 0.25 train acc: 93.73% test acc: 91.31%
epoch: 9 [19200/60000 (32%)] loss: 0.23 train acc: 93.42% test acc: 89.56%
epoch: 9 [25600/60000 (43%)] loss: 0.27 train acc: 93.24% test acc: 90.82%
epoch: 9 [32000/60000 (53%)] loss: 0.23 train acc: 93.33% test acc: 91.29%
epoch: 9 [38400/60000 (64%)] loss: 0.09 train acc: 93.31% test acc: 91.24%
epoch: 9 [44800/60000 (75%)] loss: 0.25 train acc: 93.31% test acc: 90.78%
epoch: 9 [51200/60000 (85%)] loss: 0.19 train acc: 93.33% test acc: 91.34%
epoch: 9 [57600/60000 (96%)] loss: 0.12 train acc: 93.35% test acc: 91.30%
epoch: 10 [0/60000 (0%)] loss: 0.17 train acc: 93.75% test acc: 91.27%
epoch: 10 [6400/60000 (11%)] loss: 0.13 train acc: 93.81% test acc: 91.61%
epoch: 10 [12800/60000 (21%)] loss: 0.22 train acc: 93.91% test acc: 91.13%
epoch: 10 [19200/60000 (32%)] loss: 0.14 train acc: 93.92% test acc: 91.19%
epoch: 10 [25600/60000 (43%)] loss: 0.22 train acc: 93.92% test acc: 91.78%
epoch: 10 [32000/60000 (53%)] loss: 0.15 train acc: 93.95% test acc: 90.79%
epoch: 10 [38400/60000 (64%)] loss: 0.09 train acc: 93.92% test acc: 91.42%
epoch: 10 [44800/60000 (75%)] loss: 0.12 train acc: 93.86% test acc: 91.62%
epoch: 10 [51200/60000 (85%)] loss: 0.14 train acc: 93.84% test acc: 90.67%
epoch: 10 [57600/60000 (96%)] loss: 0.13 train acc: 93.78% test acc: 91.42%
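`torch.save(net, ...)` above pickles the whole module object, which ties the saved file to the exact class definition and module layout at save time. A more portable pattern is to save only the `state_dict` and rebuild the architecture before loading. A sketch, using a small stand-in module and an illustrative filename (not from the original post):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Stand-in with the same structure idea as the CNN class above."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2))
        self.out = nn.Linear(16 * 14 * 14, 10)

    def forward(self, x):
        x = self.conv(x)
        return self.out(x.view(x.size(0), -1))

net = SmallCNN()
torch.save(net.state_dict(), 'cnn_state.pt')       # save parameters only

model = SmallCNN()                                 # rebuild the architecture first
model.load_state_dict(torch.load('cnn_state.pt'))  # then restore the weights
model.eval()                                       # switch to inference mode
```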
from PIL import Image
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 1, 1                    # how many images to display
for i in range(1, cols * rows + 1):  # with 1x1 this is just [1, 2): a single image
    sample_idx = torch.randint(len(train_dataset),
                               size=(1,)).item()  # pick a random image
    img, label = train_dataset[sample_idx]
    print(img.shape)
    print(label)
    figure.add_subplot(rows, cols, i)  # place image i into the rows x cols grid
    #plt.title(labels_map[label])
    plt.axis("off")
    plt.imshow(img.squeeze(), cmap="gray")
torch.Size([1, 28, 28])
9
model_path = "cnn_test.pt"  # path to the saved .pt file
model = torch.load(model_path)
model.cuda()
print(model)
image_path = "1.png"  # path to the image you want to classify
# load the image with PIL and convert it to a tensor
image = Image.open(image_path)
print(image)
#image = image.convert('L')
transform = transforms.Compose([
    transforms.Resize((28, 28)),
    #transforms.RandomResizedCrop(28),
    #transforms.Grayscale(),
    transforms.ToTensor(),
    # note: training used only ToTensor() (values in [0, 1]), so this Normalize
    # shifts the input distribution relative to what the model was trained on
    transforms.Normalize((.5, .5, .5), (.5, .5, .5)),
])
image_tensor = transform(image).unsqueeze(0).type(torch.cuda.FloatTensor)
#torch.transpose(image_tensor,1,2)
image_tensor.shape
CNN(
(conv1): Sequential(
(0): Conv2d(1, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(1): ReLU()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv2): Sequential(
(0): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(1): ReLU()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(out): Linear(in_features=1568, out_features=10, bias=True)
)
<PIL.PngImagePlugin.PngImageFile image mode=RGB size=102x103 at 0x1F19505CE48>
torch.Size([1, 3, 28, 28])
get_gray = transforms.Compose([
    transforms.Grayscale(),  # collapse the 3 RGB channels down to 1
])
gray_image = get_gray(image_tensor)
gray_image.shape
torch.Size([1, 1, 28, 28])
with torch.no_grad():
    output = model(gray_image)
output[0].shape
torch.Size([1, 10])
with torch.no_grad():
    test_output, _ = model(gray_image)
    pred_y = (torch.max(test_output, 1)[1].data).cpu().numpy()
labels_map[int(pred_y)]
'Dress'
with torch.no_grad():
    output = model(gray_image)
softmax = torch.nn.Softmax(dim=1)
probs = softmax(output[0])
probs
tensor([[1.3675e-04, 4.2272e-01, 3.3998e-11, 5.7714e-01, 2.2880e-16, 4.4630e-10,
1.5328e-11, 5.6080e-26, 3.3082e-10, 2.3702e-28]], device='cuda:0')
max_val, max_idx = torch.max(probs, dim=1)
# print the result
print("max probability:", max_val.item())
print("index of the max:", max_idx.item())
max probability: 0.5771426558494568
index of the max: 3
labels_map[max_idx.item()]
'Dress'
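The prediction steps above (forward pass under `no_grad`, softmax, argmax, label lookup) can be wrapped into one helper. A sketch; the tuple check accounts for the post's CNN returning `(output, x)`, and the linear model and label map below are toy stand-ins just to exercise the function:

```python
import torch
import torch.nn.functional as F

def classify(model, image_tensor, labels_map):
    """Return (label, confidence) for a single preprocessed image tensor."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor)
        if isinstance(logits, tuple):  # the post's CNN forward returns (output, x)
            logits = logits[0]
        probs = F.softmax(logits, dim=1)
        conf, idx = torch.max(probs, dim=1)
    return labels_map[idx.item()], conf.item()

# Toy stand-ins to exercise the helper (not the trained model from the post):
demo_model = torch.nn.Linear(28 * 28, 10)
demo_labels = {i: "class_{}".format(i) for i in range(10)}
label, conf = classify(demo_model, torch.randn(1, 28 * 28), demo_labels)
print(label, conf)
```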
前言 都说大城市现在不好找工作,可小城市却也不好招人. 我们公司招了挺久都没招到,主管感到有些心累. 我提了点建议,是不是面试问的太深了,在这种小城市,能干活就行. 他说自己问的面试题都很浅显,如果答 ...