Technical Background

MindSpore Graph Learning is an efficient, easy-to-use graph learning framework built on MindSpore. Thanks to MindSpore's graph-kernel fusion capability, MindSpore Graph Learning can apply compiler optimizations tailored to the execution patterns peculiar to graph models, helping developers shorten training time. MindSpore Graph Learning also introduces an innovative vertex-centric programming paradigm, which offers a more natural way to express graph neural networks, and it ships with built-in models covering most application scenarios, so that developers can build graph neural networks with ease.

That is the official introduction to mindspore-gl. Its positioning is very close to that of DGL, and according to the data in the paper (reference link 3), mindspore-gl's computational efficiency is even higher than DGL's.

In traditional machine learning we can run operations such as convolutions efficiently on all kinds of tensors. But for a graph-structured network, apart from converting the graph structure into tensor data and then processing the tensors, is there a more convenient way to compute directly on the graph itself? mindspore-gl gives its own answer to this question. Let's take a look at how mindspore-gl is installed and used.

Installing mindspore-gl

Although an official pip installation is available, the package versions offered in the index are quite limited, so we recommend building and installing from source; that also makes it easier to match your locally installed MindSpore version. First clone the repository and change into the graphlearning directory:

  $ git clone https://gitee.com/mindspore/graphlearning.git
  Cloning into 'graphlearning'...
  remote: Enumerating objects: 1275, done.
  remote: Counting objects: 100% (221/221), done.
  remote: Compressing objects: 100% (152/152), done.
  remote: Total 1275 (delta 116), reused 127 (delta 68), pack-reused 1054
  Receiving objects: 100% (1275/1275), 1.41 MiB | 316.00 KiB/s, done.
  Resolving deltas: 100% (715/715), done.
  $ cd graphlearning/
  $ ll
  total 112
  drwxrwxr-x 12 dechin dechin  4096 Nov  9 17:19 ./
  drwxrwxr-x 10 dechin dechin  4096 Nov  9 17:19 ../
  -rwxrwxr-x  1 dechin dechin  1429 Nov  9 17:19 build.sh*
  drwxrwxr-x  2 dechin dechin  4096 Nov  9 17:19 examples/
  -rwxrwxr-x  1 dechin dechin  3148 Nov  9 17:19 FAQ_CN.md*
  -rwxrwxr-x  1 dechin dechin  4148 Nov  9 17:19 faq.md*
  drwxrwxr-x  8 dechin dechin  4096 Nov  9 17:19 .git/
  -rwxrwxr-x  1 dechin dechin  1844 Nov  9 17:19 .gitignore*
  drwxrwxr-x  2 dechin dechin  4096 Nov  9 17:19 images/
  drwxrwxr-x  3 dechin dechin  4096 Nov  9 17:19 .jenkins/
  -rw-rw-r--  1 dechin dechin 11357 Nov  9 17:19 LICENSE
  drwxrwxr-x 11 dechin dechin  4096 Nov  9 17:19 mindspore_gl/
  drwxrwxr-x 11 dechin dechin  4096 Nov  9 17:19 model_zoo/
  -rwxrwxr-x  1 dechin dechin    52 Nov  9 17:19 OWNERS*
  -rwxrwxr-x  1 dechin dechin  3648 Nov  9 17:19 README_CN.md*
  -rwxrwxr-x  1 dechin dechin  4570 Nov  9 17:19 README.md*
  drwxrwxr-x  4 dechin dechin  4096 Nov  9 17:19 recommendation/
  -rwxrwxr-x  1 dechin dechin   922 Nov  9 17:19 RELEASE.md*
  -rwxrwxr-x  1 dechin dechin   108 Nov  9 17:19 requirements.txt*
  drwxrwxr-x  2 dechin dechin  4096 Nov  9 17:19 scripts/
  -rwxrwxr-x  1 dechin dechin  4164 Nov  9 17:19 setup.py*
  drwxrwxr-x  5 dechin dechin  4096 Nov  9 17:19 tests/
  drwxrwxr-x  5 dechin dechin  4096 Nov  9 17:19 tools/

Then run the build script provided by the project:

  $ bash build.sh
  mkdir: created directory '/home/dechin/projects/mindspore/graphlearning/output'
  Collecting Cython>=0.29.24
    Downloading Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (2.0 MB)
       |████████████████████████████████| 2.0 MB 823 kB/s
  Collecting ast-decompiler>=0.6.0
    Downloading ast_decompiler-0.7.0-py3-none-any.whl (13 kB)
  Collecting astpretty>=2.1.0
    Downloading astpretty-3.0.0-py2.py3-none-any.whl (4.9 kB)
  Collecting scikit-learn>=0.24.2
    Downloading scikit_learn-1.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (30.8 MB)
       |████████████████████████████████| 30.8 MB 2.6 MB/s
  Requirement already satisfied: numpy>=1.21.2 in /home/dechin/anaconda3/envs/mindspore16/lib/python3.9/site-packages (from -r /home/dechin/projects/mindspore/graphlearning/requirements.txt (line 5)) (1.23.2)
  Collecting networkx>=2.6.3
    Downloading networkx-2.8.8-py3-none-any.whl (2.0 MB)
       |████████████████████████████████| 2.0 MB 4.6 MB/s
  Requirement already satisfied: scipy>=1.3.2 in /home/dechin/anaconda3/envs/mindspore16/lib/python3.9/site-packages (from scikit-learn>=0.24.2->-r /home/dechin/projects/mindspore/graphlearning/requirements.txt (line 4)) (1.5.3)
  Collecting threadpoolctl>=2.0.0
    Downloading threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
  Collecting joblib>=1.0.0
    Downloading joblib-1.2.0-py3-none-any.whl (297 kB)
       |████████████████████████████████| 297 kB 2.2 MB/s
  Installing collected packages: threadpoolctl, joblib, scikit-learn, networkx, Cython, astpretty, ast-decompiler
  Successfully installed Cython-0.29.32 ast-decompiler-0.7.0 astpretty-3.0.0 joblib-1.2.0 networkx-2.8.8 scikit-learn-1.1.3 threadpoolctl-3.1.0
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.9
  ...
  removing build/bdist.linux-x86_64/wheel
  mindspore_gl_gpu-0.1-cp39-cp39-linux_x86_64.whl
  ------Successfully created mindspore_gl package------

If you see the message above, the build succeeded. Next, simply install the generated wheel package with pip:

  $ python3 -m pip install ./output/mindspore_gl_gpu-0.1-cp39-cp39-linux_x86_64.whl
  Processing ./output/mindspore_gl_gpu-0.1-cp39-cp39-linux_x86_64.whl
  Requirement already satisfied: Cython in /home/dechin/.local/lib/python3.9/site-packages (from mindspore-gl-gpu==0.1) (0.29.32)
  Requirement already satisfied: astpretty in /home/dechin/.local/lib/python3.9/site-packages (from mindspore-gl-gpu==0.1) (3.0.0)
  Requirement already satisfied: ast-decompiler>=0.3.2 in /home/dechin/.local/lib/python3.9/site-packages (from mindspore-gl-gpu==0.1) (0.7.0)
  Requirement already satisfied: scikit-learn>=0.24.2 in /home/dechin/.local/lib/python3.9/site-packages (from mindspore-gl-gpu==0.1) (1.1.3)
  Requirement already satisfied: threadpoolctl>=2.0.0 in /home/dechin/.local/lib/python3.9/site-packages (from scikit-learn>=0.24.2->mindspore-gl-gpu==0.1) (3.1.0)
  Requirement already satisfied: joblib>=1.0.0 in /home/dechin/.local/lib/python3.9/site-packages (from scikit-learn>=0.24.2->mindspore-gl-gpu==0.1) (1.2.0)
  Requirement already satisfied: scipy>=1.3.2 in /home/dechin/anaconda3/envs/mindspore16/lib/python3.9/site-packages (from scikit-learn>=0.24.2->mindspore-gl-gpu==0.1) (1.5.3)
  Requirement already satisfied: numpy>=1.17.3 in /home/dechin/anaconda3/envs/mindspore16/lib/python3.9/site-packages (from scikit-learn>=0.24.2->mindspore-gl-gpu==0.1) (1.23.2)
  Installing collected packages: mindspore-gl-gpu
  Successfully installed mindspore-gl-gpu-0.1

We can verify that mindspore-gl was installed successfully with the following command (the warnings that follow are produced by MindSpore itself, not by mindspore-gl, and can usually be ignored):

  $ python3 -c 'import mindspore_gl'
  [WARNING] ME(3662914:140594637309120,MainProcess):2022-11-09-17:22:29.348.03 [mindspore/run_check/_check_version.py:189] Cuda ['10.1', '11.1'] version(need by mindspore-gpu) is not found, please confirm that the path of cuda is set to the env LD_LIBRARY_PATH, please refer to the installation guidelines: https://www.mindspore.cn/install
  ... (the same warning is printed several more times)

A simple mindspore-gl example

Let us start from a fairly basic case: the simplest fully connected graph, a triangle. Its vertices are numbered 0, 1 and 2, with node values 1, 2 and 3. One thing to keep in mind is that the graphs mindspore-gl builds are directed; if we need an undirected graph, we have to copy and concatenate a reversed set of edge indices by hand. A typical way of using mindspore-gl is to define a graph structure GraphField from a sparse adjacency list in COO format and then pass the graph into a GNNCell as one of its inputs.
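
As a quick illustration of the copy-and-concatenate trick for building an undirected graph, here is a minimal sketch based on the GraphField signature used below (the variable names are my own and not part of the library):

  import mindspore as ms
  from mindspore_gl import GraphField

  # The directed triangle used in this section: edges 0->1, 1->2, 2->0
  src = [0, 1, 2]
  dst = [1, 2, 0]

  # To treat it as an undirected graph, append the reversed edges by hand,
  # so that every edge exists in both directions (6 directed edges in total).
  und_src = ms.Tensor(src + dst, ms.int32)
  und_dst = ms.Tensor(dst + src, ms.int32)
  und_graph_field = GraphField(und_src, und_dst, 3, 6)  # 3 nodes, 6 edges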

During the computation, mindspore-gl first performs a compilation step. mindspore-gl lets the user traverse all nodes of the graph, or the neighbors of a node, with a very simple for loop, and then optimizes and compiles that operation behind the scenes. To demonstrate the effect of the compilation and the conciseness of the syntax, mindspore-gl prints, during compilation, the code one would have to write without mindspore-gl side by side with the original; the comparison makes it clear how much mindspore-gl improves programming convenience.

  In [1]: import mindspore as ms
  In [2]: from mindspore_gl import Graph, GraphField
  In [3]: from mindspore_gl.nn import GNNCell
  In [4]: n_nodes = 3
  In [5]: n_edges = 3
  In [6]: src_idx = ms.Tensor([0, 1, 2], ms.int32)
  In [7]: dst_idx = ms.Tensor([1, 2, 0], ms.int32)
  In [8]: graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
  In [9]: node_feat = ms.Tensor([[1], [2], [3]], ms.float32)
  In [10]: class TestSetVertexAttr(GNNCell):
      ...:     def construct(self, x, y, g: Graph):
      ...:         g.set_src_attr({"hs": x})
      ...:         g.set_dst_attr({"hd": y})
      ...:         return [v.hd for v in g.dst_vertex] * [u.hs for u in g.src_vertex]
      ...:
  In [11]: ret = TestSetVertexAttr()(node_feat[src_idx], node_feat[dst_idx], *graph_field.get_graph()).asnumpy().tolist()
  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  | def construct(self, x, y, g: Graph): 1 || 1 def construct( |
  | || self, |
  | || x, |
  | || y, |
  | || src_idx, |
  | || dst_idx, |
  | || n_nodes, |
  | || n_edges, |
  | || UNUSED_0=None, |
  | || UNUSED_1=None, |
  | || UNUSED_2=None |
  | || ): |
  | || 2 SCATTER_ADD = ms.ops.TensorScatterAdd() |
  | || 3 SCATTER_MAX = ms.ops.TensorScatterMax() |
  | || 4 SCATTER_MIN = ms.ops.TensorScatterMin() |
  | || 5 GATHER = ms.ops.Gather() |
  | || 6 ZEROS = ms.ops.Zeros() |
  | || 7 FILL = ms.ops.Fill() |
  | || 8 MASKED_FILL = ms.ops.MaskedFill() |
  | || 9 IS_INF = ms.ops.IsInf() |
  | || 10 SHAPE = ms.ops.Shape() |
  | || 11 RESHAPE = ms.ops.Reshape() |
  | || 12 scatter_src_idx = RESHAPE(src_idx, (SHAPE(src_idx)[0], 1)) |
  | || 13 scatter_dst_idx = RESHAPE(dst_idx, (SHAPE(dst_idx)[0], 1)) |
  | g.set_src_attr({'hs': x}) 2 || 14 hs, = [x] |
  | g.set_dst_attr({'hd': y}) 3 || 15 hd, = [y] |
  | return [v.hd for v in g.dst_vertex] * [u.hs for u in g.src_vertex] 4 || 16 return hd * hs |
  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  In [12]: print(ret)
  [[2.0], [6.0], [3.0]]

From this result we obtain, for each of the three edges, the product of the node values at its two ends. Besides node ids and node values, mindspore-gl also supports retrieving quantities such as a node's neighbors and its degree (these are illustrated in a figure from reference link 2).
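
The result can be reproduced by hand from the COO edge lists, and the same lists also show directly what quantities like in-degree and out-degree look like for this graph. A plain-NumPy cross-check of the numbers (this is not the mindspore-gl API):

  import numpy as np

  # COO edge lists of the directed triangle above
  src = np.array([0, 1, 2])
  dst = np.array([1, 2, 0])
  feat = np.array([[1.0], [2.0], [3.0]])

  # The returned values are the per-edge products hd * hs
  print((feat[dst] * feat[src]).tolist())  # [[2.0], [6.0], [3.0]]

  # In- and out-degree can be read straight off the COO lists
  print(np.bincount(dst, minlength=3))     # in-degree of nodes 0..2  -> [1 1 1]
  print(np.bincount(src, minlength=3))     # out-degree of nodes 0..2 -> [1 1 1]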

Beyond the basic APIs, it is also worth becoming familiar with the error messages that can show up when using mindspore-gl.

mindspore-gl also offers a feature that is very useful for large datasets; only the rough idea is given here, since I have not yet run into a scenario that requires it. The idea is to split a large graph into data chunks of different sizes, according to the number of neighbors, for storage and computation. On the one hand this avoids dynamic shapes, since the network may be changing all the time; on the other hand, neighbor counts in a graph are rarely uniformly distributed: a small number of nodes are extremely dense while most of the graph is rather sparse, so with a single fixed shape everything would have to be padded up to the densest dimension, silently wasting an enormous amount of storage. This chunked storage scheme minimizes (GPU) memory usage while also speeding up the computation.
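
As a rough, library-agnostic illustration of why bucketing nodes by neighbor count saves memory compared with padding every node to the maximum degree, here is a small NumPy sketch; the degree distribution and bucket boundaries below are invented for the example and are not mindspore-gl's actual partitioning scheme:

  import numpy as np

  rng = np.random.default_rng(0)
  # A hypothetical graph: most nodes have few neighbors, a few hubs are dense
  degrees = np.concatenate([
      rng.integers(1, 8, size=9900),     # sparse majority (1-7 neighbors)
      rng.integers(200, 256, size=100),  # a handful of dense hubs
  ])

  # Strategy 1: one static shape, pad every node to the maximum degree
  padded_slots = degrees.size * degrees.max()

  # Strategy 2: group nodes into buckets by degree, pad only within each bucket
  bucket_upper_bounds = [8, 16, 32, 64, 128, 256]
  bucketed_slots, lo = 0, 0
  for hi in bucket_upper_bounds:
      in_bucket = (degrees > lo) & (degrees <= hi)
      bucketed_slots += in_bucket.sum() * hi  # pad to the bucket's upper bound
      lo = hi

  print(padded_slots, bucketed_slots)  # the bucketed layout needs far fewer slots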



Finally, let us look at one more simple aggregation example, which computes, for each node, the sum of the values of its incoming neighbor nodes:

  import mindspore as ms
  from mindspore import ops
  from mindspore_gl import Graph, GraphField
  from mindspore_gl.nn import GNNCell

  n_nodes = 3  # only nodes 0, 1, 2 appear as destinations, so 3 rows of output
  n_edges = 5  # five directed edges: 0->1, 1->2, 2->0, 3->1, 4->2
  src_idx = ms.Tensor([0, 1, 2, 3, 4], ms.int32)
  dst_idx = ms.Tensor([1, 2, 0, 1, 2], ms.int32)
  graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
  node_feat = ms.Tensor([[1], [2], [3], [4], [5]], ms.float32)

  class GraphConvCell(GNNCell):
      def construct(self, x, y, g: Graph):
          g.set_src_attr({"hs": x})
          g.set_dst_attr({"hd": y})
          # for each destination vertex, sum the features of its in-neighbors
          return [g.sum([u.hs for u in v.innbs]) for v in g.dst_vertex]

  ret = GraphConvCell()(node_feat[src_idx], node_feat[dst_idx], *graph_field.get_graph()).asnumpy().tolist()
  print(ret)

Here a single interface such as graph.sum is all it takes, which is very convenient to write and keeps the code highly readable.

  $ python3 test_msgl_01.py
  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  | def construct(self, x, y, g: Graph): 1 || 1 def construct( |
  | || self, |
  | || x, |
  | || y, |
  | || src_idx, |
  | || dst_idx, |
  | || n_nodes, |
  | || n_edges, |
  | || UNUSED_0=None, |
  | || UNUSED_1=None, |
  | || UNUSED_2=None |
  | || ): |
  | || 2 SCATTER_ADD = ms.ops.TensorScatterAdd() |
  | || 3 SCATTER_MAX = ms.ops.TensorScatterMax() |
  | || 4 SCATTER_MIN = ms.ops.TensorScatterMin() |
  | || 5 GATHER = ms.ops.Gather() |
  | || 6 ZEROS = ms.ops.Zeros() |
  | || 7 FILL = ms.ops.Fill() |
  | || 8 MASKED_FILL = ms.ops.MaskedFill() |
  | || 9 IS_INF = ms.ops.IsInf() |
  | || 10 SHAPE = ms.ops.Shape() |
  | || 11 RESHAPE = ms.ops.Reshape() |
  | || 12 scatter_src_idx = RESHAPE(src_idx, (SHAPE(src_idx)[0], 1)) |
  | || 13 scatter_dst_idx = RESHAPE(dst_idx, (SHAPE(dst_idx)[0], 1)) |
  | g.set_src_attr({'hs': x}) 2 || 14 hs, = [x] |
  | g.set_dst_attr({'hd': y}) 3 || 15 hd, = [y] |
  | return [g.sum([u.hs for u in v.innbs]) for v in g.dst_vertex] 4 || 16 SCATTER_INPUT_SNAPSHOT1 = GATHER(hs, src_idx, 0) |
  | || 17 return SCATTER_ADD( |
  | || ZEROS( |
  | || (n_nodes,) + SHAPE(SCATTER_INPUT_SNAPSHOT1)[1:], |
  | || SCATTER_INPUT_SNAPSHOT1.dtype |
  | || ), |
  | || scatter_dst_idx, |
  | || SCATTER_INPUT_SNAPSHOT1 |
  | || ) |
  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  [[3.0], [5.0], [7.0]]
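
The aggregation can be double-checked with a plain scatter-add in NumPy, mirroring the generated construct() shown above (again just a cross-check, not the mindspore-gl API):

  import numpy as np

  src = np.array([0, 1, 2, 3, 4])
  dst = np.array([1, 2, 0, 1, 2])
  feat = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

  # Scatter-add the gathered source features onto the destination nodes
  out = np.zeros((3, 1))
  np.add.at(out, dst, feat[src])
  print(out.tolist())  # [[3.0], [5.0], [7.0]]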

The topology of this example is the one encoded by src_idx and dst_idx above: the directed edges 0→1, 1→2, 2→0, 3→1 and 4→2.

Summary

From element-wise operations to matrix operations, on to tensor operations, and finally to the abstraction of operations on graphs: every stage in this evolution of computing paradigms needs matching tooling, such as numpy in the matrix era, mindspore in the tensor era, and mindspore-gl in the graph era. It is hard to say that any one of these computing paradigms is inherently more advanced, but for coders, "the formula is the code" is an evergreen goal, and mindspore-gl does that job very well. Not only is graph-style programming more readable, the performance of the computation on GPU is also heavily optimized.

Copyright Statement

This article was first published at: https://www.cnblogs.com/dechinphy/p/mindspore_gl.html

Author ID: DechinPhy

More original articles: https://www.cnblogs.com/dechinphy/

Tips and donations: https://www.cnblogs.com/dechinphy/gallery/image/379634.html

Tencent Cloud column (mirror): https://cloud.tencent.com/developer/column/91958

CSDN mirror: https://blog.csdn.net/baidu_37157624?spm=1008.2028.3001.5343

51CTO mirror: https://blog.51cto.com/u_15561675

References

  1. https://gitee.com/mindspore/graphlearning
  2. https://www.bilibili.com/video/BV14a411976w/
  3. Yidi Wu et al., Seastar: Vertex-Centric Programming for Graph Neural Networks.
