Background

A recent security scan flagged Hadoop with an unauthorized-access finding ("Hadoop 未授权访问", principle scan). Following the official documentation and a few other references, I enabled service-level authorization in a test environment. Along the way I ran into quite a few pitfalls, or at least things I had not thought through properly, so I am recording them here. The whole exercise took two days.

Environment

Hadoop version: 2.6.2

Steps

1. To enable service-level authorization, set the parameter hadoop.security.authorization to true in core-site.xml:

<property>
<name>hadoop.security.authorization</name>
<value>true</value>
<description>Is service-level authorization enabled?</description>
</property>

Note: this switch only enables service-level authorization; the authentication mode is controlled separately by hadoop.security.authentication, which defaults to simple, i.e. identities are taken from the OS user. With the property above set to true, service-level authorization is now on.
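For reference, here is a minimal core-site.xml sketch that spells out both settings together; the hadoop.security.authentication entry is optional, since simple is already the default:

<property>
<name>hadoop.security.authentication</name>
<value>simple</value>
<description>Authentication mode: simple (OS user based) or kerberos.</description>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
<description>Is service-level authorization enabled?</description>
</property>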

After adding this parameter, restart the NameNode:

sbin/hadoop-daemon.sh stop namenode
sbin/hadoop-daemon.sh start namenode

To confirm the setting really took effect, check the Hadoop security audit log SecurityAuth-aiprd.audit: if new entries containing authorization information keep being appended, the feature is enabled.
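One quick way to watch this (a sketch; the log directory and file name depend on your logging settings, here assumed to be the defaults under $HADOOP_HOME/logs) is to tail the audit log while issuing a client request:

# Assumed default log location; adjust to your deployment
tail -f $HADOOP_HOME/logs/SecurityAuth-aiprd.audit
# In another terminal, trigger an RPC call, e.g.:
# bin/hdfs dfs -ls /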

2. The per-service authorization ACLs live in the configuration file hadoop-policy.xml:

<configuration>
<property>
<name>security.client.protocol.acl</name>
<value>*</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.client.datanode.protocol.acl</name>
<value>*</value>
<description>ACL for ClientDatanodeProtocol, the client-to-datanode protocol
for block recovery.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.datanode.protocol.acl</name>
<value>*</value>
<description>ACL for DatanodeProtocol, which is used by datanodes to
communicate with the namenode.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.inter.datanode.protocol.acl</name>
<value>*</value>
<description>ACL for InterDatanodeProtocol, the inter-datanode protocol
for updating generation timestamp.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.namenode.protocol.acl</name>
<value>*</value>
<description>ACL for NamenodeProtocol, the protocol used by the secondary
namenode to communicate with the namenode.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.admin.operations.protocol.acl</name>
<value>*</value>
<description>ACL for AdminOperationsProtocol. Used for admin commands.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.refresh.user.mappings.protocol.acl</name>
<value>*</value>
<description>ACL for RefreshUserMappingsProtocol. Used to refresh
users mappings. The ACL is a comma-separated list of user and
group names. The user and group list is separated by a blank. For
e.g. "alice,bob users,wheel". A special value of "*" means all
users are allowed.</description>
</property>

<property>
<name>security.refresh.policy.protocol.acl</name>
<value>*</value>
<description>ACL for RefreshAuthorizationPolicyProtocol, used by the
dfsadmin and mradmin commands to refresh the security policy in-effect.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.ha.service.protocol.acl</name>
<value>*</value>
<description>ACL for HAService protocol used by HAAdmin to manage the
active and stand-by states of namenode.</description>
</property>

<property>
<name>security.zkfc.protocol.acl</name>
<value>*</value>
<description>ACL for access to the ZK Failover Controller
</description>
</property>

<property>
<name>security.qjournal.service.protocol.acl</name>
<value>*</value>
<description>ACL for QJournalProtocol, used by the NN to communicate with
JNs when using the QuorumJournalManager for edit logs.</description>
</property>

<property>
<name>security.mrhs.client.protocol.acl</name>
<value>*</value>
<description>ACL for HSClientProtocol, used by job clients to
communciate with the MR History Server job status etc.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<!-- YARN Protocols -->

<property>
<name>security.resourcetracker.protocol.acl</name>
<value>*</value>
<description>ACL for ResourceTrackerProtocol, used by the
ResourceManager and NodeManager to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.resourcemanager-administration.protocol.acl</name>
<value>*</value>
<description>ACL for ResourceManagerAdministrationProtocol, for admin commands.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.applicationclient.protocol.acl</name>
<value>*</value>
<description>ACL for ApplicationClientProtocol, used by the ResourceManager
and applications submission clients to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.applicationmaster.protocol.acl</name>
<value>*</value>
<description>ACL for ApplicationMasterProtocol, used by the ResourceManager
and ApplicationMasters to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.containermanagement.protocol.acl</name>
<value>*</value>
<description>ACL for ContainerManagementProtocol protocol, used by the NodeManager
and ApplicationMasters to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.resourcelocalizer.protocol.acl</name>
<value>*</value>
<description>ACL for ResourceLocalizer protocol, used by the NodeManager
and ResourceLocalizer to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.job.task.protocol.acl</name>
<value>*</value>
<description>ACL for TaskUmbilicalProtocol, used by the map and reduce
tasks to communicate with the parent tasktracker.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.job.client.protocol.acl</name>
<value>*</value>
<description>ACL for MRClientProtocol, used by job clients to
communciate with the MR ApplicationMaster to query job status etc.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>

<property>
<name>security.applicationhistory.protocol.acl</name>
<value>*</value>
<description>ACL for ApplicationHistoryProtocol, used by the timeline
server and the generic history service client to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
</configuration>

Note: the shipped file defines an ACL for each of the protocols listed above (twenty-one in this version, covering HDFS, common and YARN/MapReduce services), and every one of them defaults to *, meaning any user may access that service.
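To get a quick overview of the ACL keys defined in the file (a trivial sketch, assuming the standard etc/hadoop layout of the distribution):

# List all service ACL property names
grep '<name>' etc/hadoop/hadoop-policy.xml
# Count them
grep -c '<name>' etc/hadoop/hadoop-policy.xml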

3. For now I only need to control which client users may access the NameNode, i.e. modify the value of security.client.protocol.acl:

<property>
<name>security.client.protocol.acl</name>
<value>aiprd</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.</description>
</property>

Note: this means only client processes whose user is aiprd may access the NameNode.

Refresh the ACL configuration:

bin/hdfs dfsadmin -refreshServiceAcl

The value format is as follows:

<property>
<name>security.job.submission.protocol.acl</name>
<value>user1,user2 group1,group2</value>
</property>

Note: users are separated by commas, groups are separated by commas, and the user list is separated from the group list by a single blank. If there are no users, the value must start with a blank, followed by the group list.
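For example (hypothetical values, reusing the users and group from this article), a users-only ACL and a groups-only ACL would look like this; note the leading blank in the groups-only case:

<!-- users only -->
<property>
<name>security.client.protocol.acl</name>
<value>aiprd,aiprd1</value>
</property>

<!-- groups only: the value starts with a blank -->
<property>
<name>security.client.protocol.acl</name>
<value> hadoop</value>
</property>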

4. Verify by accessing files in HDFS from a remote client:

[aiprd@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/
Found 10 items
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/hbase
drwxr-xr-x - aiprd hadoop hdfs://hadoop1:9000/test01
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test02
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test03
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test07
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test08
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test09
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test10
drwxrwx--- - aiprd supergroup hdfs://hadoop1:9000/test11
drwxr-xr-x - aiprd1 supergroup hdfs://hadoop1:9000/test12

Note: on the client machine the Hadoop client program is deployed under the aiprd user, and with that user the command can list the files and directories. aiprd is also the user that starts the NameNode, i.e. the HDFS superuser, which is why the entries shown are owned by aiprd.

5. Test whether another user can be used or added:

<property>
<name>security.client.protocol.acl</name>
<value>aiprd1</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.</description>
</property>

Refresh the ACL configuration:

bin/hdfs dfsadmin -refreshServiceAcl

The value has been changed to aiprd1, so only client processes running as aiprd1 should be able to connect.

6. On the client, keep using the Hadoop client previously deployed under the aiprd user:

[aiprd@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/
ls: User aiprd (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null

Note: the aiprd user can no longer access the NameNode.

7. On the client, deploy the Hadoop client under the aiprd1 user as well and try again:

[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found 6 items
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/01
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/02
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/03
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/04
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/05
drwxr-xr-x - aiprd1 supergroup hdfs://hadoop1:9000/test12/10

Note: access succeeds this time. So when authorizing by user, the OS user the client process runs as must match a user configured in hadoop-policy.xml, otherwise access is denied.
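As a hedged aside on how SIMPLE authentication behaves in general: the identity the client presents is simply the OS user the process runs as, and Hadoop also honours the HADOOP_USER_NAME environment variable, so this kind of ACL restricts well-behaved clients rather than determined attackers:

# On the client: the identity sent to the NameNode under SIMPLE auth
whoami
# The reported identity can be overridden via an environment variable:
export HADOOP_USER_NAME=aiprd1
hdfs dfs -ls hdfs://hadoop1:9000/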

Since a service-level ACL value can contain users as well as groups, and users have been verified, let's verify groups next; this is where most of the pitfalls showed up.

1. Still the same parameter, security.client.protocol.acl, but this time with a group:

<property>
<name>security.client.protocol.acl</name>
<value>aiprd hadoop</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.</description>
</property>

Refresh the ACL configuration:

bin/hdfs dfsadmin -refreshServiceAcl

The question is how the group is checked. The user check earlier was based on the OS user, so presumably this works the same way, i.e. it checks whether my user belongs to the configured group.

2. The client program running under the aiprd user can access HDFS, as already verified above.

3. On the client, deploy the Hadoop client under aiprd1. Normally it cannot access HDFS, but after adding aiprd1 to the hadoop group it should, in theory, be able to:

[aiprd1@localhost ~]$ id aiprd1
uid=(aiprd1) gid=(aiprd1) groups=(aiprd1),(hadoop)
[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
ls: User aiprd1 (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null

Verification shows it still cannot, so the hadoop group membership did not take effect.
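A useful diagnostic at this point (a sketch; run it as a user that is still authorized, e.g. aiprd on the NameNode host) is to ask the NameNode directly which groups it resolves for a user:

# Ask the NameNode for its view of the user-to-group mapping
bin/hdfs groups aiprd aiprd1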

I tried the following, none of which helped:

  • 1. Changed hadoop.security.group.mapping; this parameter actually has a reasonable default and does not need to be set.
  • 2. Created the hadoop group on every HDFS node; still no luck.
  • 3. The default group of the files in HDFS is supergroup, so I also tried adding aiprd1 to supergroup; no effect.
  • 4. Used the superuser aiprd to change the group of the HDFS files to hadoop; no effect.
  • 5. Tried adding aiprd to the hadoop group on the NameNode; no effect.

Out of better ideas, I turned on DEBUG logging on the NameNode side.
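One way to raise the log level at runtime (a sketch; it assumes the NameNode web UI is on the default port 50070 on hadoop1, and you can just as well edit log4j.properties and restart instead):

# Raise the security classes to DEBUG without restarting the NameNode
bin/hadoop daemonlog -setlevel hadoop1:50070 org.apache.hadoop.security DEBUG

With DEBUG enabled, the NameNode log contained the following: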

WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user aiprd1: id: aiprd1: No such user

WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user aiprd1
DEBUG org.apache.hadoop.ipc.Server: Socket Reader: responding to null from 192.168.30.1
org.apache.hadoop.security.authorize.AuthorizationException: User aiprd1 (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null

In other words, when the NameNode tried to look up the groups for this user, it could not find the user at all, which seemed strange since the user clearly existed (on the client). Searching on this error eventually turned up the following article, which provided the hint:

https://www.e-learn.cn/content/wangluowenzhang/1136832
To accomplish your goal you'd need to add your user account (clott) on the NameNode machine and add it to hadoop group there.

If you are going to run MapReduce with your user, you'd need your user account to be configured on NodeManager hosts as well.

4. Following that advice, create the aiprd1 user on the NameNode host and add it to the hadoop group there:

[root@hadoop1 ~]# useradd -G hadoop aiprd1
[root@hadoop1 ~]# id aiprd1
uid=503(aiprd1) gid=503(aiprd1) groups=503(aiprd1),502(hadoop)
[root@hadoop1 ~]# su - aiprd
[aiprd@hadoop1 ~]$ jps
15289 NameNode
15644 Jps

Note: this host runs the NameNode.

5. Run the query again from the Hadoop client, under the aiprd1 user:

[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found 6 items
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/01
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/02
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/03
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/04
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/05
drwxr-xr-x - aiprd1 supergroup hdfs://hadoop1:9000/test12/10

Now the query works.

Next, on the client, remove aiprd1 from the hadoop group:

[aiprd1@localhost ~]$ id
uid=(aiprd1) gid=(aiprd1) groups=(aiprd1)

Run the query again:

[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found 6 items
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/01
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/02
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/03
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/04
drwxr-xr-x - aiprd supergroup hdfs://hadoop1:9000/test12/05
drwxr-xr-x - aiprd1 supergroup hdfs://hadoop1:9000/test12/10

It still works. Clearly the group in the ACL has nothing to do with the groups the client-side user belongs to; group membership has to be configured on the NameNode host.
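One related pitfall worth hedging against: the NameNode caches user-to-group lookups (hadoop.security.groups.cache.secs, 300 seconds by default), so a membership change on the NameNode host may not be visible immediately; you can force a refresh instead of waiting:

# Refresh the NameNode's cached user-to-group mappings
bin/hdfs dfsadmin -refreshUserToGroupsMappings

This command is itself governed by security.refresh.user.mappings.protocol.acl in the policy file above.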

The official documentation explains it as follows:

Once a username has been determined as described above, the list of groups is determined by a group mapping service, configured by the hadoop.security.group.mapping property. The default implementation, org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, will determine if the Java Native Interface (JNI) is available. If JNI is available, the implementation will use the API within hadoop to resolve a list of groups for a user. If JNI is not available then the shell implementation, org.apache.hadoop.security.ShellBasedUnixGroupsMapping, is used. This implementation shells out with the bash -c groups command (for a Linux/Unix environment) or the net group command (for a Windows environment) to resolve a list of groups for a user.

An alternate implementation, which connects directly to an LDAP server to resolve the list of groups, is available via org.apache.hadoop.security.LdapGroupsMapping. However, this provider should only be used if the required groups reside exclusively in LDAP, and are not materialized on the Unix servers. More information on configuring the group mapping service is available in the Javadocs.

For HDFS, the mapping of users to groups is performed on the NameNode. Thus, the host system configuration of the NameNode determines the group mappings for the users.

Note that HDFS stores the user and group of a file or directory as strings; there is no conversion from user and group identity numbers as is conventional in Unix.

For HDFS, the user-to-group mapping is performed on the NameNode, so it is the host system configuration of the NameNode that determines the group mapping for each user.

Only after the experiment did I really understand this sentence; I had previously assumed the client sent the user's group information along and the NameNode merely checked it against the ACL.

So at this point both user-based and group-based service-level ACLs work, and the remaining services can be configured the same way as needed. All this controls is which users and groups are allowed to connect to each service.

Summary

1. Setting hadoop.security.authorization to true enables service-level authorization; authentication remains simple, i.e. based on the OS user. Restart the NameNode after changing it.

2. For a user-based ACL, access is granted as long as the user configured in the service ACL matches the OS user the client process runs as.

3. For a group-based ACL, if the client accesses HDFS as user A, then user A must also exist on the NameNode host and be a member of the ACL group there. The check works like this: the NameNode takes the client's user name, say A, and looks up A's groups on the NameNode host. If A does not exist there, authorization fails; if A exists but is not in the ACL group, authorization fails; if A exists and is in the ACL group, authorization succeeds.

4. The group configured in the ACL has nothing to do with the groups the client-side OS user belongs to.

5. After every change to hadoop-policy.xml, remember to run the refresh command.

One more thing: parameter names and behaviour may differ between versions, so read the documentation that matches your own Hadoop version:

https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html

https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Group_Mapping

Document created: 2019-08-15 17:30:24
