Writing Custom Providers
Reposted from https://www.terraform.io/docs/extend/writing-custom-providers.html; kept here as a detailed record.
In Terraform, a Provider is the logical abstraction of an upstream API. This guide details how to build a custom provider for Terraform.
NOTE: This guide details steps to author code and compile a working Provider. It omits many implementation details in order to get developers going with coding an example Provider and executing it with Terraform. Please refer to the rest of the Extending Terraform section for a more complete reference on authoring Providers and Resources.
»Why?
There are a few possible reasons for authoring a custom Terraform provider, such as:
- An internal private cloud whose functionality is either proprietary or would not benefit the open source community.
- A "work in progress" provider being tested locally before contributing back.
- Extensions of an existing provider.
»Local Setup
Terraform supports a plugin model, and all providers are actually plugins. Plugins are distributed as Go binaries. Although it is technically possible to write a plugin in another language, almost all Terraform plugins are written in Go. For more information on installing and configuring Go, please visit the Golang installation guide.
This post assumes familiarity with Golang and basic programming concepts.
As a reminder, all of Terraform's core providers are open source. When stuck or looking for examples, please feel free to reference the open source providers for help.
»The Provider Schema
To start, create a file named provider.go. This is the root of the provider and should include the following boilerplate code:

package main

import (
  "github.com/hashicorp/terraform/helper/schema"
)

func Provider() *schema.Provider {
  return &schema.Provider{
    ResourcesMap: map[string]*schema.Resource{},
  }
}
The helper/schema library is part of Terraform Core. It abstracts many of the complexities and ensures consistency between providers. The example above defines an empty provider (there are no resources).
The *schema.Provider type describes the provider's properties, including:
- the configuration keys it accepts
- the resources it supports
- any callbacks to configure
»Building the Plugin
Go requires a main.go file, which is the default executable when the binary is built. Since Terraform plugins are distributed as Go binaries, it is important to define this entry point with the following code:

package main

import (
  "github.com/hashicorp/terraform/plugin"
  "github.com/hashicorp/terraform/terraform"
)

func main() {
  plugin.Serve(&plugin.ServeOpts{
    ProviderFunc: func() terraform.ResourceProvider {
      return Provider()
    },
  })
}
This establishes the main function to produce a valid, executable Go binary. The contents of the main function consume Terraform's plugin library, which handles all the communication between Terraform core and the plugin.
Next, build the plugin using the Go toolchain:
$ go build -o terraform-provider-example
The output name (-o) is very important. Terraform searches for plugins in the format of:
terraform-<TYPE>-<NAME>
In the case above, the plugin is of type "provider" and of name "example".
To verify things are working correctly, execute the binary just created:
$ ./terraform-provider-example
This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically
Custom built providers can be sideloaded for Terraform to use.
This is the basic project structure and scaffolding for a Terraform plugin. To recap, the file structure is:
.
├── main.go
└── provider.go
»Defining Resources
Terraform providers manage resources. A provider is an abstraction of an upstream API, and a resource is a component of that provider. As an example, the AWS provider supports aws_instance and aws_elastic_ip. DNSimple supports dnsimple_record. Fastly supports fastly_service. Let's add a resource to our fictitious provider.
As a general convention, Terraform providers put each resource in its own file, named after the resource, prefixed with resource_. To create an example_server, this would be resource_server.go by convention:

package main

import (
  "github.com/hashicorp/terraform/helper/schema"
)

func resourceServer() *schema.Resource {
  return &schema.Resource{
    Create: resourceServerCreate,
    Read:   resourceServerRead,
    Update: resourceServerUpdate,
    Delete: resourceServerDelete,

    Schema: map[string]*schema.Schema{
      "address": &schema.Schema{
        Type:     schema.TypeString,
        Required: true,
      },
    },
  }
}
This uses the schema.Resource type. This structure defines the data schema and CRUD operations for the resource; defining these properties is the only requirement to create a resource.

The schema above defines one element, "address", which is a required string. Terraform's schema system automatically enforces validation and type casting.
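Beyond required/optional and type checks, the schema system also lets a provider supply its own validation through a field's ValidateFunc. As a hedged sketch, the core of such a check can first be written as plain Go; validateIPv4 is a name invented here for illustration, and a real provider would wrap logic like this in the signature helper/schema expects:

```go
package main

import (
	"fmt"
	"net"
)

// validateIPv4 is a hypothetical helper: it rejects any value that is not a
// dotted-quad IPv4 address. Logic like this could back a ValidateFunc for
// the "address" field defined above.
func validateIPv4(v string) error {
	ip := net.ParseIP(v)
	if ip == nil || ip.To4() == nil {
		return fmt.Errorf("%q is not a valid IPv4 address", v)
	}
	return nil
}

func main() {
	fmt.Println(validateIPv4("1.2.3.4"))   // <nil>
	fmt.Println(validateIPv4("not-an-ip")) // validation error
}
```

With the real library, this body would sit inside a func(v interface{}, k string) ([]string, []error) assigned to the schema field's ValidateFunc.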
Next there are four "fields" defined: Create, Read, Update, and Delete. The Create, Read, and Delete functions are required for a resource to be functional. There are other functions, but these are the only required ones. Terraform itself handles which function to call and with what data. Based on the schema and current state of the resource, Terraform can determine whether it needs to create a new resource, update an existing one, or destroy it. The create and update functions should always return the read function to ensure the state is reflected in the terraform.state file.
Each of the four struct fields points to a function. While it is technically possible to inline all functions in the resource schema, best practice dictates pulling each function into its own method. This optimizes for both testing and readability. Fill in those stubs now, paying close attention to method signatures.
func resourceServerCreate(d *schema.ResourceData, m interface{}) error {
  return resourceServerRead(d, m)
}

func resourceServerRead(d *schema.ResourceData, m interface{}) error {
  return nil
}

func resourceServerUpdate(d *schema.ResourceData, m interface{}) error {
  return resourceServerRead(d, m)
}

func resourceServerDelete(d *schema.ResourceData, m interface{}) error {
  return nil
}
Lastly, update the provider schema in provider.go to register this new resource.

func Provider() *schema.Provider {
  return &schema.Provider{
    ResourcesMap: map[string]*schema.Resource{
      "example_server": resourceServer(),
    },
  }
}
Build and test the plugin. Everything should compile as-is, although all operations are a no-op.
$ go build -o terraform-provider-example
$ ./terraform-provider-example
This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically
The layout now looks like this:
.
├── main.go
├── provider.go
├── resource_server.go
└── terraform-provider-example
»Invoking the Provider
Previous sections showed running the provider directly via the shell, which outputs a warning message like:
This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically
Terraform plugins should be executed by Terraform directly. To test this, create a main.tf in the working directory (the same place where the plugin exists).
resource "example_server" "my-server" {}
When terraform init is run, Terraform parses configuration files and searches for providers in several locations. For the convenience of plugin developers, this search includes the current working directory. (For full details, see How Terraform Works: Plugin Discovery.)
Run terraform init to discover our newly compiled provider:
$ terraform init
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now execute terraform plan:
$ terraform plan
1 error(s) occurred:
* example_server.my-server: "address": required field is not set
This validates that Terraform is correctly delegating work to our plugin and that our validation is working as intended. Fix the validation error by adding an address field to the resource:
resource "example_server" "my-server" {
address = "1.2.3.4"
}
Execute terraform plan to verify the validation is passing:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ example_server.my-server
id: <computed>
address: "1.2.3.4"
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
It is possible to run terraform apply, but it will be a no-op because all of the resource operations currently take no action.
»Implement Create
Back in resource_server.go, implement the create functionality:

func resourceServerCreate(d *schema.ResourceData, m interface{}) error {
  address := d.Get("address").(string)
  d.SetId(address)
  return resourceServerRead(d, m)
}
This uses the schema.ResourceData API to get the value of "address" provided by the user in the Terraform configuration. Due to the way Go works, we have to type-assert it to string. This is a safe operation, however, since our schema guarantees it will be a string type.
Next, it uses SetId, a built-in function, to set the ID of the resource to the address. The existence of a non-blank ID is what tells Terraform that a resource was created. This ID can be any string value, but should be a value that can be used to read the resource again.
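In this guide the address doubles as the ID. When a single field is not enough to look the resource up again, a common pattern is a compound ID that the Read callback can split apart. The helpers below are invented names, not part of the guide, sketching that round trip:

```go
package main

import (
	"fmt"
	"strings"
)

// buildID and parseID are hypothetical helpers illustrating a compound
// resource ID. The only requirement Terraform imposes is that the ID is a
// non-empty string the Read callback can use to find the resource again.
func buildID(region, name string) string {
	return region + ":" + name
}

func parseID(id string) (region, name string, err error) {
	parts := strings.SplitN(id, ":", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected ID format %q, want region:name", id)
	}
	return parts[0], parts[1], nil
}

func main() {
	id := buildID("us-east-1", "my-server")
	fmt.Println(id) // us-east-1:my-server

	region, name, _ := parseID(id)
	fmt.Println(region, name) // us-east-1 my-server
}
```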
Finally, we must recompile the binary and instruct Terraform to reinitialize it by rerunning terraform init. This is only necessary because we have modified the code and recompiled the binary, and it no longer matches an internal hash Terraform uses to ensure the same binaries are used for each operation.
Run terraform init, and then run terraform plan.
$ go build -o terraform-provider-example
$ terraform init
# ...
$ terraform plan
+ example_server.my-server
address: "1.2.3.4"
Plan: 1 to add, 0 to change, 0 to destroy.
Terraform will ask for confirmation when you run terraform apply. Enter yes to create your example server and commit it to state:
$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ example_server.my-server
id: <computed>
address: "1.2.3.4"
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
example_server.my-server: Creating...
address: "" => "1.2.3.4"
example_server.my-server: Creation complete after 0s (ID: 1.2.3.4)
Since the Create operation used SetId, Terraform believes the resource was created successfully. Verify this by running terraform plan.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
example_server.my-server: Refreshing state... (ID: 1.2.3.4)
------------------------------------------------------------------------
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
Again, because of the call to SetId, Terraform believes the resource was created. When running plan, Terraform properly determines there are no changes to apply.
To verify this behavior, change the value of the address field and run terraform plan again. You should see output like this:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
example_server.my-server: Refreshing state... (ID: 1.2.3.4)
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ example_server.my-server
address: "1.2.3.4" => "5.6.7.8"
Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Terraform detects the change and displays a diff with a ~ prefix, noting the resource will be modified in place rather than created anew.
Run terraform apply to apply the changes. Terraform will again prompt for confirmation:
$ terraform apply
example_server.my-server: Refreshing state... (ID: 1.2.3.4)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ example_server.my-server
address: "1.2.3.4" => "5.6.7.8"
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
example_server.my-server: Modifying... (ID: 1.2.3.4)
address: "1.2.3.4" => "5.6.7.8"
example_server.my-server: Modifications complete after 0s (ID: 1.2.3.4)
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Since we did not implement the Update function, you would expect the terraform plan operation to report changes, but it does not! How were our changes persisted without the Update implementation?
»Error Handling & Partial State
Previously our Update operation succeeded and persisted the new state with an essentially empty function definition. Recall the current update function:

func resourceServerUpdate(d *schema.ResourceData, m interface{}) error {
  return resourceServerRead(d, m)
}
Since resourceServerRead itself returns nil here, the update callback effectively performs return nil, which tells Terraform that the update operation succeeded without error. Terraform assumes this means any changes requested were applied without error. Because of this, our state updated and Terraform believes there are no further changes.
To say it another way: if a callback returns no error, Terraform automatically assumes the entire diff successfully applied, merges the diff into the final state, and persists it.
Functions should never intentionally panic or call os.Exit; always return an error.
In reality, it is a bit more complicated than this. Imagine the scenario where our update function has to update two separate fields which require two separate API calls. What do we do if the first API call succeeds but the second fails? How do we properly tell Terraform to only persist half the diff? This is known as a partial state scenario, and implementing these properly is critical to a well-behaving provider.
Here are the rules for state updating in Terraform. Note that this mentions callbacks we have not discussed, for the sake of completeness.
- If the Create callback returns with or without an error without an ID set using SetId, the resource is assumed not to be created, and no state is saved.
- If the Create callback returns with or without an error and an ID has been set, the resource is assumed created and all state is saved with it. Repeating because it is important: if there is an error, but the ID is set, the state is fully saved.
- If the Update callback returns with or without an error, the full state is saved. If the ID becomes blank, the resource is destroyed (even within an update, though this shouldn't happen except in error scenarios).
- If the Destroy callback returns without an error, the resource is assumed to be destroyed, and all state is removed.
- If the Destroy callback returns with an error, the resource is assumed to still exist, and all prior state is preserved.
- If partial mode (covered next) is enabled when a create or update returns, only the explicitly enabled configuration keys are persisted, resulting in a partial state.
Partial mode is a mode that can be enabled by a callback that tells Terraform that it is possible for partial state to occur. When this mode is enabled, the provider must explicitly tell Terraform what is safe to persist and what is not.
Here is an example of a partial mode with an update function:
func resourceServerUpdate(d *schema.ResourceData, m interface{}) error {
  // Enable partial state mode
  d.Partial(true)

  if d.HasChange("address") {
    // Try updating the address
    if err := updateAddress(d, m); err != nil {
      return err
    }

    d.SetPartial("address")
  }

  // If we were to return here, before disabling partial mode below,
  // then only the "address" field would be saved.

  // We succeeded, disable partial mode. This causes Terraform to save
  // all fields again.
  d.Partial(false)

  return resourceServerRead(d, m)
}
Note: this code will not compile, since there is no updateAddress function. You can implement a dummy version of updateAddress to play around with partial state. Partial state does not mean much in this documentation example, but the principle holds: if updateAddress were to fail, then the address field would not be saved.
»Implementing Destroy
The Destroy callback is exactly what it sounds like: it is called to destroy the resource. This operation should never update any state on the resource. It is not necessary to call d.SetId(""), since any non-error return value assumes the resource was deleted successfully.

func resourceServerDelete(d *schema.ResourceData, m interface{}) error {
  // d.SetId("") is automatically called assuming delete returns no errors, but
  // it is added here for explicitness.
  d.SetId("")
  return nil
}
The destroy function should always handle the case where the resource might already be destroyed (manually, for example). If the resource is already destroyed, this should not return an error. This allows Terraform users to manually delete resources without breaking Terraform. Recompile and reinitialize the Provider:
$ go build -o terraform-provider-example
$ terraform init
#...
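As noted above, a well-behaved destroy treats an already-deleted resource as success. A minimal self-contained sketch of that idempotency (fakeClient is an invented in-memory stand-in, not a real API client):

```go
package main

import "fmt"

// fakeClient is an invented in-memory stand-in for an upstream API.
type fakeClient struct {
	servers map[string]bool
}

// Delete treats "already gone" as success, which is exactly the behavior a
// Destroy callback needs so that out-of-band deletions don't break Terraform.
func (c *fakeClient) Delete(id string) error {
	if !c.servers[id] {
		return nil // already destroyed: not an error
	}
	delete(c.servers, id)
	return nil
}

func main() {
	c := &fakeClient{servers: map[string]bool{"1.2.3.4": true}}
	fmt.Println(c.Delete("1.2.3.4")) // <nil>
	fmt.Println(c.Delete("1.2.3.4")) // <nil> (second delete is a no-op)
}
```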
Run terraform destroy to destroy the resource.
$ terraform destroy
example_server.my-server: Refreshing state... (ID: 5.6.7.8)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- example_server.my-server
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
example_server.my-server: Destroying... (ID: 5.6.7.8)
example_server.my-server: Destruction complete after 0s
Destroy complete! Resources: 1 destroyed.
»Implementing Read
The Read callback is used to sync the local state with the actual upstream state. It is called at various points by Terraform and should be a read-only operation; this callback should never modify the real resource.
If the ID is updated to blank, this tells Terraform the resource no longer exists (maybe it was destroyed out of band). Just like the destroy callback, the Read function should gracefully handle this case.
func resourceServerRead(d *schema.ResourceData, m interface{}) error {
  client := m.(*MyClient)

  // Attempt to read from an upstream API
  obj, ok := client.Get(d.Id())

  // If the resource does not exist, inform Terraform. We want to immediately
  // return here to prevent further processing.
  if !ok {
    d.SetId("")
    return nil
  }

  d.Set("address", obj.Address)
  return nil
}
»Implementing a more complex Read
Often the data structure returned from the API is more complicated and contains nested structures. The following example illustrates this. The goal is for the terraform.state file to map the resulting data structure as closely as possible. This mapping is called flattening, whereas mapping the Terraform configuration to an API call (e.g. on create) is called expanding.
This example illustrates the flattening of a nested structure which contains a TypeSet and a TypeMap.
Consider the following structure as the response from the API:

{
  "ID": "ozfsuj7dblzwjo8zoguosr1l5",
  "Spec": {
    "Name": "tftest-service-basic",
    "Labels": {},
    "Address": "tf-test-address",
    "TaskTemplate": {
      "ContainerSpec": {
        "Mounts": [
          {
            "Type": "volume",
            "Source": "tftest-volume",
            "Target": "/mount/test",
            "VolumeOptions": {
              "NoCopy": true,
              "DriverConfig": {}
            }
          }
        ]
      }
    }
  }
}
The nested structures are Spec -> TaskTemplate -> ContainerSpec -> Mounts. There can be multiple Mounts, but they have to be unique, so TypeSet is the appropriate type.
Due to the limitation of tf-11115, it is not possible to nest maps, so the workaround is to let only the innermost data structure be of type TypeMap: in this case driver_options. The outer data structures are of type TypeList, which can only have one item.
/// ...
"task_spec": &schema.Schema{
Type: schema.TypeList,
MaxItems: 1,
Required: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"container_spec": &schema.Schema{
Type: schema.TypeList,
Required: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"mounts": &schema.Schema{
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"target": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"source": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"type": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"volume_options": &schema.Schema{
Type: schema.TypeList,
Optional: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"no_copy": &schema.Schema{
Type: schema.TypeBool,
Optional: true,
},
"labels": &schema.Schema{
Type: schema.TypeMap,
Optional: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"driver_name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"driver_options": &schema.Schema{
Type: schema.TypeMap,
Optional: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
},
},
},
},
},
},
},
},
},
},
},
}
The resourceServerRead function now also sets/flattens the nested data structure from the API:

func resourceServerRead(d *schema.ResourceData, m interface{}) error {
  client := m.(*MyClient)

  server, ok := client.Get(d.Id())
  if !ok {
    log.Printf("[WARN] No Server found: %s", d.Id())
    d.SetId("")
    return nil
  }

  d.Set("address", server.Address)

  if server.Spec != nil && server.Spec.TaskTemplate != nil {
    if err := d.Set("task_spec", flattenTaskSpec(server.Spec.TaskTemplate)); err != nil {
      return err
    }
  }

  return nil
}
The so-called flatteners live in a separate file, structures_server.go. The outermost data structure is a map[string]interface{} and each item a []interface{}:

func flattenTaskSpec(in *server.TaskSpec) []interface{} {
  // NOTE: the top level structure to set is a map
  m := make(map[string]interface{})
  if in.ContainerSpec != nil {
    m["container_spec"] = flattenContainerSpec(in.ContainerSpec)
  }
  /// ...
  return []interface{}{m}
}

func flattenContainerSpec(in *server.ContainerSpec) []interface{} {
  // NOTE: all nested structures are lists of interface{}
  var out = make([]interface{}, 0, 0)
  m := make(map[string]interface{})
  /// ...
  if len(in.Mounts) > 0 {
    m["mounts"] = flattenServiceMounts(in.Mounts)
  }
  /// ...
  out = append(out, m)
  return out
}

func flattenServiceMounts(in []mount.Mount) []map[string]interface{} {
  var out = make([]map[string]interface{}, len(in), len(in))
  for i, v := range in {
    m := make(map[string]interface{})
    m["target"] = v.Target
    m["source"] = v.Source
    m["type"] = v.Type

    if v.VolumeOptions != nil {
      volumeOptions := make(map[string]interface{})
      volumeOptions["no_copy"] = v.VolumeOptions.NoCopy
      // NOTE: labels is converted from map[string]string to map[string]interface{}
      // because Terraform can only store maps with interface{} as the value type
      volumeOptions["labels"] = mapStringStringToMapStringInterface(v.VolumeOptions.Labels)

      if v.VolumeOptions.DriverConfig != nil {
        volumeOptions["driver_name"] = v.VolumeOptions.DriverConfig.Name
        volumeOptions["driver_options"] = mapStringStringToMapStringInterface(v.VolumeOptions.DriverConfig.Options)
      }

      m["volume_options"] = []interface{}{volumeOptions}
    }

    out[i] = m
  }
  return out
}
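Going the other direction, the expanding half turns the configuration Terraform stores back into API objects. A hedged, self-contained sketch using a locally defined stand-in type (Mount here is not the real upstream struct; a real provider would use the client library's own type):

```go
package main

import "fmt"

// Mount is a stand-in for the upstream API's mount type, defined locally
// only so this sketch compiles on its own.
type Mount struct {
	Target, Source, Type string
}

// expandServiceMounts is the inverse of flattenServiceMounts: it turns the
// []interface{} Terraform stores for a TypeSet back into API objects.
func expandServiceMounts(in []interface{}) []Mount {
	out := make([]Mount, 0, len(in))
	for _, v := range in {
		m := v.(map[string]interface{})
		out = append(out, Mount{
			Target: m["target"].(string),
			Source: m["source"].(string),
			Type:   m["type"].(string),
		})
	}
	return out
}

func main() {
	raw := []interface{}{map[string]interface{}{
		"target": "/mount/test", "source": "tftest-volume", "type": "volume",
	}}
	fmt.Println(expandServiceMounts(raw)[0].Source) // tftest-volume
}
```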
»Next Steps
This guide covers the schema and structure for implementing a Terraform provider using the provider framework. As next steps, reference the internal providers for examples. Terraform also includes a full framework for testing providers.
»General Rules
»Dedicated Upstream Libraries
One of the biggest mistakes new users make is trying to conflate a client library with the Terraform implementation. Terraform should always consume an independent client library which implements the core logic for communicating with the upstream. Do not try to implement this type of logic in the provider itself.
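As a hedged sketch of what that separation looks like, here is the shape such a standalone library might take. Server, ServerClient, and ErrNotFound are invented names, and the in-memory map stands in for real HTTP calls; the provider's CRUD callbacks would only translate between schema.ResourceData and these methods:

```go
package main

import (
	"errors"
	"fmt"
)

// Server and ServerClient sketch an invented, Terraform-free client library.
// All upstream communication logic belongs here, not in the provider.
type Server struct {
	Address string
}

type ServerClient struct {
	store map[string]Server // a real client would hold an http.Client and base URL
}

var ErrNotFound = errors.New("server not found")

func NewServerClient() *ServerClient {
	return &ServerClient{store: map[string]Server{}}
}

func (c *ServerClient) Create(s Server) error {
	c.store[s.Address] = s
	return nil
}

func (c *ServerClient) Get(id string) (Server, error) {
	s, ok := c.store[id]
	if !ok {
		return Server{}, ErrNotFound
	}
	return s, nil
}

func main() {
	c := NewServerClient()
	c.Create(Server{Address: "1.2.3.4"})
	s, err := c.Get("1.2.3.4")
	fmt.Println(s.Address, err) // 1.2.3.4 <nil>
}
```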
»Data Sources
While not explicitly discussed here, data sources are a special subset of resources which are read-only. They are resolved earlier than regular resources and can be used as part of Terraform's interpolation.
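For illustration only (the example provider built in this guide does not register one), a data source for our fictitious provider might be consumed like this; the example_server data source and its attributes are hypothetical:

```hcl
# Hypothetical data source; the guide's provider does not actually define it.
data "example_server" "lookup" {
  address = "1.2.3.4"
}

resource "example_server" "copy" {
  # Data sources resolve before regular resources, so their attributes can
  # feed resource arguments via interpolation.
  address = "${data.example_server.lookup.address}"
}
```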