Welcome to go-chassis’s documentation!

Introductions

Introduction

What is Go chassis

Go chassis is a microservice framework for Go developers. With it you can develop distributed systems rapidly.

Why use Go chassis

go chassis is designed as a protocol-independent framework. Any protocol can be integrated with go chassis and leverage the same functions, such as load balancing, circuit breaking and rate limiting, which make your service resilient.

go chassis makes services observable by bringing OpenTracing and Prometheus to them.

go chassis is flexible: many modules can be replaced by other implementations, such as the registry, metrics, handler chain and config center.

With many built-in functions such as route management, circuit breaking, load balancing and monitoring, you don't need to search for and integrate a solution yourself.

go chassis supports the Istio platform. Although Istio is a great platform with a service mesh in the data plane, the sidecar inevitably decreases throughput, increases latency and costs extra CPU. go chassis brings better performance to Go programs, and you can still use Istio configurations to control go chassis.

Concepts

Registry
A registry must support both registration and discovery
Registrator
A registrator service must support registration
Service Discovery

A service discovery service must support discovery at least. For example, ServiceComb service center supports both registration and discovery, while Istio and Kubernetes only support discovery.

_images/registry.PNG
Protocol server and client

go chassis allows you to integrate any protocol into the standardized Invocation model, so that any protocol can reuse the same functions, such as circuit breaking, load balancing, rate limiting and route management.

_images/protocol.PNG

Features

  • Pluggable registrator and discovery service: supports ServiceComb service center, Istio Pilot and a file-based registry, fitting both the client-side and server-side discovery patterns
  • Pluggable protocol: you can customize your own protocol; http and highway (RPC) are supported by default
  • Circuit breaker: protect your service at runtime or on demand
  • Route management: route to different services based on weight and match rules to achieve canary release easily
  • Load balancing: customizable strategies and filters
  • Rate limiting: both client-side and server-side rate limiting
  • Pluggable cipher: customize your own cipher for AK/SK and TLS certs
  • Handler chain: add your own code into the service call on both client and server side
  • Metrics: expose a Prometheus metrics API automatically and customize metrics reporters
  • Tracing: uses opentracing-go as the standard library, making it easy to integrate tracing implementations
  • Logger: customize your own writer to sink logs; file and stdout are supported by default
  • Hot re-configuration: many configurations can be reloaded at runtime, such as load balancing, circuit breaker and rate limiting
  • Dynamic configuration framework: easily develop a service with the hot re-configuration feature
  • Fault injection: on the consumer side you can inject faults to bring chaos testing into your system

How it works

_images/how.png

This section explains what happens at runtime.

Requests of different protocols enter their protocol servers. A server converts the protocol-specific request into the unified Invocation abstraction and passes it into the handler chain. Chassis ships many handlers by default, such as circuit breaking and rate limiting. The request finally reaches the transport handler, which uses the protocol-specific client to send it to the target.

Monitoring data generated by each request is exported through an HTTP API and collected by Prometheus.

Logs can be sent to services such as Kafka through extensions, or collected by the Huawei public cloud APM service.

The registry integrates with service center by default.

Archaius is the dynamic configuration framework; it can read configurations from many different sources.

Get started

Minimize Installation

  1. Install Go 1.8+
  2. Clone the project
git clone git@github.com:go-chassis/go-chassis.git
  3. Use go mod (Go 1.11+, experimental but recommended)
cd go-chassis
GO111MODULE=on go mod download
#optional
GO111MODULE=on go mod vendor
  4. Install service-center
  5. Write your first http micro service

Use gRPC communication

Follow https://developers.google.com/protocol-buffers/docs/gotutorial to install gRPC.

Write your first grpc micro service

Writing Rest service

Check out the full example here.

Provider

This section shows you how to write an http server.

Create a project or Go package with the recommended layout:

server/
├── conf
│   ├── chassis.yaml
│   └── microservice.yaml
└── main.go

1.Write a struct to hold http logic and url patterns

type RestFulHello struct {}

func (r *RestFulHello) Sayhello(b *restful.Context) {
    b.Write([]byte("get user id: " + b.ReadPathParameter("userid")))
}

2.Write your url patterns

func (s *RestFulHello) URLPatterns() []restful.Route {
    return []restful.Route{
        {http.MethodGet, "/sayhello/{userid}", "Sayhello"},
    }
}

3.Modify chassis.yaml to describe the server you need

cse:
  service:
    registry:
      address: http://127.0.0.1:30100 
  protocols: # what kind of server you want to launch
    rest: #launch a http server
      listenAddress: 127.0.0.1:5001

4.Register this struct

The first parameter specifies which server you register your struct to. Be aware that APIs can be separated onto different servers and ports.

chassis.RegisterSchema("rest", &RestFulHello{})

Notice

You must implement URLPatterns; every other handler function must take *restful.Context as its only input, and the method names must start with an uppercase letter.

5.Modify microservice.yaml

service_description:
  name: RESTServer # name your provider

6.In main.go init and start the chassis

func main() {
    //start all server you register in server/schemas.
    if err := chassis.Init(); err != nil {
        lager.Logger.Error("Init failed.", err)
        return
    }
    chassis.Run()
}

Consumer

This section shows you how to write an http client.

Create a project or Go package with the recommended layout:

client/
├── conf
│   ├── chassis.yaml
│   └── microservice.yaml
└── main.go

  1. Modify chassis.yaml
cse:
  service:
    registry:
      address: http://127.0.0.1:30100
  2. Modify microservice.yaml
service_description:
  name: RESTClient #name your consumer

3.In main.go, call your service

func main() {
    //Init framework
    if err := chassis.Init(); err != nil {
        lager.Logger.Error("Init failed.", err)
        return
    }
    req, _ := rest.NewRequest("GET", "cse://RESTServer/sayhello/world")
    defer req.Close()
    resp, err := core.NewRestInvoker().ContextDo(context.TODO(), req)
    if err != nil {
        lager.Logger.Error("error", err)
        return
    }
    defer resp.Close()
    lager.Logger.Info(string(resp.ReadBody()))
}

Notice

If the conf folder is not under the working directory, please export CHASSIS_HOME=/path/to/conf/parent_folder or CHASSIS_CONF_DIR=/path/to/conf_folder.

Writing gRPC service

Define grpc contract

Create a project or Go package with the recommended layout:

schemas
├── helloworld
│   ├── helloworld.proto

1.Define the helloworld.proto file

syntax = "proto3";

package helloworld;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

2.Generate the Go file helloworld.pb.go from the proto file

protoc --go_out=. helloworld.proto

Copy the generated Go file into the directory:

schemas
├── helloworld
│   ├── helloworld.proto
│   └── helloworld.pb.go

After generation, one variable name needs to be changed:

var _Greeter_serviceDesc = grpc.ServiceDesc{
    ServiceName: "helloworld.Greeter",
    HandlerType: (*GreeterServer)(nil),
    Methods: []grpc.MethodDesc{
        {
            MethodName: "SayHello",
            Handler:    _Greeter_SayHello_Handler,
        },
    },
    Streams:  []grpc.StreamDesc{},
    Metadata: "helloworld.proto",
}

Change _Greeter_serviceDesc to Greeter_serviceDesc so that it is exported.

Provider Side

Create a project or Go package with the recommended layout:

server/
├── conf
│   ├── chassis.yaml
│   └── microservice.yaml
└── main.go

1.Implement the interface

type Server struct{}

// SayHello implements helloworld.GreeterServer
func (s *Server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
    return &pb.HelloReply{Message: "Hello " + in.Name}, nil
}

2.Register the interface

The first parameter specifies which protocol server you register to; the third is the gRPC service desc.

chassis.RegisterSchema("grpc", &Server{}, server.WithGRPCServiceDesc(&pb.Greeter_serviceDesc))

3.Modify the configuration file chassis.yaml

cse:
  service:
    registry:
      address: http://127.0.0.1:30100
  protocols:
    grpc:
      listenAddress: 127.0.0.1:5000

4.Modify microservice.yaml

service_description:
  name: RPCServer

5.Init and start the chassis in main.go

func main() {
    //start all server you register in server/schemas.
    if err := chassis.Init(); err != nil {
        lager.Logger.Errorf("Init failed: %s", err)
        return
    }
    chassis.Run()
}

Consumer Side

Create a project or Go package with the recommended layout:

client/
├── conf
│   ├── chassis.yaml
│   └── microservice.yaml
└── main.go

1.Generate the Go code from the proto file

protoc --go_out=. hello.proto

2.Modify the configuration file chassis.yaml

cse:
  service:
    registry:
      address: http://127.0.0.1:30100

3.Modify microservice.yaml

service_description:
  name: Client

4.Call the provider in main, specifying the microservice name, schema, operation, request argument and reply

//if you use go run main.go instead of running the binary, please export CHASSIS_HOME=/path/to/conf/folder
func main() {
//Init framework
    if err := chassis.Init(); err != nil {
        lager.Logger.Error("Init failed." + err.Error())
        return
    }
    //declare reply struct
    reply := &helloworld.HelloReply{}
    //Invoke with microservice name, schema ID and operation ID
    if err := core.NewRPCInvoker().Invoke(context.Background(), "RPCServer", "helloworld.Greeter", "SayHello",
        &helloworld.HelloRequest{Name: "Peter"}, reply, core.WithProtocol("grpc")); err != nil {
        lager.Logger.Error("error" + err.Error())
    }
    lager.Logger.Info(reply.Message)
}

Notice

If the conf folder is not under the working directory, please export CHASSIS_HOME=/path/to/conf/parent_folder or CHASSIS_CONF_DIR=/path/to/conf_folder.

User guides

Micro Service Definition

Introduction

Use microservice.yaml to describe your service

Concepts:

  • instance: one process is a microservice instance; instances belong to one microservice
  • service: a service is a static information entity in storage; it has instances

You can consider a project as a microservice; after it is compiled, built and run, it becomes a microservice instance.

Configurations

name

(required, string) Micro service name

hostname

(optional, string) hostname of the host; it can be an IP or a hostname, default is the hostname returned by os.Hostname()

APPLICATION_ID

(optional, string) Application ID, default value is “default”

version

(optional, string) version number, default is 0.0.1

properties

(optional, map) microservice metadata; usually it is defined in the project and never changes

instance_properties

(optional, map) instance metadata; at runtime it can differ based on the environment

Example

service_description:
  name: Server
  hostname: 10.244.1.3
  properties:
    project: X1
  instance_properties:
    nodeIP: 192.168.0.111

Registry

Overview

By default, microservice registration and discovery are handled by service center. You can configure how to communicate with service center, its address, and the information registered for your service. During startup the microservice registers itself to service center automatically. While it is running, go-chassis periodically queries the instance information of other services from service center and caches it locally.

Configuration

Registry-related configurations are spread across two yaml files, chassis.yaml and microservice.yaml. chassis.yaml configures the registry type and address.

  • type: the registry plugin type; servicecenter and file are loaded by default, and custom registry plugins can also be registered.
  • scope: by default cross-application access is not allowed, only access within the same application; when set to full, cross-application access is allowed and all microservices of the tenant can be discovered.
  • autodiscovery: whether to enable auto discovery; when enabled, other service center nodes registered in the service center are discovered.
  • register: defaults to automatic registration, i.e. the framework registers the instance automatically on startup. When set to manual, the framework only registers the microservice described in the configuration file, not the instance; you can register instances through the service center API.
  • api.version: only v4 is currently supported.

disabled

(optional, bool) whether to disable the service registration and discovery module; default is false (the module is enabled)

type

(optional, string) the registry plugin type, default is servicecenter

scope

(optional, string) default is full, which allows cross-application discovery; set it to app to forbid cross-application discovery

autodiscovery

(optional, bool) automatically discover service center cluster nodes; default is false

address

(optional, string) service center addresses; multiple comma-separated addresses can be configured, default is empty

register

(optional, string) whether to self-register automatically; default is auto, the alternative is manual

refreshInterval

(optional, string) interval for refreshing the instance cache, a number plus a unit (s/m/h), e.g. 1s/1m/1h; default is 30s

api.version

(optional, string) the service center API version, default is v4

watch

(optional, bool) whether to watch instance change events, default is false

API

Registry provides the following APIs for registering microservices and instances. Two Registry implementations are built in, with servicecenter as the default. You can also write a plugin implementing the Registry interface yourself for service registration and discovery.

Register microservice instances
RegisterMicroserviceInstances() error
Register a microservice
RegisterMicroservice() error
Install a custom Registry plugin
InstallPlugin(name string, f func(opts ...Option) Registry)
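A minimal sketch of manual registration when register is set to manual; the import path of the registry package is an assumption, since it is not shown in this doc:

import "github.com/go-chassis/go-chassis/core/registry" // import path assumed

// after chassis.Init(), register the micro service and one instance by hand
if err := registry.RegisterMicroservice(); err != nil {
    lager.Logger.Error("register micro service failed: " + err.Error())
    return
}
if err := registry.RegisterMicroserviceInstances(); err != nil {
    lager.Logger.Error("register instance failed: " + err.Error())
    return
}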

Example

The minimal service center configuration only needs the registry address. A registered microservice instance is identified by appId, service name and version.

APPLICATION_ID: default #optional
cse:
  service:
    registry:
      disabled: false            #optional: the registry module is enabled by default
      type: servicecenter        #optional: default type is servicecenter
      scope: full                #optional: full allows cross-application access
      autodiscovery: true
      address: http://10.0.0.1:30100,http://10.0.0.2:30100
      register: auto             #optional: default is auto [auto manual]
      refreshInterval : 30s
      watch: true                         
      api:
        version: v4

Service Discovery

Overview

Service discovery configures how services are discovered. Unlike Registry, it is only responsible for discovering services, not registering them. Only one of Service Discovery and Registry can be configured. Enabling this feature allows integration with Istio Pilot.

Configuration

Service discovery is configured in chassis.yaml.

type

(optional, string) the discovery plugin type, default is servicecenter; pilotv2 and kube are also available

NOTE: when using the kube registry, the published Service must name its ports in the form [protocol]-[suffix]; currently protocol only supports rest and highway.

address

(optional, string) service center addresses; multiple comma-separated addresses can be configured, default is empty

refreshInterval

(optional, string) interval for refreshing the instance cache, a number plus a unit (s/m/h), e.g. 1s/1m/1h; default is 30s

Example

When the registry type is pilotv2, the Pilot address must be specified; when the type is kube, the path of the kubeconfig file used to talk to kube-apiserver must be specified. The minimal examples for each are shown below.

cse:
  service:
    Registry:
      serviceDiscovery:
        type: pilotv2
        address: grpc://istio-pilot.istio-system:15010
        refreshInterval : 30s
cse:
  service:
    Registry:
      serviceDiscovery:
        type: kube
        configPath: /etc/.kube/config

Protocol Servers

Introduction

You can extend go chassis with your own protocol; currently rest (http) and gRPC are supported.

Configurations

protocols.{protocol_server_name}

(required, string) the name of the protocol server. It must be the protocol name, or the protocol name plus a suffix joined with a hyphen "-", like {protocol}-{suffix}.

protocols.{protocol_server_name}.advertiseAddress

(optional, string) the server advertise address. If you use a registry such as service center, this address is registered in the registry so that other services can discover your address.

protocols.{protocol_server_name}.listenAddress

(required, string) the server listen address. It is recommended to use 0.0.0.0:{port}; go chassis will then generate the advertise address automatically, which is convenient when running in a container because the internal IP is unknown until the container runs.

Example

This config will launch two http servers and one grpc server.

cse:
  protocols:
    rest:
      listenAddress: 0.0.0.0:5000
    rest-admin:
      listenAddress: 0.0.0.0:5001
    grpc:
      listenAddress: 0.0.0.0:6000

For IPv6, quotation marks are needed, because [] denotes a list in yaml.

cse:
  protocols:
    rest:
      listenAddress: "[2407:c080:17ff:ffff::7274:83a]:5000"

Handler chain

Overview

A handler chain contains a series of handlers. You can insert custom logic into a call by extending the chain. For how to develop a new handler, see the Developer Guide; this chapter only covers configuration and the handlers the framework already implements.

Configuration

cse:
  handler:
    Consumer:
      {name}:{handler_names}
    Provider:
      {name}:{handler_names}

Consumer is the chain a request goes through when you call other services; Provider is the chain used when you are called by others. Multiple chain names can be defined under both Consumer and Provider. If the handler configuration is empty, the framework automatically loads the default handlers for Consumer and Provider, under the chain name default.

The default Consumer chain is:

Name: Function

ratelimiter-consumer: client-side rate limiting

router: routing policy

loadbalance: load balancing

tracing-consumer: client-side call tracing

transport: the protocol client that handles the request; if you use a custom chain configuration, this handler must be placed at the end

The default Provider chain is:

Name: Function

ratelimiter-provider: server-side rate limiting

tracing-provider: server-side call tracing

API

When the handler chain configuration is empty, you can also define your own default chains:

//SetDefaultConsumerChains sets your custom chain map for Consumer; if there is no config, this default chain takes effect
func SetDefaultConsumerChains(c map[string]string)
//SetDefaultProviderChains sets your custom chain map for Provider; if there is no config, this default chain takes effect
func SetDefaultProviderChains(c map[string]string)
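For example, a minimal sketch that replaces the default chains before chassis.Init() is called, assuming these functions are exported from the top-level chassis package (the package is not named in this doc); the handler names come from the const list below:

chassis.SetDefaultConsumerChains(map[string]string{
    // "default" is the chain name, the value is the comma-separated handler list
    "default": "router,loadbalance,transport",
})
chassis.SetDefaultProviderChains(map[string]string{
    "default": "ratelimiter-provider",
})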

You can check the built-in handler list in handler.go; the const block lists the handler names.

const (
    //consumer chain
    Transport           = "transport"
    Loadbalance         = "loadbalance"
    BizkeeperConsumer   = "bizkeeper-consumer"
    TracingConsumer     = "tracing-consumer"
    RatelimiterConsumer = "ratelimiter-consumer"
    Router              = "router"
    FaultInject         = "fault-inject"

    //provider chain
    RatelimiterProvider = "ratelimiter-provider"
    TracingProvider     = "tracing-provider"
    BizkeeperProvider   = "bizkeeper-provider"
)

Example

handler:
  chain:
    Consumer:
      default: bizkeeper-consumer, router, loadbalance, ratelimiter-consumer,transport
      custom: some-handler

Client side health check

Overview

Client-side health check means the client checks the health of its cached provider instances.

In environments with network partitions or high latency, the client may fail to report its heartbeat to service center. As a result, it receives instance-down events and removes instances from its local cache, which eventually breaks business calls.

To prevent this, go-chassis provides the following health check mechanism: when the client observes that the available instances of some (or the latest) version of a provider drop to zero, it performs a health check before removing the cached instances, calling the health check API exposed by the provider (RESTful or Highway) and verifying the response to decide whether the cache should really be removed.

Configuration

Provider-side registration

go-chassis does not register the provider-side health check API by default; you need to import it into your project explicitly.

// register the health check API
import _ "github.com/go-chassis/go-chassis/healthz/provider"

After adding the code above, go-chassis registers a health check API for each exposed protocol. The APIs are described below:

  • RESTful:

    1. Method: GET
    2. Path: /healthz
    3. Response:
    {
      "appId": "string",
      "serviceName": "string",
      "version": "string"
    }
    
  • Highway:

    1. Schema: _chassis_highway_healthz
    2. Operation: HighwayCheck
    3. Response:
    // The response message containing the microservice key
    message Reply {
      string appId = 1;
      string serviceName = 2;
      string version = 3;
    }
    
Consumer-side configuration

Client-side health check is configured in chassis.yaml.

healthCheck

(optional, bool) enable health checks against provider instances; default is false.
Example
cse:
  service:
    Registry:
      healthCheck: true
      #serviceDiscovery:
      #  healthCheck: true # client-side health check is also supported when service discovery alone is enabled

Invoker

Introduction


Invoker is the entry point for a developer to call remote service

API

Rest Invoker

Use NewRestInvoker to create an invoker instance; it accepts custom options such as the chain name.

ContextDo accepts an http request as a parameter; you can manipulate the request through its own API before passing it into this method.

func NewRestInvoker(opt ...Option) *RestInvoker
func (ri *RestInvoker) ContextDo(ctx context.Context, req *http.Request, options ...InvocationOption) (*rest.Response, error)
RPC Invoker

Use NewRPCInvoker to create an invoker instance; it accepts custom options such as chain and filters.

Specify the remote service name, struct name, function name, request argument and reply interface to make the call.

The result is assigned to the reply parameter.

func NewRPCInvoker(opt ...Option) *RPCInvoker 
func (ri *RPCInvoker) Invoke(ctx context.Context, microServiceName, schemaID, operationID string, arg interface{}, reply interface{}, options ...InvocationOption) error

Both the Rest and RPC invocation methods accept many options to control a single call; see options.go for more options.

Examples

RPC
invoker.Invoke(ctx, "Server", "HelloServer", "SayHello",
    &helloworld.HelloRequest{Name: "Peter"},
    reply,

)
Rest

Unlike an ordinary http call, the URL does not use ip:port but the service name, and http:// becomes cse://.

When initializing the invoker, the handler chain named custom is also specified for this request.

req, _ := rest.NewRequest("GET", "cse://RESTServer/sayhello/world")
defer req.Close()
resp, err := core.NewRestInvoker(core.ChainName("custom")).ContextDo(context.TODO(), req)
Multiple Port

If you define different ports for the same protocol, like below:

cse:
  protocols:
    rest:
      listenAddress: 0.0.0.0:5000
    rest-admin:
      listenAddress: 0.0.0.0:5001

then you can use the suffix "admin" as the port to access the rest-admin server:

req, _ := rest.NewRequest("GET", "cse://RESTServer:admin/sayhello/world")

Use only the service name to access the rest server:

req, _ := rest.NewRequest("GET", "cse://RESTServer/sayhello/world")

Load balancing

Overview

You can choose different load balancing strategies through configuration; round robin, random, weighted response time and session stickiness are currently supported.

Load balancing works on the consumer side and depends on the registry.

Configuration

The load balancing configuration item is cse.loadbalance.[MicroServiceName].[PropertyName]. If MicroServiceName is omitted the configuration is global; if it is specified, the configuration applies only to that microservice. Priority: microservice-specific configuration > global configuration.

For brevity, the descriptions below only cover the PropertyName part.

strategy.name

(optional, string) load balancing strategy, default is RoundRobin; options: RoundRobin, Random, SessionStickiness, WeightedResponse

Notes:

  1. With the SessionStickiness strategy, session stickiness works as soon as it is configured. You can pass a namespace in the metadata to keep sessions separately for different kinds of requests, for example:
*options = core.InvokeOptions{
    Metadata: map[string]interface{}{
        common.SessionNameSpace: "go-chassis",
    },
}
  2. With the WeightedResponse strategy, the strategy needs about 30s after being enabled to compute its data and take effect; roughly 80% of requests are then sent to the instance with the lowest latency.

API

Besides the configuration file, the load balancing strategy can also be passed in a client call with WithStrategy.

invoker.Invoke(ctx, "Server", "HelloServer", "SayHello",
    &helloworld.HelloRequest{Name: "Peter"},
    reply,
    core.WithContentType("application/json"),
    core.WithProtocol("highway"),
    core.WithStrategy(loadbalance.StrategyRoundRobin),
)

Example

Configure the load balancing section of chassis.yaml and add the handler chain.

cse:
  loadbalance:                 # global load balancing configuration
    strategy:
      name: RoundRobin
    microserviceA:              # microservice-level load balancing configuration
      strategy:
        name: SessionStickiness

Filter

Overview

Load balancing filters are implemented in the load balancing module. They let developers customize filters so that instances are pre-filtered before the load balancing strategy picks one. Multiple filter policies can be used in a single request to filter the instance list obtained from the local cache or service center, and the reduced list is handed to the load balancing strategy for further processing.

Configuration

Currently the only configurable filter is the available zone filter. It filters instances by their region and AZ information, preferring instances in the same region and AZ.

cse:
  loadbalance:
    serverListFilters: zoneaware

The instance's datacenter information needs to be configured:

region:
  name: us-east
  availableZone: us-east-1

API

Go-chassis ships several filters implementing the Filter interface: FilterEndpoint filters by instance endpoint, FilterMD filters by metadata, FilterProtocol filters by protocol, and FilterAvailableZoneAffinity filters by zone.

type Filter func([]*registry.MicroServiceInstance) []*registry.MicroServiceInstance

Example

Client-side instance filters can also be passed in through the API call, and multiple filters can be passed at once.

invoker.Invoke(ctx, "Server", "HelloServer", "SayHello",
    &helloworld.HelloRequest{Name: "Peter"},
    reply,
    core.WithProtocol("highway"),
    core.WithStrategy(loadbalance.StrategyRoundRobin),
    core.WithFilters("zoneaware"),
)

Dynamic Configuration

Overview

go-chassis provides dynamic configuration management. It supports the CSE config center, local files, environment variables, the command line and more, and the archaius package exposes a unified interface for reading configurations.

API

The archaius package provides a method to get all configurations and four Get methods that return a typed value for a key, each taking a default value that is used when the key is not configured. Exist checks whether a key exists, and UnmarshalConfig unmarshals configurations into a struct. go-archaius has five built-in configuration sources; a key is looked up in the highest-priority source first, then in the remaining sources in priority order until all sources have been traversed. The built-in priorities, from high to low, are: config center, command line, environment variables, files, external sources.

Get a value by key
GetBool(key string, defaultValue bool) bool
GetFloat64(key string, defaultValue float64) float64
GetInt(key string, defaultValue int) int
GetString(key string, defaultValue string) string
Get(key string) interface{}
Exist(key string) bool
Get all configurations
GetConfigs() map[string]interface{}
Unmarshal configurations into a struct
UnmarshalConfig(obj interface{}) error

Besides the configuration files that are managed dynamically by default, go-archaius provides AddFile to add other files to dynamic configuration management. AddKeyValue adds key/value pairs to the external configuration source. In addition to the built-in sources, you can implement your own configuration source and register listeners with RegisterListener.

Add a file source
AddFile(file string) error
Add a key/value pair to the external source
AddKeyValue(key string, value interface{}) error
Register/unregister a dynamic listener
RegisterListener(listenerObj core.EventListener, key ...string) error
UnRegisterListener(listenerObj core.EventListener, key ...string) error
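A minimal listener sketch, assuming the Event struct carries at least the changed Key and Value; the go-archaius import path and the watched key are illustrative, and the archaius package import is omitted as in the other examples:

import (
    "log"

    "github.com/go-chassis/go-archaius/core" // import path assumed, not shown in this doc
)

type enabledListener struct{}

// Event is called by the dispatcher whenever a watched key changes
func (l *enabledListener) Event(e *core.Event) {
    log.Printf("key %s changed, new value: %v", e.Key, e.Value)
}

// watch a single key; the key name is only illustrative
archaius.RegisterListener(&enabledListener{}, "cse.fallback.Consumer.enabled")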

When working with the config center, requests must carry dimensionsInfo to identify the instance whose configuration is fetched. The following APIs allow configuring and querying configuration items per dimension (DI).

Add a DI and get values for a specific DI
GetConfigsByDI(dimensionInfo string) map[string]interface{}
GetStringByDI(dimensionInfo, key string, defaultValue string) string
AddDI(dimensionInfo string) (map[string]string, error)

Example

The file configuration used in this example is shown below; the items can be read with the Get methods of the archaius package.

cse:
  fallback:
    Consumer:
      enabled: true
      maxConcurrentRequests: 20
archaius.GetInt("cse.fallback.Consumer.maxConcurrentRequests", 10)
archaius.GetBool("cse.fallback.Consumer.enabled", false)

Using Ctrip Apollo as a Configuration Center

Ctrip Apollo

Ctrip Apollo is a Configuration Server which can be used to store your configurations. Go-Chassis supports retrieving the configurations from Apollo and can be used for dynamic configurations of microservices. In this guide we will explain you how to configure Go-Chassis to use Apollo as a configuration server.

Configurations

You can use this guide to start up Ctrip Apollo, create the project and namespace, and add configurations to it. Once your Apollo server is set up, make the following modifications in Go-Chassis to make it work with Apollo. Update the chassis.yaml of your microservice with the following configuration.

cse:
  config:
    client:
      serverUri: http://127.0.0.1:8080          # This should be the address of your Apollo Server
      type: apollo                              # The type should be Apollo
      refreshMode: 1                            # Refresh Mode should be 1 so that Chassis-pulls the Configuration periodically
      refreshInterval: 10                       # Chassis retrieves the configurations from Apollo at this interval
      serviceName: apollo-chassis-demo          # This the name of the project in Apollo Server
      env: DEV                                  # This is the name of environment to which configurations belong in Apollo
      cluster: demo                             # This is the name of cluster to which your Project belongs in Apollo
      namespace: application                    # This is the NameSpace to which your configurations belong in the project.

Once these configurations are set, Chassis can retrieve the configurations from the Apollo server. To see a detailed use case of how to use Ctrip Apollo with Chassis, please refer to this example.
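A value added in the Apollo namespace can then be read through the archaius package like any other configuration; the key name below is only a placeholder:

// "timeout" is a hypothetical key created in the Apollo namespace; 30 is the default value
timeout := archaius.GetInt("timeout", 30)
log.Println("timeout:", timeout)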

Router

Overview

Route rules can be used for A/B testing and canary upgrades to new versions. Based on the request source, target service, HTTP headers and weights, they dispatch requests to different versions of microservice instances.

For example, with route management you can easily switch between old and new versions of a service.

Configuration

Route rules can currently only be configured in the configuration file; both the rest and highway protocols are supported.

Consumer configuration

Canary release route rules are configured and used only on the consumer side. They dispatch particular requests, by weight, to different groups of the same service. Configure them in conf/router.yaml:

routeRule:
  {targetServiceName}: # target service name
    - precedence: {number} # priority
      match:        # match policy
        source: {sourceServiceName} # match a source service name
        headers:          # header matching
          {key0}:
            regex: {regex}
          {key1}
            exact: {=?}
      route: # routing rules
        - weight: {percent} # weight value
          tags:
            version: {version1}
            app: {appId}
    - precedence: {number1}
      match:
        refer: {sourceTemplateName} # refer to a source template ID
      route:
        - weight: {percent}
          tags:
            version: {version2}
            app: {appId}
sourceTemplate:  # define source templates
  {templateName}: # source template ID
    source: {sourceServiceName}
    sourceTags:
      {tag}:{value}
    headers:
      {key0}:
        regex: {regex}
      {key1}
        exact: {=?}
      {key2}:
        noEqu: {!=?}
      {key3}
        greater: {>?}
      {key4}:
        less: {<?}
      {key5}
        noLess: {>=?}
      {key6}:
        noGreater: {<=?}

Route rule notes:

  • Matching of particular requests is configured with match; the match conditions are source (the source service name), source tags and headers. The refer field can also be used to match against a source template.
  • The source tags in match are compared one by one with the tags in the sourceInfo of the service call request.
  • Header matching supports regex, equal, not-equal, greater-than, less-than and similar operators.
  • If match is not defined, any request matches.
  • Forwarding weights are defined under routeRule.{targetServiceName}.route and configured with weight.
  • Service groups are defined under routeRule.{targetServiceName}.route and configured with tags; the contents are version and app.

API

Set route rules

This API completely overwrites the route rules at runtime.

router.SetRouteRule(rr map[string][]*config.RouteRule)
Get route rules
router.GetRouteRule() returns map[string][]*config.RouteRule
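A minimal sketch of the round trip using only the two signatures above, reading the current rules and writing them back:

// read the rules currently in effect (loaded from router.yaml or the config center)
rules := router.GetRouteRule()
// ... adjust weights, precedence or match conditions here ...
// overwrite the runtime rules completely
router.SetRouteRule(rules)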

Examples

Target service

The target service of each route rule is given by the key under routeRule. In the example below, all route rules targeting the "Carts" service are contained in the list keyed by "Carts".

routeRule:
  Carts:
    - precedence: 1
      route:
        - weight: 100 #percent          
          tags:            
            version: 0.0.1

The key (the target service name) should be a valid domain name, for example a service name registered in service center.

Rule precedence

Multiple route rules can be defined for one target service; the order in which they are matched is determined by each rule's precedence value. precedence is optional and defaults to 0; the larger the value, the higher the priority. If two rules have the same precedence, their actual matching order is undefined.

A common pattern is to provide one or more high-priority rules that match the request source or headers for a given target service, plus a low-priority rule without any match conditions that only dispatches by version weight, so it handles all remaining requests.

Taking the rules below as an example: for all requests to the "Carts" service, if the headers contain "Foo: bar" the request is dispatched to instances of version "2.0", and all other requests are dispatched to instances of version "1.0".

routeRule:
  Carts:
    - precedence: 2
      match:
        headers:
          Foo:
            exact: bar
      route:
        - weight: 100           
          tags:            
            version: 2.0
    - precedence: 1
      route:
        - weight: 100   
          tags:            
            version: 1.0
Request match rules
match:
  refer: {templateName}
  source: {sourceServiceName}
  headers:
    {key}:
      regex: {regex}
    {key1}:
      exact: {exact}

The match rule attributes are configured as follows:

refer

(optional, string) the name of a referenced match template. You may define match templates in sourceTemplate and refer to one here; if a template is referenced, the other items need not be configured.

source

(optional, string) the service that sends the request, i.e. the consumer.

headers

(optional, map) header matching. If multiple header rules are configured, all of them must be satisfied for the route rule to match. The supported operators are: exact (the header must equal the configured value), regex (match the header against a regular expression), noEqu (the header must not equal the value), noLess (the header must be no less than the value), noGreater (the header must be no greater than the value), greater (the header must be greater than the value) and less (the header must be less than the value).

Example:

match:
  source: vmall
  headers:
    cookie:
      regex: "^(.*?;)?(user=jason)(;.*)?$"

This only matches requests coming from vmall whose "cookie" header contains "user=jason".

Dispatch rules

Each route rule defines one or more weighted backends, which correspond to instances of different versions of the target service identified by tags. If several registered instances carry a given tag, requests directed to that tag are dispatched according to the configured load balancing strategy, round-robin by default.

The dispatch rule attributes are configured as follows:

weight

(optional, int) the dispatch proportion of this rule, an integer from 1 to 100 representing a percentage.

tags

(optional, map) any number of tags can be defined to distinguish groups of the same service. Routing can be based on version and app, or on the metadata of microservice instances. For example, if the metadata defines project=x and you want to route to the instances carrying that field and value, just define project: x under tags.

In the example below, 75% of requests are dispatched to instances tagged "version: 2.0", and the remaining 25% to instances of version 1.0.

route:
  - weight: 75
    tags:
      version: 2.0
  - weight: 25
    tags:
      version: 1.0

The following shows a complete metadata-based routing example.

The microservice definition declares the metadata:

service_description:
  name: Server
  hostname: 10.244.1.3
  instance_properties:
    modelVersion: 1.1

Then route rules can be defined to split the traffic:

route:
  - weight: 75
    tags:
      modelVersion: 2.0
  - weight: 25
    tags:
      modelVersion: 1.1
Define match templates

You can predefine a source template (its structure is a Match structure) and reference it in the match part of a rule. In the example below, "vmall-with-special-header" is the key of a predefined source template and is referenced in the match rule of Carts.

routeRule:
  Carts:
    - precedence: 2
      match:
        refer: vmall-with-special-header
      route:
        - weight: 100           
          tags:            
            version: 2.0
sourceTemplate:
  vmall-with-special-header:
    source: vmall
    headers:
      cookie:
        regex: "^(.*?;)?(user=jason)(;.*)?$"

Rate limiting

Overview

You can limit the request rate on the provider or consumer side with rate limiting policies, capping the number of requests per second at the configured maximum. The provider-side configuration limits the rate at which incoming requests are handled; the consumer-side configuration limits the rate of requests sent to a specific microservice.

Configuration

Rate limiting is configured in rate_limiting.yaml, and the corresponding handler must be added to the handler chain in chassis.yaml. qps.limit.[service] limits the rate of handling requests coming from [service]; if it is not configured, global.limit takes effect. The consumer side does not support the global configuration; the other items are the same as on the provider side.

flowcontrol.qps.enabled

(optional, bool) whether to enable rate limiting, default is true

flowcontrol.qps.global.limit

(optional, int) requests allowed per second, default is 2147483647 (max int)

flowcontrol.qps.limit.{service}

(optional, int) requests per second allowed for a specific microservice, default is 2147483647 (max int)
Provider example

On the provider side, add ratelimiter-provider to the chain in chassis.yaml and configure the concrete limits in rate_limiting.yaml.

cse:
  handler:
    chain:
      Provider:
        default: ratelimiter-provider
cse:
  flowcontrol:
    Provider:
      qps:
        enabled: true  # enable rate limiting or not
        global:
          limit: 100   # default limit of provider
        limit:
          Server: 100  # rate limit for requests coming from the service named Server
Consumer example

On the consumer side, add the ratelimiter-consumer handler and configure the concrete limits in rate_limiting.yaml.

cse:
  handler:
    chain:
      Consumer:
        default: ratelimiter-consumer
cse:
  flowcontrol:
    Consumer:
      qps:
        enabled: true  # enable rate limiting or not
        limit:
          Server: 100  # rate limit for requests sent to the service named Server

API

qpslimiter provides GetQpsTrafficLimiter to obtain the rate limiter instance plus the related operations. ProcessQpsTokenReq sleeps in the handler chain according to the target qpsRate to enforce the limit, UpdateRateLimit updates a qpsRate limit, and DeleteRateLimiter deletes a rate limiter instance.

Limit a request
qpslimiter.GetQpsTrafficLimiter().ProcessQpsTokenReq(key string, qpsRate int)
Update a rate limit
qpslimiter.GetQpsTrafficLimiter().UpdateRateLimit(key string, value interface{})
Delete a rate limiter
qpslimiter.GetQpsTrafficLimiter().DeleteRateLimiter(key string)
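A minimal sketch using the signatures above; the key string mirrors the configuration items and is only illustrative:

// raise the limit for requests sent to the service "Server" to 100 QPS at runtime
qpslimiter.GetQpsTrafficLimiter().UpdateRateLimit("cse.flowcontrol.Consumer.qps.limit.Server", 100)
// inside a handler, throttle a request against the same key
qpslimiter.GetQpsTrafficLimiter().ProcessQpsTokenReq("cse.flowcontrol.Consumer.qps.limit.Server", 100)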

Fault Tolerance

Introduction

go-chassis supports fault tolerance to make your service resilient.

Configuration

The fault-tolerance related configurations are all in load_balancing.yaml with the prefix cse.loadbalance. Set retryEnabled to true to enable it.

retryEnabled

(optional, bool) Enable fault tolerance, default is false

retryOnSame

(optional, int) how many times to retry on the same instance after a failure, default is 0

retryOnNext

(optional, int) how many times to run load balancing again and retry on the next instance after a failure, default is 0

backoff.kind

(optional, string) backoff policy: [jittered|constant|zero], default is zero
  • zero: do not wait at all.
  • constant: after each failed retry, wait for a constant time. Use backoff.MinMs to set the time.
  • jittered: the wait time grows exponentially after each retry until it reaches MaxMs. Use backoff.MinMs to set the first wait time.

backoff.MinMs

(optional, int) minimum wait time between each retry, unit is ms, default is 0

backoff.MaxMs

(optional, int) maximum wait time between each retry, unit is ms, default is 0

example

edit load_balancing.yaml.

cse:
  loadbalance:
    retryEnabled: true
    retryOnNext: 2
    retryOnSame: 3
    backoff:
      kind: jittered
      MinMs: 200
      MaxMs: 400

Circuit breaker

Introduction

Circuit breakers help isolate upstream services at runtime. All invocations are executed through a circuit; under its protection, the circuit opens to stop network communication when there are too many errors, timeouts or concurrent requests. It also monitors each service call to make the service observable.

Configuration

Configuration Format as below:

cse.{namespace}.Consumer.{serviceName}.{property}, where:

{namespace}: can be isolation|circuitBreaker|fallback|fallbackpolicy.

{serviceName}: optional; it denotes a service-level configuration and represents the target service name.

{property}: the configuration item.

cse.isolation.timeoutInMilliseconds

(optional, int) if a call is delayed longer than this, it is considered a failure; default is 30000

cse.isolation.maxConcurrentRequests

(optional, int) max concurrency, default is 1000

cse.circuitBreaker.scope

(optional, string) service or api, default is api. go chassis creates a dedicated circuit for every API, so invocations are isolated per API; if set to service, all APIs of a service share one circuit and the whole service is isolated.

cse.circuitBreaker.enabled

(optional, bool) enable circuit breaker or not, default is false

cse.circuitBreaker.forceOpen

(optional, bool) if true, the circuit is forcibly opened, default is false

cse.circuitBreaker.forceClosed

(optional, bool) ignore all other configurations and forcibly keep the circuit closed all the time, default is false

cse.circuitBreaker.sleepWindowInMilliseconds

(optional, int) after a circuit opens, how long to wait before the next retry; if the retry fails, the circuit opens again. Default is 15000

cse.circuitBreaker.requestVolumeThreshold

(optional, int) how many requests must fail within 10 seconds before the circuit breaker opens, default is 20

cse.circuitBreaker.errorThresholdPercentage

(optional, int) the error percentage at which the circuit breaker opens, default is 50

cse.fallback.enabled

(optional, bool) enable fallback or not, default is true

cse.fallbackpolicy.policy

(optional, string) fallback policy [returnnull|throwexception], default is returnnull

examples

cse:
  isolation:
    Consumer:
      timeoutInMilliseconds: 1
      maxConcurrentRequests: 100
      ServerA: # service level config
        timeoutInMilliseconds: 1000
        maxConcurrentRequests: 1000
  circuitBreaker:
    scope: api
    Consumer:
      enabled: false
      forceOpen: false
      forceClosed: true
      sleepWindowInMilliseconds: 10000
      requestVolumeThreshold: 20
      errorThresholdPercentage: 10
      ServerB: # service level config
        enabled: true
        forceOpen: false
        forceClosed: false
        sleepWindowInMilliseconds: 10000
        requestVolumeThreshold: 20
        errorThresholdPercentage: 5
  fallback:
    Consumer:
      enabled: true
      maxConcurrentRequests: 20
  fallbackpolicy:
    Consumer:
      policy: throwexception

You must put the bizkeeper-consumer handler in the chain before load balancing and transport. Here is an example:

handler:
  chain:
    Consumer:
      default: bizkeeper-consumer, router, loadbalance, ratelimiter-consumer,transport

Transport

Introduction

You can define what is considered a failure and have it counted by the circuit breaker and fault tolerance modules.

Configurations

transport.failure.{protocol_name}

(required, string) the name of the protocol client; currently only rest failures are supported. A comma-separated list of strings, each starting with http_ followed by an http status code.

Example

With the config below, http_500 and http_502 responses are considered unsuccessful attempts:

cse:
  transport:
    failure:
      rest: http_500,http_502

Tracing

Introduction

Go chassis uses opentracing-go to trace distributed system calls.

Configuration

the config is in monitoring.yaml

tracing.tracer

(optional, string) which opentracing implementation go chassis should use, default is zipkin

tracing.settings

(optional, map) options such as URI, batchSize and batchInterval can be customized here. The go chassis tracing package is highly extensible; to deal with the settings of different tracers it uses a map for options, so developers can freely customize options for their tracer.

Example

This config sends data to zipkin; tracing-provider must be added to the handler chain.

cse:
  handler:
    chain:
      Provider:
        default: tracing-provider,bizkeeper-provider
tracing:
  tracer: zipkin
  settings:
    URI: http://127.0.0.1:9411/api/v1/spans
    batchSize: 1

When you have a call chain deeper than two levels, like A->B->C,

in B's client you must pass the ctx on to C so that go chassis can keep tracing:

//Trace is a method
func (r *TracingHello) Trace(b *rf.Context) {
    req, err := rest.NewRequest("GET", "cse://RESTServerB/sayhello/world")
    if err != nil {
        b.WriteError(500, err)
        return
    }
    defer req.Close()
    // must set b.Ctx as input for next calling
    resp, err := core.NewRestInvoker().ContextDo(b.Ctx, req)
    if err != nil {
        b.WriteError(500, err)
        return
    }
    b.Write(resp.ReadBody())
}

Metrics

Overview

Metrics measure service performance indicators. Through the configuration file, developers can export the metrics the framework collects automatically and let Prometheus scrape them.

If your business code defines custom metrics, the API can also be used to add your own metrics.

配置

cse.metrics.enable

(optional, bool) if true, a new http API at the path defined by "cse.metrics.apipath" serves metrics to clients; default is false

cse.metrics.apipath

(optional, string) the metrics API path, default is /metrics

cse.metrics.enableGoRuntimeMetrics

(optional, bool) whether to enable Go runtime monitoring, default is true

cse.metrics.enableCircuitMetrics

(optional, bool) report circuit breaker metrics to go-metrics, default is true

cse.metrics.flushInterval

(optional, string) the interval for flushing metrics from go-metrics to the Prometheus exporter, for example 10s, 1m

cse.metrics.circuitMetricsConsumerNum

(optional, int) be careful with this option: the default is 3, meaning 3 goroutines consume metrics. Too many consumers will affect service performance under high concurrency.

API

Package path

import "github.com/go-chassis/go-chassis/metrics"

Get the metrics registry of go-chassis; custom metrics can be added through this registry and are exported automatically in the API response as well.

func GetSystemRegistry() metrics.Registry

Get the Prometheus registry used by go-chassis, allowing you to operate on the Prometheus registry directly.

func GetSystemPrometheusRegistry() *prometheus.Registry

Create a metrics registry with a specific name.

func GetOrCreateRegistry(name string) metrics.Registry

Report metrics data to Prometheus using a specific metrics registry.

func ReportMetricsToPrometheus(r metrics.Registry)

The http handler that reports metrics data.

func MetricsHandleFunc(req *restful.Request, rep *restful.Response)
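A minimal sketch of reporting a custom counter, assuming the registry returned above follows the go-metrics Registry interface (github.com/rcrowley/go-metrics), which is what the signatures suggest; the registry and counter names are arbitrary:

import (
    gometrics "github.com/rcrowley/go-metrics"

    "github.com/go-chassis/go-chassis/metrics"
)

// create (or fetch) a dedicated registry for business metrics
r := metrics.GetOrCreateRegistry("business")
// register a counter in it and increase it
gometrics.GetOrRegisterCounter("orders_total", r).Inc(1)
// report this registry to the prometheus exporter as well
metrics.ReportMetricsToPrometheus(r)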

Example

cse:
  metrics:
    apiPath: /metrics      # we can also give api path having prefix "/" ,like /adas/metrics
    enable: true
    enableGoRuntimeMetrics: true
    enableCircuitMetrics: true

If the rest server listens on 127.0.0.1:8080, then with the configuration above the metrics data can be fetched from http://127.0.0.1:8080/metrics.

Log

Overview

You can configure the runtime log of the microservice: the output writers, log level, file path and log rotation properties.

Configuration

The log configuration file is lager.yaml; a template is shown below:

  • logger_level is the log level; from low to high the levels are DEBUG, INFO, WARN, ERROR, FATAL. The configured level is the minimum level that is output; only logs at or above it are written.
  • writers are the log outputs; the default is file and stdout.
  • logger_file is the log output file.
  • log_format_text: default is false, meaning logs are output in json format. If true, the output format is plain text similar to log4j. json output is recommended.
  • rollingPolicy: default is size, rotating logs by size; if set to daily, logs are rotated by time.
  • log_rotate_date: the rotation interval in days, range (0, 10).
  • log_rotate_size: the rotation file size in MB, range (0, 50).
  • log_backup_count: the maximum number of retained log files, range [0, 100).
---
writers: file,stdout
# LoggerLevel: |DEBUG|INFO|WARN|ERROR|FATAL
logger_level: DEBUG
logger_file: log/chassis.log
log_format_text: false

#rollingPolicy daily/size
rollingPolicy: size
#log rotate and backup settings
log_rotate_date: 1
log_rotate_size: 10
log_backup_count: 7

TLS

Overview

You can enable HTTPS communication by configuring SSL/TLS to secure data transmission. This covers both client-side and server-side TLS; the Consumer and Provider TLS configurations are enabled automatically from the configuration.

Configuration

TLS can be configured in chassis.yaml or in a separate tls.yaml. In the format below, the tag identifies the service name and service type, and the key identifies the configuration item.

ssl:
  [tag].[key]: [configuration]
TAG

When the tag is empty, the ssl configuration is the common configuration. registry.Consumer and configcenter.Consumer are the ssl configurations used when accessing service center and config center as a consumer. protocol.serviceType allows any combination of protocol and type. name.protocol.serviceType additionally specifies the service name on top of protocol and type.

registry.Consumer

TLS configuration for the service registry

serviceDiscovery.Consumer

TLS configuration for service discovery

contractDiscovery.Consumer

TLS configuration for contract discovery

registrator.Consumer

TLS configuration for the registrator

configcenter.Consumer

TLS configuration for the config center

{protocol}.{serviceType}

The protocol can be any protocol, currently highway and rest; after you extend a protocol, the new protocol name can be used here. The type is Consumer or Provider.

{name}.{protocol}.{serviceType}

A TLS configuration dedicated to a particular microservice; name is the microservice name.
KEY

ssl supports the following items. If the private key file is encrypted, the cipher plugin and related information must be specified to decrypt it.

cipherPlugin

(optional, string) the cipher plugin; the built-in plugins are default and aes, default is default

verifyPeer

(optional, bool) whether to verify the peer, default is false

cipherSuits

(optional, string) cipher suites, e.g. TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

protocol

(optional, string) the minimum TLS protocol version, default is TLSv1.2

caFile

(optional, string) path of the CA file

certFile

(optional, string) path of the certificate file

keyFile

(optional, string) path of the private key file

certPwdFile

(optional, string) path of the password file used to decrypt the private key

API

When ssl is configured for Provider and Consumer, go-chassis loads the related configuration for them automatically. You can also use the APIs exposed by chassis directly; the APIs below are mainly for obtaining the ssl configuration and the tls.Config.

Get the default SSL configuration
GetDefaultSSLConfig() *common.SSLConfig
Get a specific SSL configuration
GetSSLConfigByService(svcName, protocol, svcType string) (*common.SSLConfig, error)
Get a specific TLSConfig
GetTLSConfigByService(svcName, protocol, svcType string) (*tls.Config, *common.SSLConfig, error)
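A minimal sketch of fetching the TLS configuration prepared for a consumer of the rest service "Server"; the import path of the TLS package is an assumption, since it is not named in this doc:

import (
    "net/http"

    chassisTLS "github.com/go-chassis/go-chassis/core/tls" // import path assumed
)

// fetch the tls.Config prepared for the consumer of the rest service "Server"
tlsConfig, _, err := chassisTLS.GetTLSConfigByService("Server", "rest", "Consumer")
if err != nil {
    lager.Logger.Error("load tls config failed: " + err.Error())
    return
}
// the returned *tls.Config can be handed to any client, e.g. a plain http.Client
client := &http.Client{Transport: &http.Transport{TLSClientConfig: tlsConfig}}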

Example

Provider configuration

Below is the ssl configuration that provides HTTPS access for a rest provider; the tag is in the protocol.serviceType form.

ssl:
  rest.Provider.cipherPlugin: default
  rest.Provider.verifyPeer: true
  rest.Provider.cipherSuits: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  rest.Provider.protocol: TLSv1.2
  rest.Provider.keyFile: /etc/ssl/server_key.pem
  rest.Provider.certFile: /etc/ssl/server.cer
  rest.Provider.certPwdFile: /etc/ssl/cert_pwd_plain
  rest.Provider.caFile: /etc/ssl/trust.cer
Consumer configuration

Below is the ssl configuration of a consumer accessing a rest service. The tag is in the name.protocol.serviceType form, where Server is the service name to access and rest is the protocol. If verifyPeer is true, mutual authentication is enabled; otherwise the client skips verification of the server certificate.

ssl:
  Server.rest.Consumer.cipherPlugin: default
  Server.rest.Consumer.verifyPeer: true
  Server.rest.Consumer.cipherSuits: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  Server.rest.Consumer.protocol: TLSv1.2
  Server.rest.Consumer.keyFile: /etc/ssl/server_key.pem
  Server.rest.Consumer.certFile: /etc/ssl/server.cer
  Server.rest.Consumer.certPwdFile: /etc/ssl/cert_pwd_plain
  Server.rest.Consumer.caFile: /etc/ssl/trust.cer

Contract management

Overview

go-chassis reads the service contracts and uploads their contents to the registry.

Configuration

Contract files must be yaml files and should be placed in the go-chassis schema directory.

The schema directory is located at:

1. conf/{serviceName}/schema, where conf is the go-chassis conf folder

2. ${SCHEMA_ROOT}

Location 2 takes precedence over location 1.

API

Package path

import "github.com/go-chassis/go-chassis/core/config/schema"

The contract map: keys are the contract file names, values are the contract file contents.

var DefaultSchemaIDsMap map[string]string
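A minimal sketch that lists the schemas loaded from the schema directory:

import (
    "log"

    "github.com/go-chassis/go-chassis/core/config/schema"
)

// list the schema IDs loaded from the schema directory
for schemaID := range schema.DefaultSchemaIDsMap {
    log.Println("loaded schema:", schemaID)
}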

Example

conf
`-- myservice
    `-- schema
        |-- myschema1.yaml
        `-- myschema2.yaml

Communication between GO consumer and JAVA provider

This guide demonstrates how to use highway to communicate with Java chassis. The Go consumer uses an invoker.Invoke() call to make the highway communication.

Go Consumer
Parameters of Invoke:
  1. Context
  2. MicroserviceName
  3. SchemaID
  4. operationID
  5. Input argument
  6. Response argument


In the employ.pb.go file, the EmployeeStruct structure is used as both the input argument and the response argument.

EmployeeStruct

Java provider:

Microservicename is the name provided in the microservice.yaml file. In this example it is "springboot".

microservice.yaml

SchemaId is the schemaID defined in the java provider. In this example it is “hello”.

OperationId is the OperationName in the java provider. In this example it is “addAndShowEmploy”.

helloservice.png

The Employ class, which has the member variables "Name" and "Phone", is used as the input parameter for the operation and also as the response of this API.

Employ
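A hedged sketch of such a call; the EmployeeStruct fields shown here are assumptions and should match the generated employ.pb.go:

// EmployeeStruct mirrors the Employ class of the Java provider; its fields are assumed here
employee := &EmployeeStruct{Name: "Peter", Phone: "1234567890"}
reply := &EmployeeStruct{}
// microservice "springboot", schema "hello" and operation "addAndShowEmploy" come from the Java provider above
err := core.NewRPCInvoker().Invoke(context.Background(), "springboot", "hello", "addAndShowEmploy",
    employee, reply, core.WithProtocol("highway"))
if err != nil {
    lager.Logger.Error("invoke failed: " + err.Error())
    return
}
lager.Logger.Info(reply.Name + " " + reply.Phone)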

Use Istio as control plane

Get started

go-chassis can be integrated with Istio for service discovery and routing. To enable Istio Pilot support in go-chassis, two simple steps are needed during development:

  • Import the istiov2 registry plugin from mesher
import _ "github.com/go-mesh/mesher/plugins/registry/istiov2"

This will install the istiov2 service discovery plugin at runtime.

  • Configure service discovery in chassis.yaml
cse:
  service:
    registry:
      registrator:
        disabled: true
      serviceDiscovery:
        type: pilotv2
        address: grpc://istio-pilot.istio-system:15010

Disable the registrator (since we don't have to register the service to Pilot explicitly), change the serviceDiscovery type to pilotv2 (indicating a Pilot that provides the xDS v2 API; the xDS v1 API is already deprecated), and configure the address, typically istio-pilot.istio-system:15010 in an Istio environment.

Then when deploying the microservices in Istio, make sure the Kubernetes Service name and the go-chassis service name are exactly the same; go-chassis will then discover the service instances from Pilot as expected.

The routing tags in Istio

In the original go-chassis configuration, users can specify tag-based route rules, as described below:

## router.yaml
router:
  infra: cse
routeRule:
  targetService:
    - precedence: 2
      route:
      - tags:
          version: v1
        weight: 40
      - tags:
          version: v2
          debug: true
        weight: 40
      - tags:
          version: v3
        weight: 20

Then in a typical Istio environment, which is likely to be a Kubernetes cluster, users can specify a DestinationRule for targetService with the same tags:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: targetService
spec:
  host: targetService
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
      debug: "true"
  - name: v3
    labels:
      version: v3

Notice that the subsets' tags are the same as those in router.yaml, so go-chassis's tag-based load balancing strategy works as it originally does.

Discovery


Introduction

go-chassis is able to use Istio Pilot as discovery service.

Configuration

edit chassis.yaml.

registrator.disabled

The registrator must be disabled because it is used for client-side discovery; here go-chassis leverages server-side discovery, which is supported by Kubernetes.

serviceDiscovery.type

specify the plugin type to pilotv2

serviceDiscovery.address

the pilot address

example

cse:
  service:
    Registry:
      registrator:
        disabled: true
      serviceDiscovery:
        type: pilotv2
        address: grpc://istio-pilot.istio-system:15010

Route Rule

Instead of using CSE and its route config to manage routes, go-chassis supports Istio as a control plane to set route rules and follows the Envoy API reference to manage routing. This page gives examples showing how requests are routed between microservices.

Go-chassis Configurations

In the consumer's router.yaml, you can set router.infra to define which router plugin go-chassis fetches rules from. The default router.infra is cse, which means the route rules come from the route config in the CSE config center. If router.infra is set to pilotv2, router.address is required, such as the in-cluster istio-pilot grpc address.

router:
  infra: pilotv2 # pilotv2 or cse
  address: grpc://istio-pilot.istio-system:15010

For both the consumer and provider registry configurations, the recommended setup is shown below.

cse:
  service:
    registry:
      registrator:
        disabled: true
      serviceDiscovery:
        type: pilotv2
        address: grpc://istio-pilot.istio-system:15010

Kubernetes Configurations

The v1, v2 and v3 provider applications can be deployed in the Kubernetes cluster as Deployments with different labels. The version label is required, and you need to set the env variables used to generate the nodeID in the Istio system, such as POD_NAMESPACE, POD_NAME and INSTANCE_IP.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    version: v1
    app: pilot
    name: istioserver
  name: istioserver-v1
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: pilot
      version: v1
      name: istioserver
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: pilot
        version: v1
        name: istioserver
    spec:
      containers:
      - image: gosdk-istio-server:latest
        imagePullPolicy: Always
        name: istioserver-v1
        ports:
        - containerPort: 8084
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        env:
        - name: CSE_SERVICE_CENTER
          value: grpc://istio-pilot.istio-system:15010
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        volumeMounts:
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: istio-certs
        secret:
          defaultMode: 420
          optional: true
          secretName: istio.default

Istio v1alpha3 router configurations

Traffic-management gives references and examples of Istio's new route rule schema. First, subsets are defined according to labels. Then you can set route rules with different weights for virtual services.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istioserver
spec:
  host: istioserver
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
NOTICE: The subsets only support the version label to distinguish different virtual services; this constraint will be removed later.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istioserver
spec:
  hosts:
    - istioserver
  http:
  - route:
    - destination:
        host: istioserver
        subset: v1
      weight: 25
    - destination:
        host: istioserver
        subset: v2
      weight: 25
    - destination:
        host: istioserver
        subset: v3
      weight: 50

Discovery Plugins

Kubernetes

Kubernetes discovery is a service discovery choice. It implements the ServiceDiscovery plugin, which lets go-chassis do service discovery in a Kubernetes cluster based on Services.

Import Path

kube discovery is a service discovery plugin that you must import in your application code explicitly.

import _ "github.com/go-chassis/go-chassis-plugins/registry/kube"

Configurations

If you set cse.service.Registry.serviceDiscovery.type to "kube", then "configPath" is required to communicate with the Kubernetes cluster. The go-chassis consumer applications will then find the Endpoints and Services in the cluster that the provider applications are deployed with.

NOTE: Provider applications using go-chassis must be deployed as Pods associated with Services. The Service ports must be named, and the port name must be of the form <protocol>[-<suffix>]; protocol can be rest or highway for now.
cse:
  service:
    Registry:
      serviceDiscovery:
        type: kube
        configPath: /etc/.kube/config

To see the detailed use case of how to use kube discovery with chassis please refer to this example.

ServiceComb

ServiceComb service center is the default plugin of go chassis. It supports client-side discovery, so the registry service must be configured. It implements both the ServiceDiscovery and Registrator plugins.

Configurations

cse:
  service:
    registry:
      type: servicecenter
      address: http://10.0.0.1:30100,http://10.0.0.2:30100 
      refreshInterval : 30s
      watch: true                         
      api:
        version: v4

Tracing Plugins

Zipkin

The Zipkin tracer is a go chassis plugin; it reports tracing data to a Zipkin server.

Configurations

tracing.settings.URI

(optional, string) zipkin api url

tracing.settings.batchSize

(optional, string) how many spans to collect before the tracer reports to the server, default is 10000

tracing.settings.batchInterval

(optional, string) how long the tracer runs before it reports to the server, default is 10s

tracing.settings.collector

(optional, string) support http, namedPipe, default is http

Example

tracing:
  tracer: zipkin
  settings:
    URI: http://127.0.0.1:9411/api/v1/spans
    batchSize: 10000
    batchInterval: 10s
    collector: http
    

Development guides

Handler

Overview

Go chassis lets you insert your own processing logic into a request call in the form of a plugin.

Implementation

Implementing a handler takes three steps: implement the Handler interface, register it under its name, and finally add the handler chain configuration in chassis.yaml. [service_type] can be Provider or Consumer, and [chain_name] defaults to default.

Register the handler
RegisterHandler(name string, f func() Handler) error
Implement the Handler interface
type Handler interface {
    Handle(*Chain, *invocation.Invocation, invocation.ResponseCallBack)
    Name() string
}
Add the configuration
cse:
  handler:
    chain:
      [service_type]:
        [chain_name]: [your_handler_name]

Example

The example registers a handler named fake-handler; its Handle method only logs the endpoint of the invocation.

package handler
import (
    "github.com/go-chassis/go-chassis/core/handler"
    "github.com/go-chassis/go-chassis/core/invocation"
    "log"
)
const Name = "fake-handler"
type FakeHandler struct{}

func init()                         { handler.RegisterHandler(Name, New) }
func New() handler.Handler          { return &FakeHandler{} }
func (h *FakeHandler) Name() string { return Name }

func (h *FakeHandler) Handle(chain *handler.Chain, inv *invocation.Invocation,
    cb invocation.ResponseCallBack) {
    log.Printf("fake handler running for %v", inv.Endpoint)
    chain.Next(inv, func(r *invocation.InvocationResponse) error {
        return cb(r)
    })
}

An example chassis.yaml configuration is shown below:

cse:
  handler:
    chain:
      Provider:
        default: fake-handler

Archaius

Overview

go-archaius is the dynamic configuration framework of go-chassis. It currently supports the CSE config center, local files, ENV, CMD and other configuration sources. If you want to plug in your own configuration service, refer to this chapter.

Usage

go-archaius supports multiple sources at the same time, including command line, environment variables, external sources and files. You can add your own configuration source with the AddSource method of the ConfigurationFactory interface and register an EventListener with RegisterListener.

AddSource(core.ConfigSource) error
RegisterListener(listenerObj core.EventListener, key ...string) error

A configuration source must implement the ConfigSource interface. GetPriority and GetSourceName must be implemented with valid return values; they return the source priority and source name respectively. GetConfigurations and GetConfigurationByKey return all configurations and a specific configuration item and must also be implemented. The other methods may return empty values.

  • GetPriority determines the priority of the configuration source. The five built-in sources, from high to low priority, are config center, command line, environment variables, files and external sources, corresponding to the integers 0 to 4. Your own source can choose its own priority; the smaller the value, the higher the priority. GetSourceName returns the source name.
  • If there is no centralized config center with dimension support, the DimensionInfo-related methods may simply return nil.
  • Cleanup clears the locally cached configurations.
  • DynamicConfigHandler can be implemented as needed; it installs the callback for dynamic configuration updates.
type ConfigSource interface {
    GetSourceName() string
    GetConfigurations() (map[string]interface{}, error)
    GetConfigurationsByDI(dimensionInfo string) (map[string]interface{}, error)
    GetConfigurationByKey(string) (interface{}, error)
    GetConfigurationByKeyAndDimensionInfo(key, dimensionInfo string) (interface{}, error)
    AddDimensionInfo(dimensionInfo string) (map[string]string, error)
    DynamicConfigHandler(DynamicConfigCallback) error
    GetPriority() int
    Cleanup() error
}

Registering an EventListener makes the dispatcher dispatch an event to it whenever a configuration source is updated, and the registered listener handles the event.

type EventListener interface {
    Event(event *Event)
}

Example

Implement a ConfigSource
type fakeSource struct {
    Configuration  map[string]interface{}
    changeCallback core.DynamicConfigCallback
    sync.Mutex
}

func (*fakeSource) GetSourceName() string { return "TestingSource" }
func (*fakeSource) GetPriority() int      { return 0 }

func (f *fakeSource) GetConfigurations() (map[string]interface{}, error) {
    config := make(map[string]interface{})
    f.Lock()
    defer f.Unlock()
    for key, value := range f.Configuration {
        config[key] = value
    }
    return config, nil
}

func (f *fakeSource) GetConfigurationByKey(key string) (interface{}, error) {
    f.Lock()
    defer f.Unlock()
    configValue, ok := f.Configuration[key]
    if !ok {
        return nil, errors.New("invalid key")
    }
    return configValue, nil
}

func (f *fakeSource) DynamicConfigHandler(callback core.DynamicConfigCallback) error {
    f.Lock()
    defer f.Unlock()
    f.changeCallback = callback
    return nil
}

func (f *fakeSource) Cleanup() error {
    f.Lock()
    defer f.Unlock()
    f.Configuration = make(map[string]interface{})
    f.changeCallback = nil
    return nil
}

func (*fakeSource) AddDimensionInfo(d string) (map[string]string, error) { return nil, nil }
func (*fakeSource) GetConfigurationByKeyAndDimensionInfo(k, d string) (interface{}, error) { return nil, nil }
func (*fakeSource) GetConfigurationsByDI(d string) (map[string]interface{}, error) { return nil, nil }
Add the ConfigSource
func NewConfigSource() core.ConfigSource {
    return &fakeSource{
      Configuration: make(map[string]interface{}),
    }
}
factory, _ := goarchaius.NewConfigFactory(lager.Logger)
err := factory.AddSource(fakeSource.NewConfigSource())
Register an EventListener
type EventHandler struct{
    Factory goarchaius.ConfigurationFactory
}
func (h EventHandler) Event(e *core.Event) { 
  value := h.Factory.GetConfigurationByKey(e.Key)
  log.Printf("config value after change %s | %s", e.Key, value)
}
factory.RegisterListener(&EventHandler{Factory: factory}, "a*")

Archaius Config Source Plugin

Config Source Plugin

The config source plugin lets you write your own config-center client implementation for different types of config sources.

Instructions

Go-Chassis supports pulling configurations from different types of config centers; currently there are two implementations of the config client plugin (the Go-Archaius config center and the Ctrip Apollo config center). If you want to implement a new client for another config center, you have to implement the following ConfigClient interface.

//ConfigClient is the interface of config server client, it has basic func to interact with config server
type ConfigClient interface {
    //Init the Configuration for the Server
    Init()
    //PullConfigs pull all configs from remote
    PullConfigs(serviceName, version, app, env string) (map[string]interface{}, error)
    //PullConfig pull one config from remote
    PullConfig(serviceName, version, app, env, key, contentType string) (interface{}, error)
    //PullConfigsByDI pulls the configurations with customized DimensionInfo/Project
    PullConfigsByDI(dimensionInfo , diInfo string)(map[string]map[string]interface{}, error)
}

Once you implement the above interface, you need to define the type of your configClient:

config.GlobalDefinition.Cse.Config.Client.Type
cse:
  config:
    client:
      type: your_client_name   #config_center/apollo/your_client_name

Based on this type you need to load the plugin in your init()

 func init(){
    client.InstallConfigClientPlugin("NameOfYourPLugin", InitConfigYourPlugin)
 }

You need to import the package path in your application to enable your plugin. Once the plugin is enabled, you can pull configurations using the ConfigClient:

client.DefaultClient.PullConfigs(serviceName, versionName, appName, env)

Cipher

Overview

Go chassis provides the cipher component as a plugin that you can customize. By default, Chassis provides a mature Huawei PaaS cipher component, currently delivered as a binary .so file.

Configuration

AES cipher configuration

1. Copy the .so file to the project's ${CHASSIS_HOME}/lib directory, or directly to the system /usr/lib directory. ${CHASSIS_HOME}/lib is read first, then /usr/lib.

2. Specify the key material path (root.key, common_shared.key) via the environment variable PAAS_CRYPTO_PATH.

3. Import the aes package and use the encryption/decryption methods.

Custom cipher

You can customize a cipher by implementing the Cipher interface:

type Cipher interface {
    Encrypt(src string) (string, error)
    Decrypt(src string) (string, error)
}

API


Encrypt
Encrypt(src string) (string, error)
Decrypt
Decrypt(src string) (string, error)

Examples

Example of using the AES cipher
import (
    _ "github.com/servicecomb/security/plugins/aes"
    "testing"
    "github.com/servicecomb/security"
    "github.com/stretchr/testify/assert"
    "log"
)

func TestAESCipher_Decrypt(t *testing.T) {
    aesFunc := security.CipherPlugins["aes"]
    cipher := aesFunc()
    s, err := cipher.Encrypt("tian")
    assert.NoError(t, err)
    log.Println(s)
    a, _ := cipher.Decrypt(s)
    assert.Equal(t, "tian", a)
}
Example of a custom cipher
package plain

import "github.com/servicecomb/security"

type DefaultCipher struct {
}

// register self-defined plugin in the cipherPlugin map
func init() {
    security.CipherPlugins["default"] = new
}

// define a method of newing a plugin object, and register this method
func new() security.Cipher {
    return &DefaultCipher{
    }
}

// implement the Encrypt(string) method, to encrypt the clear text
func (c *DefaultCipher)Encrypt(src string) (string, error) {
    return src, nil
}

// implement the Decrypt(string) method, to decrypt the cipher text
func (c *DefaultCipher)Decrypt(src string) (string, error) {
    return src, nil
}

Protocol

Overview

The framework supports the http protocol and the highway RPC protocol by default. You can extend it with your own RPC protocol and call it through the RPCInvoker.

How to implement


Client side
  • Implement the protocol's client interface
type Client interface
  • Implement the following function to return the client plugin
func(...clientOption.Option) Client
  • Install the client plugin
func InstallPlugin(protocol string, f ClientNewFunc)
  • The handler chain ships with a handler named transport, which loads the matching protocol client according to the protocol name; the protocol is specified as follows:
invoker.Invoke(ctx, "Server", "HelloServer", "SayHello",
    &helloworld.HelloRequest{Name: "Peter"},
    reply,
    core.WithProtocol("grpc"),
)
Server side
  • Implement the protocol's server side
type Server interface
  • Modify the configuration file to start listening on the protocol
cse:
  protocols:
    grpc:
      listenAddress: 127.0.0.1:5000
      advertiseAddress: 127.0.0.1:5000

BootstrapPlugin

Introduction

go-chassis gives you a way to load any custom plugin you write before go chassis starts, so that you don't need to add code to the go-chassis project.

You can use a bootstrap plugin to load custom logic that manipulates go-chassis modules, for example changing the registry, router, etc.

Usage

func InstallPlugin(name string, plugin BootstrapPlugin)
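A heavily hedged sketch: the exact shape of the BootstrapPlugin interface and the bootstrap package import path are not shown in this doc, so the Name/Init methods and the path below are assumptions:

import "github.com/go-chassis/go-chassis/bootstrap" // import path assumed

type myBootstrapPlugin struct{}

// Name and Init are assumed to be the methods required by the BootstrapPlugin interface
func (p *myBootstrapPlugin) Name() string { return "my-bootstrap" }
func (p *myBootstrapPlugin) Init() error {
    // manipulate go-chassis modules here, e.g. replace the router or registry implementation
    return nil
}

func init() {
    bootstrap.InstallPlugin("my-bootstrap", &myBootstrapPlugin{})
}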

Design guides

Config Client Plugin

Config Client

Go-Chassis provides the functionality to pull configs from different config centers; to keep go-chassis extensible enough to support multiple config centers, this client was implemented as a plugin.

More Details of the plugin can be found here

Currently Go-Chassis has two implementations of this plugin, for the Huawei config center and the Ctrip Apollo config center.

A basic sequence diagram for this plugin is given below.

CC Plugin Sequence Diagram