Compare commits

...

72 Commits
v3.10.93 ... v3

Author SHA1 Message Date
8d747c64a8 Merge pull request 'tracer: minimize overhead on noop tracer usage' (#381) from tr into v3
All checks were successful
test / test (push) Successful in 4m20s
Reviewed-on: #381
2024-12-19 19:08:22 +03:00
94beb5ed3b tracer: minimize overhead on noop tracer usage
All checks were successful
lint / lint (pull_request) Successful in 1m5s
test / test (pull_request) Successful in 3m29s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-19 18:47:03 +03:00
98981ba86c Merge pull request 'metadata: add Copy method, fix old methods' (#379) from md into v3
All checks were successful
test / test (push) Successful in 3m23s
Reviewed-on: #379
2024-12-19 16:21:09 +03:00
1013f50d0e metadata: add Copy method, fix old methods
All checks were successful
lint / lint (pull_request) Successful in 1m7s
test / test (pull_request) Successful in 3m30s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-19 16:16:07 +03:00
0b190997b1 metadata: add Copy method, fix old methods
Some checks failed
lint / lint (pull_request) Failing after 55s
test / test (pull_request) Successful in 3m33s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-19 16:03:23 +03:00
69a44eb190 correcting hooks calling (#376)
All checks were successful
test / test (push) Successful in 3m30s
Reviewed-on: #376
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-12-18 20:31:07 +03:00
0476028f69 Merge pull request 'add context Must methods' (#377) from context into v3
All checks were successful
test / test (push) Successful in 3m46s
Reviewed-on: #377
2024-12-18 01:38:25 +03:00
330d8b149a add context Must methods
All checks were successful
lint / lint (pull_request) Successful in 1m39s
test / test (pull_request) Successful in 4m12s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-18 01:31:21 +03:00
19b04fe070 metadata: add MustGet func
All checks were successful
test / test (push) Successful in 1h34m51s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-15 23:45:17 +03:00
4cd55875c6 add Must*Context methods
All checks were successful
test / test (push) Successful in 11m55s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-13 17:02:57 +03:00
a7896cc728 Merge pull request 'logger/slog: fix dedup keys' (#374) from loggerfix into v3
All checks were successful
test / test (push) Successful in 11m49s
Reviewed-on: #374
2024-12-13 01:05:22 +03:00
ff991bf49c logger/slog: fix dedup keys
All checks were successful
lint / lint (pull_request) Successful in 1m32s
test / test (pull_request) Successful in 12m16s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-13 01:04:55 +03:00
5a6551b703 Merge pull request 'logger: improvements' (#373) from logger-dedup into v3
All checks were successful
test / test (push) Successful in 12m27s
Reviewed-on: #373
2024-12-13 00:28:28 +03:00
9406a33d60 logger: improvements
All checks were successful
lint / lint (pull_request) Successful in 1m51s
test / test (pull_request) Successful in 12m57s
* logger: add WithDedupKeys option

Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-13 00:24:11 +03:00
8f185abd9d Update README.md
All checks were successful
test / test (push) Successful in 1m49s
2024-12-11 00:25:16 +03:00
86492e0644 Update README.md
All checks were successful
test / test (push) Successful in 1m42s
2024-12-11 00:23:14 +03:00
b21972964a Merge pull request 'micro-tests' (#372) from micro-tests into v3
All checks were successful
test / test (push) Successful in 1m42s
Reviewed-on: #372
2024-12-10 23:45:54 +03:00
f5ee065d09 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 1m7s
test / test (pull_request) Successful in 2m47s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:21:45 +03:00
8cb02f2b08 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 51s
test / test (pull_request) Failing after 1m55s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:17:51 +03:00
bc926cd6bd add micro-tests trigger
All checks were successful
test / test (pull_request) Successful in 39s
lint / lint (pull_request) Successful in 43s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:15:40 +03:00
356abfd818 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 48s
test / test (pull_request) Failing after 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:08:10 +03:00
18444d3f98 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Failing after 36s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:05:10 +03:00
d5f07922e8 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 1m19s
test / test (pull_request) Successful in 1m16s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 19:57:27 +03:00
675e717410 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Successful in 40s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 18:06:17 +03:00
7b6aea235a add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 46s
test / test (pull_request) Successful in 43s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:39:36 +03:00
2cb7200467 add micro-tests trigger
All checks were successful
test / test (pull_request) Successful in 36s
lint / lint (pull_request) Successful in 44s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:36:23 +03:00
f430f97a97 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Successful in 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:30:03 +03:00
0060c4377a add micro-tests trigger
All checks were successful
test / test (pull_request) Successful in 39s
lint / lint (pull_request) Successful in 45s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:27:28 +03:00
e46579fe9a add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 44s
test / test (pull_request) Failing after 37s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:14:54 +03:00
ca52973194 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Successful in 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:08:14 +03:00
5bb33c7e1d add micro-tests trigger
Some checks failed
test / test (pull_request) Failing after 33s
lint / lint (pull_request) Successful in 44s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:52:15 +03:00
b71fc25328 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 44s
test / test (pull_request) Successful in 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:46:40 +03:00
9345dd075a add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 46s
test / test (pull_request) Failing after 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:16:16 +03:00
1c1b9c0a28 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 1m9s
test / test (pull_request) Failing after 1m38s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:02:43 +03:00
2969494c5a Merge branch 'v3' into micro-tests
Some checks failed
test / test (pull_request) Failing after 43s
lint / lint (pull_request) Failing after 14m11s
2024-12-10 15:59:30 +03:00
cbd3fa38ba add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Failing after 58s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 15:58:29 +03:00
569a36383d workflow fix (#371)
All checks were successful
test / test (push) Successful in 30s
Reviewed-on: #371
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 01:40:56 +03:00
90bed77526 workflow improve
All checks were successful
test / test (pull_request) Successful in 33s
lint / lint (pull_request) Successful in 1m11s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 01:38:30 +03:00
ba4478a5e0 workflow improve
Some checks failed
lint / lint (pull_request) Failing after 18m59s
test / test (pull_request) Successful in 1m20s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 00:59:54 +03:00
6dc76cdfea workflow improve
Some checks failed
lint / lint (pull_request) Has been cancelled
test / test (pull_request) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 00:56:52 +03:00
e266683d96 Merge branch 'v3' of https://git.unistack.org/unistack-org/micro into v3
2024-12-10 00:51:37 +03:00
2b62ad04f2 metadata: fix for grpc case (#370)
All checks were successful
test / test (push) Successful in 42s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #370
2024-12-09 19:06:49 +03:00
275b0a64e5 metadata: fix for grpc case
Some checks failed
test / test (pull_request) Successful in 46s
lint / lint (pull_request) Failing after 10m4s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-09 18:02:09 +03:00
38c5fe8b5a fixed struct alignment && refactor linter (#369)
All checks were successful
test / test (push) Successful in 42s

Reviewed-on: #369
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-12-09 16:23:25 +03:00
b6a0e4d983 add metrics for dns
All checks were successful
test / test (push) Successful in 46s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-09 00:41:08 +03:00
d9b822deff logger/slog: add ability to pass func that creates slog.Handler compatible interface
All checks were successful
test / test (push) Successful in 42s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-07 16:16:45 +03:00
0e66688f8f logger/slog: add option to pass slog.Handler
All checks were successful
test / test (push) Successful in 45s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-07 14:19:29 +03:00
9213fd212f Update .gitea/workflows/job_lint.yml
All checks were successful
test / test (push) Successful in 51s
2024-12-06 23:13:24 +03:00
aa360dcf51 Update .gitea/workflows/job_lint.yml
All checks were successful
test / test (push) Successful in 50s
2024-12-06 23:11:48 +03:00
2df259b5b8 Update .gitea/workflows/job_test.yml
Some checks failed
test / test (push) Has been cancelled
2024-12-06 23:11:32 +03:00
15e9310368 Merge pull request 'Update actions' (#368) from atolstikhin/micro:v3 into v3
All checks were successful
test / test (push) Successful in 1m1s
Reviewed-on: #368
2024-12-06 23:08:50 +03:00
Aleksandr Tolstikhin
16d8cf3434 Update actions
All checks were successful
test / test (pull_request) Successful in 1m4s
lint / lint (pull_request) Successful in 10m11s
2024-12-07 02:37:12 +07:00
9704ef2e5e fix pipeline (#365)
Co-authored-by: Aleksandr Tolstikhin <atolstikhin@mtsbank.ru>
Reviewed-on: #365
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-06 19:05:27 +03:00
94e8f90f00 Merge pull request 'removed create empty out/ingoing metadata' (#364) from devstigneev/micro:v3 into v3
Reviewed-on: #364
Reviewed-by: Василий Толстов <v.tolstov@unistack.org>
2024-12-06 12:49:25 +03:00
34d1587881 removed create empty out/ingoing metadata
Some checks failed
lint / lint (pull_request) Has been cancelled
pr / test (pull_request) Has been cancelled
2024-12-06 00:08:03 +03:00
bf4143cde5 replace default go resolver with caching resolver
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-03 01:11:08 +03:00
36b7b9f5fb add Live/Ready/Health methods
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-02 13:20:13 +03:00
ae97023092 store: updates for Watcher
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-01 19:54:38 +03:00
115ca6a018 logger: add WithAddFields option
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-29 15:34:02 +03:00
89cf4ef8af store: add missing LazyConnect option
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-26 17:48:09 +03:00
2a6ce6d4da add using lazy connect (#361)
#357

Co-authored-by: Василий Толстов <v.tolstov@unistack.org>
Reviewed-on: #361
Reviewed-by: Василий Толстов <v.tolstov@unistack.org>
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-11-26 12:18:17 +03:00
ad19fe2b90 logger/slog: fix race condition with Enabled and Level
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-24 23:40:54 +03:00
49055a28ea logger/slog: wrap handler
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-24 23:28:15 +03:00
d1c6e121c1 logger/slog: fix Clone and Fields methods
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-24 15:31:40 +03:00
7cd7fb0c0a disable logging for automaxprocs
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-20 22:35:36 +03:00
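For context, go.uber.org/automaxprocs prints an informational line at startup unless it is given a logger; silencing it is typically done by passing a no-op printf, roughly as sketched below (how micro wires this internally may differ):

package main

import (
	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// maxprocs.Set adjusts GOMAXPROCS to the container CPU quota and returns
	// an undo func; a no-op Logger suppresses its "maxprocs: ..." message.
	undo, err := maxprocs.Set(maxprocs.Logger(func(string, ...interface{}) {}))
	if err != nil {
		panic(err)
	}
	defer undo()
}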
77eb5b5264 add yaml support
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-01 11:23:29 +03:00
929e46c087 improve slog
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-01 00:56:40 +03:00
1fb5673d27 fixup graceful stop
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-10-25 17:21:54 +03:00
3bbb0cbc72 update slog/logger (#351)
Changed (logger methods without formatting):

- Added attribute preparation and alignment for the logger
- Alignment is done by adding !BADKEY before log/slog processing
- Added reuse of the Log method
- Removed the methods [Logf, Infof, Debugf, Errorf, Warnf, Fatalf, Tracef]
- Updated unit tests
- Removed the wrapper in the logger package
- Changed the logger interface
- Refactored logger calls throughout micro

Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #351
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-10-12 12:37:43 +03:00
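As the diffs further down illustrate, call sites move from the removed printf-style helpers to the leveled methods that take a message followed by attributes. A minimal sketch of the new calling convention (method names follow the diffs; exact signatures may differ):

package main

import (
	"context"
	"errors"
	"fmt"

	"go.unistack.org/micro/v3/logger"
)

func main() {
	ctx := context.TODO()
	err := errors.New("disk full")

	// Before: logger.Errorf(ctx, "store error: %v", err)
	// After: a leveled method takes a message plus attributes (here an error value).
	logger.DefaultLogger.Error(ctx, "store write error", err)

	// Formatting, when still needed, is done by the caller.
	logger.DefaultLogger.Trace(ctx, fmt.Sprintf("will be executed %v", "step-1"))
}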
71fe0df73f use automaxprocs and automemlimit
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-10-06 13:50:59 +03:00
f1b8ecbdb3 store: add new ErrNotConnected error
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-10-05 14:46:22 +03:00
fd2b2762e9 fixup missing xpool dep
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-30 09:57:07 +03:00
119 changed files with 2769 additions and 2074 deletions

View File

@ -0,0 +1,29 @@
name: lint
on:
pull_request:
types: [opened, reopened, synchronize]
branches:
- master
- v3
- v4
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup deps
run: go get -v ./...
- name: run lint
uses: https://github.com/golangci/golangci-lint-action@v6
with:
version: 'latest'

View File

@ -0,0 +1,34 @@
name: test
on:
pull_request:
types: [opened, reopened, synchronize]
branches:
- master
- v3
- v4
push:
branches:
- master
- v3
- v4
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup deps
run: go get -v ./...
- name: run test
env:
INTEGRATION_TESTS: yes
run: go test -mod readonly -v ./...

View File

@ -0,0 +1,53 @@
name: test
on:
pull_request:
types: [opened, reopened, synchronize]
branches:
- master
- v3
- v4
push:
branches:
- master
- v3
- v4
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: checkout tests
uses: actions/checkout@v4
with:
ref: master
filter: 'blob:none'
repository: unistack-org/micro-tests
path: micro-tests
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup go work
env:
GOWORK: /workspace/${{ github.repository_owner }}/go.work
run: |
go work init
go work use .
go work use micro-tests
- name: setup deps
env:
GOWORK: /workspace/${{ github.repository_owner }}/go.work
run: go get -v ./...
- name: run tests
env:
INTEGRATION_TESTS: yes
GOWORK: /workspace/${{ github.repository_owner }}/go.work
run: |
cd micro-tests
go test -mod readonly -v ./... || true

View File

@ -1,24 +0,0 @@
name: lint
on:
pull_request:
branches:
- master
- v3
jobs:
lint:
name: lint
runs-on: ubuntu-latest
steps:
- name: setup-go
uses: actions/setup-go@v3
with:
go-version: 1.21
- name: checkout
uses: actions/checkout@v3
- name: deps
run: go get -v -d ./...
- name: lint
uses: https://github.com/golangci/golangci-lint-action@v3.4.0
continue-on-error: true
with:
version: v1.52

View File

@ -1,23 +0,0 @@
name: pr
on:
pull_request:
branches:
- master
- v3
jobs:
test:
name: test
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v3
- name: setup-go
uses: actions/setup-go@v3
with:
go-version: 1.21
- name: deps
run: go get -v -t -d ./...
- name: test
env:
INTEGRATION_TESTS: yes
run: go test -mod readonly -v ./...

View File

@ -1,44 +1,5 @@
run:
concurrency: 4
concurrency: 8
deadline: 5m
issues-exit-code: 1
tests: true
linters-settings:
govet:
check-shadowing: true
enable:
- fieldalignment
linters:
enable:
- govet
- deadcode
- errcheck
- govet
- ineffassign
- staticcheck
- structcheck
- typecheck
- unused
- varcheck
- bodyclose
- gci
- goconst
- gocritic
- gosimple
- gofmt
- gofumpt
- goimports
- revive
- gosec
- makezero
- misspell
- nakedret
- nestif
- nilerr
- noctx
- prealloc
- unconvert
- unparam
disable-all: false

View File

@ -1,4 +1,4 @@
# Micro [![License](https://img.shields.io/:license-apache-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Doc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/unistack-org/micro/v3?tab=overview) [![Status](https://github.com/unistack-org/micro/workflows/build/badge.svg?branch=master)](https://github.com/unistack-org/micro/actions?query=workflow%3Abuild+branch%3Amaster+event%3Apush) [![Lint](https://goreportcard.com/badge/go.unistack.org/micro/v3)](https://goreportcard.com/report/go.unistack.org/micro/v3) [![Coverage](https://codecov.io/gh/unistack-org/micro/branch/v3/graph/badge.svg?token=OZPO2LP7VS)](https://codecov.io/gh/unistack-org/micro)
# Micro [![License](https://img.shields.io/:license-apache-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Doc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/unistack-org/micro/v3?tab=overview) [![Status](https://git.unistack.org/unistack-org/micro/actions/workflows/job_tests.yml/badge.svg?branch=v3)](https://git.unistack.org/unistack-org/micro/actions?query=workflow%3Abuild+branch%3Av3+event%3Apush) [![Lint](https://goreportcard.com/badge/go.unistack.org/micro/v3)](https://goreportcard.com/report/go.unistack.org/micro/v3)
Micro is a standard library for microservices.

View File

@ -46,6 +46,12 @@ type Broker interface {
BatchSubscribe(ctx context.Context, topic string, h BatchHandler, opts ...SubscribeOption) (Subscriber, error)
// String type of broker
String() string
// Live returns broker liveness
Live() bool
// Ready returns broker readiness
Ready() bool
// Health returns broker health
Health() bool
}
type (

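The new Live/Ready/Health methods make it possible to surface broker state through standard liveness/readiness probes. A hedged sketch (the HTTP wiring and probe paths are illustrative, not part of micro):

package main

import (
	"net/http"

	"go.unistack.org/micro/v3/broker"
)

// probeHandler maps a boolean check to the usual 200/503 probe semantics.
// It is a local helper for this example only.
func probeHandler(check func() bool) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		if check() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	}
}

func main() {
	b := broker.DefaultBroker // any broker.Broker implementation

	http.HandleFunc("/live", probeHandler(b.Live))
	http.HandleFunc("/ready", probeHandler(b.Ready))
	http.HandleFunc("/health", probeHandler(b.Health))
	_ = http.ListenAndServe(":8080", nil)
}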
View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Broker, bool) {
return c, ok
}
// MustContext returns broker from passed context
func MustContext(ctx context.Context) Broker {
b, ok := FromContext(ctx)
if !ok {
panic("missing broker")
}
return b
}
// NewContext savess broker in context
func NewContext(ctx context.Context, s Broker) context.Context {
if ctx == nil {

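The same MustContext helper is added in several packages below (broker, client, codec, config, flow, logger). A short usage sketch; note that it panics when the value was never stored, so it only suits call sites where injection is guaranteed:

package main

import (
	"context"

	"go.unistack.org/micro/v3/broker"
)

func main() {
	ctx := broker.NewContext(context.Background(), broker.DefaultBroker)

	// FromContext keeps the explicit ok check.
	if b, ok := broker.FromContext(ctx); ok {
		_ = b.String()
	}

	// MustContext trades the check for a panic("missing broker") on absence.
	b := broker.MustContext(ctx)
	_ = b.String()
}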
View File

@ -109,7 +109,7 @@ func (m *memoryBroker) Init(opts ...broker.Option) error {
m.funcSubscribe = m.fnSubscribe
m.funcBatchSubscribe = m.fnBatchSubscribe
m.opts.Hooks.EachNext(func(hook options.Hook) {
m.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case broker.HookPublish:
m.funcPublish = h(m.funcPublish)
@ -206,7 +206,7 @@ func (m *memoryBroker) publish(ctx context.Context, msgs []*broker.Message, opts
}
} else if sub.opts.AutoAck {
if err = ms.Ack(); err != nil {
m.opts.Logger.Errorf(m.opts.Context, "ack failed: %v", err)
m.opts.Logger.Error(m.opts.Context, "broker ack error", err)
}
}
// single processing
@ -217,11 +217,11 @@ func (m *memoryBroker) publish(ctx context.Context, msgs []*broker.Message, opts
if eh != nil {
_ = eh(p)
} else if m.opts.Logger.V(logger.ErrorLevel) {
m.opts.Logger.Error(m.opts.Context, err.Error())
m.opts.Logger.Error(m.opts.Context, "broker handler error", err)
}
} else if sub.opts.AutoAck {
if err = p.Ack(); err != nil {
m.opts.Logger.Errorf(m.opts.Context, "ack failed: %v", err)
m.opts.Logger.Error(m.opts.Context, "broker ack error", err)
}
}
}
@ -339,6 +339,18 @@ func (m *memoryBroker) Name() string {
return m.opts.Name
}
func (m *memoryBroker) Live() bool {
return true
}
func (m *memoryBroker) Ready() bool {
return true
}
func (m *memoryBroker) Health() bool {
return true
}
func (m *memoryEvent) Topic() string {
return m.topic
}

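The EachNext to EachPrev switch here (and in the noop broker, noop client and default config below) changes the order in which hooks wrap the underlying function. A self-contained sketch, independent of micro's options.Hooks type and assuming hooks are stored in registration order, of why iterating in reverse leaves the first-registered hook outermost:

package main

import "fmt"

type publishFunc func(msg string)

// wrap folds hooks over fn in the given iteration order; each hook returns a
// new function that runs around the previously built one.
func wrap(fn publishFunc, hooks []func(publishFunc) publishFunc, reverse bool) publishFunc {
	if reverse { // EachPrev-style iteration
		for i := len(hooks) - 1; i >= 0; i-- {
			fn = hooks[i](fn)
		}
		return fn
	}
	for _, h := range hooks { // EachNext-style iteration
		fn = h(fn)
	}
	return fn
}

func main() {
	hook := func(name string) func(publishFunc) publishFunc {
		return func(next publishFunc) publishFunc {
			return func(msg string) { fmt.Println("enter", name); next(msg) }
		}
	}
	hooks := []func(publishFunc) publishFunc{hook("first"), hook("second")}
	base := func(msg string) { fmt.Println("publish", msg) }

	// Reverse iteration wraps "second" first and "first" last, so "first" is
	// outermost and hooks run in registration order: first, second, publish.
	wrap(base, hooks, true)("hello")
}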
View File

@ -74,7 +74,7 @@ func TestMemoryBroker(t *testing.T) {
topic := "test"
count := 10
fn := func(p broker.Event) error {
fn := func(_ broker.Event) error {
return nil
}

View File

@ -25,6 +25,18 @@ func NewBroker(opts ...Option) *NoopBroker {
return b
}
func (b *NoopBroker) Health() bool {
return true
}
func (b *NoopBroker) Live() bool {
return true
}
func (b *NoopBroker) Ready() bool {
return true
}
func (b *NoopBroker) Name() string {
return b.opts.Name
}
@ -47,7 +59,7 @@ func (b *NoopBroker) Init(opts ...Option) error {
b.funcSubscribe = b.fnSubscribe
b.funcBatchSubscribe = b.fnBatchSubscribe
b.opts.Hooks.EachNext(func(hook options.Hook) {
b.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case HookPublish:
b.funcPublish = h(b.funcPublish)

View File

@ -16,6 +16,9 @@ import (
// Options struct
type Options struct {
// Name holds the broker name
Name string
// Tracer used for tracing
Tracer tracer.Tracer
// Register can be used for clustering
@ -28,23 +31,25 @@ type Options struct {
Meter meter.Meter
// Context holds external options
Context context.Context
// Wait waits for a collection of goroutines to finish
Wait *sync.WaitGroup
// TLSConfig holds tls.TLSConfig options
TLSConfig *tls.Config
// ErrorHandler used when broker can't unmarshal incoming message
ErrorHandler Handler
// BatchErrorHandler used when broker can't unmashal incoming messages
BatchErrorHandler BatchHandler
// Name holds the broker name
Name string
// Addrs holds the broker address
Addrs []string
// Wait waits for a collection of goroutines to finish
Wait *sync.WaitGroup
// GracefulTimeout contains time to wait to finish in flight requests
GracefulTimeout time.Duration
// Hooks can be run before broker Publish/BatchPublish and
// Subscribe/BatchSubscribe methods
Hooks options.Hooks
// GracefulTimeout contains time to wait to finish in flight requests
GracefulTimeout time.Duration
}
// NewOptions create new Options

View File

@ -17,13 +17,13 @@ func BackoffExp(_ context.Context, _ Request, attempts int) (time.Duration, erro
}
// BackoffInterval specifies randomization interval for backoff func
func BackoffInterval(min time.Duration, max time.Duration) BackoffFunc {
func BackoffInterval(minTime time.Duration, maxTime time.Duration) BackoffFunc {
return func(_ context.Context, _ Request, attempts int) (time.Duration, error) {
td := time.Duration(math.Pow(float64(attempts), math.E)) * time.Millisecond * 100
if td < min {
return min, nil
} else if td > max {
return max, nil
if td < minTime {
return minTime, nil
} else if td > maxTime {
return maxTime, nil
}
return td, nil
}

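Renaming min/max to minTime/maxTime presumably avoids shadowing Go's predeclared min and max functions (added in Go 1.21); the behaviour is unchanged. A brief usage sketch, assuming the function lives in the client package as the surrounding types suggest:

package main

import (
	"context"
	"fmt"
	"time"

	"go.unistack.org/micro/v3/client"
)

func main() {
	// Backoff grows with the attempt number and is clamped to [100ms, 2s].
	backoff := client.BackoffInterval(100*time.Millisecond, 2*time.Second)

	for attempt := 0; attempt < 5; attempt++ {
		d, err := backoff(context.TODO(), nil, attempt)
		if err != nil {
			panic(err)
		}
		fmt.Println(attempt, d)
	}
}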
View File

@ -34,23 +34,23 @@ func TestBackoffExp(t *testing.T) {
}
func TestBackoffInterval(t *testing.T) {
min := 100 * time.Millisecond
max := 300 * time.Millisecond
minTime := 100 * time.Millisecond
maxTime := 300 * time.Millisecond
r := &testRequest{
service: "test",
method: "test",
}
fn := BackoffInterval(min, max)
fn := BackoffInterval(minTime, maxTime)
for i := 0; i < 5; i++ {
d, err := fn(context.TODO(), r, i)
if err != nil {
t.Fatal(err)
}
if d < min || d > max {
t.Fatalf("Expected %v < %v < %v", min, d, max)
if d < minTime || d > maxTime {
t.Fatalf("Expected %v < %v < %v", minTime, d, maxTime)
}
}
}

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Client, bool) {
return c, ok
}
// MustContext get client from context
func MustContext(ctx context.Context) Client {
c, ok := FromContext(ctx)
if !ok {
panic("missing client")
}
return c
}
// NewContext put client in context
func NewContext(ctx context.Context, c Client) context.Context {
if ctx == nil {

View File

@ -194,7 +194,7 @@ func (n *noopClient) Init(opts ...Option) error {
n.funcPublish = n.fnPublish
n.funcBatchPublish = n.fnBatchPublish
n.opts.Hooks.EachNext(func(hook options.Hook) {
n.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case HookCall:
n.funcCall = h(n.funcCall)
@ -222,7 +222,7 @@ func (n *noopClient) Call(ctx context.Context, req Request, rsp interface{}, opt
ts := time.Now()
n.opts.Meter.Counter(semconv.ClientRequestInflight, "endpoint", req.Endpoint()).Inc()
var sp tracer.Span
ctx, sp = n.opts.Tracer.Start(ctx, req.Endpoint()+" rpc-client",
ctx, sp = n.opts.Tracer.Start(ctx, "rpc-client",
tracer.WithSpanKind(tracer.SpanKindClient),
tracer.WithSpanLabels("endpoint", req.Endpoint()),
)
@ -298,7 +298,7 @@ func (n *noopClient) fnCall(ctx context.Context, req Request, rsp interface{}, o
// call backoff first. Someone may want an initial start delay
t, err := callOpts.Backoff(ctx, req, i)
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
return errors.InternalServerError("go.micro.client", "%s", err)
}
// only sleep if greater than 0
@ -312,7 +312,7 @@ func (n *noopClient) fnCall(ctx context.Context, req Request, rsp interface{}, o
// TODO apply any filtering here
routes, err = n.opts.Lookup(ctx, req, callOpts)
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
return errors.InternalServerError("go.micro.client", "%s", err)
}
// balance the list of nodes
@ -372,7 +372,7 @@ func (n *noopClient) fnCall(ctx context.Context, req Request, rsp interface{}, o
return gerr
}
func (n *noopClient) NewRequest(service, endpoint string, req interface{}, opts ...RequestOption) Request {
func (n *noopClient) NewRequest(service, endpoint string, _ interface{}, _ ...RequestOption) Request {
return &noopRequest{service: service, endpoint: endpoint}
}
@ -385,7 +385,7 @@ func (n *noopClient) Stream(ctx context.Context, req Request, opts ...CallOption
ts := time.Now()
n.opts.Meter.Counter(semconv.ClientRequestInflight, "endpoint", req.Endpoint()).Inc()
var sp tracer.Span
ctx, sp = n.opts.Tracer.Start(ctx, req.Endpoint()+" rpc-client",
ctx, sp = n.opts.Tracer.Start(ctx, "rpc-client",
tracer.WithSpanKind(tracer.SpanKindClient),
tracer.WithSpanLabels("endpoint", req.Endpoint()),
)
@ -466,7 +466,7 @@ func (n *noopClient) fnStream(ctx context.Context, req Request, opts ...CallOpti
// call backoff first. Someone may want an initial start delay
t, cerr := callOpts.Backoff(ctx, req, i)
if cerr != nil {
return nil, errors.InternalServerError("go.micro.client", cerr.Error())
return nil, errors.InternalServerError("go.micro.client", "%s", cerr)
}
// only sleep if greater than 0
@ -480,7 +480,7 @@ func (n *noopClient) fnStream(ctx context.Context, req Request, opts ...CallOpti
// TODO apply any filtering here
routes, err = n.opts.Lookup(ctx, req, callOpts)
if err != nil {
return nil, errors.InternalServerError("go.micro.client", err.Error())
return nil, errors.InternalServerError("go.micro.client", "%s", err)
}
// balance the list of nodes
@ -546,7 +546,7 @@ func (n *noopClient) fnStream(ctx context.Context, req Request, opts ...CallOpti
return nil, grr
}
func (n *noopClient) stream(ctx context.Context, addr string, req Request, opts CallOptions) (Stream, error) {
func (n *noopClient) stream(ctx context.Context, _ string, _ Request, _ CallOptions) (Stream, error) {
return &noopStream{ctx: ctx}, nil
}
@ -609,13 +609,13 @@ func (n *noopClient) publish(ctx context.Context, ps []Message, opts ...PublishO
// use codec for payload
cf, err := n.newCodec(p.ContentType())
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
return errors.InternalServerError("go.micro.client", "%s", err)
}
// set the body
b, err := cf.Marshal(p.Payload())
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
return errors.InternalServerError("go.micro.client", "%s", err)
}
body = b
}

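The repeated change from errors.InternalServerError("go.micro.client", err.Error()) to errors.InternalServerError("go.micro.client", "%s", err) treats the second argument as a fixed format string, so printf verbs that happen to appear inside an error's text are no longer re-interpreted. A generic illustration using fmt rather than micro's errors package:

package main

import (
	"errors"
	"fmt"
)

func main() {
	// An error whose text happens to contain a printf verb.
	err := errors.New(`unexpected token "%s" at position 3`)

	// Using the error text as the format string lets fmt mangle it
	// (and vet's printf check flags this pattern).
	fmt.Println(fmt.Sprintf(err.Error())) // unexpected token "%!s(MISSING)" at position 3

	// A constant "%s" format keeps the message intact.
	fmt.Printf("%s\n", err) // unexpected token "%s" at position 3
}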
View File

@ -11,7 +11,6 @@ import (
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/network/transport"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/register"
"go.unistack.org/micro/v3/router"
@ -22,8 +21,16 @@ import (
// Options holds client options
type Options struct {
// Transport used for transfer messages
Transport transport.Transport
// Codecs map
Codecs map[string]codec.Codec
// Proxy is used for proxy requests
Proxy string
// ContentType is used to select codec
ContentType string
// Name is the client name
Name string
// Selector used to select needed address
Selector selector.Selector
// Logger used to log messages
@ -38,31 +45,28 @@ type Options struct {
Context context.Context
// Router used to get route
Router router.Router
// TLSConfig specifies tls.Config for secure connection
TLSConfig *tls.Config
// Codecs map
Codecs map[string]codec.Codec
// Lookup func used to get destination addr
Lookup LookupFunc
// Proxy is used for proxy requests
Proxy string
// ContentType is used to select codec
ContentType string
// Name is the client name
Name string
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
// Wrappers contains wrappers
Wrappers []Wrapper
// Hooks can be run before broker Publish/BatchPublish and
// Subscribe/BatchSubscribe methods
Hooks options.Hooks
// CallOptions contains default CallOptions
CallOptions CallOptions
// PoolSize connection pool size
PoolSize int
// PoolTTL connection pool ttl
PoolTTL time.Duration
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
// Hooks can be run before broker Publish/BatchPublish and
// Subscribe/BatchSubscribe methods
Hooks options.Hooks
}
// NewCallOptions creates new call options struct
@ -76,6 +80,16 @@ func NewCallOptions(opts ...CallOption) CallOptions {
// CallOptions holds client call options
type CallOptions struct {
// RequestMetadata holds additional metadata for call
RequestMetadata metadata.Metadata
// Network name
Network string
// Content-Type
ContentType string
// AuthToken string
AuthToken string
// Selector selects addr
Selector selector.Selector
// Context used for deadline
@ -83,33 +97,30 @@ type CallOptions struct {
// Router used for route
Router router.Router
// Retry func used for retries
// ResponseMetadata holds additional metadata from call
ResponseMetadata *metadata.Metadata
Retry RetryFunc
// Backoff func used for backoff when retry
Backoff BackoffFunc
// Network name
Network string
// Content-Type
ContentType string
// AuthToken string
AuthToken string
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
// Address specifies static addr list
Address []string
// SelectOptions selector options
SelectOptions []selector.SelectOption
// StreamTimeout stream timeout
StreamTimeout time.Duration
// RequestTimeout request timeout
RequestTimeout time.Duration
// RequestMetadata holds additional metadata for call
RequestMetadata metadata.Metadata
// ResponseMetadata holds additional metadata from call
ResponseMetadata *metadata.Metadata
// DialTimeout dial timeout
DialTimeout time.Duration
// Retries specifies retries num
Retries int
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
}
// ContextDialer pass ContextDialer to client
@ -194,18 +205,16 @@ func NewOptions(opts ...Option) Options {
Retry: DefaultRetry,
Retries: DefaultRetries,
RequestTimeout: DefaultRequestTimeout,
DialTimeout: transport.DefaultDialTimeout,
},
Lookup: LookupRoute,
PoolSize: DefaultPoolSize,
PoolTTL: DefaultPoolTTL,
Selector: random.NewSelector(),
Logger: logger.DefaultLogger,
Broker: broker.DefaultBroker,
Meter: meter.DefaultMeter,
Tracer: tracer.DefaultTracer,
Router: router.DefaultRouter,
Transport: transport.DefaultTransport,
Lookup: LookupRoute,
PoolSize: DefaultPoolSize,
PoolTTL: DefaultPoolTTL,
Selector: random.NewSelector(),
Logger: logger.DefaultLogger,
Broker: broker.DefaultBroker,
Meter: meter.DefaultMeter,
Tracer: tracer.DefaultTracer,
Router: router.DefaultRouter,
}
for _, o := range opts {
@ -278,13 +287,6 @@ func PoolTTL(d time.Duration) Option {
}
}
// Transport to use for communication e.g http, rabbitmq, etc
func Transport(t transport.Transport) Option {
return func(o *Options) {
o.Transport = t
}
}
// Register sets the routers register
func Register(r register.Register) Option {
return func(o *Options) {
@ -334,14 +336,6 @@ func TLSConfig(t *tls.Config) Option {
return func(o *Options) {
// set the internal tls
o.TLSConfig = t
// set the default transport if one is not
// already set. Required for Init call below.
// set the transport tls
_ = o.Transport.Init(
transport.TLSConfig(t),
)
}
}
@ -507,13 +501,6 @@ func WithAuthToken(t string) CallOption {
}
}
// WithNetwork is a CallOption which sets the network attribute
func WithNetwork(n string) CallOption {
return func(o *CallOptions) {
o.Network = n
}
}
// WithRouter sets the router to use for this call
func WithRouter(r router.Router) CallOption {
return func(o *CallOptions) {

View File

@ -38,4 +38,10 @@ type Cluster interface {
Broadcast(ctx context.Context, msg Message, filter ...string) error
// Unicast send message to single member in cluster
Unicast(ctx context.Context, node Node, msg Message) error
// Live returns cluster liveness
Live() bool
// Ready returns cluster readiness
Ready() bool
// Health returns cluster health
Health() bool
}

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Codec, bool) {
return c, ok
}
// MustContext returns codec from context
func MustContext(ctx context.Context) Codec {
c, ok := FromContext(ctx)
if !ok {
panic("missing codec")
}
return c
}
// NewContext put codec in context
func NewContext(ctx context.Context, c Codec) context.Context {
if ctx == nil {

View File

@ -13,7 +13,7 @@ type Validator interface {
}
// DefaultConfig default config
var DefaultConfig Config = NewConfig()
var DefaultConfig = NewConfig()
// DefaultWatcherMinInterval default min interval for poll changes
var DefaultWatcherMinInterval = 5 * time.Second
@ -138,7 +138,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s BeforeLoad err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" BeforeLoad error", err)
if !c.Options().AllowFail {
return err
}
@ -153,7 +153,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s AfterLoad err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" AfterLoad error", err)
if !c.Options().AllowFail {
return err
}
@ -168,7 +168,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s BeforeSave err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" BeforeSave error", err)
if !c.Options().AllowFail {
return err
}
@ -183,7 +183,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s AfterSave err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" AfterSave error", err)
if !c.Options().AllowFail {
return err
}
@ -198,7 +198,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s BeforeInit err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" BeforeInit error", err)
if !c.Options().AllowFail {
return err
}
@ -213,7 +213,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s AfterInit err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" AfterInit error", err)
if !c.Options().AllowFail {
return err
}

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Config, bool) {
return c, ok
}
// MustContext returns store from context
func MustContext(ctx context.Context) Config {
c, ok := FromContext(ctx)
if !ok {
panic("missing config")
}
return c
}
// NewContext put store in context
func NewContext(ctx context.Context, c Config) context.Context {
if ctx == nil {

View File

@ -37,7 +37,7 @@ func (c *defaultConfig) Init(opts ...Option) error {
c.funcLoad = c.fnLoad
c.funcSave = c.fnSave
c.opts.Hooks.EachNext(func(hook options.Hook) {
c.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case HookLoad:
c.funcLoad = h(c.funcLoad)

View File

@ -3,6 +3,7 @@ package config_test
import (
"context"
"fmt"
"reflect"
"testing"
"time"
@ -12,15 +13,17 @@ import (
)
type cfg struct {
StringValue string `default:"string_value"`
IgnoreValue string `json:"-"`
StructValue *cfgStructValue
IntValue int `default:"99"`
DurationValue time.Duration `default:"10s"`
MDurationValue mtime.Duration `default:"10s"`
MapValue map[string]bool `default:"key1=true,key2=false"`
UUIDValue string `default:"micro:generate uuid"`
IDValue string `default:"micro:generate id"`
MapValue map[string]bool `default:"key1=true,key2=false"`
StructValue *cfgStructValue
StringValue string `default:"string_value"`
IgnoreValue string `json:"-"`
UUIDValue string `default:"micro:generate uuid"`
IDValue string `default:"micro:generate id"`
DurationValue time.Duration `default:"10s"`
MDurationValue mtime.Duration `default:"10s"`
IntValue int `default:"99"`
}
type cfgStructValue struct {
@ -134,3 +137,13 @@ func TestValidate(t *testing.T) {
t.Fatal(err)
}
}
func Test_SizeOf(t *testing.T) {
st := cfg{}
tVal := reflect.TypeOf(st)
for i := 0; i < tVal.NumField(); i++ {
field := tVal.Field(i)
fmt.Printf("Field: %s, Offset: %d, Size: %d\n", field.Name, field.Offset, field.Type.Size())
}
}

View File

@ -41,14 +41,16 @@ type Options struct {
BeforeInit []func(context.Context, Config) error
// AfterInit contains slice of funcs that runs after Init
AfterInit []func(context.Context, Config) error
// AllowFail flag to allow fail in config source
AllowFail bool
// SkipLoad runs only if condition returns true
SkipLoad func(context.Context, Config) bool
// SkipSave runs only if condition returns true
SkipSave func(context.Context, Config) bool
// Hooks can be run before/after config Save/Load
Hooks options.Hooks
// AllowFail flag to allow fail in config source
AllowFail bool
}
// Option function signature
@ -278,10 +280,10 @@ func WatchCoalesce(b bool) WatchOption {
}
// WatchInterval specifies min and max time.Duration for pulling changes
func WatchInterval(min, max time.Duration) WatchOption {
func WatchInterval(minTime, maxTime time.Duration) WatchOption {
return func(o *WatchOptions) {
o.MinInterval = min
o.MaxInterval = max
o.MinInterval = minTime
o.MaxInterval = maxTime
}
}
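Like the client backoff change above, this rename presumably avoids shadowing the predeclared min and max identifiers. A small sketch of passing the option (how the resulting watcher is obtained depends on the concrete config implementation):

package main

import (
	"time"

	"go.unistack.org/micro/v3/config"
)

func main() {
	// Poll for changes no more often than every 5s and at least every 30s.
	wopts := []config.WatchOption{
		config.WatchInterval(5*time.Second, 30*time.Second),
	}
	_ = wopts
}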

View File

@ -2,7 +2,7 @@ package errors
import (
"encoding/json"
er "errors"
"errors"
"fmt"
"net/http"
"testing"
@ -26,7 +26,7 @@ func TestMarshalJSON(t *testing.T) {
func TestEmpty(t *testing.T) {
msg := "test"
var err *Error
err = FromError(fmt.Errorf(msg))
err = FromError(errors.New(msg))
if err.Detail != msg {
t.Fatalf("invalid error %v", err)
}
@ -42,7 +42,7 @@ func TestFromError(t *testing.T) {
if merr.ID != "go.micro.test" || merr.Code != 404 {
t.Fatalf("invalid conversation %v != %v", err, merr)
}
err = er.New(err.Error())
err = errors.New(err.Error())
merr = FromError(err)
if merr.ID != "go.micro.test" || merr.Code != 404 {
t.Fatalf("invalid conversation %v != %v", err, merr)
@ -57,7 +57,7 @@ func TestEqual(t *testing.T) {
t.Fatal("errors must be equal")
}
err3 := er.New("my test err")
err3 := errors.New("my test err")
if Equal(err1, err3) {
t.Fatal("errors must be not equal")
}

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Flow, bool) {
return c, ok
}
// MustContext returns Flow from context
func MustContext(ctx context.Context) Flow {
f, ok := FromContext(ctx)
if !ok {
panic("missing flow")
}
return f
}
// NewContext stores Flow to context
func NewContext(ctx context.Context, f Flow) context.Context {
if ctx == nil {

View File

@ -188,7 +188,7 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
steps, err := w.getSteps(options.Start, options.Reverse)
if err != nil {
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusPending.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
w.opts.Logger.Error(w.opts.Context, "store write error", werr)
}
return "", err
}
@ -212,7 +212,7 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
done := make(chan struct{})
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusRunning.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
w.opts.Logger.Error(w.opts.Context, "store write error", werr)
return eid, werr
}
for idx := range steps {
@ -237,7 +237,7 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
return
}
if w.opts.Logger.V(logger.TraceLevel) {
w.opts.Logger.Tracef(nctx, "will be executed %v", steps[idx][nidx])
w.opts.Logger.Trace(nctx, fmt.Sprintf("will be executed %v", steps[idx][nidx]))
}
cstep := steps[idx][nidx]
// nolint: nestif
@ -257,21 +257,21 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
if serr != nil {
step.SetStatus(StatusFailure)
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"rsp", serr); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
w.opts.Logger.Error(ctx, "store write error", werr)
}
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
w.opts.Logger.Error(ctx, "store write error", werr)
}
cherr <- serr
return
}
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"rsp", rsp); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
w.opts.Logger.Error(ctx, "store write error", werr)
cherr <- werr
return
}
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
w.opts.Logger.Error(ctx, "store write error", werr)
cherr <- werr
return
}
@ -290,16 +290,16 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
if serr != nil {
cstep.SetStatus(StatusFailure)
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"rsp", serr); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
w.opts.Logger.Error(ctx, "store write error", werr)
}
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
w.opts.Logger.Error(ctx, "store write error", werr)
}
cherr <- serr
return
}
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"rsp", rsp); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
w.opts.Logger.Error(ctx, "store write error", werr)
cherr <- werr
return
}
@ -317,7 +317,7 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
return eid, nil
}
logger.Tracef(ctx, "wait for finish or error")
logger.DefaultLogger.Trace(ctx, "wait for finish or error")
select {
case <-nctx.Done():
err = nctx.Err()
@ -333,15 +333,15 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
switch {
case nctx.Err() != nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusAborted.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
w.opts.Logger.Error(w.opts.Context, "store write error", werr)
}
case err == nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
w.opts.Logger.Error(w.opts.Context, "store write error", werr)
}
case err != nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
w.opts.Logger.Error(w.opts.Context, "store write error", werr)
}
}

View File

@ -32,7 +32,7 @@ type fsm struct {
// NewFSM creates a new finite state machine having the specified initial state
// with specified options
func NewFSM(opts ...Option) *fsm {
func NewFSM(opts ...Option) FSM {
return &fsm{
statesMap: map[string]StateFunc{},
opts: NewOptions(opts...),

View File

@ -17,7 +17,7 @@ func TestFSMStart(t *testing.T) {
wrapper := func(next StateFunc) StateFunc {
return func(sctx context.Context, s State, opts ...StateOption) (State, error) {
sctx = logger.NewContext(sctx, logger.Fields("state", s.Name()))
sctx = logger.NewContext(sctx, logger.DefaultLogger.Fields("state", s.Name()))
return next(sctx, s, opts...)
}
}

go.mod (41 changed lines)
View File

@ -1,20 +1,43 @@
module go.unistack.org/micro/v3
go 1.20
go 1.22.0
toolchain go1.23.4
require (
dario.cat/mergo v1.0.0
dario.cat/mergo v1.0.1
github.com/DATA-DOG/go-sqlmock v1.5.0
github.com/google/uuid v1.3.0
github.com/KimMachineGun/automemlimit v0.6.1
github.com/google/uuid v1.6.0
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/silas/dag v0.0.0-20220518035006-a7e85ada93c5
golang.org/x/sync v0.3.0
google.golang.org/grpc v1.57.0
google.golang.org/protobuf v1.31.0
go.uber.org/automaxprocs v1.6.0
go.unistack.org/micro-proto/v3 v3.4.1
golang.org/x/sync v0.10.0
google.golang.org/grpc v1.68.1
google.golang.org/protobuf v1.35.2
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/golang/protobuf v1.5.3 // indirect
golang.org/x/net v0.14.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230526203410-71b5a4ffd15e // indirect
github.com/cilium/ebpf v0.16.0 // indirect
github.com/containerd/cgroups/v3 v3.0.4 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/opencontainers/runtime-spec v1.2.0 // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/stretchr/testify v1.10.0 // indirect
go.uber.org/goleak v1.3.0 // indirect
golang.org/x/exp v0.0.0-20241210194714-1829a127f884 // indirect
golang.org/x/net v0.32.0 // indirect
golang.org/x/sys v0.28.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
)

go.sum (112 changed lines)
View File

@ -1,33 +1,97 @@
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60=
github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/KimMachineGun/automemlimit v0.6.1 h1:ILa9j1onAAMadBsyyUJv5cack8Y1WT26yLj/V+ulKp8=
github.com/KimMachineGun/automemlimit v0.6.1/go.mod h1:T7xYht7B8r6AG/AqFcUdc7fzd2bIdBKmepfP2S1svPY=
github.com/cilium/ebpf v0.16.0 h1:+BiEnHL6Z7lXnlGUsXQPPAE7+kenAd4ES8MQ5min0Ok=
github.com/cilium/ebpf v0.16.0/go.mod h1:L7u2Blt2jMM/vLAVgjxluxtBKlz3/GWjB0dMOEngfwE=
github.com/containerd/cgroups/v3 v3.0.4 h1:2fs7l3P0Qxb1nKWuJNFiwhp2CqiKzho71DQkDrHJIo4=
github.com/containerd/cgroups/v3 v3.0.4/go.mod h1:SA5DLYnXO8pTGYiAHXz94qvLQTKfVM5GEVisn4jpins=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI=
github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/josharian/native v1.1.0 h1:uuaP0hAbW7Y4l0ZRQ6C9zfb7Mg1mbFKry/xzDAfmtLA=
github.com/josharian/native v1.1.0/go.mod h1:7X/raswPFr05uY3HiLlYeyQntB6OO7E/d2Cu7qoaN2w=
github.com/jsimonetti/rtnetlink/v2 v2.0.1 h1:xda7qaHDSVOsADNouv7ukSuicKZO7GgVUCXxpaIEIlM=
github.com/jsimonetti/rtnetlink/v2 v2.0.1/go.mod h1:7MoNYNbb3UaDHtF8udiJo/RH6VsTKP1pqKLUTVCvToE=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mdlayher/netlink v1.7.2 h1:/UtM3ofJap7Vl4QWCPDGXY8d3GIY2UGSDbK+QWmY8/g=
github.com/mdlayher/netlink v1.7.2/go.mod h1:xraEF7uJbxLhc5fpHL4cPe221LI2bdttWlU+ZGLfQSw=
github.com/mdlayher/socket v0.4.1 h1:eM9y2/jlbs1M615oshPQOHZzj6R6wMT7bX5NPiQvn2U=
github.com/mdlayher/socket v0.4.1/go.mod h1:cAqeGjoufqdxWkD7DkpyS+wcefOtmu5OQ8KuoJGIReA=
github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g=
github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
github.com/opencontainers/runtime-spec v1.2.0 h1:z97+pHb3uELt/yiAWD691HNHQIF07bE7dzrbT927iTk=
github.com/opencontainers/runtime-spec v1.2.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/silas/dag v0.0.0-20220518035006-a7e85ada93c5 h1:G/FZtUu7a6NTWl3KUHMV9jkLAh/Rvtf03NWMHaEDl+E=
github.com/silas/dag v0.0.0-20220518035006-a7e85ada93c5/go.mod h1:7RTUFBdIRC9nZ7/3RyRNH1bdqIShrDejd1YbLwgPS+I=
golang.org/x/net v0.14.0 h1:BONx9s002vGdD9umnlX1Po8vOZmrgH34qlHcD1MfK14=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
golang.org/x/text v0.12.0 h1:k+n5B8goJNdU7hSvEtMUz3d1Q6D/XW4COJSJR6fN0mc=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230526203410-71b5a4ffd15e h1:NumxXLPfHSndr3wBBdeKiVHjGVFzi9RX2HwwQke94iY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230526203410-71b5a4ffd15e/go.mod h1:66JfowdXAEgad5O9NnYcsNPLCPZJD++2L9X0PCMODrA=
google.golang.org/grpc v1.57.0 h1:kfzNeI/klCGD2YPMUlaGNT3pxvYfga7smW3Vth8Zsiw=
google.golang.org/grpc v1.57.0/go.mod h1:Sd+9RMTACXwmub0zcNY2c4arhtrbBYD1AUHI/dt16Mo=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.unistack.org/micro-proto/v3 v3.4.1 h1:UTjLSRz2YZuaHk9iSlVqqsA50JQNAEK2ZFboGqtEa9Q=
go.unistack.org/micro-proto/v3 v3.4.1/go.mod h1:okx/cnOhzuCX0ggl/vToatbCupi0O44diiiLLsZ93Zo=
golang.org/x/exp v0.0.0-20241210194714-1829a127f884 h1:Y/Mj/94zIQQGHVSv1tTtQBDaQaJe62U9bkDZKKyhPCU=
golang.org/x/exp v0.0.0-20241210194714-1829a127f884/go.mod h1:qj5a5QZpwLU2NLQudwIN5koi3beDhSAlJwa67PuM98c=
golang.org/x/net v0.32.0 h1:ZqPmj8Kzc+Y6e0+skZsuACbx+wzMgo5MQsJh9Qd6aYI=
golang.org/x/net v0.32.0/go.mod h1:CwU0IoeOlnQQWJ6ioyFrfRuomB8GKF6KbYXZVyeXNfs=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 h1:8ZmaLZE4XWrtU3MyClkYqqtl6Oegr3235h7jxsDyqCY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576/go.mod h1:5uTbfoYQed2U9p3KIj2/Zzm02PYhndfdmML0qC3q3FU=
google.golang.org/grpc v1.68.1 h1:oI5oTa11+ng8r8XMMN7jAOmWfPZWbYpCFaMUTACxkM0=
google.golang.org/grpc v1.68.1/go.mod h1:+q1XYFJjShcqn0QZHvCyeR4CXPA+llXIeUIfIe00waw=
google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@ -4,17 +4,6 @@ import "context"
type loggerKey struct{}
// MustContext returns logger from passed context or DefaultLogger if empty
func MustContext(ctx context.Context) Logger {
if ctx == nil {
return DefaultLogger
}
if l, ok := ctx.Value(loggerKey{}).(Logger); ok && l != nil {
return l
}
return DefaultLogger
}
// FromContext returns logger from passed context
func FromContext(ctx context.Context) (Logger, bool) {
if ctx == nil {
@ -24,6 +13,15 @@ func FromContext(ctx context.Context) (Logger, bool) {
return l, ok
}
// MustContext returns logger from passed context, panicking if no logger is stored
func MustContext(ctx context.Context) Logger {
l, ok := FromContext(ctx)
if !ok {
panic("missing logger")
}
return l
}
// NewContext stores logger into passed context
func NewContext(ctx context.Context, l Logger) context.Context {
if ctx == nil {
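
The reworked MustContext panics instead of silently falling back to DefaultLogger, so a missing logger in the context surfaces immediately. A minimal usage sketch, assuming `context` and the go.unistack.org/micro/v3/logger package; the messages are illustrative:

ctx := logger.NewContext(context.Background(), logger.NewLogger())
if l, ok := logger.FromContext(ctx); ok {
	l.Info(ctx, "logger found in context")
}
l := logger.MustContext(ctx) // panics with "missing logger" if nothing was stored
l.Info(ctx, "hello")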

View File

@ -11,11 +11,9 @@ var DefaultContextAttrFuncs []ContextAttrFunc
var (
// DefaultLogger variable
DefaultLogger Logger = NewLogger()
DefaultLogger = NewLogger()
// DefaultLevel used by logger
DefaultLevel = InfoLevel
// DefaultCallerSkipCount used by logger
DefaultCallerSkipCount = 2
)
// Logger is a generic logging interface
@ -33,33 +31,19 @@ type Logger interface {
// Fields set fields to always be logged with keyval pairs
Fields(fields ...interface{}) Logger
// Info level message
Info(ctx context.Context, args ...interface{})
Info(ctx context.Context, msg string, args ...interface{})
// Trace level message
Trace(ctx context.Context, args ...interface{})
Trace(ctx context.Context, msg string, args ...interface{})
// Debug level message
Debug(ctx context.Context, args ...interface{})
Debug(ctx context.Context, msg string, args ...interface{})
// Warn level message
Warn(ctx context.Context, args ...interface{})
Warn(ctx context.Context, msg string, args ...interface{})
// Error level message
Error(ctx context.Context, args ...interface{})
Error(ctx context.Context, msg string, args ...interface{})
// Fatal level message
Fatal(ctx context.Context, args ...interface{})
// Infof level message
Infof(ctx context.Context, msg string, args ...interface{})
// Tracef level message
Tracef(ctx context.Context, msg string, args ...interface{})
// Debug level message
Debugf(ctx context.Context, msg string, args ...interface{})
// Warn level message
Warnf(ctx context.Context, msg string, args ...interface{})
// Error level message
Errorf(ctx context.Context, msg string, args ...interface{})
// Fatal level message
Fatalf(ctx context.Context, msg string, args ...interface{})
Fatal(ctx context.Context, msg string, args ...interface{})
// Log logs message with needed level
Log(ctx context.Context, level Level, args ...interface{})
// Logf logs message with needed level
Logf(ctx context.Context, level Level, msg string, args ...interface{})
Log(ctx context.Context, level Level, msg string, args ...interface{})
// Name returns broker instance name
Name() string
// String returns the type of logger
@ -68,108 +52,3 @@ type Logger interface {
// Field contains keyval pair
type Field interface{}
// Info writes msg to default logger on info level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Info(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Info(ctx, args...)
}
// Error writes msg to default logger on error level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Error(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Error(ctx, args...)
}
// Debug writes msg to default logger on debug level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Debug(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Debug(ctx, args...)
}
// Warn writes msg to default logger on warn level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Warn(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Warn(ctx, args...)
}
// Trace writes msg to default logger on trace level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Trace(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Trace(ctx, args...)
}
// Fatal writes msg to default logger on fatal level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Fatal(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Fatal(ctx, args...)
}
// Infof writes formatted msg to default logger on info level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Infof(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Infof(ctx, msg, args...)
}
// Errorf writes formatted msg to default logger on error level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Errorf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Errorf(ctx, msg, args...)
}
// Debugf writes formatted msg to default logger on debug level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Debugf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Debugf(ctx, msg, args...)
}
// Warnf writes formatted msg to default logger on warn level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Warnf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Warnf(ctx, msg, args...)
}
// Tracef writes formatted msg to default logger on trace level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Tracef(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Tracef(ctx, msg, args...)
}
// Fatalf writes formatted msg to default logger on fatal level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Fatalf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Fatalf(ctx, msg, args...)
}
// V returns true if passed level enabled in default logger
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func V(level Level) bool {
return DefaultLogger.V(level)
}
// Init initialize logger
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Init(opts ...Option) error {
return DefaultLogger.Init(opts...)
}
// Fields create logger with specific fields
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Fields(fields ...interface{}) Logger {
return DefaultLogger.Fields(fields...)
}
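
With the formatted (*f) variants and the deprecated package-level helpers removed, every call site now passes a message string followed by key/value attributes on a Logger instance. A rough migration sketch (imports assumed: context, errors, go.unistack.org/micro/v3/logger; the address and messages are illustrative, not from the repository):

ctx := context.TODO()
// before: logger.Errorf(ctx, "connect to %s failed: %v", addr, err)
l := logger.NewLogger() // any Logger implementation has the same shape
l.Error(ctx, "connect failed", "addr", "10.0.0.1:8080", errors.New("timeout"))
l.Log(ctx, logger.ErrorLevel, "connect failed", "addr", "10.0.0.1:8080")

An error passed as an attribute is rendered under Options.ErrorKey by the slog implementation further below.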

View File

@ -4,12 +4,17 @@ import (
"context"
)
const (
defaultCallerSkipCount = 2
)
type noopLogger struct {
opts Options
}
func NewLogger(opts ...Option) Logger {
options := NewOptions(opts...)
options.CallerSkipCount = defaultCallerSkipCount
return &noopLogger{opts: options}
}
@ -51,44 +56,23 @@ func (l *noopLogger) String() string {
return "noop"
}
func (l *noopLogger) Log(ctx context.Context, lvl Level, attrs ...interface{}) {
func (l *noopLogger) Log(ctx context.Context, lvl Level, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Info(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Info(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Debug(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Debug(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Error(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Error(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Trace(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Trace(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Warn(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Warn(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Fatal(ctx context.Context, attrs ...interface{}) {
}
func (l *noopLogger) Logf(ctx context.Context, lvl Level, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Infof(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Debugf(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Errorf(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Tracef(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Warnf(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Fatalf(ctx context.Context, msg string, attrs ...interface{}) {
func (l *noopLogger) Fatal(ctx context.Context, msg string, attrs ...interface{}) {
}
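
NewLogger in the base package returns this no-op implementation, so code can depend on a Logger and pay essentially nothing until a real backend is injected. A tiny sketch, assuming the same logger import as above:

l := logger.NewLogger()                         // the no-op logger
l.Info(context.TODO(), "dropped", "key", "val") // produces no output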

View File

@ -5,6 +5,7 @@ import (
"io"
"log/slog"
"os"
"slices"
"time"
"go.unistack.org/micro/v3/meter"
@ -15,18 +16,6 @@ type Option func(*Options)
// Options holds logger options
type Options struct {
// Out holds the output writer
Out io.Writer
// Context holds exernal options
Context context.Context
// Name holds the logger name
Name string
// Fields holds additional metadata
Fields []interface{}
// CallerSkipCount number of frmaes to skip
CallerSkipCount int
// ContextAttrFuncs contains funcs that executed before log func on context
ContextAttrFuncs []ContextAttrFunc
// TimeKey is the key used for the time of the log call
TimeKey string
// LevelKey is the key used for the level of the log call
@ -39,16 +28,33 @@ type Options struct {
SourceKey string
// StacktraceKey is the key used for the stacktrace
StacktraceKey string
// AddStacktrace controls writing of stacktaces on error
AddStacktrace bool
// AddSource enabled writing source file and position in log
AddSource bool
// The logging level the logger should log
Level Level
// TimeFunc used to obtain current time
TimeFunc func() time.Time
// Name holds the logger name
Name string
// Out holds the output writer
Out io.Writer
// Context holds external options
Context context.Context
// Meter used to count logs for specific level
Meter meter.Meter
// TimeFunc used to obtain current time
TimeFunc func() time.Time
// Fields holds additional metadata
Fields []interface{}
// ContextAttrFuncs contains funcs that are executed on the context before logging
ContextAttrFuncs []ContextAttrFunc
// CallerSkipCount number of frames to skip
CallerSkipCount int
// The logging level the logger should log
Level Level
// AddSource enables writing the source file and position in the log
AddSource bool
// AddStacktrace controls writing of stacktraces on error
AddStacktrace bool
// DedupKeys deduplicate keys in log output
DedupKeys bool
}
// NewOptions creates new options struct
@ -57,7 +63,6 @@ func NewOptions(opts ...Option) Options {
Level: DefaultLevel,
Fields: make([]interface{}, 0, 6),
Out: os.Stderr,
CallerSkipCount: DefaultCallerSkipCount,
Context: context.Background(),
ContextAttrFuncs: DefaultContextAttrFuncs,
AddSource: true,
@ -81,6 +86,35 @@ func WithContextAttrFuncs(fncs ...ContextAttrFunc) Option {
}
}
// WithDedupKeys enables deduplication of keys so duplicates are not logged
func WithDedupKeys(b bool) Option {
return func(o *Options) {
o.DedupKeys = b
}
}
// WithAddFields add fields for the logger
func WithAddFields(fields ...interface{}) Option {
return func(o *Options) {
if o.DedupKeys {
for i := 0; i < len(o.Fields); i += 2 {
for j := 0; j < len(fields); j += 2 {
iv, iok := o.Fields[i].(string)
jv, jok := fields[j].(string)
if iok && jok && iv == jv {
fields = slices.Delete(fields, j, j+2)
}
}
}
if len(fields) > 0 {
o.Fields = append(o.Fields, fields...)
}
} else {
o.Fields = append(o.Fields, fields...)
}
}
}
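
When DedupKeys is enabled, WithAddFields drops any incoming pair whose key is already present, so the first value wins; without it, pairs are simply appended. A sketch from a caller's point of view (values are illustrative; WithFields is assumed to replace the default fields, as the tests below confirm):

opts := logger.NewOptions(
	logger.WithDedupKeys(true),
	logger.WithFields("key1", "val1"),
)
logger.WithAddFields("key1", "override", "key2", "val2")(&opts)
// opts.Fields is now [key1 val1 key2 val2]: the existing key1 keeps its value.
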
// WithFields set default fields for the logger
func WithFields(fields ...interface{}) Option {
return func(o *Options) {
@ -102,27 +136,20 @@ func WithOutput(out io.Writer) Option {
}
}
// WitAddStacktrace controls writing stacktrace on error
// WithAddStacktrace controls writing stacktrace on error
func WithAddStacktrace(v bool) Option {
return func(o *Options) {
o.AddStacktrace = v
}
}
// WitAddSource controls writing source file and pos in log
// WithAddSource controls writing source file and pos in log
func WithAddSource(v bool) Option {
return func(o *Options) {
o.AddSource = v
}
}
// WithCallerSkipCount set frame count to skip
func WithCallerSkipCount(c int) Option {
return func(o *Options) {
o.CallerSkipCount = c
}
}
// WithContext set context
func WithContext(ctx context.Context) Option {
return func(o *Options) {
@ -154,8 +181,8 @@ func WithTimeFunc(fn func() time.Time) Option {
func WithZapKeys() Option {
return func(o *Options) {
o.TimeKey = "@timestamp"
o.LevelKey = "level"
o.MessageKey = "msg"
o.LevelKey = slog.LevelKey
o.MessageKey = slog.MessageKey
o.SourceKey = "caller"
o.StacktraceKey = "stacktrace"
o.ErrorKey = "error"
@ -164,8 +191,8 @@ func WithZapKeys() Option {
func WithZerologKeys() Option {
return func(o *Options) {
o.TimeKey = "time"
o.LevelKey = "level"
o.TimeKey = slog.TimeKey
o.LevelKey = slog.LevelKey
o.MessageKey = "message"
o.SourceKey = "caller"
o.StacktraceKey = "stacktrace"
@ -187,8 +214,8 @@ func WithSlogKeys() Option {
func WithMicroKeys() Option {
return func(o *Options) {
o.TimeKey = "timestamp"
o.LevelKey = "level"
o.MessageKey = "msg"
o.LevelKey = slog.LevelKey
o.MessageKey = slog.MessageKey
o.SourceKey = "caller"
o.StacktraceKey = "stacktrace"
o.ErrorKey = "error"
@ -198,6 +225,8 @@ func WithMicroKeys() Option {
// WithAddCallerSkipCount adds the given caller skip count for a copied logger
func WithAddCallerSkipCount(n int) Option {
return func(o *Options) {
o.CallerSkipCount += n
if n > 0 {
o.CallerSkipCount += n
}
}
}
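
The key presets now reuse slog's exported constants for the level and message keys, and WithAddCallerSkipCount only applies positive increments. A configuration sketch (any Logger constructor accepting logger.Option works here; the no-op NewLogger is used just to keep the example self-contained):

opts := []logger.Option{
	logger.WithMicroKeys(),           // timestamp / slog.LevelKey / slog.MessageKey / caller
	logger.WithAddCallerSkipCount(1), // zero or negative values are now ignored
}
l := logger.NewLogger(opts...)
_ = l.Init()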

View File

@ -2,19 +2,26 @@ package slog
import (
"context"
"fmt"
"log/slog"
"os"
"reflect"
"regexp"
"runtime"
"strconv"
"sync"
"sync/atomic"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/semconv"
"go.unistack.org/micro/v3/tracer"
)
const (
badKey = "!BADKEY"
// defaultCallerSkipCount used by logger
defaultCallerSkipCount = 3
)
var reTrace = regexp.MustCompile(`.*/slog/logger\.go.*\n`)
var (
@ -26,6 +33,27 @@ var (
fatalValue = slog.StringValue("fatal")
)
type wrapper struct {
h slog.Handler
level atomic.Int64
}
func (h *wrapper) Enabled(ctx context.Context, level slog.Level) bool {
return level >= slog.Level(int(h.level.Load()))
}
func (h *wrapper) Handle(ctx context.Context, rec slog.Record) error {
return h.h.Handle(ctx, rec)
}
func (h *wrapper) WithAttrs(attrs []slog.Attr) slog.Handler {
return h.h.WithAttrs(attrs)
}
func (h *wrapper) WithGroup(name string) slog.Handler {
return h.h.WithGroup(name)
}
func (s *slogLogger) renameAttr(_ []string, a slog.Attr) slog.Attr {
switch a.Key {
case slog.SourceKey:
@ -62,8 +90,7 @@ func (s *slogLogger) renameAttr(_ []string, a slog.Attr) slog.Attr {
}
type slogLogger struct {
leveler *slog.LevelVar
handler slog.Handler
handler *wrapper
opts logger.Options
mu sync.RWMutex
}
@ -77,51 +104,53 @@ func (s *slogLogger) Clone(opts ...logger.Option) logger.Logger {
o(&options)
}
l := &slogLogger{
opts: options,
if len(options.ContextAttrFuncs) == 0 {
options.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
l.leveler = new(slog.LevelVar)
handleOpt := &slog.HandlerOptions{
ReplaceAttr: l.renameAttr,
Level: l.leveler,
AddSource: l.opts.AddSource,
attrs, _ := s.argsAttrs(options.Fields)
l := &slogLogger{
handler: &wrapper{h: s.handler.h.WithAttrs(attrs)},
opts: options,
}
l.leveler.Set(loggerToSlogLevel(l.opts.Level))
l.handler = slog.New(slog.NewJSONHandler(options.Out, handleOpt)).With(options.Fields...).Handler()
l.handler.level.Store(int64(loggerToSlogLevel(options.Level)))
return l
}
func (s *slogLogger) V(level logger.Level) bool {
return s.opts.Level.Enabled(level)
s.mu.Lock()
v := s.opts.Level.Enabled(level)
s.mu.Unlock()
return v
}
func (s *slogLogger) Level(level logger.Level) {
s.leveler.Set(loggerToSlogLevel(level))
s.mu.Lock()
s.opts.Level = level
s.handler.level.Store(int64(loggerToSlogLevel(level)))
s.mu.Unlock()
}
func (s *slogLogger) Options() logger.Options {
return s.opts
}
func (s *slogLogger) Fields(attrs ...interface{}) logger.Logger {
func (s *slogLogger) Fields(fields ...interface{}) logger.Logger {
s.mu.RLock()
level := s.leveler.Level()
options := s.opts
s.mu.RUnlock()
l := &slogLogger{opts: options}
l.leveler = new(slog.LevelVar)
l.leveler.Set(level)
logger.WithAddFields(fields...)(&l.opts)
handleOpt := &slog.HandlerOptions{
ReplaceAttr: l.renameAttr,
Level: l.leveler,
AddSource: l.opts.AddSource,
if len(options.ContextAttrFuncs) == 0 {
options.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
l.handler = slog.New(slog.NewJSONHandler(l.opts.Out, handleOpt)).With(attrs...).Handler()
attrs, _ := s.argsAttrs(fields)
l.handler = &wrapper{h: s.handler.h.WithAttrs(attrs)}
l.handler.level.Store(int64(loggerToSlogLevel(l.opts.Level)))
return l
}
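
Clone and Fields now derive child loggers by wrapping the parent's slog handler with WithAttrs and an atomically stored level, instead of rebuilding a JSON handler each time. A usage sketch mirroring TestMultipleFieldsWithLevel below; the import path of this package is assumed to be go.unistack.org/micro/v3/logger/slog, aliased here as mslog:

ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := mslog.NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf))
_ = l.Init()
l = l.Fields("key", "val")                         // child wraps the same handler with the attr
nl := l.Clone(logger.WithLevel(logger.DebugLevel)) // independent level
nl.Debug(ctx, "visible")
l.Debug(ctx, "dropped: the parent level is still info")
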
@ -129,407 +158,77 @@ func (s *slogLogger) Fields(attrs ...interface{}) logger.Logger {
func (s *slogLogger) Init(opts ...logger.Option) error {
s.mu.Lock()
if len(s.opts.ContextAttrFuncs) == 0 {
s.opts.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
for _, o := range opts {
o(&s.opts)
}
s.leveler = new(slog.LevelVar)
if len(s.opts.ContextAttrFuncs) == 0 {
s.opts.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
handleOpt := &slog.HandlerOptions{
ReplaceAttr: s.renameAttr,
Level: s.leveler,
Level: loggerToSlogLevel(logger.TraceLevel),
AddSource: s.opts.AddSource,
}
s.leveler.Set(loggerToSlogLevel(s.opts.Level))
s.handler = slog.New(slog.NewJSONHandler(s.opts.Out, handleOpt)).With(s.opts.Fields...).Handler()
attrs, _ := s.argsAttrs(s.opts.Fields)
var h slog.Handler
if s.opts.Context != nil {
if v, ok := s.opts.Context.Value(handlerKey{}).(slog.Handler); ok && v != nil {
h = v
}
if fn := s.opts.Context.Value(handlerFnKey{}); fn != nil {
if rfn := reflect.ValueOf(fn); rfn.Kind() == reflect.Func {
if ret := rfn.Call([]reflect.Value{reflect.ValueOf(s.opts.Out), reflect.ValueOf(handleOpt)}); len(ret) == 1 {
if iface, ok := ret[0].Interface().(slog.Handler); ok && iface != nil {
h = iface
}
}
}
}
}
if h == nil {
h = slog.NewJSONHandler(s.opts.Out, handleOpt)
}
s.handler = &wrapper{h: h.WithAttrs(attrs)}
s.handler.level.Store(int64(loggerToSlogLevel(s.opts.Level)))
s.mu.Unlock()
return nil
}
func (s *slogLogger) Log(ctx context.Context, lvl logger.Level, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", lvl.String()).Inc()
if !s.V(lvl) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), loggerToSlogLevel(lvl), fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace && lvl == logger.ErrorLevel {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String(s.opts.StacktraceKey, traceLines[len(traceLines)-1]))
}
}
}
r.Add(attrs[1:]...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Log(ctx context.Context, lvl logger.Level, msg string, attrs ...interface{}) {
s.printLog(ctx, lvl, msg, attrs...)
}
func (s *slogLogger) Logf(ctx context.Context, lvl logger.Level, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", lvl.String()).Inc()
if !s.V(lvl) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), loggerToSlogLevel(lvl), msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace && lvl == logger.ErrorLevel {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, (slog.String(s.opts.StacktraceKey, traceLines[len(traceLines)-1])))
}
}
}
r.Add(attrs[1:]...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Info(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.InfoLevel, msg, attrs...)
}
func (s *slogLogger) Info(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.InfoLevel.String()).Inc()
if !s.V(logger.InfoLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelInfo, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Debug(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.DebugLevel, msg, attrs...)
}
func (s *slogLogger) Infof(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.InfoLevel.String()).Inc()
if !s.V(logger.InfoLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelInfo, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Trace(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.TraceLevel, msg, attrs...)
}
func (s *slogLogger) Debug(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.DebugLevel.String()).Inc()
if !s.V(logger.DebugLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Error(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.ErrorLevel, msg, attrs...)
}
func (s *slogLogger) Debugf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.DebugLevel.String()).Inc()
if !s.V(logger.DebugLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Trace(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.TraceLevel.String()).Inc()
if !s.V(logger.TraceLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug-1, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Tracef(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.TraceLevel.String()).Inc()
if !s.V(logger.TraceLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug-1, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Error(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.ErrorLevel.String()).Inc()
if !s.V(logger.ErrorLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String("stacktrace", traceLines[len(traceLines)-1]))
}
}
}
r.Add(attrs[1:]...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Errorf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.ErrorLevel.String()).Inc()
if !s.V(logger.ErrorLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String("stacktrace", traceLines[len(traceLines)-1]))
}
}
}
r.Add(attrs...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Fatal(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.FatalLevel.String()).Inc()
if !s.V(logger.FatalLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError+1, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Fatal(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.FatalLevel, msg, attrs...)
os.Exit(1)
}
func (s *slogLogger) Fatalf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.FatalLevel.String()).Inc()
if !s.V(logger.FatalLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError+1, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs...)
_ = s.handler.Handle(ctx, r)
os.Exit(1)
}
func (s *slogLogger) Warn(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.WarnLevel.String()).Inc()
if !s.V(logger.WarnLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelWarn, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Warnf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.WarnLevel.String()).Inc()
if !s.V(logger.WarnLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelWarn, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Warn(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.WarnLevel, msg, attrs...)
}
func (s *slogLogger) Name() string {
@ -540,10 +239,59 @@ func (s *slogLogger) String() string {
return "slog"
}
func (s *slogLogger) printLog(ctx context.Context, lvl logger.Level, msg string, args ...interface{}) {
if !s.V(lvl) {
return
}
var argError error
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", lvl.String()).Inc()
attrs, err := s.argsAttrs(args)
if err != nil {
argError = err
}
if argError != nil {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, argError.Error())
}
}
for _, fn := range s.opts.ContextAttrFuncs {
ctxAttrs, err := s.argsAttrs(fn(ctx))
if err != nil {
argError = err
}
attrs = append(attrs, ctxAttrs...)
}
if argError != nil {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, argError.Error())
}
}
if s.opts.AddStacktrace && lvl == logger.ErrorLevel {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String(s.opts.StacktraceKey, traceLines[len(traceLines)-1]))
}
}
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, printLog, LogLvlMethod]
r := slog.NewRecord(s.opts.TimeFunc(), loggerToSlogLevel(lvl), msg, pcs[0])
r.AddAttrs(attrs...)
_ = s.handler.Handle(ctx, r)
}
func NewLogger(opts ...logger.Option) logger.Logger {
s := &slogLogger{
opts: logger.NewOptions(opts...),
}
s.opts.CallerSkipCount = defaultCallerSkipCount
return s
}
@ -581,3 +329,39 @@ func slogToLoggerLevel(level slog.Level) logger.Level {
return logger.InfoLevel
}
}
func (s *slogLogger) argsAttrs(args []interface{}) ([]slog.Attr, error) {
attrs := make([]slog.Attr, 0, len(args))
var err error
for idx := 0; idx < len(args); idx++ {
switch arg := args[idx].(type) {
case slog.Attr:
attrs = append(attrs, arg)
case string:
if idx+1 < len(args) {
attrs = append(attrs, slog.Any(arg, args[idx+1]))
idx++
} else {
attrs = append(attrs, slog.String(badKey, arg))
}
case error:
attrs = append(attrs, slog.String(s.opts.ErrorKey, arg.Error()))
err = arg
}
}
return attrs, err
}
type handlerKey struct{}
func WithHandler(h slog.Handler) logger.Option {
return logger.SetOption(handlerKey{}, h)
}
type handlerFnKey struct{}
func WithHandlerFunc(fn any) logger.Option {
return logger.SetOption(handlerFnKey{}, fn)
}
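
WithHandlerFunc stores a constructor that Init calls via reflection with the configured writer and *slog.HandlerOptions, while WithHandler injects a ready-made handler; both ride on logger.SetOption. A sketch mirroring TestWithHandlerFunc below (same assumed mslog alias for this package, plus the standard log/slog import):

buf := bytes.NewBuffer(nil)
l := mslog.NewLogger(
	logger.WithLevel(logger.InfoLevel),
	logger.WithOutput(buf),
	mslog.WithHandlerFunc(slog.NewTextHandler), // Init calls it as fn(out, handlerOptions)
)
if err := l.Init(); err != nil {
	panic(err)
}
l.Info(context.TODO(), "msg1", "key", "val")
// buf now holds text output along the lines of: msg=msg1 key=val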

View File

@ -3,13 +3,165 @@ package slog
import (
"bytes"
"context"
"errors"
"fmt"
"log"
"log/slog"
"strings"
"testing"
"github.com/google/uuid"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/metadata"
)
func TestWithFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithDedupKeys(true),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
l = l.Fields("key1", "val2")
l.Info(ctx, "msg2")
if !bytes.Contains(buf.Bytes(), []byte(`msg=msg2 key1=val1`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestWithDedupKeysWithAddFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithDedupKeys(true),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
if err := l.Init(logger.WithAddFields("key2", "val2")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg2")
if err := l.Init(logger.WithAddFields("key2", "val3", "key1", "val4")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg3")
if !bytes.Contains(buf.Bytes(), []byte(`msg=msg3 key1=val1 key2=val2`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestWithHandlerFunc(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
)
if err := l.Init(); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
if !bytes.Contains(buf.Bytes(), []byte(`msg=msg1`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestWithAddFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
if err := l.Init(logger.WithAddFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg2")
if err := l.Init(logger.WithAddFields("key2", "val2")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg3")
if !bytes.Contains(buf.Bytes(), []byte(`"key1"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"key2"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestMultipleFieldsWithLevel(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l = l.Fields("key", "val")
l.Info(ctx, "msg1")
nl := l.Clone(logger.WithLevel(logger.DebugLevel))
nl.Debug(ctx, "msg2")
l.Debug(ctx, "msg3")
if !bytes.Contains(buf.Bytes(), []byte(`"key":"val"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"msg1"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"msg2"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if bytes.Contains(buf.Bytes(), []byte(`"msg3"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestMultipleFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l = l.Fields("key", "val")
l = l.Fields("key1", "val1")
l.Info(ctx, "msg")
if !bytes.Contains(buf.Bytes(), []byte(`"key":"val"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"key1":"val1"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestError(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
@ -29,13 +181,22 @@ func TestError(t *testing.T) {
func TestErrorf(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.ErrorLevel), logger.WithOutput(buf), logger.WithAddStacktrace(true))
if err := l.Init(); err != nil {
if err := l.Init(logger.WithContextAttrFuncs(func(_ context.Context) []interface{} {
return nil
})); err != nil {
t.Fatal(err)
}
l.Errorf(ctx, "message", fmt.Errorf("error message"))
l.Log(ctx, logger.ErrorLevel, "message", errors.New("error msg"))
l.Log(ctx, logger.ErrorLevel, "", errors.New("error msg"))
if !bytes.Contains(buf.Bytes(), []byte(`"error":"error msg"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"stacktrace":"`)) {
t.Fatalf("logger stacktrace not works, buf contains: %s", buf.Bytes())
}
@ -99,6 +260,11 @@ func TestFromContextWithFields(t *testing.T) {
if !bytes.Contains(buf.Bytes(), []byte(`"key":"val"`)) {
t.Fatalf("logger fields not works, buf contains: %s", buf.Bytes())
}
l.Info(ctx, "test", "uncorrected number attributes")
if !bytes.Contains(buf.Bytes(), []byte(`"!BADKEY":"uncorrected number attributes"`)) {
t.Fatalf("logger fields not works, buf contains: %s", buf.Bytes())
}
}
func TestClone(t *testing.T) {
@ -174,3 +340,52 @@ func TestLogger(t *testing.T) {
t.Fatalf("logger warn, buf %s", buf.Bytes())
}
}
func Test_WithContextAttrFunc(t *testing.T) {
loggerContextAttrFuncs := []logger.ContextAttrFunc{
func(ctx context.Context) []interface{} {
md, ok := metadata.FromIncomingContext(ctx)
if !ok {
return nil
}
attrs := make([]interface{}, 0, 10)
for k, v := range md {
switch k {
case "X-Request-Id", "Phone", "External-Id", "Source-Service", "X-App-Install-Id", "Client-Id", "Client-Ip":
attrs = append(attrs, strings.ToLower(k), v)
}
}
return attrs
},
}
logger.DefaultContextAttrFuncs = append(logger.DefaultContextAttrFuncs, loggerContextAttrFuncs...)
ctx := context.TODO()
ctx = metadata.AppendIncomingContext(ctx, "X-Request-Id", uuid.New().String(),
"Source-Service", "Test-System")
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.TraceLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l.Info(ctx, "test message")
if !(bytes.Contains(buf.Bytes(), []byte(`"level":"info"`)) && bytes.Contains(buf.Bytes(), []byte(`"msg":"test message"`))) {
t.Fatalf("logger info, buf %s", buf.Bytes())
}
if !(bytes.Contains(buf.Bytes(), []byte(`"x-request-id":"`))) {
t.Fatalf("logger info, buf %s", buf.Bytes())
}
if !(bytes.Contains(buf.Bytes(), []byte(`"source-service":"Test-System"`))) {
t.Fatalf("logger info, buf %s", buf.Bytes())
}
buf.Reset()
imd, _ := metadata.FromIncomingContext(ctx)
l.Info(ctx, "test message1")
imd.Set("Source-Service", "Test-System2")
l.Info(ctx, "test message2")
// t.Logf("xxx %s", buf.Bytes())
}

View File

@ -36,14 +36,14 @@ var (
circularShortBytes = []byte("<shown>")
invalidAngleBytes = []byte("<invalid>")
filteredBytes = []byte("<filtered>")
openBracketBytes = []byte("[")
closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("{")
closeMapBytes = []byte("}")
// openBracketBytes = []byte("[")
// closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("{")
closeMapBytes = []byte("}")
)
type protoMessage interface {
@ -52,13 +52,15 @@ type protoMessage interface {
}
type Wrapper struct {
val interface{}
s fmt.State
pointers map[uintptr]int
opts *Options
pointers map[uintptr]int
takeMap map[int]bool
val interface{}
s fmt.State
opts *Options
depth int
ignoreNextType bool
takeMap map[int]bool
protoWrapperType bool
sqlWrapperType bool
}

View File

@ -82,12 +82,12 @@ func TestTagged(t *testing.T) {
func TestTaggedNested(t *testing.T) {
type val struct {
key string `logger:"take"`
val string `logger:"omit"`
// val string `logger:"omit"`
unk string
}
type str struct {
key string `logger:"omit"`
val *val `logger:"take"`
// key string `logger:"omit"`
val *val `logger:"take"`
}
var iface interface{}

View File

@ -1,399 +0,0 @@
// Package wrapper provides wrapper for Logger
package wrapper
import (
"context"
"fmt"
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/server"
)
var (
// DefaultClientCallObserver called by wrapper in client Call
DefaultClientCallObserver = func(ctx context.Context, req client.Request, rsp interface{}, opts []client.CallOption, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultClientStreamObserver called by wrapper in client Stream
DefaultClientStreamObserver = func(ctx context.Context, req client.Request, opts []client.CallOption, stream client.Stream, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultClientPublishObserver called by wrapper in client Publish
DefaultClientPublishObserver = func(ctx context.Context, msg client.Message, opts []client.PublishOption, err error) []string {
labels := []string{"endpoint", msg.Topic()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultServerHandlerObserver called by wrapper in server Handler
DefaultServerHandlerObserver = func(ctx context.Context, req server.Request, rsp interface{}, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultServerSubscriberObserver called by wrapper in server Subscriber
DefaultServerSubscriberObserver = func(ctx context.Context, msg server.Message, err error) []string {
labels := []string{"endpoint", msg.Topic()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultClientCallFuncObserver called by wrapper in client CallFunc
DefaultClientCallFuncObserver = func(ctx context.Context, addr string, req client.Request, rsp interface{}, opts client.CallOptions, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultSkipEndpoints wrapper not called for this endpoints
DefaultSkipEndpoints = []string{"Meter.Metrics", "Health.Live", "Health.Ready", "Health.Version"}
)
type lWrapper struct {
client.Client
serverHandler server.HandlerFunc
serverSubscriber server.SubscriberFunc
clientCallFunc client.CallFunc
opts Options
}
type (
// ClientCallObserver func signature
ClientCallObserver func(context.Context, client.Request, interface{}, []client.CallOption, error) []string
// ClientStreamObserver func signature
ClientStreamObserver func(context.Context, client.Request, []client.CallOption, client.Stream, error) []string
// ClientPublishObserver func signature
ClientPublishObserver func(context.Context, client.Message, []client.PublishOption, error) []string
// ClientCallFuncObserver func signature
ClientCallFuncObserver func(context.Context, string, client.Request, interface{}, client.CallOptions, error) []string
// ServerHandlerObserver func signature
ServerHandlerObserver func(context.Context, server.Request, interface{}, error) []string
// ServerSubscriberObserver func signature
ServerSubscriberObserver func(context.Context, server.Message, error) []string
)
// Options struct for wrapper
type Options struct {
// Logger that used for log
Logger logger.Logger
// ServerHandlerObservers funcs
ServerHandlerObservers []ServerHandlerObserver
// ServerSubscriberObservers funcs
ServerSubscriberObservers []ServerSubscriberObserver
// ClientCallObservers funcs
ClientCallObservers []ClientCallObserver
// ClientStreamObservers funcs
ClientStreamObservers []ClientStreamObserver
// ClientPublishObservers funcs
ClientPublishObservers []ClientPublishObserver
// ClientCallFuncObservers funcs
ClientCallFuncObservers []ClientCallFuncObserver
// SkipEndpoints
SkipEndpoints []string
// Level for logger
Level logger.Level
// Enabled flag
Enabled bool
}
// Option func signature
type Option func(*Options)
// NewOptions creates Options from Option slice
func NewOptions(opts ...Option) Options {
options := Options{
Logger: logger.DefaultLogger,
Level: logger.TraceLevel,
ClientCallObservers: []ClientCallObserver{DefaultClientCallObserver},
ClientStreamObservers: []ClientStreamObserver{DefaultClientStreamObserver},
ClientPublishObservers: []ClientPublishObserver{DefaultClientPublishObserver},
ClientCallFuncObservers: []ClientCallFuncObserver{DefaultClientCallFuncObserver},
ServerHandlerObservers: []ServerHandlerObserver{DefaultServerHandlerObserver},
ServerSubscriberObservers: []ServerSubscriberObserver{DefaultServerSubscriberObserver},
SkipEndpoints: DefaultSkipEndpoints,
}
for _, o := range opts {
o(&options)
}
return options
}
// WithEnabled enable/diable flag
func WithEnabled(b bool) Option {
return func(o *Options) {
o.Enabled = b
}
}
// WithLevel log level
func WithLevel(l logger.Level) Option {
return func(o *Options) {
o.Level = l
}
}
// WithLogger logger
func WithLogger(l logger.Logger) Option {
return func(o *Options) {
o.Logger = l
}
}
// WithClientCallObservers funcs
func WithClientCallObservers(ob ...ClientCallObserver) Option {
return func(o *Options) {
o.ClientCallObservers = ob
}
}
// WithClientStreamObservers funcs
func WithClientStreamObservers(ob ...ClientStreamObserver) Option {
return func(o *Options) {
o.ClientStreamObservers = ob
}
}
// WithClientPublishObservers funcs
func WithClientPublishObservers(ob ...ClientPublishObserver) Option {
return func(o *Options) {
o.ClientPublishObservers = ob
}
}
// WithClientCallFuncObservers funcs
func WithClientCallFuncObservers(ob ...ClientCallFuncObserver) Option {
return func(o *Options) {
o.ClientCallFuncObservers = ob
}
}
// WithServerHandlerObservers funcs
func WithServerHandlerObservers(ob ...ServerHandlerObserver) Option {
return func(o *Options) {
o.ServerHandlerObservers = ob
}
}
// WithServerSubscriberObservers funcs
func WithServerSubscriberObservers(ob ...ServerSubscriberObserver) Option {
return func(o *Options) {
o.ServerSubscriberObservers = ob
}
}
// SkipEndpoins
func SkipEndpoints(eps ...string) Option {
return func(o *Options) {
o.SkipEndpoints = append(o.SkipEndpoints, eps...)
}
}
func (l *lWrapper) Call(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
err := l.Client.Call(ctx, req, rsp, opts...)
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ClientCallObservers {
labels = append(labels, o(ctx, req, rsp, opts, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
func (l *lWrapper) Stream(ctx context.Context, req client.Request, opts ...client.CallOption) (client.Stream, error) {
stream, err := l.Client.Stream(ctx, req, opts...)
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return stream, err
}
}
if !l.opts.Enabled {
return stream, err
}
var labels []string
for _, o := range l.opts.ClientStreamObservers {
labels = append(labels, o(ctx, req, opts, stream, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return stream, err
}
func (l *lWrapper) Publish(ctx context.Context, msg client.Message, opts ...client.PublishOption) error {
err := l.Client.Publish(ctx, msg, opts...)
endpoint := msg.Topic()
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ClientPublishObservers {
labels = append(labels, o(ctx, msg, opts, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
func (l *lWrapper) ServerHandler(ctx context.Context, req server.Request, rsp interface{}) error {
err := l.serverHandler(ctx, req, rsp)
endpoint := req.Endpoint()
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ServerHandlerObservers {
labels = append(labels, o(ctx, req, rsp, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
func (l *lWrapper) ServerSubscriber(ctx context.Context, msg server.Message) error {
err := l.serverSubscriber(ctx, msg)
endpoint := msg.Topic()
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ServerSubscriberObservers {
labels = append(labels, o(ctx, msg, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
// NewClientWrapper accepts an open options and returns a Client Wrapper
func NewClientWrapper(opts ...Option) client.Wrapper {
return func(c client.Client) client.Client {
options := NewOptions()
for _, o := range opts {
o(&options)
}
return &lWrapper{opts: options, Client: c}
}
}
// NewClientCallWrapper accepts an options and returns a Call Wrapper
func NewClientCallWrapper(opts ...Option) client.CallWrapper {
return func(h client.CallFunc) client.CallFunc {
options := NewOptions()
for _, o := range opts {
o(&options)
}
l := &lWrapper{opts: options, clientCallFunc: h}
return l.ClientCallFunc
}
}
func (l *lWrapper) ClientCallFunc(ctx context.Context, addr string, req client.Request, rsp interface{}, opts client.CallOptions) error {
err := l.clientCallFunc(ctx, addr, req, rsp, opts)
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ClientCallFuncObservers {
labels = append(labels, o(ctx, addr, req, rsp, opts, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
// NewServerHandlerWrapper accepts an options and returns a Handler Wrapper
func NewServerHandlerWrapper(opts ...Option) server.HandlerWrapper {
return func(h server.HandlerFunc) server.HandlerFunc {
options := NewOptions()
for _, o := range opts {
o(&options)
}
l := &lWrapper{opts: options, serverHandler: h}
return l.ServerHandler
}
}
// NewServerSubscriberWrapper accepts an options and returns a Subscriber Wrapper
func NewServerSubscriberWrapper(opts ...Option) server.SubscriberWrapper {
return func(h server.SubscriberFunc) server.SubscriberFunc {
options := NewOptions()
for _, o := range opts {
o(&options)
}
l := &lWrapper{opts: options, serverSubscriber: h}
return l.ServerSubscriber
}
}

View File

@ -24,6 +24,17 @@ func FromIncomingContext(ctx context.Context) (Metadata, bool) {
return md.md, ok
}
// MustIncomingContext returns metadata from incoming ctx
// returned metadata should not be modified, otherwise a race condition can happen.
// Panics if metadata does not exist.
func MustIncomingContext(ctx context.Context) Metadata {
md, ok := FromIncomingContext(ctx)
if !ok {
panic("missing metadata")
}
return md
}
// FromOutgoingContext returns metadata from outgoing ctx
// returned metadata should not be modified, otherwise a race condition can happen
func FromOutgoingContext(ctx context.Context) (Metadata, bool) {
@ -37,6 +48,17 @@ func FromOutgoingContext(ctx context.Context) (Metadata, bool) {
return md.md, ok
}
// MustOutgoingContext returns metadata from outgoing ctx
// returned metadata should not be modified, otherwise a race condition can happen.
// Panics if metadata does not exist.
func MustOutgoingContext(ctx context.Context) Metadata {
md, ok := FromOutgoingContext(ctx)
if !ok {
panic("missing metadata")
}
return md
}
// FromContext returns metadata from the given context
// returned metadata should not be modified, otherwise a race condition can happen
func FromContext(ctx context.Context) (Metadata, bool) {
@ -50,15 +72,22 @@ func FromContext(ctx context.Context) (Metadata, bool) {
return md.md, ok
}
// MustContext returns metadata from the given context
// returned metadata should not be modified, otherwise a race condition can happen
func MustContext(ctx context.Context) Metadata {
md, ok := FromContext(ctx)
if !ok {
panic("missing metadata")
}
return md
}
// NewContext creates a new context with the given metadata
func NewContext(ctx context.Context, md Metadata) context.Context {
if ctx == nil {
ctx = context.Background()
}
ctx = context.WithValue(ctx, mdKey{}, &rawMetadata{md})
ctx = context.WithValue(ctx, mdIncomingKey{}, &rawMetadata{})
ctx = context.WithValue(ctx, mdOutgoingKey{}, &rawMetadata{})
return ctx
return context.WithValue(ctx, mdKey{}, &rawMetadata{md})
}
// SetOutgoingContext modifies the outgoing context with the given metadata
@ -90,11 +119,7 @@ func NewIncomingContext(ctx context.Context, md Metadata) context.Context {
if ctx == nil {
ctx = context.Background()
}
ctx = context.WithValue(ctx, mdIncomingKey{}, &rawMetadata{md})
if v, ok := ctx.Value(mdOutgoingKey{}).(*rawMetadata); !ok || v == nil {
ctx = context.WithValue(ctx, mdOutgoingKey{}, &rawMetadata{})
}
return ctx
return context.WithValue(ctx, mdIncomingKey{}, &rawMetadata{md})
}
// NewOutgoingContext creates a new context with outgoing metadata attached
@ -102,11 +127,7 @@ func NewOutgoingContext(ctx context.Context, md Metadata) context.Context {
if ctx == nil {
ctx = context.Background()
}
ctx = context.WithValue(ctx, mdOutgoingKey{}, &rawMetadata{md})
if v, ok := ctx.Value(mdIncomingKey{}).(*rawMetadata); !ok || v == nil {
ctx = context.WithValue(ctx, mdIncomingKey{}, &rawMetadata{})
}
return ctx
return context.WithValue(ctx, mdOutgoingKey{}, &rawMetadata{md})
}
// AppendOutgoingContext appends new md to the outgoing context metadata
@ -122,7 +143,7 @@ func AppendOutgoingContext(ctx context.Context, kv ...string) context.Context {
for k, v := range md {
omd.Set(k, v)
}
return NewOutgoingContext(ctx, omd)
return ctx
}
// AppendIncomingContext appends new md to the incoming context metadata
@ -138,5 +159,21 @@ func AppendIncomingContext(ctx context.Context, kv ...string) context.Context {
for k, v := range md {
omd.Set(k, v)
}
return NewIncomingContext(ctx, omd)
return ctx
}
// AppendContext appends new md to the context metadata
func AppendContext(ctx context.Context, kv ...string) context.Context {
md, ok := Pairs(kv...)
if !ok {
return ctx
}
omd, ok := FromContext(ctx)
if !ok {
return NewContext(ctx, md)
}
for k, v := range md {
omd.Set(k, v)
}
return ctx
}
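
A minimal sketch of the new Must and Append helpers, assuming the go.unistack.org/micro/v3/metadata import; note that AppendOutgoingContext now mutates the metadata already stored in the context instead of attaching a new value.

package main

import (
	"context"

	"go.unistack.org/micro/v3/metadata"
)

func main() {
	md := metadata.New(1)
	md.Set("X-Request-Id", "12345")

	ctx := metadata.NewOutgoingContext(context.TODO(), md)

	// Mutates the metadata already carried by ctx.
	ctx = metadata.AppendOutgoingContext(ctx, "Micro-Service", "greeter")

	// Panics when the context carries no outgoing metadata.
	omd := metadata.MustOutgoingContext(ctx)
	_, _ = omd.Get("Micro-Service")
}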

View File

@ -4,6 +4,7 @@ package metadata
import (
"net/textproto"
"sort"
"strings"
)
var (
@ -66,6 +67,14 @@ func (md Metadata) Iterator() *Iterator {
return iter
}
// MustGet returns the value for the given key and panics if the key is missing
func (md Metadata) MustGet(key string) string {
val, ok := md.Get(key)
if !ok {
panic("missing metadata key")
}
return val
}
// Get returns value from metadata by key
func (md Metadata) Get(key string) (string, bool) {
// fast path
@ -73,6 +82,9 @@ func (md Metadata) Get(key string) (string, bool) {
if !ok {
// slow path
val, ok = md[textproto.CanonicalMIMEHeaderKey(key)]
if !ok {
val, ok = md[strings.ToLower(key)]
}
}
return val, ok
}
@ -94,14 +106,23 @@ func (md Metadata) Del(keys ...string) {
delete(md, key)
// slow path
delete(md, textproto.CanonicalMIMEHeaderKey(key))
// very slow path
delete(md, strings.ToLower(key))
}
}
// CopyTo copies the metadata into dst
func (md Metadata) CopyTo(dst Metadata) {
for k, v := range md {
dst[k] = v
}
}
// Copy makes a copy of the metadata
func Copy(md Metadata, exclude ...string) Metadata {
nmd := New(len(md))
for key, val := range md {
nmd.Set(key, val)
for k, v := range md {
nmd[k] = v
}
nmd.Del(exclude...)
return nmd
@ -125,7 +146,7 @@ func Merge(omd Metadata, mmd Metadata, overwrite bool) Metadata {
case ok && !overwrite:
continue
case val != "":
nmd.Set(key, val)
nmd[key] = val
case ok && val == "":
nmd.Del(key)
}
@ -139,6 +160,8 @@ func Pairs(kv ...string) (Metadata, bool) {
return nil, false
}
md := New(len(kv) / 2)
md.Set(kv...)
for idx := 0; idx < len(kv); idx += 2 {
md[kv[idx]] = kv[idx+1]
}
return md, true
}
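
A short sketch of the changed lookup behaviour, assuming the same metadata import: Get now also tries the canonical MIME header form and the all-lowercase form of the key, and MustGet is the panicking variant.

package main

import "go.unistack.org/micro/v3/metadata"

func main() {
	md := metadata.New(1)
	md["x-request-id"] = "12345"

	// Found via the lowercase fallback even though the stored key is not canonical.
	if v, ok := md.Get("X-Request-Id"); ok {
		_ = v // "12345"
	}

	// Panics if the key is missing in all three forms.
	_ = md.MustGet("x-request-id")
}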

View File

@ -5,6 +5,37 @@ import (
"testing"
)
func TestLowercase(t *testing.T) {
md := New(1)
md["x-request-id"] = "12345"
v, ok := md.Get("X-Request-Id")
if !ok || v == "" {
t.Fatalf("metadata invalid %#+v", md)
}
}
func TestMultipleUsage(t *testing.T) {
ctx := context.TODO()
md := New(0)
md.Set("key1_1", "val1_1", "key1_2", "val1_2", "key1_3", "val1_3")
ctx = NewIncomingContext(ctx, Copy(md))
ctx = NewOutgoingContext(ctx, Copy(md))
imd, _ := FromIncomingContext(ctx)
omd, _ := FromOutgoingContext(ctx)
_ = func(x context.Context) context.Context {
m, _ := FromIncomingContext(x)
m.Del("key1_2")
return ctx
}(ctx)
_ = func(x context.Context) context.Context {
m, _ := FromIncomingContext(x)
m.Del("key1_3")
return ctx
}(ctx)
_ = imd
_ = omd
}
func TestMetadataSetMultiple(t *testing.T) {
md := New(4)
md.Set("key1", "val1", "key2", "val2", "key3")
@ -57,6 +88,13 @@ func TestPassing(t *testing.T) {
ctx = NewIncomingContext(ctx, md1)
testCtx(ctx)
_, ok := FromOutgoingContext(ctx)
if ok {
t.Fatalf("create outgoing context")
}
ctx = NewOutgoingContext(ctx, New(1))
testCtx(ctx)
md, ok := FromOutgoingContext(ctx)
if !ok {
t.Fatalf("missing metadata from outgoing context")
@ -80,7 +118,7 @@ func TestMerge(t *testing.T) {
}
}
func TestIterator(t *testing.T) {
func TestIterator(_ *testing.T) {
md := Metadata{
"1Last": "last",
"2First": "first",

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Meter, bool) {
return c, ok
}
// MustContext get meter from context
func MustContext(ctx context.Context) Meter {
m, ok := FromContext(ctx)
if !ok {
panic("missing meter")
}
return m
}
// NewContext put meter in context
func NewContext(ctx context.Context, c Meter) context.Context {
if ctx == nil {

View File

@ -11,7 +11,7 @@ import (
var (
// DefaultMeter is the default meter
DefaultMeter Meter = NewMeter()
DefaultMeter = NewMeter()
// DefaultAddress data will be made available on this host:port
DefaultAddress = ":9090"
// DefaultPath the meter endpoint where the Meter data will be made available

View File

@ -37,32 +37,32 @@ func (r *noopMeter) Init(opts ...Option) error {
}
// Counter implements the Meter interface
func (r *noopMeter) Counter(name string, labels ...string) Counter {
func (r *noopMeter) Counter(_ string, labels ...string) Counter {
return &noopCounter{labels: labels}
}
// FloatCounter implements the Meter interface
func (r *noopMeter) FloatCounter(name string, labels ...string) FloatCounter {
func (r *noopMeter) FloatCounter(_ string, labels ...string) FloatCounter {
return &noopFloatCounter{labels: labels}
}
// Gauge implements the Meter interface
func (r *noopMeter) Gauge(name string, f func() float64, labels ...string) Gauge {
func (r *noopMeter) Gauge(_ string, _ func() float64, labels ...string) Gauge {
return &noopGauge{labels: labels}
}
// Summary implements the Meter interface
func (r *noopMeter) Summary(name string, labels ...string) Summary {
func (r *noopMeter) Summary(_ string, labels ...string) Summary {
return &noopSummary{labels: labels}
}
// SummaryExt implements the Meter interface
func (r *noopMeter) SummaryExt(name string, window time.Duration, quantiles []float64, labels ...string) Summary {
func (r *noopMeter) SummaryExt(_ string, _ time.Duration, _ []float64, labels ...string) Summary {
return &noopSummary{labels: labels}
}
// Histogram implements the Meter interface
func (r *noopMeter) Histogram(name string, labels ...string) Histogram {
func (r *noopMeter) Histogram(_ string, labels ...string) Histogram {
return &noopHistogram{labels: labels}
}
@ -77,7 +77,7 @@ func (r *noopMeter) Set(opts ...Option) Meter {
return m
}
func (r *noopMeter) Write(w io.Writer, opts ...Option) error {
func (r *noopMeter) Write(_ io.Writer, _ ...Option) error {
return nil
}

View File

@ -65,6 +65,8 @@ func As(b any, target any) bool {
break
case targetType.Implements(routerType):
break
case targetType.Implements(tracerType):
break
default:
return false
}
@ -76,19 +78,21 @@ func As(b any, target any) bool {
return false
}
var brokerType = reflect.TypeOf((*broker.Broker)(nil)).Elem()
var loggerType = reflect.TypeOf((*logger.Logger)(nil)).Elem()
var clientType = reflect.TypeOf((*client.Client)(nil)).Elem()
var serverType = reflect.TypeOf((*server.Server)(nil)).Elem()
var codecType = reflect.TypeOf((*codec.Codec)(nil)).Elem()
var flowType = reflect.TypeOf((*flow.Flow)(nil)).Elem()
var fsmType = reflect.TypeOf((*fsm.FSM)(nil)).Elem()
var meterType = reflect.TypeOf((*meter.Meter)(nil)).Elem()
var registerType = reflect.TypeOf((*register.Register)(nil)).Elem()
var resolverType = reflect.TypeOf((*resolver.Resolver)(nil)).Elem()
var routerType = reflect.TypeOf((*router.Router)(nil)).Elem()
var selectorType = reflect.TypeOf((*selector.Selector)(nil)).Elem()
var storeType = reflect.TypeOf((*store.Store)(nil)).Elem()
var syncType = reflect.TypeOf((*sync.Sync)(nil)).Elem()
var tracerType = reflect.TypeOf((*tracer.Tracer)(nil)).Elem()
var serviceType = reflect.TypeOf((*Service)(nil)).Elem()
var (
brokerType = reflect.TypeOf((*broker.Broker)(nil)).Elem()
loggerType = reflect.TypeOf((*logger.Logger)(nil)).Elem()
clientType = reflect.TypeOf((*client.Client)(nil)).Elem()
serverType = reflect.TypeOf((*server.Server)(nil)).Elem()
codecType = reflect.TypeOf((*codec.Codec)(nil)).Elem()
flowType = reflect.TypeOf((*flow.Flow)(nil)).Elem()
fsmType = reflect.TypeOf((*fsm.FSM)(nil)).Elem()
meterType = reflect.TypeOf((*meter.Meter)(nil)).Elem()
registerType = reflect.TypeOf((*register.Register)(nil)).Elem()
resolverType = reflect.TypeOf((*resolver.Resolver)(nil)).Elem()
routerType = reflect.TypeOf((*router.Router)(nil)).Elem()
selectorType = reflect.TypeOf((*selector.Selector)(nil)).Elem()
storeType = reflect.TypeOf((*store.Store)(nil)).Elem()
syncType = reflect.TypeOf((*sync.Sync)(nil)).Elem()
tracerType = reflect.TypeOf((*tracer.Tracer)(nil)).Elem()
serviceType = reflect.TypeOf((*Service)(nil)).Elem()
)
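
With tracerType added to the switch above, As can now fill a tracer.Tracer target as well. A sketch mirroring the existing test usage; tracer.DefaultTracer stands in for any concrete tracer value.

package main

import (
	micro "go.unistack.org/micro/v3"
	"go.unistack.org/micro/v3/tracer"
)

func main() {
	var dst tracer.Tracer
	if micro.As(tracer.DefaultTracer, &dst) {
		_ = dst // dst now holds the same tracer value
	}
}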

View File

@ -18,26 +18,27 @@ func TestAs(t *testing.T) {
testCases := []struct {
b any
target any
match bool
want any
match bool
}{
{
broTarget,
&b,
true,
broTarget,
b: broTarget,
target: &b,
match: true,
want: broTarget,
},
{
nil,
&b,
false,
nil,
b: nil,
target: &b,
match: false,
want: nil,
},
{
fsmTarget,
&b,
false,
nil,
b: fsmTarget,
target: &b,
match: false,
want: nil,
},
}
for i, tc := range testCases {
@ -66,7 +67,13 @@ type bro struct {
func (p *bro) Name() string { return p.name }
func (p *bro) Init(opts ...broker.Option) error { return nil }
func (p *bro) Live() bool { return true }
func (p *bro) Ready() bool { return true }
func (p *bro) Health() bool { return true }
func (p *bro) Init(_ ...broker.Option) error { return nil }
// Options returns broker options
func (p *bro) Options() broker.Options { return broker.Options{} }
@ -75,28 +82,28 @@ func (p *bro) Options() broker.Options { return broker.Options{} }
func (p *bro) Address() string { return "" }
// Connect connects to broker
func (p *bro) Connect(ctx context.Context) error { return nil }
func (p *bro) Connect(_ context.Context) error { return nil }
// Disconnect disconnect from broker
func (p *bro) Disconnect(ctx context.Context) error { return nil }
func (p *bro) Disconnect(_ context.Context) error { return nil }
// Publish message, msg can be single broker.Message or []broker.Message
func (p *bro) Publish(ctx context.Context, topic string, msg *broker.Message, opts ...broker.PublishOption) error {
func (p *bro) Publish(_ context.Context, _ string, _ *broker.Message, _ ...broker.PublishOption) error {
return nil
}
// BatchPublish messages to broker with multiple topics
func (p *bro) BatchPublish(ctx context.Context, msgs []*broker.Message, opts ...broker.PublishOption) error {
func (p *bro) BatchPublish(_ context.Context, _ []*broker.Message, _ ...broker.PublishOption) error {
return nil
}
// BatchSubscribe subscribes to topic messages via handler
func (p *bro) BatchSubscribe(ctx context.Context, topic string, h broker.BatchHandler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
func (p *bro) BatchSubscribe(_ context.Context, _ string, _ broker.BatchHandler, _ ...broker.SubscribeOption) (broker.Subscriber, error) {
return nil, nil
}
// Subscribe subscribes to topic message via handler
func (p *bro) Subscribe(ctx context.Context, topic string, handler broker.Handler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
func (p *bro) Subscribe(_ context.Context, _ string, _ broker.Handler, _ ...broker.SubscribeOption) (broker.Subscriber, error) {
return nil, nil
}
@ -107,9 +114,9 @@ type fsmT struct {
name string
}
func (f *fsmT) Start(ctx context.Context, a interface{}, o ...Option) (interface{}, error) {
func (f *fsmT) Start(_ context.Context, _ interface{}, _ ...Option) (interface{}, error) {
return nil, nil
}
func (f *fsmT) Current() string { return f.name }
func (f *fsmT) Reset() {}
func (f *fsmT) State(s string, sf fsm.StateFunc) {}
func (f *fsmT) Current() string { return f.name }
func (f *fsmT) Reset() {}
func (f *fsmT) State(_ string, _ fsm.StateFunc) {}

View File

@ -8,19 +8,20 @@ import (
// CertificateOptions holds options for x509.CreateCertificate
type CertificateOptions struct {
Organization []string
OrganizationalUnit []string
CommonName string
OCSPServer []string
IssuingCertificateURL []string
SerialNumber *big.Int
NotAfter time.Time
NotBefore time.Time
SignatureAlgorithm x509.SignatureAlgorithm
PublicKeyAlgorithm x509.PublicKeyAlgorithm
CommonName string
Organization []string
OrganizationalUnit []string
OCSPServer []string
IssuingCertificateURL []string
ExtKeyUsage []x509.ExtKeyUsage
KeyUsage x509.KeyUsage
IsCA bool
SignatureAlgorithm x509.SignatureAlgorithm
PublicKeyAlgorithm x509.PublicKeyAlgorithm
KeyUsage x509.KeyUsage
IsCA bool
}
// CertificateOrganizationalUnit set OrganizationalUnit in certificate subject

View File

@ -10,7 +10,7 @@ import (
var (
// DefaultTransport is the global default transport
DefaultTransport Transport = NewTransport()
DefaultTransport = NewTransport()
// DefaultDialTimeout the default dial timeout
DefaultDialTimeout = time.Second * 5
)

View File

@ -45,6 +45,18 @@ type (
tunnelAddr struct{}
)
func (t *tunBroker) Live() bool {
return true
}
func (t *tunBroker) Ready() bool {
return true
}
func (t *tunBroker) Health() bool {
return true
}
func (t *tunBroker) Init(opts ...broker.Option) error {
for _, o := range opts {
o(&t.opts)
@ -72,7 +84,7 @@ func (t *tunBroker) Disconnect(ctx context.Context) error {
return t.tunnel.Close(ctx)
}
func (t *tunBroker) BatchPublish(ctx context.Context, msgs []*broker.Message, opts ...broker.PublishOption) error {
func (t *tunBroker) BatchPublish(ctx context.Context, msgs []*broker.Message, _ ...broker.PublishOption) error {
// TODO: this is probably inefficient, we might want to just maintain an open connection
// it may be easier to add broadcast to the tunnel
topicMap := make(map[string]tunnel.Session)
@ -102,7 +114,7 @@ func (t *tunBroker) BatchPublish(ctx context.Context, msgs []*broker.Message, op
return nil
}
func (t *tunBroker) Publish(ctx context.Context, topic string, m *broker.Message, opts ...broker.PublishOption) error {
func (t *tunBroker) Publish(ctx context.Context, topic string, m *broker.Message, _ ...broker.PublishOption) error {
// TODO: this is probably inefficient, we might want to just maintain an open connection
// it may be easier to add broadcast to the tunnel
c, err := t.tunnel.Dial(ctx, topic, tunnel.DialMode(tunnel.Multicast))
@ -177,12 +189,12 @@ func (t *tunBatchSubscriber) run() {
// receive message
m := new(transport.Message)
if err := c.Recv(m); err != nil {
if logger.V(logger.ErrorLevel) {
logger.Error(t.opts.Context, err.Error())
if logger.DefaultLogger.V(logger.ErrorLevel) {
logger.DefaultLogger.Error(t.opts.Context, err.Error(), err)
}
if err = c.Close(); err != nil {
if logger.V(logger.ErrorLevel) {
logger.Error(t.opts.Context, err.Error())
if logger.DefaultLogger.V(logger.ErrorLevel) {
logger.DefaultLogger.Error(t.opts.Context, err.Error(), err)
}
}
continue
@ -222,12 +234,12 @@ func (t *tunSubscriber) run() {
// receive message
m := new(transport.Message)
if err := c.Recv(m); err != nil {
if logger.V(logger.ErrorLevel) {
logger.Error(t.opts.Context, err.Error())
if logger.DefaultLogger.V(logger.ErrorLevel) {
logger.DefaultLogger.Error(t.opts.Context, err.Error(), err)
}
if err = c.Close(); err != nil {
if logger.V(logger.ErrorLevel) {
logger.Error(t.opts.Context, err.Error())
if logger.DefaultLogger.V(logger.ErrorLevel) {
logger.DefaultLogger.Error(t.opts.Context, err.Error(), err)
}
}
continue

View File

@ -269,7 +269,7 @@ func Logger(l logger.Logger, opts ...LoggerOption) Option {
}
}
}
for _, trc := range o.Tracers {
for _, ot := range lopts.tracers {
if trc.Name() == ot || all {
@ -294,8 +294,8 @@ type loggerOptions struct {
brokers []string
registers []string
stores []string
meters []string
tracers []string
// meters []string
tracers []string
}
/*

View File

@ -15,6 +15,6 @@ func (p *noopProfiler) String() string {
}
// NewProfiler returns new noop profiler
func NewProfiler(opts ...Option) Profiler {
func NewProfiler(_ ...Option) Profiler {
return &noopProfiler{}
}

View File

@ -12,7 +12,7 @@ type Profiler interface {
}
// DefaultProfiler holds the default profiler
var DefaultProfiler Profiler = NewProfiler()
var DefaultProfiler = NewProfiler()
// Options holds the options for profiler
type Options struct {

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Register, bool) {
return c, ok
}
// MustContext get register from context
func MustContext(ctx context.Context) Register {
r, ok := FromContext(ctx)
if !ok {
panic("missing register")
}
return r
}
// NewContext put register in context
func NewContext(ctx context.Context, c Register) context.Context {
if ctx == nil {

View File

@ -2,6 +2,7 @@ package register
import (
"context"
"fmt"
"sync"
"time"
@ -30,10 +31,10 @@ type record struct {
}
type memory struct {
sync.RWMutex
records map[string]services
watchers map[string]*watcher
opts register.Options
sync.RWMutex
}
// services is a KV map with service name as the key and a map of records as the value
@ -64,7 +65,7 @@ func (m *memory) ttlPrune() {
for id, n := range record.Nodes {
if n.TTL != 0 && time.Since(n.LastSeen) > n.TTL {
if m.opts.Logger.V(logger.DebugLevel) {
m.opts.Logger.Debugf(m.opts.Context, "Register TTL expired for node %s of service %s", n.ID, service)
m.opts.Logger.Debug(m.opts.Context, fmt.Sprintf("Register TTL expired for node %s of service %s", n.ID, service))
}
delete(m.records[domain][service][version].Nodes, id)
}
@ -99,11 +100,11 @@ func (m *memory) sendEvent(r *register.Result) {
}
}
func (m *memory) Connect(ctx context.Context) error {
func (m *memory) Connect(_ context.Context) error {
return nil
}
func (m *memory) Disconnect(ctx context.Context) error {
func (m *memory) Disconnect(_ context.Context) error {
return nil
}
@ -123,7 +124,7 @@ func (m *memory) Options() register.Options {
return m.opts
}
func (m *memory) Register(ctx context.Context, s *register.Service, opts ...register.RegisterOption) error {
func (m *memory) Register(_ context.Context, s *register.Service, opts ...register.RegisterOption) error {
m.Lock()
defer m.Unlock()
@ -151,7 +152,7 @@ func (m *memory) Register(ctx context.Context, s *register.Service, opts ...regi
if _, ok := srvs[s.Name][s.Version]; !ok {
srvs[s.Name][s.Version] = r
if m.opts.Logger.V(logger.DebugLevel) {
m.opts.Logger.Debugf(m.opts.Context, "Register added new service: %s, version: %s", s.Name, s.Version)
m.opts.Logger.Debug(m.opts.Context, fmt.Sprintf("Register added new service: %s, version: %s", s.Name, s.Version))
}
m.records[options.Domain] = srvs
go m.sendEvent(&register.Result{Action: "create", Service: s})
@ -191,14 +192,14 @@ func (m *memory) Register(ctx context.Context, s *register.Service, opts ...regi
if addedNodes {
if m.opts.Logger.V(logger.DebugLevel) {
m.opts.Logger.Debugf(m.opts.Context, "Register added new node to service: %s, version: %s", s.Name, s.Version)
m.opts.Logger.Debug(m.opts.Context, fmt.Sprintf("Register added new node to service: %s, version: %s", s.Name, s.Version))
}
go m.sendEvent(&register.Result{Action: "update", Service: s})
} else {
// refresh TTL and timestamp
for _, n := range s.Nodes {
if m.opts.Logger.V(logger.DebugLevel) {
m.opts.Logger.Debugf(m.opts.Context, "Updated registration for service: %s, version: %s", s.Name, s.Version)
m.opts.Logger.Debug(m.opts.Context, fmt.Sprintf("Updated registration for service: %s, version: %s", s.Name, s.Version))
}
srvs[s.Name][s.Version].Nodes[n.ID].TTL = options.TTL
srvs[s.Name][s.Version].Nodes[n.ID].LastSeen = time.Now()
@ -243,7 +244,7 @@ func (m *memory) Deregister(ctx context.Context, s *register.Service, opts ...re
for _, n := range s.Nodes {
if _, ok := version.Nodes[n.ID]; ok {
if m.opts.Logger.V(logger.DebugLevel) {
m.opts.Logger.Debugf(m.opts.Context, "Register removed node from service: %s, version: %s", s.Name, s.Version)
m.opts.Logger.Debug(m.opts.Context, fmt.Sprintf("Register removed node from service: %s, version: %s", s.Name, s.Version))
}
delete(version.Nodes, n.ID)
}
@ -264,7 +265,7 @@ func (m *memory) Deregister(ctx context.Context, s *register.Service, opts ...re
go m.sendEvent(&register.Result{Action: "delete", Service: s})
if m.opts.Logger.V(logger.DebugLevel) {
m.opts.Logger.Debugf(m.opts.Context, "Register removed service: %s", s.Name)
m.opts.Logger.Debug(m.opts.Context, fmt.Sprintf("Register removed service: %s", s.Name))
}
return nil
}
@ -273,7 +274,7 @@ func (m *memory) Deregister(ctx context.Context, s *register.Service, opts ...re
delete(m.records[options.Domain][s.Name], s.Version)
go m.sendEvent(&register.Result{Action: "delete", Service: s})
if m.opts.Logger.V(logger.DebugLevel) {
m.opts.Logger.Debugf(m.opts.Context, "Register removed service: %s, version: %s", s.Name, s.Version)
m.opts.Logger.Debug(m.opts.Context, fmt.Sprintf("Register removed service: %s, version: %s", s.Name, s.Version))
}
return nil
@ -468,9 +469,7 @@ func serviceToRecord(s *register.Service, ttl time.Duration) *record {
}
endpoints := make([]*register.Endpoint, len(s.Endpoints))
for i, e := range s.Endpoints {
endpoints[i] = e
}
copy(endpoints, s.Endpoints)
return &record{
Name: s.Name,

View File

@ -208,9 +208,9 @@ func TestMemoryRegistryTTLConcurrent(t *testing.T) {
}
}
//if len(os.Getenv("IN_TRAVIS_CI")) == 0 {
// if len(os.Getenv("IN_TRAVIS_CI")) == 0 {
// t.Logf("test will wait %v, then check TTL timeouts", waitTime)
//}
// }
errChan := make(chan error, concurrency)
syncChan := make(chan struct{})
@ -290,27 +290,29 @@ func TestWatcher(t *testing.T) {
ctx := context.TODO()
m := NewRegister()
m.Init()
m.Connect(ctx)
if err := m.Init(); err != nil {
t.Fatal(err)
}
if err := m.Connect(ctx); err != nil {
t.Fatal(err)
}
wc, err := m.Watch(ctx)
if err != nil {
t.Fatalf("cant watch: %v", err)
}
defer wc.Stop()
cherr := make(chan error, 10)
var wg sync.WaitGroup
wg.Add(1)
go func() {
for {
_, err := wc.Next()
if err != nil {
t.Fatal("unexpected err", err)
}
// t.Logf("changes %#+v", ch.Service)
wc.Stop()
wg.Done()
return
_, err := wc.Next()
if err != nil {
cherr <- fmt.Errorf("unexpected err %v", err)
}
// t.Logf("changes %#+v", ch.Service)
wc.Stop()
wg.Done()
}()
if err := m.Register(ctx, testSrv); err != nil {

View File

@ -18,7 +18,7 @@ var DefaultDomain = "micro"
var (
// DefaultRegister is the global default register
DefaultRegister Register = NewRegister()
DefaultRegister = NewRegister()
// ErrNotFound returned when LookupService is called and no services found
ErrNotFound = errors.New("service not found")
// ErrWatcherStopped returned when the watcher is stopped
@ -29,17 +29,32 @@ var (
// and an abstraction over varying implementations
// {consul, etcd, zookeeper, ...}
type Register interface {
// Name returns register name
Name() string
// Init initializes the register
Init(...Option) error
// Options returns options for register
Options() Options
// Connect initializes the connection to the register
Connect(context.Context) error
// Disconnect closes the connection to the register
Disconnect(context.Context) error
// Register service in registry
Register(context.Context, *Service, ...RegisterOption) error
// Deregister service from registry
Deregister(context.Context, *Service, ...DeregisterOption) error
// LookupService in registry
LookupService(context.Context, string, ...LookupOption) ([]*Service, error)
// ListServices in registry
ListServices(context.Context, ...ListOption) ([]*Service, error)
// Watch registry events
Watch(context.Context, ...WatchOption) (Watcher, error)
// String returns registry string representation
String() string
// Live returns register liveness
// Live() bool
// Ready returns register readiness
// Ready() bool
}
// Service holds service register info

View File

@ -12,9 +12,9 @@ import (
// Resolver is a DNS network resolve
type Resolver struct {
sync.RWMutex
goresolver *net.Resolver
Address string
mu sync.RWMutex
}
// Resolve tries to resolve endpoint address
@ -39,12 +39,12 @@ func (r *Resolver) Resolve(name string) ([]*resolver.Record, error) {
return []*resolver.Record{rec}, nil
}
r.RLock()
r.mu.RLock()
goresolver := r.goresolver
r.RUnlock()
r.mu.RUnlock()
if goresolver == nil {
r.Lock()
r.mu.Lock()
r.goresolver = &net.Resolver{
Dial: func(ctx context.Context, _ string, _ string) (net.Conn, error) {
d := net.Dialer{
@ -53,7 +53,7 @@ func (r *Resolver) Resolve(name string) ([]*resolver.Record, error) {
return d.DialContext(ctx, "udp", r.Address)
},
}
r.Unlock()
r.mu.Unlock()
}
addrs, err := goresolver.LookupIP(context.TODO(), "ip", host)

View File

@ -9,6 +9,6 @@ import (
type Resolver struct{}
// Resolve returns the list of nodes
func (r *Resolver) Resolve(name string) ([]*resolver.Record, error) {
func (r *Resolver) Resolve(_ string) ([]*resolver.Record, error) {
return []*resolver.Record{}, nil
}

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Router, bool) {
return c, ok
}
// MustContext get router from context
func MustContext(ctx context.Context) Router {
r, ok := FromContext(ctx)
if !ok {
panic("missing router")
}
return r
}
// NewContext put router in context
func NewContext(ctx context.Context, c Router) context.Context {
if ctx == nil {

View File

@ -7,7 +7,7 @@ import (
var (
// DefaultRouter is the global default router
DefaultRouter Router = NewRouter()
DefaultRouter = NewRouter()
// DefaultNetwork is default micro network
DefaultNetwork = "micro"
// ErrRouteNotFound is returned when no route was found in the routing table

View File

@ -25,7 +25,7 @@ func (r *random) Select(routes []string, opts ...selector.SelectOption) (selecto
}, nil
}
func (r *random) Record(addr string, err error) error {
func (r *random) Record(_ string, _ error) error {
return nil
}

View File

@ -6,14 +6,14 @@ import (
)
// NewSelector returns an initialised round robin selector
func NewSelector(opts ...selector.Option) selector.Selector {
func NewSelector(_ ...selector.Option) selector.Selector {
return new(roundrobin)
}
type roundrobin struct{}
// Select return routes based on algo
func (r *roundrobin) Select(routes []string, opts ...selector.SelectOption) (selector.Next, error) {
func (r *roundrobin) Select(routes []string, _ ...selector.SelectOption) (selector.Next, error) {
if len(routes) == 0 {
return nil, selector.ErrNoneAvailable
}
@ -28,7 +28,7 @@ func (r *roundrobin) Select(routes []string, opts ...selector.SelectOption) (sel
}, nil
}
func (r *roundrobin) Record(addr string, err error) error { return nil }
func (r *roundrobin) Record(_ string, _ error) error { return nil }
func (r *roundrobin) Reset() error { return nil }

semconv/cache.go Normal file
View File

@ -0,0 +1,14 @@
package semconv
var (
// CacheRequestDurationSeconds specifies meter metric name
CacheRequestDurationSeconds = "micro_cache_request_duration_seconds"
// CacheRequestLatencyMicroseconds specifies meter metric name
CacheRequestLatencyMicroseconds = "micro_cache_request_latency_microseconds"
// CacheRequestTotal specifies meter metric name
CacheRequestTotal = "micro_cache_request_total"
// CacheRequestInflight specifies meter metric name
CacheRequestInflight = "micro_cache_request_inflight"
// CacheItemsTotal specifies total cache items
CacheItemsTotal = "micro_cache_items_total"
)

semconv/metadata.go Normal file
View File

@ -0,0 +1,18 @@
package semconv
var (
// HeaderTopic is the header name that contains topic name
HeaderTopic = "Micro-Topic"
// HeaderContentType specifies content type of message
HeaderContentType = "Content-Type"
// HeaderEndpoint specifies endpoint in service
HeaderEndpoint = "Micro-Endpoint"
// HeaderService specifies service
HeaderService = "Micro-Service"
// HeaderTimeout specifies timeout of operation
HeaderTimeout = "Micro-Timeout"
// HeaderAuthorization specifies Authorization header
HeaderAuthorization = "Authorization"
// HeaderXRequestID specifies request id
HeaderXRequestID = "X-Request-Id"
)
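
A small sketch of using these header names together with the metadata package; the semconv import path is assumed to follow the module layout.

package main

import (
	"context"

	"go.unistack.org/micro/v3/metadata"
	"go.unistack.org/micro/v3/semconv"
)

func main() {
	md := metadata.New(2)
	md.Set(semconv.HeaderXRequestID, "12345", semconv.HeaderService, "greeter")
	ctx := metadata.NewOutgoingContext(context.TODO(), md)
	_ = ctx
}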

semconv/pool.go Normal file
View File

@ -0,0 +1,12 @@
package semconv
var (
// PoolGetTotal specifies meter metric name for total number of pool get ops
PoolGetTotal = "micro_pool_get_total"
// PoolPutTotal specifies meter metric name for total number of pool put ops
PoolPutTotal = "micro_pool_put_total"
// PoolMisTotal specifies meter metric name for total number of pool misses
PoolMisTotal = "micro_pool_mis_total"
// PoolRetTotal specifies meter metric name for total number of pool returned to gc
PoolRetTotal = "micro_pool_ret_total"
)

View File

@ -3,7 +3,7 @@ package semconv
var (
// StoreRequestDurationSeconds specifies meter metric name
StoreRequestDurationSeconds = "micro_store_request_duration_seconds"
// ClientRequestLatencyMicroseconds specifies meter metric name
// StoreRequestLatencyMicroseconds specifies meter metric name
StoreRequestLatencyMicroseconds = "micro_store_request_latency_microseconds"
// StoreRequestTotal specifies meter metric name
StoreRequestTotal = "micro_store_request_total"

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Server, bool) {
return c, ok
}
// MustContext returns Server from context
func MustContext(ctx context.Context) Server {
s, ok := FromContext(ctx)
if !ok {
panic("missing server")
}
return s
}
// NewContext stores Server to context
func NewContext(ctx context.Context, s Server) context.Context {
if ctx == nil {

View File

@ -121,6 +121,18 @@ func (n *noopServer) newCodec(contentType string) (codec.Codec, error) {
return nil, codec.ErrUnknownContentType
}
func (n *noopServer) Live() bool {
return true
}
func (n *noopServer) Ready() bool {
return true
}
func (n *noopServer) Health() bool {
return true
}
func (n *noopServer) Handle(handler Handler) error {
n.h = handler
return nil
@ -159,7 +171,6 @@ type rpcMessage struct {
header metadata.Metadata
topic string
contentType string
body []byte
}
func (r *rpcMessage) ContentType() string {
@ -459,7 +470,7 @@ func (n *noopServer) Start() error {
}
} else if rerr != nil && !registered {
if config.Logger.V(logger.ErrorLevel) {
config.Logger.Errorf(n.opts.Context, fmt.Sprintf("server %s-%s register check error", config.Name, config.ID), rerr)
config.Logger.Error(n.opts.Context, fmt.Sprintf("server %s-%s register check error", config.Name, config.ID), rerr)
}
continue
}
@ -712,7 +723,7 @@ func (n *noopServer) createSubHandler(sb *subscriber, opts Options) broker.Handl
return nil
}
opts.Hooks.EachNext(func(hook options.Hook) {
opts.Hooks.EachPrev(func(hook options.Hook) {
if h, ok := hook.(HookSubHandler); ok {
fn = h(fn)
}
@ -764,13 +775,16 @@ func (s *subscriber) Options() SubscriberOptions {
}
type subscriber struct {
topic string
typ reflect.Type
subscriber interface{}
topic string
endpoints []*register.Endpoint
handlers []*handler
opts SubscriberOptions
rcvr reflect.Value
endpoints []*register.Endpoint
handlers []*handler
rcvr reflect.Value
opts SubscriberOptions
}
type handler struct {

View File

@ -9,6 +9,7 @@ import (
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/server"
)
@ -38,7 +39,9 @@ func TestNoopSub(t *testing.T) {
t.Fatal(err)
}
logger.DefaultLogger.Init(logger.WithLevel(logger.ErrorLevel))
if err := logger.DefaultLogger.Init(logger.WithLevel(logger.ErrorLevel)); err != nil {
t.Fatal(err)
}
s := server.NewServer(
server.Broker(b),
server.Codec("application/octet-stream", codec.NewCodec()),
@ -82,3 +85,40 @@ func TestNoopSub(t *testing.T) {
}
}()
}
func TestHooks_Wrap(t *testing.T) {
n := 5
fn1 := func(next server.FuncSubHandler) server.FuncSubHandler {
return func(ctx context.Context, msg server.Message) (err error) {
n *= 2
return next(ctx, msg)
}
}
fn2 := func(next server.FuncSubHandler) server.FuncSubHandler {
return func(ctx context.Context, msg server.Message) (err error) {
n -= 10
return next(ctx, msg)
}
}
hs := &options.Hooks{}
hs.Append(server.HookSubHandler(fn1), server.HookSubHandler(fn2))
var fn = func(ctx context.Context, msg server.Message) error {
return nil
}
hs.EachPrev(func(hook options.Hook) {
if h, ok := hook.(server.HookSubHandler); ok {
fn = h(fn)
}
})
if err := fn(nil, nil); err != nil {
t.Fatal(err)
}
if n != 0 {
t.Fatalf("uncorrected hooks call")
}
}
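
Note on ordering: EachPrev wraps in reverse append order, so the first appended hook (fn1) ends up outermost and runs first. With the values above n goes 5 -> 10 -> 0, which is exactly what the final check asserts.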

View File

@ -12,7 +12,6 @@ import (
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/network/transport"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/register"
msync "go.unistack.org/micro/v3/sync"
@ -25,38 +24,11 @@ type Option func(*Options)
// Options server struct
type Options struct {
// Context holds the external options and can be used for server shutdown
Context context.Context
// Broker holds the server broker
Broker broker.Broker
// Register holds the register
Register register.Register
// Tracer holds the tracer
Tracer tracer.Tracer
// Logger holds the logger
Logger logger.Logger
// Meter holds the meter
Meter meter.Meter
// Transport holds the transport
Transport transport.Transport
/*
// Router for requests
Router Router
*/
// Listener may be passed if already created
Listener net.Listener
// Wait group
Wait *msync.WaitGroup
// TLSConfig specifies tls.Config for secure serving
TLSConfig *tls.Config
// Metadata holds the server metadata
Metadata metadata.Metadata
// RegisterCheck run before register server
RegisterCheck func(context.Context) error
// Codecs map to handle content-type
Codecs map[string]codec.Codec
// Metadata holds the server metadata
Metadata metadata.Metadata
// ID holds the id of the server
ID string
// Namespace for the server
@ -69,21 +41,46 @@ type Options struct {
Advertise string
// Version holds the server version
Version string
// RegisterAttempts holds the number of register attempts before error
RegisterAttempts int
// Context holds the external options and can be used for server shutdown
Context context.Context
// Broker holds the server broker
Broker broker.Broker
// Register holds the register
Register register.Register
// Tracer holds the tracer
Tracer tracer.Tracer
// Logger holds the logger
Logger logger.Logger
// Meter holds the meter
Meter meter.Meter
// Listener may be passed if already created
Listener net.Listener
// TLSConfig specifies tls.Config for secure serving
TLSConfig *tls.Config
// Wait group
Wait *msync.WaitGroup
// RegisterCheck run before register server
RegisterCheck func(context.Context) error
// Hooks may contain hook actions that run before/after the server handler
// or server subscriber handler
Hooks options.Hooks
// RegisterInterval holds the interval for re-register
RegisterInterval time.Duration
// RegisterTTL specifies TTL for register record
RegisterTTL time.Duration
// GracefulTimeout timeout for graceful stop server
GracefulTimeout time.Duration
// MaxConn limits number of connections
MaxConn int
// DeregisterAttempts holds the number of deregister attempts before error
DeregisterAttempts int
// Hooks may contain hook actions that run before/after the server handler
// or server subscriber handler
Hooks options.Hooks
// GracefulTimeout timeout for graceful stop server
GracefulTimeout time.Duration
// RegisterAttempts holds the number of register attempts before error
RegisterAttempts int
}
// NewOptions returns new options struct with default or passed values
@ -100,7 +97,6 @@ func NewOptions(opts ...Option) Options {
Tracer: tracer.DefaultTracer,
Broker: broker.DefaultBroker,
Register: register.DefaultRegister,
Transport: transport.DefaultTransport,
Address: DefaultAddress,
Name: DefaultName,
Version: DefaultVersion,
@ -209,13 +205,6 @@ func Tracer(t tracer.Tracer) Option {
}
}
// Transport mechanism for communication e.g http, rabbitmq, etc
func Transport(t transport.Transport) Option {
return func(o *Options) {
o.Transport = t
}
}
// Metadata associated with the server
func Metadata(md metadata.Metadata) Option {
return func(o *Options) {
@ -249,14 +238,6 @@ func TLSConfig(t *tls.Config) Option {
return func(o *Options) {
// set the internal tls
o.TLSConfig = t
// set the default transport if one is not
// already set. Required for Init call below.
// set the transport tls
_ = o.Transport.Init(
transport.TLSConfig(t),
)
}
}
@ -337,14 +318,14 @@ type SubscriberOptions struct {
Context context.Context
// Queue holds the subscription queue
Queue string
// BatchWait flag specifies max wait time for batch filling
BatchWait time.Duration
// BatchSize flag specifies max size of batch
BatchSize int
// AutoAck flag for auto ack messages after processing
AutoAck bool
// BodyOnly flag specifies that message without headers
BodyOnly bool
// BatchSize flag specifies max size of batch
BatchSize int
// BatchWait flag specifies max wait time for batch filling
BatchWait time.Duration
}
// NewSubscriberOptions create new SubscriberOptions

View File

@ -12,7 +12,7 @@ import (
// DefaultServer default server
var (
DefaultServer Server = NewServer()
DefaultServer = NewServer()
)
var (
@ -62,6 +62,12 @@ type Server interface {
Stop() error
// Server implementation
String() string
// Live returns server liveness
Live() bool
// Ready returns server readiness
Ready() bool
// Health returns server health
Health() bool
}
type (

View File

@ -1,10 +1,14 @@
// Package micro is a pluggable framework for microservices
package micro // import "go.unistack.org/micro/v3"
package micro
import (
"fmt"
"net"
"sync"
"time"
"github.com/KimMachineGun/automemlimit/memlimit"
"go.uber.org/automaxprocs/maxprocs"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/config"
@ -15,8 +19,27 @@ import (
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v3/store"
"go.unistack.org/micro/v3/tracer"
utildns "go.unistack.org/micro/v3/util/dns"
)
func init() {
_, _ = maxprocs.Set()
_, _ = memlimit.SetGoMemLimitWithOpts(
memlimit.WithRatio(0.9),
memlimit.WithProvider(
memlimit.ApplyFallback(
memlimit.FromCgroup,
memlimit.FromSystem,
),
),
)
net.DefaultResolver = utildns.NewNetResolver(
utildns.Timeout(1*time.Second),
utildns.MinCacheTTL(5*time.Second),
)
}
// Service is an interface that wraps the lower level components.
// It works as a container with building blocks for the service.
type Service interface {
@ -57,8 +80,14 @@ type Service interface {
Start() error
// Stop the service
Stop() error
// The service implementation
// String service representation
String() string
// Live returns service liveness
Live() bool
// Ready returns service readiness
Ready() bool
// Health returns service health
Health() bool
}
// RegisterHandler is syntactic sugar for registering a handler
@ -72,22 +101,21 @@ func RegisterSubscriber(topic string, s server.Server, h interface{}, opts ...se
}
type service struct {
done chan struct{}
opts Options
sync.RWMutex
}
// NewService creates and returns a new Service based on the packages within.
func NewService(opts ...Option) Service {
return &service{opts: NewOptions(opts...)}
return &service{opts: NewOptions(opts...), done: make(chan struct{})}
}
func (s *service) Name() string {
return s.opts.Name
}
// Init initialises options. Additionally it calls cmd.Init
// which parses command line flags. cmd.Init is only called
// on first Init.
// Init initialises options.
//
//nolint:gocyclo
func (s *service) Init(opts ...Option) error {
@ -236,6 +264,63 @@ func (s *service) String() string {
return s.opts.Name
}
func (s *service) Live() bool {
for _, v := range s.opts.Brokers {
if !v.Live() {
return false
}
}
for _, v := range s.opts.Servers {
if !v.Live() {
return false
}
}
for _, v := range s.opts.Stores {
if !v.Live() {
return false
}
}
return true
}
func (s *service) Ready() bool {
for _, v := range s.opts.Brokers {
if !v.Ready() {
return false
}
}
for _, v := range s.opts.Servers {
if !v.Ready() {
return false
}
}
for _, v := range s.opts.Stores {
if !v.Ready() {
return false
}
}
return true
}
func (s *service) Health() bool {
for _, v := range s.opts.Brokers {
if !v.Health() {
return false
}
}
for _, v := range s.opts.Servers {
if !v.Health() {
return false
}
}
for _, v := range s.opts.Stores {
if !v.Health() {
return false
}
}
return true
}
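
The new Live, Ready and Health methods aggregate the state of all configured brokers, servers and stores. A sketch of exposing them as HTTP probes; the mux and paths are illustrative, only the three methods come from the Service interface.

package example

import (
	"net/http"

	micro "go.unistack.org/micro/v3"
)

func healthHandlers(svc micro.Service) *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/live", func(w http.ResponseWriter, _ *http.Request) {
		if !svc.Live() {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/ready", func(w http.ResponseWriter, _ *http.Request) {
		if !svc.Ready() {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/health", func(w http.ResponseWriter, _ *http.Request) {
		if !svc.Health() {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	return mux
}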
//nolint:gocyclo
func (s *service) Start() error {
var err error
@ -262,11 +347,7 @@ func (s *service) Start() error {
}
if config.Loggers[0].V(logger.InfoLevel) {
config.Loggers[0].Infof(s.opts.Context, "starting [service] %s version %s", s.Options().Name, s.Options().Version)
}
if len(s.opts.Servers) == 0 {
return fmt.Errorf("cant start nil server")
config.Loggers[0].Info(s.opts.Context, fmt.Sprintf("starting [service] %s version %s", s.Options().Name, s.Options().Version))
}
for _, reg := range s.opts.Registers {
@ -308,7 +389,7 @@ func (s *service) Stop() error {
s.RUnlock()
if config.Loggers[0].V(logger.InfoLevel) {
config.Loggers[0].Infof(s.opts.Context, "stoppping [service] %s", s.Name())
config.Loggers[0].Info(s.opts.Context, fmt.Sprintf("stopping [service] %s", s.Name()))
}
var err error
@ -348,6 +429,8 @@ func (s *service) Stop() error {
}
}
close(s.done)
return nil
}
@ -371,7 +454,7 @@ func (s *service) Run() error {
}
// wait on context cancel
<-s.opts.Context.Done()
<-s.done
return s.Stop()
}

View File

@ -121,8 +121,10 @@ func TestNewService(t *testing.T) {
}
tests := []struct {
name string
args args
want Service
args args
}{
{
name: "NewService",
@ -134,7 +136,7 @@ func TestNewService(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := NewService(tt.args.opts...); !reflect.DeepEqual(got, tt.want) {
if got := NewService(tt.args.opts...); got.Name() != tt.want.Name() {
t.Errorf("NewService() = %v, want %v", got.Options().Name, tt.want.Options().Name)
}
})
@ -146,9 +148,10 @@ func Test_service_Name(t *testing.T) {
opts Options
}
tests := []struct {
name string
name string
want string
fields fields
want string
}{
{
name: "Test_service_Name",
@ -245,10 +248,12 @@ func Test_service_Broker(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want broker.Broker
fields fields
args args
want broker.Broker
}{
{
name: "service.Broker",
@ -301,10 +306,12 @@ func Test_service_Tracer(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want tracer.Tracer
fields fields
args args
want tracer.Tracer
}{
{
name: "service.Tracer",
@ -338,10 +345,11 @@ func Test_service_Config(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want config.Config
fields fields
args args
want config.Config
}{
{
name: "service.Config",
@ -375,10 +383,12 @@ func Test_service_Client(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want client.Client
fields fields
args args
want client.Client
}{
{
name: "service.Client",
@ -412,10 +422,12 @@ func Test_service_Server(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want server.Server
fields fields
args args
want server.Server
}{
{
name: "service.Server",
@ -449,10 +461,12 @@ func Test_service_Store(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want store.Store
fields fields
args args
want store.Store
}{
{
name: "service.Store",
@ -486,10 +500,12 @@ func Test_service_Register(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want register.Register
fields fields
args args
want register.Register
}{
{
name: "service.Register",
@ -523,10 +539,12 @@ func Test_service_Logger(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want logger.Logger
fields fields
args args
want logger.Logger
}{
{
name: "service.Logger",
@ -560,10 +578,12 @@ func Test_service_Router(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want router.Router
fields fields
args args
want router.Router
}{
{
name: "service.Router",
@ -597,10 +617,12 @@ func Test_service_Meter(t *testing.T) {
names []string
}
tests := []struct {
name string
name string
want meter.Meter
fields fields
args args
want meter.Meter
}{
{
name: "service.Meter",
@ -631,8 +653,8 @@ func Test_service_String(t *testing.T) {
}
tests := []struct {
name string
fields fields
want string
fields fields
}{
{
name: "service.String",

View File

@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Store, bool) {
return c, ok
}
// MustContext get store from context
func MustContext(ctx context.Context) Store {
s, ok := FromContext(ctx)
if !ok {
panic("missing store")
}
return s
}
// NewContext put store in context
func NewContext(ctx context.Context, c Store) context.Context {
if ctx == nil {

View File

@ -4,6 +4,7 @@ import (
"context"
"sort"
"strings"
"sync/atomic"
"time"
cache "github.com/patrickmn/go-cache"
@ -20,7 +21,10 @@ func NewStore(opts ...store.Option) store.Store {
}
func (m *memoryStore) Connect(ctx context.Context) error {
return nil
if m.opts.LazyConnect {
return nil
}
return m.connect(ctx)
}
func (m *memoryStore) Disconnect(ctx context.Context) error {
@ -29,13 +33,14 @@ func (m *memoryStore) Disconnect(ctx context.Context) error {
}
type memoryStore struct {
funcRead store.FuncRead
funcWrite store.FuncWrite
funcExists store.FuncExists
funcList store.FuncList
funcDelete store.FuncDelete
store *cache.Cache
opts store.Options
funcRead store.FuncRead
funcWrite store.FuncWrite
funcExists store.FuncExists
funcList store.FuncList
funcDelete store.FuncDelete
store *cache.Cache
opts store.Options
isConnected atomic.Int32
}
func (m *memoryStore) key(prefix, key string) string {
@ -91,15 +96,15 @@ func (m *memoryStore) list(prefix string, limit, offset uint) []string {
if limit != 0 || offset != 0 {
sort.Slice(allKeys, func(i, j int) bool { return allKeys[i] < allKeys[j] })
sort.Slice(allKeys, func(i, j int) bool { return allKeys[i] < allKeys[j] })
end := len(allKeys)
end := uint(len(allKeys))
if limit > 0 {
calcLimit := int(offset + limit)
calcLimit := offset + limit
if calcLimit < end {
end = calcLimit
}
}
if int(offset) >= end {
if offset >= end {
return nil
}
return allKeys[offset:end]
@ -118,7 +123,7 @@ func (m *memoryStore) Init(opts ...store.Option) error {
m.funcList = m.fnList
m.funcDelete = m.fnDelete
m.opts.Hooks.EachNext(func(hook options.Hook) {
m.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case store.HookRead:
m.funcRead = h(m.funcRead)
@ -144,7 +149,24 @@ func (m *memoryStore) Name() string {
return m.opts.Name
}
func (m *memoryStore) Live() bool {
return true
}
func (m *memoryStore) Ready() bool {
return true
}
func (m *memoryStore) Health() bool {
return true
}
func (m *memoryStore) Exists(ctx context.Context, key string, opts ...store.ExistsOption) error {
if m.opts.LazyConnect {
if err := m.connect(ctx); err != nil {
return err
}
}
return m.funcExists(ctx, key, opts...)
}
@ -157,6 +179,11 @@ func (m *memoryStore) fnExists(ctx context.Context, key string, opts ...store.Ex
}
func (m *memoryStore) Read(ctx context.Context, key string, val interface{}, opts ...store.ReadOption) error {
if m.opts.LazyConnect {
if err := m.connect(ctx); err != nil {
return err
}
}
return m.funcRead(ctx, key, val, opts...)
}
@ -169,6 +196,11 @@ func (m *memoryStore) fnRead(ctx context.Context, key string, val interface{}, o
}
func (m *memoryStore) Write(ctx context.Context, key string, val interface{}, opts ...store.WriteOption) error {
if m.opts.LazyConnect {
if err := m.connect(ctx); err != nil {
return err
}
}
return m.funcWrite(ctx, key, val, opts...)
}
@ -193,6 +225,11 @@ func (m *memoryStore) fnWrite(ctx context.Context, key string, val interface{},
}
func (m *memoryStore) Delete(ctx context.Context, key string, opts ...store.DeleteOption) error {
if m.opts.LazyConnect {
if err := m.connect(ctx); err != nil {
return err
}
}
return m.funcDelete(ctx, key, opts...)
}
@ -211,6 +248,11 @@ func (m *memoryStore) Options() store.Options {
}
func (m *memoryStore) List(ctx context.Context, opts ...store.ListOption) ([]string, error) {
if m.opts.LazyConnect {
if err := m.connect(ctx); err != nil {
return nil, err
}
}
return m.funcList(ctx, opts...)
}
@ -244,3 +286,21 @@ func (m *memoryStore) fnList(ctx context.Context, opts ...store.ListOption) ([]s
return keys, nil
}
func (m *memoryStore) connect(ctx context.Context) error {
m.isConnected.CompareAndSwap(0, 1)
return nil
}
func (m *memoryStore) Watch(ctx context.Context, opts ...store.WatchOption) (store.Watcher, error) {
return &watcher{}, nil
}
type watcher struct{}
func (w *watcher) Next() (store.Event, error) {
return nil, nil
}
func (w *watcher) Stop() {
}

View File

@ -2,22 +2,43 @@ package store
import (
"context"
"sync"
"sync/atomic"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/util/id"
)
var _ Store = (*noopStore)(nil)
type noopStore struct {
watchers map[string]Watcher
funcRead FuncRead
funcWrite FuncWrite
funcExists FuncExists
funcList FuncList
funcDelete FuncDelete
opts Options
opts Options
isConnected atomic.Int32
mu sync.Mutex
}
func NewStore(opts ...Option) *noopStore {
func (n *noopStore) Live() bool {
return true
}
func (n *noopStore) Ready() bool {
return true
}
func (n *noopStore) Health() bool {
return true
}
func NewStore(opts ...Option) Store {
options := NewOptions(opts...)
return &noopStore{opts: options}
}
@ -33,7 +54,7 @@ func (n *noopStore) Init(opts ...Option) error {
n.funcList = n.fnList
n.funcDelete = n.fnDelete
n.opts.Hooks.EachNext(func(hook options.Hook) {
n.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case HookRead:
n.funcRead = h(n.funcRead)
@ -52,12 +73,10 @@ func (n *noopStore) Init(opts ...Option) error {
}
func (n *noopStore) Connect(ctx context.Context) error {
select {
case <-ctx.Done():
return ctx.Err()
default:
if n.opts.LazyConnect {
return nil
}
return nil
return n.connect(ctx)
}
func (n *noopStore) Disconnect(ctx context.Context) error {
@ -70,10 +89,15 @@ func (n *noopStore) Disconnect(ctx context.Context) error {
}
func (n *noopStore) Read(ctx context.Context, key string, val interface{}, opts ...ReadOption) error {
if n.opts.LazyConnect {
if err := n.connect(ctx); err != nil {
return err
}
}
return n.funcRead(ctx, key, val, opts...)
}
func (n *noopStore) fnRead(ctx context.Context, key string, val interface{}, opts ...ReadOption) error {
func (n *noopStore) fnRead(ctx context.Context, _ string, _ interface{}, _ ...ReadOption) error {
select {
case <-ctx.Done():
return ctx.Err()
@ -83,10 +107,15 @@ func (n *noopStore) fnRead(ctx context.Context, key string, val interface{}, opt
}
func (n *noopStore) Delete(ctx context.Context, key string, opts ...DeleteOption) error {
if n.opts.LazyConnect {
if err := n.connect(ctx); err != nil {
return err
}
}
return n.funcDelete(ctx, key, opts...)
}
func (n *noopStore) fnDelete(ctx context.Context, key string, opts ...DeleteOption) error {
func (n *noopStore) fnDelete(ctx context.Context, _ string, _ ...DeleteOption) error {
select {
case <-ctx.Done():
return ctx.Err()
@ -96,10 +125,15 @@ func (n *noopStore) fnDelete(ctx context.Context, key string, opts ...DeleteOpti
}
func (n *noopStore) Exists(ctx context.Context, key string, opts ...ExistsOption) error {
if n.opts.LazyConnect {
if err := n.connect(ctx); err != nil {
return err
}
}
return n.funcExists(ctx, key, opts...)
}
func (n *noopStore) fnExists(ctx context.Context, key string, opts ...ExistsOption) error {
func (n *noopStore) fnExists(ctx context.Context, _ string, _ ...ExistsOption) error {
select {
case <-ctx.Done():
return ctx.Err()
@ -109,10 +143,15 @@ func (n *noopStore) fnExists(ctx context.Context, key string, opts ...ExistsOpti
}
func (n *noopStore) Write(ctx context.Context, key string, val interface{}, opts ...WriteOption) error {
if n.opts.LazyConnect {
if err := n.connect(ctx); err != nil {
return err
}
}
return n.funcWrite(ctx, key, val, opts...)
}
func (n *noopStore) fnWrite(ctx context.Context, key string, val interface{}, opts ...WriteOption) error {
func (n *noopStore) fnWrite(ctx context.Context, _ string, _ interface{}, _ ...WriteOption) error {
select {
case <-ctx.Done():
return ctx.Err()
@ -122,6 +161,11 @@ func (n *noopStore) fnWrite(ctx context.Context, key string, val interface{}, op
}
func (n *noopStore) List(ctx context.Context, opts ...ListOption) ([]string, error) {
if n.opts.LazyConnect {
if err := n.connect(ctx); err != nil {
return nil, err
}
}
return n.funcList(ctx, opts...)
}
@ -145,3 +189,53 @@ func (n *noopStore) String() string {
func (n *noopStore) Options() Options {
return n.opts
}
func (n *noopStore) connect(ctx context.Context) error {
if n.isConnected.CompareAndSwap(0, 1) {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
}
return nil
}
type watcher struct {
opts WatchOptions
ch chan Event
exit chan bool
id string
}
func (n *noopStore) Watch(_ context.Context, opts ...WatchOption) (Watcher, error) {
id, err := id.New()
if err != nil {
return nil, err
}
wo, err := NewWatchOptions(opts...)
if err != nil {
return nil, err
}
// construct the watcher
w := &watcher{
exit: make(chan bool),
ch: make(chan Event),
id: id,
opts: wo,
}
n.mu.Lock()
n.watchers[w.id] = w
n.mu.Unlock()
return w, nil
}
func (w *watcher) Next() (Event, error) {
return nil, nil
}
func (w *watcher) Stop() {
}

View File

@ -15,6 +15,13 @@ import (
// Options contains configuration for the Store
type Options struct {
// Name specifies store name
Name string
// Namespace of the records
Namespace string
// Separator used as key parts separator
Separator string
// Meter used for metrics
Meter meter.Meter
// Tracer used for tracing
@ -25,22 +32,19 @@ type Options struct {
Codec codec.Codec
// Logger used for logging
Logger logger.Logger
// TLSConfig holds tls.TLSConfig options
TLSConfig *tls.Config
// Name specifies store name
Name string
// Namespace of the records
Namespace string
// Separator used as key parts separator
Separator string
// Addrs contains store address
Addrs []string
// Wrappers store wrapper that called before actual functions
// Wrappers []Wrapper
// Timeout specifies timeout duration for all operations
Timeout time.Duration
// Hooks can be run before/after store Read/List/Write/Exists/Delete
Hooks options.Hooks
// Timeout specifies timeout duration for all operations
Timeout time.Duration
// LazyConnect defers creating the connection until the store is first used
LazyConnect bool
}
// NewOptions creates options struct
@ -132,6 +136,13 @@ func Timeout(td time.Duration) Option {
}
}
// LazyConnect initializes the connection only when it is first needed
func LazyConnect(b bool) Option {
return func(o *Options) {
o.LazyConnect = b
}
}
// Addrs contains the addresses or other connection information of the backing storage.
// For example, an etcd implementation would contain the nodes of the cluster.
// A SQL implementation could contain one or more connection strings.
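
A minimal sketch of the new option with the built-in noop store: Connect becomes a no-op and the connection is established lazily by the first operation. Any Store honouring LazyConnect behaves the same way.

package main

import (
	"context"

	"go.unistack.org/micro/v3/store"
)

func main() {
	s := store.NewStore(store.LazyConnect(true))
	if err := s.Init(); err != nil {
		panic(err)
	}
	// Connect is a no-op with LazyConnect; the connection is made lazily
	// by the first Read/Write/List/Exists/Delete call.
	if err := s.Connect(context.TODO()); err != nil {
		panic(err)
	}
	if err := s.Write(context.TODO(), "key", "val"); err != nil {
		panic(err)
	}
}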

View File

@ -4,15 +4,19 @@ package store
import (
"context"
"errors"
"time"
)
var (
// ErrWatcherStopped is returned when the watcher is stopped
ErrWatcherStopped = errors.New("watcher stopped")
// ErrNotConnected is returned when a store is not connected
ErrNotConnected = errors.New("not connected")
// ErrNotFound is returned when a key doesn't exist
ErrNotFound = errors.New("not found")
// ErrInvalidKey is returned when a key is empty or has an invalid format
ErrInvalidKey = errors.New("invalid key")
// DefaultStore is the global default store
DefaultStore Store = NewStore()
DefaultStore = NewStore()
// DefaultSeparator is the global default key parts separator
DefaultSeparator = "/"
)
@ -41,6 +45,14 @@ type Store interface {
Disconnect(ctx context.Context) error
// String returns the name of the implementation.
String() string
// Watch returns events watcher
Watch(ctx context.Context, opts ...WatchOption) (Watcher, error)
// Live returns store liveness
Live() bool
// Ready returns store readiness
Ready() bool
// Health returns store health
Health() bool
}
type (
@ -55,3 +67,45 @@ type (
FuncList func(ctx context.Context, opts ...ListOption) ([]string, error)
HookList func(next FuncList) FuncList
)
type EventType int
const (
EventTypeUnknown = iota
EventTypeConnect
EventTypeDisconnect
EventTypeOpError
)
type Event interface {
Timestamp() time.Time
Error() error
Type() EventType
}
type Watcher interface {
// Next is a blocking call
Next() (Event, error)
// Stop stops the watcher
Stop()
}
type WatchOption func(*WatchOptions) error
type WatchOptions struct{}
func NewWatchOptions(opts ...WatchOption) (WatchOptions, error) {
options := WatchOptions{}
var err error
for _, o := range opts {
if err = o(&options); err != nil {
break
}
}
return options, err
}
func Watch(context.Context) (Watcher, error) {
return nil, nil
}
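
A sketch of consuming the new Watch API. The noop watcher above returns nil events, so the nil guard matters; real store implementations would be the ones emitting connect/disconnect/error events.

package example

import (
	"context"

	"go.unistack.org/micro/v3/store"
)

// watchStore blocks for one event from the store watcher and reacts to it.
func watchStore(ctx context.Context, s store.Store) error {
	w, err := s.Watch(ctx)
	if err != nil {
		return err
	}
	defer w.Stop()

	ev, err := w.Next() // blocking in real implementations
	if err != nil {
		return err
	}
	if ev == nil { // the noop watcher returns nil, nil
		return nil
	}
	if ev.Type() == store.EventTypeDisconnect {
		return ev.Error()
	}
	return nil
}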

View File

@ -67,16 +67,18 @@ func (w *NamespaceStore) String() string {
return w.s.String()
}
// type NamespaceWrapper struct{}
// func NewNamespaceWrapper() Wrapper {
// return &NamespaceWrapper{}
// }
/*
func (w *OmitWrapper) Logf(fn LogfFunc) LogfFunc {
return func(ctx context.Context, level Level, msg string, args ...interface{}) {
fn(ctx, level, msg, getArgs(args)...)
}
func (w *NamespaceStore) Watch(ctx context.Context, opts ...WatchOption) (Watcher, error) {
return w.s.Watch(ctx, opts...)
}
func (w *NamespaceStore) Live() bool {
return w.s.Live()
}
func (w *NamespaceStore) Ready() bool {
return w.s.Ready()
}
func (w *NamespaceStore) Health() bool {
return w.s.Health()
}
*/

View File

@ -6,9 +6,10 @@ import (
)
type memorySync struct {
mtx gosync.RWMutex
locks map[string]*memoryLock
options Options
mtx gosync.RWMutex
}
type memoryLock struct {

View File

@ -18,6 +18,15 @@ func FromContext(ctx context.Context) (Tracer, bool) {
return nil, false
}
// MustContext returns a tracer from context, panicking if it is not set
func MustContext(ctx context.Context) Tracer {
t, ok := FromContext(ctx)
if !ok {
panic("missing tracer")
}
return t
}
// NewContext saves the tracer in the context
func NewContext(ctx context.Context, tracer Tracer) context.Context {
if ctx == nil {
@ -28,6 +37,15 @@ func NewContext(ctx context.Context, tracer Tracer) context.Context {
type spanKey struct{}
// SpanMustContext returns a span from context, panicking if it is not set
func SpanMustContext(ctx context.Context) Span {
sp, ok := SpanFromContext(ctx)
if !ok {
panic("missing span")
}
return sp
}
// SpanFromContext returns a span from context
func SpanFromContext(ctx context.Context) (Span, bool) {
if ctx == nil {
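A short usage sketch for the new Must helpers: they suit call sites where middleware has already installed a tracer or span in the context, since a missing value panics; the function names below are illustrative.

package example

import (
	"context"

	"go.unistack.org/micro/v3/tracer"
)

// annotate assumes a span was stored in ctx via tracer.NewSpanContext; it panics otherwise.
func annotate(ctx context.Context, key, val string) {
	sp := tracer.SpanMustContext(ctx)
	sp.AddLabels(key, val)
}

// annotateIfPresent is the non-panicking variant for paths where the span may be absent.
func annotateIfPresent(ctx context.Context, key, val string) {
	if sp, ok := tracer.SpanFromContext(ctx); ok {
		sp.AddLabels(key, val)
	}
}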

View File

@ -25,6 +25,7 @@ func (t *Tracer) Start(ctx context.Context, name string, opts ...tracer.SpanOpti
name: name,
ctx: ctx,
tracer: t,
labels: options.Labels,
kind: options.Kind,
startTime: time.Now(),
}
@ -37,6 +38,14 @@ func (t *Tracer) Start(ctx context.Context, name string, opts ...tracer.SpanOpti
return tracer.NewSpanContext(ctx, span), span
}
type memoryStringer struct {
s string
}
func (s memoryStringer) String() string {
return s.s
}
func (t *Tracer) Flush(_ context.Context) error {
return nil
}
@ -52,14 +61,6 @@ func (t *Tracer) Name() string {
return t.opts.Name
}
type noopStringer struct {
s string
}
func (s noopStringer) String() string {
return s.s
}
type Span struct {
ctx context.Context
tracer tracer.Tracer
@ -67,8 +68,8 @@ type Span struct {
statusMsg string
startTime time.Time
finishTime time.Time
traceID noopStringer
spanID noopStringer
traceID memoryStringer
spanID memoryStringer
events []*Event
labels []interface{}
logs []interface{}

View File

@ -17,14 +17,14 @@ func TestLoggerWithTracer(t *testing.T) {
buf := bytes.NewBuffer(nil)
logger.DefaultLogger = slog.NewLogger(logger.WithOutput(buf))
if err := logger.Init(); err != nil {
if err := logger.DefaultLogger.Init(); err != nil {
t.Fatal(err)
}
var span tracer.Span
tr := NewTracer()
ctx, span = tr.Start(ctx, "test1")
logger.Error(ctx, "my test error", fmt.Errorf("error"))
logger.DefaultLogger.Error(ctx, "my test error", fmt.Errorf("error"))
if !strings.Contains(buf.String(), span.TraceID()) {
t.Fatalf("log does not contains trace id: %s", buf.Bytes())

View File

@ -2,6 +2,7 @@ package tracer
import (
"context"
"time"
"go.unistack.org/micro/v3/util/id"
)
@ -20,18 +21,18 @@ func (t *noopTracer) Spans() []Span {
func (t *noopTracer) Start(ctx context.Context, name string, opts ...SpanOption) (context.Context, Span) {
options := NewSpanOptions(opts...)
span := &noopSpan{
name: name,
ctx: ctx,
tracer: t,
labels: options.Labels,
kind: options.Kind,
name: name,
ctx: ctx,
tracer: t,
startTime: time.Now(),
labels: options.Labels,
kind: options.Kind,
}
span.spanID.s, _ = id.New()
span.traceID.s, _ = id.New()
if span.ctx == nil {
span.ctx = context.Background()
}
t.spans = append(t.spans, span)
return NewSpanContext(ctx, span), span
}
@ -58,23 +59,18 @@ func (t *noopTracer) Name() string {
return t.opts.Name
}
type noopEvent struct {
name string
labels []interface{}
}
type noopSpan struct {
ctx context.Context
tracer Tracer
name string
statusMsg string
traceID noopStringer
spanID noopStringer
events []*noopEvent
labels []interface{}
logs []interface{}
kind SpanKind
status SpanStatus
ctx context.Context
tracer Tracer
name string
statusMsg string
startTime time.Time
finishTime time.Time
traceID noopStringer
spanID noopStringer
labels []interface{}
kind SpanKind
status SpanStatus
}
func (s *noopSpan) TraceID() string {
@ -86,6 +82,7 @@ func (s *noopSpan) SpanID() string {
}
func (s *noopSpan) Finish(_ ...SpanOption) {
s.finishTime = time.Now()
}
func (s *noopSpan) Context() context.Context {
@ -97,8 +94,6 @@ func (s *noopSpan) Tracer() Tracer {
}
func (s *noopSpan) AddEvent(name string, opts ...EventOption) {
options := NewEventOptions(opts...)
s.events = append(s.events, &noopEvent{name: name, labels: options.Labels})
}
func (s *noopSpan) SetName(name string) {
@ -106,7 +101,6 @@ func (s *noopSpan) SetName(name string) {
}
func (s *noopSpan) AddLogs(kv ...interface{}) {
s.logs = append(s.logs, kv...)
}
func (s *noopSpan) AddLabels(kv ...interface{}) {

409
util/dns/cache.go Normal file
View File

@ -0,0 +1,409 @@
package dns
import (
"context"
"math"
"net"
"sync"
"time"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/semconv"
)
// DialFunc is a [net.Resolver.Dial] function.
type DialFunc func(ctx context.Context, network, address string) (net.Conn, error)
// NewNetResolver creates a caching [net.Resolver] that uses parent to resolve names.
func NewNetResolver(opts ...Option) *net.Resolver {
options := Options{Resolver: &net.Resolver{}}
for _, o := range opts {
o(&options)
}
if options.Meter == nil {
options.Meter = meter.DefaultMeter
opts = append(opts, Meter(options.Meter))
}
return &net.Resolver{
PreferGo: true,
StrictErrors: options.Resolver.StrictErrors,
Dial: NewNetDialer(options.Resolver.Dial, append(opts, Resolver(options.Resolver))...),
}
}
// NewNetDialer adds caching to a [net.Resolver.Dial] function.
func NewNetDialer(parent DialFunc, opts ...Option) DialFunc {
cache := cache{dial: parent, opts: Options{}}
for _, o := range opts {
o(&cache.opts)
}
if cache.opts.MaxCacheEntries == 0 {
cache.opts.MaxCacheEntries = DefaultMaxCacheEntries
}
return func(_ context.Context, network, address string) (net.Conn, error) {
conn := &dnsConn{}
conn.roundTrip = cachingRoundTrip(&cache, network, address)
return conn, nil
}
}
const DefaultMaxCacheEntries = 300
// An Option customizes the resolver cache.
type Option func(*Options)
type Options struct {
Resolver *net.Resolver
MaxCacheEntries int
MaxCacheTTL time.Duration
MinCacheTTL time.Duration
NegativeCache bool
PreferIPV4 bool
PreferIPV6 bool
Timeout time.Duration
Meter meter.Meter
}
// MaxCacheEntries sets the maximum number of entries to cache.
// If zero, [DefaultMaxCacheEntries] is used; negative means no limit.
func MaxCacheEntries(n int) Option {
return func(o *Options) {
o.MaxCacheEntries = n
}
}
// MaxCacheTTL sets the maximum time-to-live for entries in the cache.
func MaxCacheTTL(td time.Duration) Option {
return func(o *Options) {
o.MaxCacheTTL = td
}
}
// MinCacheTTL sets the minimum time-to-live for entries in the cache.
func MinCacheTTL(td time.Duration) Option {
return func(o *Options) {
o.MinCacheTTL = td
}
}
// NegativeCache sets whether to cache negative responses.
func NegativeCache(b bool) Option {
return func(o *Options) {
o.NegativeCache = b
}
}
// Meter sets meter.Meter
func Meter(m meter.Meter) Option {
return func(o *Options) {
o.Meter = m
}
}
// Timeout sets upstream *net.Resolver timeout
func Timeout(td time.Duration) Option {
return func(o *Options) {
o.Timeout = td
}
}
// Resolver sets upstream *net.Resolver.
func Resolver(r *net.Resolver) Option {
return func(o *Options) {
o.Resolver = r
}
}
// PreferIPV4 resolves ipv4 records.
func PreferIPV4(b bool) Option {
return func(o *Options) {
o.PreferIPV4 = b
}
}
// PreferIPV6 resolves ipv6 records.
func PreferIPV6(b bool) Option {
return func(o *Options) {
o.PreferIPV6 = b
}
}
type cache struct {
entries map[string]cacheEntry
dial DialFunc
opts Options
sync.RWMutex
}
type cacheEntry struct {
deadline time.Time
value string
}
func (c *cache) put(req string, res string) {
// ignore uncacheable/unparseable answers
if invalid(req, res) {
return
}
// ignore errors (if requested)
if nameError(res) && !c.opts.NegativeCache {
return
}
// ignore uncacheable/unparseable answers
ttl := getTTL(res)
if ttl <= 0 {
return
}
// adjust TTL
if ttl < c.opts.MinCacheTTL {
ttl = c.opts.MinCacheTTL
}
// maxTTL overrides minTTL
if ttl > c.opts.MaxCacheTTL && c.opts.MaxCacheTTL != 0 {
ttl = c.opts.MaxCacheTTL
}
c.Lock()
if c.entries == nil {
c.entries = make(map[string]cacheEntry)
}
// do some cache eviction
var tested, evicted int
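// bounded eviction: sample at most 8 entries per insert, dropping any that expired;
// if none expired and the cache is at capacity, drop one sampled entry and stop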
for k, e := range c.entries {
if time.Until(e.deadline) <= 0 {
c.opts.Meter.Counter(semconv.CacheItemsTotal, "type", "dns").Dec()
c.opts.Meter.Counter(semconv.CacheRequestTotal, "type", "dns", "method", "evict").Inc()
// delete expired entry
delete(c.entries, k)
evicted++
}
tested++
if tested < 8 {
continue
}
if evicted == 0 && c.opts.MaxCacheEntries > 0 && len(c.entries) >= c.opts.MaxCacheEntries {
c.opts.Meter.Counter(semconv.CacheItemsTotal, "type", "dns").Dec()
c.opts.Meter.Counter(semconv.CacheRequestTotal, "type", "dns", "method", "evict").Inc()
// delete at least one entry
delete(c.entries, k)
}
break
}
// remove message IDs
c.entries[req[2:]] = cacheEntry{
deadline: time.Now().Add(ttl),
value: res[2:],
}
c.opts.Meter.Counter(semconv.CacheItemsTotal, "type", "dns").Inc()
c.Unlock()
}
func (c *cache) get(req string) (res string) {
// ignore invalid messages
if len(req) < 12 {
return ""
}
if req[2] >= 0x7f {
return ""
}
c.RLock()
defer c.RUnlock()
if c.entries == nil {
return ""
}
// remove message ID
entry, ok := c.entries[req[2:]]
if ok && time.Until(entry.deadline) > 0 {
// prepend correct ID
return req[:2] + entry.value
}
return ""
}
func invalid(req string, res string) bool {
if len(req) < 12 || len(res) < 12 { // header size
return true
}
if req[0] != res[0] || req[1] != res[1] { // IDs match
return true
}
if req[2] >= 0x7f || res[2] < 0x7f { // query, response
return true
}
if req[2]&0x7a != 0 || res[2]&0x7a != 0 { // standard query, not truncated
return true
}
if res[3]&0xf != 0 && res[3]&0xf != 3 { // no error, or name error
return true
}
return false
}
func nameError(res string) bool {
return res[3]&0xf == 3
}
func getTTL(msg string) time.Duration {
ttl := math.MaxInt32
qdcount := getUint16(msg[4:])
ancount := getUint16(msg[6:])
nscount := getUint16(msg[8:])
arcount := getUint16(msg[10:])
rdcount := ancount + nscount + arcount
msg = msg[12:] // skip header
// skip questions
for i := 0; i < qdcount; i++ {
name := getNameLen(msg)
if name < 0 || name+4 > len(msg) {
return -1
}
msg = msg[name+4:]
}
// parse records
for i := 0; i < rdcount; i++ {
name := getNameLen(msg)
if name < 0 || name+10 > len(msg) {
return -1
}
rtyp := getUint16(msg[name+0:])
rttl := getUint32(msg[name+4:])
rlen := getUint16(msg[name+8:])
if name+10+rlen > len(msg) {
return -1
}
// skip EDNS OPT since it doesn't have a TTL
if rtyp != 41 && rttl < ttl {
ttl = rttl
}
msg = msg[name+10+rlen:]
}
return time.Duration(ttl) * time.Second
}
func getNameLen(msg string) int {
i := 0
for i < len(msg) {
if msg[i] == 0 {
// end of name
i++
break
}
if msg[i] >= 0xc0 {
// compressed name
i += 2
break
}
if msg[i] >= 0x40 {
// reserved
return -1
}
i += int(msg[i] + 1)
}
return i
}
func getUint16(s string) int {
return int(s[1]) | int(s[0])<<8
}
func getUint32(s string) int {
return int(s[3]) | int(s[2])<<8 | int(s[1])<<16 | int(s[0])<<24
}
func cachingRoundTrip(cache *cache, network, address string) roundTripper {
return func(ctx context.Context, req string) (res string, err error) {
cache.opts.Meter.Counter(semconv.CacheRequestInflight, "type", "dns").Inc()
defer cache.opts.Meter.Counter(semconv.CacheRequestInflight, "type", "dns").Dec()
// check cache
if res = cache.get(req); res != "" {
return res, nil
}
cache.opts.Meter.Counter(semconv.CacheRequestTotal, "type", "dns", "method", "get", "status", "miss").Inc()
ts := time.Now()
defer func() {
cache.opts.Meter.Summary(semconv.CacheRequestLatencyMicroseconds, "type", "dns", "method", "get").UpdateDuration(ts)
cache.opts.Meter.Histogram(semconv.CacheRequestDurationSeconds, "type", "dns", "method", "get").UpdateDuration(ts)
}()
switch {
case cache.opts.PreferIPV4 && cache.opts.PreferIPV6:
network = "udp"
case cache.opts.PreferIPV4:
network = "udp4"
case cache.opts.PreferIPV6:
network = "udp6"
default:
network = "udp"
}
if cache.opts.Timeout > 0 {
var cancel func()
ctx, cancel = context.WithTimeout(ctx, cache.opts.Timeout)
defer cancel()
}
// dial connection
var conn net.Conn
if cache.dial != nil {
conn, err = cache.dial(ctx, network, address)
} else {
var d net.Dialer
conn, err = d.DialContext(ctx, network, address)
}
if err != nil {
return "", err
}
ctx, cancel := context.WithCancel(ctx)
go func() {
<-ctx.Done()
conn.Close()
}()
defer cancel()
if t, ok := ctx.Deadline(); ok {
err = conn.SetDeadline(t)
if err != nil {
return "", err
}
}
// send request
err = writeMessage(conn, req)
if err != nil {
return "", err
}
// read response
res, err = readMessage(conn)
if err != nil {
return "", err
}
// cache response
cache.put(req, res)
return res, nil
}
}
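The test file that follows shows the most basic usage; as a complement, a hedged sketch of wiring the cached resolver with several of the options above. The TTL bounds, timeout and preference flag are illustrative, and the import path follows the module path used elsewhere in this diff.

package example

import (
	"net"
	"time"

	"go.unistack.org/micro/v3/util/dns"
)

// newCachingResolver builds a caching net.Resolver with bounded TTLs and negative caching.
func newCachingResolver() *net.Resolver {
	return dns.NewNetResolver(
		dns.PreferIPV4(true),
		dns.MinCacheTTL(10*time.Second), // never cache an answer for less than this
		dns.MaxCacheTTL(10*time.Minute), // cap very long upstream TTLs
		dns.NegativeCache(true),         // cache name errors too
		dns.Timeout(2*time.Second),      // upstream per-lookup timeout
	)
}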

20
util/dns/cache_test.go Normal file
View File

@ -0,0 +1,20 @@
package dns
import (
"net"
"testing"
)
func TestCache(t *testing.T) {
net.DefaultResolver = NewNetResolver(PreferIPV4(true))
_, err := net.LookupHost("unistack.org")
if err != nil {
t.Fatal(err)
}
_, err = net.LookupHost("unistack.org")
if err != nil {
t.Fatal(err)
}
}

183
util/dns/conn.go Normal file
View File

@ -0,0 +1,183 @@
package dns
import (
"bytes"
"context"
"io"
"net"
"strings"
"sync"
"time"
)
type dnsConn struct {
ctx context.Context
cancel context.CancelFunc
roundTrip roundTripper
deadline time.Time
ibuf bytes.Buffer
obuf bytes.Buffer
sync.Mutex
}
type roundTripper func(ctx context.Context, req string) (res string, err error)
func (c *dnsConn) Read(b []byte) (n int, err error) {
imsg, n, err := c.drainBuffers(b)
if n != 0 || err != nil {
return n, err
}
ctx, cancel := c.childContext()
omsg, err := c.roundTrip(ctx, imsg)
cancel()
if err != nil {
return 0, err
}
return c.fillBuffer(b, omsg)
}
func (c *dnsConn) Write(b []byte) (n int, err error) {
c.Lock()
defer c.Unlock()
return c.ibuf.Write(b)
}
func (c *dnsConn) Close() error {
c.Lock()
cancel := c.cancel
c.Unlock()
if cancel != nil {
cancel()
}
return nil
}
func (c *dnsConn) LocalAddr() net.Addr {
return nil
}
func (c *dnsConn) RemoteAddr() net.Addr {
return nil
}
func (c *dnsConn) SetDeadline(t time.Time) error {
var err error
if err = c.SetReadDeadline(t); err != nil {
return err
}
if err = c.SetWriteDeadline(t); err != nil {
return err
}
return nil
}
func (c *dnsConn) SetReadDeadline(t time.Time) error {
c.Lock()
c.deadline = t
c.Unlock()
return nil
}
func (c *dnsConn) SetWriteDeadline(_ time.Time) error {
// writes do not time out
return nil
}
func (c *dnsConn) drainBuffers(b []byte) (string, int, error) {
c.Lock()
defer c.Unlock()
// drain the output buffer
if c.obuf.Len() > 0 {
n, err := c.obuf.Read(b)
return "", n, err
}
// otherwise, get the next message from the input buffer
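// net.Resolver treats this conn as a stream transport, so queries arrive with a
// 2-byte big-endian length prefix followed by the DNS message (RFC 7766 framing)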
sz := c.ibuf.Next(2)
if len(sz) < 2 {
return "", 0, io.ErrUnexpectedEOF
}
size := int64(sz[0])<<8 | int64(sz[1])
var str strings.Builder
_, err := io.CopyN(&str, &c.ibuf, size)
if err == io.EOF {
return "", 0, io.ErrUnexpectedEOF
}
if err != nil {
return "", 0, err
}
return str.String(), 0, nil
}
func (c *dnsConn) fillBuffer(b []byte, str string) (int, error) {
c.Lock()
defer c.Unlock()
c.obuf.WriteByte(byte(len(str) >> 8))
c.obuf.WriteByte(byte(len(str)))
c.obuf.WriteString(str)
return c.obuf.Read(b)
}
func (c *dnsConn) childContext() (context.Context, context.CancelFunc) {
c.Lock()
defer c.Unlock()
if c.ctx == nil {
c.ctx, c.cancel = context.WithCancel(context.Background())
}
return context.WithDeadline(c.ctx, c.deadline)
}
func writeMessage(conn net.Conn, msg string) error {
var buf []byte
if _, ok := conn.(net.PacketConn); ok {
buf = []byte(msg)
} else {
buf = make([]byte, len(msg)+2)
buf[0] = byte(len(msg) >> 8)
buf[1] = byte(len(msg))
copy(buf[2:], msg)
}
// SHOULD do a single write on TCP (RFC 7766, section 8).
// MUST do a single write on UDP.
_, err := conn.Write(buf)
return err
}
func readMessage(c net.Conn) (string, error) {
if _, ok := c.(net.PacketConn); ok {
// RFC 1035 specifies 512 as the maximum message size for DNS over UDP.
// RFC 6891, on the other hand, suggests 4096 as the maximum payload size for EDNS.
b := make([]byte, 4096)
n, err := c.Read(b)
if err != nil {
return "", err
}
return string(b[:n]), nil
}
var sz [2]byte
_, err := io.ReadFull(c, sz[:])
if err != nil {
return "", err
}
size := int64(sz[0])<<8 | int64(sz[1])
var str strings.Builder
_, err = io.CopyN(&str, c, size)
if err == io.EOF {
return "", io.ErrUnexpectedEOF
}
if err != nil {
return "", err
}
return str.String(), nil
}

View File

@ -71,7 +71,7 @@ func (h *serverHandler) HandleRPC(ctx context.Context, rs stats.RPCStats) {
}
// TagConn can attach some information to the given context.
func (h *serverHandler) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context {
func (h *serverHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
if span, ok := tracer.SpanFromContext(ctx); ok {
attrs := peerAttr(peerFromCtx(ctx))
span.AddLabels(attrs...)
@ -80,7 +80,7 @@ func (h *serverHandler) TagConn(ctx context.Context, info *stats.ConnTagInfo) co
}
// HandleConn processes the Conn stats.
func (h *serverHandler) HandleConn(ctx context.Context, info stats.ConnStats) {
func (h *serverHandler) HandleConn(_ context.Context, _ stats.ConnStats) {
}
type clientHandler struct {

View File

@ -665,12 +665,12 @@ func patParamKeys(pattern string) ([]string, error) {
// longestPrefix finds the length of the shared prefix
// of two strings
func longestPrefix(k1, k2 string) int {
max := len(k1)
if l := len(k2); l < max {
max = l
maxLen := len(k1)
if l := len(k2); l < maxLen {
maxLen = l
}
var i int
for i = 0; i < max; i++ {
for i = 0; i < maxLen; i++ {
if k1[i] != k2[i] {
break
}

View File

@ -71,7 +71,7 @@ func New(opts ...Option) (string, error) {
func Must(opts ...Option) string {
id, err := New(opts...)
if err != nil {
logger.Fatal(context.TODO(), err)
logger.DefaultLogger.Fatal(context.TODO(), "Must call failed", err)
}
return id
}

View File

@ -1,40 +0,0 @@
// Package io is for io management
package io
import (
"io"
"go.unistack.org/micro/v3/network/transport"
)
type rwc struct {
socket transport.Socket
}
func (r *rwc) Read(p []byte) (n int, err error) {
m := new(transport.Message)
if err := r.socket.Recv(m); err != nil {
return 0, err
}
copy(p, m.Body)
return len(m.Body), nil
}
func (r *rwc) Write(p []byte) (n int, err error) {
err = r.socket.Send(&transport.Message{
Body: p,
})
if err != nil {
return 0, err
}
return len(p), nil
}
func (r *rwc) Close() error {
return r.socket.Close()
}
// NewRWC returns a new ReadWriteCloser
func NewRWC(sock transport.Socket) io.ReadWriteCloser {
return &rwc{sock}
}

View File

@ -14,7 +14,7 @@ func Random(d time.Duration) time.Duration {
return time.Duration(v)
}
func RandomInterval(min, max time.Duration) time.Duration {
func RandomInterval(minTime, maxTime time.Duration) time.Duration {
var rng rand.Rand
return time.Duration(rng.Int63n(max.Nanoseconds()-min.Nanoseconds())+min.Nanoseconds()) * time.Nanosecond
return time.Duration(rng.Int63n(maxTime.Nanoseconds()-minTime.Nanoseconds())+minTime.Nanoseconds()) * time.Nanosecond
}
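A small sketch of using RandomInterval for retry backoff; the bounds are illustrative, and the import path assumes the helper lives in util/jitter (package jitter), matching the other util packages in this diff.

package example

import (
	"time"

	// assumed path, based on the util/jitter directory layout
	"go.unistack.org/micro/v3/util/jitter"
)

// retryDelay spreads retries over a randomized window to avoid synchronized retries.
func retryDelay(attempt int) time.Duration {
	base := time.Duration(100*(attempt+1)) * time.Millisecond
	return jitter.RandomInterval(base, 2*base)
}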

View File

@ -16,7 +16,6 @@ type Ticker struct {
C chan time.Time
min int64
max int64
exp int64
exit bool
rng rand.Rand
}
@ -24,12 +23,12 @@ type Ticker struct {
// NewTickerContext returns a pointer to an initialized instance of the Ticker.
// It works like NewTicker except that it can also be stopped via the context.
// It also works with context.WithTimeout to bound the total time the ticker runs.
func NewTickerContext(ctx context.Context, min, max time.Duration) *Ticker {
func NewTickerContext(ctx context.Context, minTime, maxTime time.Duration) *Ticker {
ticker := &Ticker{
C: make(chan time.Time),
done: make(chan chan struct{}),
min: min.Nanoseconds(),
max: max.Nanoseconds(),
min: minTime.Nanoseconds(),
max: maxTime.Nanoseconds(),
ctx: ctx,
}
go ticker.run()
@ -39,12 +38,12 @@ func NewTickerContext(ctx context.Context, min, max time.Duration) *Ticker {
// NewTicker returns a pointer to an initialized instance of the Ticker.
// Min and max are durations of the shortest and longest allowed
// ticks. Ticker will run in a goroutine until explicitly stopped.
func NewTicker(min, max time.Duration) *Ticker {
func NewTicker(minTime, maxTime time.Duration) *Ticker {
ticker := &Ticker{
C: make(chan time.Time),
done: make(chan chan struct{}),
min: min.Nanoseconds(),
max: max.Nanoseconds(),
min: minTime.Nanoseconds(),
max: maxTime.Nanoseconds(),
ctx: context.Background(),
}
go ticker.run()

View File

@ -31,26 +31,26 @@ loop:
func TestTicker(t *testing.T) {
t.Parallel()
min := time.Duration(10)
max := time.Duration(20)
minTime := time.Duration(10)
maxTime := time.Duration(20)
// tick can take a little longer since we're not adjusting it to account for
// processing.
precision := time.Duration(4)
rt := NewTicker(min*time.Millisecond, max*time.Millisecond)
rt := NewTicker(minTime*time.Millisecond, maxTime*time.Millisecond)
for i := 0; i < 5; i++ {
t0 := time.Now()
t1 := <-rt.C
td := t1.Sub(t0)
if td < min*time.Millisecond {
if td < minTime*time.Millisecond {
t.Fatalf("tick was shorter than expected: %s", td)
} else if td > (max+precision)*time.Millisecond {
} else if td > (maxTime+precision)*time.Millisecond {
t.Fatalf("tick was longer than expected: %s", td)
}
}
rt.Stop()
time.Sleep((max + precision) * time.Millisecond)
time.Sleep((maxTime + precision) * time.Millisecond)
select {
case v, ok := <-rt.C:
if ok || !v.IsZero() {

View File

@ -48,19 +48,19 @@ func Listen(addr string, fn func(string) (net.Listener, error)) (net.Listener, e
// we have a port range
// extract min port
min, err := strconv.Atoi(prange[0])
minPort, err := strconv.Atoi(prange[0])
if err != nil {
return nil, errors.New("unable to extract port range")
}
// extract max port
max, err := strconv.Atoi(prange[1])
maxPort, err := strconv.Atoi(prange[1])
if err != nil {
return nil, errors.New("unable to extract port range")
}
// range the ports
for port := min; port <= max; port++ {
for port := minPort; port <= maxPort; port++ {
// try bind to host:port
ln, err := fn(HostPort(host, port))
if err == nil {
@ -68,7 +68,7 @@ func Listen(addr string, fn func(string) (net.Listener, error)) (net.Listener, e
}
// hit max port
if port == max {
if port == maxPort {
return nil, err
}
}
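A brief sketch of the port-range form of Listen; the package is assumed to be util/net (aliased here as mnet to avoid clashing with the standard library), and the address range is illustrative.

package example

import (
	"net"

	// assumed path, based on the util/net directory layout
	mnet "go.unistack.org/micro/v3/util/net"
)

// listenAnyPort binds to the first free TCP port in the 8080-8090 range on localhost.
func listenAnyPort() (net.Listener, error) {
	return mnet.Listen("127.0.0.1:8080-8090", func(addr string) (net.Listener, error) {
		return net.Listen("tcp", addr)
	})
}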

View File

@ -1,118 +0,0 @@
package pool
import (
"context"
"sync"
"time"
"go.unistack.org/micro/v3/network/transport"
"go.unistack.org/micro/v3/util/id"
)
type pool struct {
tr transport.Transport
conns map[string][]*poolConn
size int
ttl time.Duration
sync.Mutex
}
type poolConn struct {
created time.Time
transport.Client
id string
}
func newPool(options Options) *pool {
return &pool{
size: options.Size,
tr: options.Transport,
ttl: options.TTL,
conns: make(map[string][]*poolConn),
}
}
func (p *pool) Close() error {
p.Lock()
for k, c := range p.conns {
for _, conn := range c {
conn.Client.Close()
}
delete(p.conns, k)
}
p.Unlock()
return nil
}
// NoOp the Close since we manage it
func (p *poolConn) Close() error {
return nil
}
func (p *poolConn) ID() string {
return p.id
}
func (p *poolConn) Created() time.Time {
return p.created
}
func (p *pool) Get(ctx context.Context, addr string, opts ...transport.DialOption) (Conn, error) {
p.Lock()
conns := p.conns[addr]
// while we have conns check age and then return one
// otherwise we'll create a new conn
for len(conns) > 0 {
conn := conns[len(conns)-1]
conns = conns[:len(conns)-1]
p.conns[addr] = conns
// if conn is old kill it and move on
if d := time.Since(conn.Created()); d > p.ttl {
conn.Client.Close()
continue
}
// we got a good conn, lets unlock and return it
p.Unlock()
return conn, nil
}
p.Unlock()
// create new conn
c, err := p.tr.Dial(ctx, addr, opts...)
if err != nil {
return nil, err
}
id, err := id.New()
if err != nil {
return nil, err
}
return &poolConn{
Client: c,
id: id,
created: time.Now(),
}, nil
}
func (p *pool) Release(conn Conn, err error) error {
// don't store the conn if it has errored
if err != nil {
return conn.(*poolConn).Client.Close()
}
// otherwise put it back for reuse
p.Lock()
conns := p.conns[conn.Remote()]
if len(conns) >= p.size {
p.Unlock()
return conn.(*poolConn).Client.Close()
}
p.conns[conn.Remote()] = append(conns, conn.(*poolConn))
p.Unlock()
return nil
}

View File

@ -1,92 +0,0 @@
//go:build ignore
// +build ignore
package pool
import (
"testing"
"time"
"go.unistack.org/micro/v3/network/transport"
"go.unistack.org/micro/v3/network/transport/memory"
)
func testPool(t *testing.T, size int, ttl time.Duration) {
// mock transport
tr := memory.NewTransport()
options := Options{
TTL: ttl,
Size: size,
Transport: tr,
}
// zero pool
p := newPool(options)
// listen
l, err := tr.Listen(":0")
if err != nil {
t.Fatal(err)
}
defer l.Close()
// accept loop
go func() {
for {
if err := l.Accept(func(s transport.Socket) {
for {
var msg transport.Message
if err := s.Recv(&msg); err != nil {
return
}
if err := s.Send(&msg); err != nil {
return
}
}
}); err != nil {
return
}
}
}()
for i := 0; i < 10; i++ {
// get a conn
c, err := p.Get(l.Addr())
if err != nil {
t.Fatal(err)
}
msg := &transport.Message{
Body: []byte(`hello world`),
}
if err := c.Send(msg); err != nil {
t.Fatal(err)
}
var rcv transport.Message
if err := c.Recv(&rcv); err != nil {
t.Fatal(err)
}
if string(rcv.Body) != string(msg.Body) {
t.Fatalf("got %v, expected %v", rcv.Body, msg.Body)
}
// release the conn
p.Release(c, nil)
p.Lock()
if i := len(p.conns[l.Addr()]); i > size {
p.Unlock()
t.Fatalf("pool size %d is greater than expected %d", i, size)
}
p.Unlock()
}
}
func TestClientPool(t *testing.T) {
testPool(t, 0, time.Minute)
testPool(t, 2, time.Minute)
}

Some files were not shown because too many files have changed in this diff.