Compare commits

..

235 Commits

Author SHA1 Message Date
0f8f12aee0 add tracer enabled status
Some checks failed
coverage / build (push) Successful in 2m52s
test / test (push) Failing after 18m53s
sync / sync (push) Successful in 26s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-19 09:33:01 +03:00
8b406cf963 util/buffer: add Reset() method
Some checks failed
coverage / build (push) Failing after 1m36s
test / test (push) Successful in 3m35s
sync / sync (push) Successful in 7s
closes #402

Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-12 19:18:45 +03:00
029a434a2b broker: pass broker content type if message options do not pass it
All checks were successful
coverage / build (push) Successful in 1m44s
test / test (push) Successful in 3m5s
sync / sync (push) Successful in 7s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-09 13:51:35 +03:00
vtolstov
847259bc39 Apply Code Coverage Badge 2025-05-09 09:36:02 +00:00
a1ee8728ad broker: add Content-Type and DefaultContentType
All checks were successful
coverage / build (push) Successful in 1m58s
sync / sync (push) Successful in 1m37s
test / test (push) Successful in 3m47s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-09 12:34:46 +03:00
88a5875cfb switch yaml package to maintained one
All checks were successful
coverage / build (push) Successful in 2m16s
test / test (push) Successful in 4m37s
sync / sync (push) Successful in 8s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-09 12:18:49 +03:00
03ee33040c util/id: switch to default uuid package
All checks were successful
coverage / build (push) Successful in 2m21s
test / test (push) Successful in 4m50s
sync / sync (push) Successful in 6s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-08 19:07:00 +03:00
0144f175f0 add comment
All checks were successful
coverage / build (push) Successful in 1m33s
test / test (push) Successful in 3m41s
sync / sync (push) Successful in 8s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-06 23:00:15 +03:00
b3539a32ab logger: add none level
closes #399

Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-06 23:00:15 +03:00
vtolstov
6a7223ea4a Apply Code Coverage Badge 2025-05-06 08:09:12 +00:00
1a1b67866a [v4] improve metadata documentation (#216)
All checks were successful
coverage / build (push) Successful in 1m47s
test / test (push) Successful in 2m50s
* add usage docs for context types and metadata, improve comments

* changes after review
2025-05-06 11:02:27 +03:00
b7c98da6d1 added commit hash check to avoid unnecessary repository cloning (#215)
All checks were successful
sync / sync (push) Successful in 16s
2025-05-05 14:53:28 +03:00
2c21cce076 tracer/noop: disable allocation for trace and span id
All checks were successful
coverage / build (push) Successful in 2m22s
test / test (push) Successful in 4m22s
sync / sync (push) Successful in 26s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-05 09:48:22 +03:00
c8946dcdc8 fix sync
All checks were successful
sync / sync (push) Successful in 24s
2025-05-04 15:03:22 +03:00
vtolstov
d342ff2626 Apply Code Coverage Badge 2025-05-02 06:23:44 +00:00
f2d0d67d4c hooks/requestid: fixup panic
All checks were successful
coverage / build (push) Successful in 2m0s
test / test (push) Successful in 2m33s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-02 09:22:19 +03:00
677dc30af0 [v4] update ci (#213)
All checks were successful
sync / sync (push) Successful in 41s
* update ci

* cleanup
2025-05-01 19:15:06 +03:00
vtolstov
7122cc873c Apply Code Coverage Badge 2025-05-01 15:58:29 +00:00
77e370ffdc move wrapper.sql to hook/sql (#401)
All checks were successful
coverage / build (push) Successful in 2m3s
test / test (push) Successful in 2m42s
move micro-wrapper-sql to core repo

Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #401
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2025-05-01 18:56:34 +03:00
vtolstov
7b1c42e50b Apply Code Coverage Badge 2025-04-29 20:18:49 +00:00
f3b9493ac3 hooks/metadata: fix
All checks were successful
coverage / build (push) Successful in 1m20s
test / test (push) Successful in 1m57s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 23:18:10 +03:00
e4ee705eb2 metadata: sync with grpc
Some checks failed
coverage / build (push) Failing after 41s
test / test (push) Successful in 2m31s
sync / sync (push) Successful in 46s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 23:13:57 +03:00
7ff7a3dbe0 update all
All checks were successful
sync / sync (push) Successful in 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 18:29:26 +03:00
7af5147f4b update all
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 18:28:33 +03:00
394fd16243 update all
All checks were successful
sync / sync (push) Successful in 36s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 18:16:16 +03:00
2b08c8f682 fixup coverage job
All checks were successful
sync / sync (push) Successful in 43s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 14:13:56 +03:00
f9a7f62d02 fixup workflow
Some checks failed
sync / sync (push) Failing after 1m9s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 14:06:43 +03:00
vtolstov
f5aedf5951 Apply Code Coverage Badge 2025-04-29 10:54:56 +00:00
a5ef231171 cleanup metadata
All checks were successful
sync / sync (push) Successful in 1m20s
coverage / build (push) Successful in 1m23s
test / test (push) Successful in 2m42s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 13:54:16 +03:00
23f2ee9bb7 fixup hooks
All checks were successful
coverage / build (push) Successful in 3m12s
test / test (push) Successful in 5m4s
sync / sync (push) Successful in 31s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 13:09:56 +03:00
88606e89ca fixup metadata
Some checks are pending
coverage / build (push) Waiting to run
test / test (push) Waiting to run
sync / sync (push) Has started running
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 13:01:37 +03:00
vtolstov
24efbb68bf Apply Code Coverage Badge
All checks were successful
coverage / build (push) Successful in 2m4s
test / test (push) Successful in 3m28s
sync / sync (push) Successful in 25s
2025-04-27 19:16:46 +00:00
vtolstov
cecdaa0fed Apply Code Coverage Badge 2025-04-27 19:13:06 +00:00
vtolstov
9627995cee Apply Code Coverage Badge
All checks were successful
sync / sync (push) Successful in 1m54s
coverage / build (push) Successful in 1m57s
test / test (push) Successful in 2m27s
2025-04-27 19:07:59 +00:00
vtolstov
0f3539dc7b Apply Code Coverage Badge 2025-04-27 19:06:03 +00:00
ff414eff2e [v4] fix flatten map util function (#211)
All checks were successful
sync / sync (push) Successful in 1m51s
coverage / build (push) Successful in 2m17s
test / test (push) Successful in 4m29s
* add the fixed version of FlattenMap() and corresponding tests
* replaced the old FlattenMap() implementation with a new one
2025-04-27 21:44:24 +03:00
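For context, "flattening" a nested map generally means collapsing nested keys into joined path keys. The sketch below illustrates that idea only; it is not the repository's actual FlattenMap() implementation, and the function name, key separator, and map types are assumptions.

```go
package main

import "fmt"

// flatten collapses nested map[string]interface{} values into a single-level
// map whose keys are the joined path segments ("broker.addr").
func flatten(prefix string, in map[string]interface{}, out map[string]interface{}) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		if nested, ok := v.(map[string]interface{}); ok {
			flatten(key, nested, out)
			continue
		}
		out[key] = v
	}
}

func main() {
	src := map[string]interface{}{
		"broker": map[string]interface{}{"addr": "127.0.0.1", "port": 9092},
		"name":   "svc",
	}
	dst := map[string]interface{}{}
	flatten("", src, dst)
	fmt.Println(dst) // map[broker.addr:127.0.0.1 broker.port:9092 name:svc]
}
```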
vtolstov
fbf6832738 Apply Code Coverage Badge
Some checks are pending
coverage / build (push) Successful in 6m34s
test / test (push) Successful in 8m23s
sync / sync (push) Has started running
2025-04-27 18:21:57 +00:00
vtolstov
59ff1f931b Apply Code Coverage Badge 2025-04-27 18:19:31 +00:00
2030bd2803 attempt to fix coverage/lint/test job (#210) 2025-04-27 21:12:16 +03:00
bb87a87ae5 improve sync
All checks were successful
sync / sync (push) Successful in 13s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 21:02:19 +03:00
0bd5aed7cc rename workflow
All checks were successful
sync / sync (push) Successful in 10s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 16:08:57 +03:00
434798a574 check actions env
All checks were successful
sync / sync (push) Successful in 9s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 16:00:00 +03:00
459a951115 check actions env
All checks were successful
syncpull / pull (push) Successful in 9s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 15:49:50 +03:00
770c2715d4 check actions env
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 15:48:57 +03:00
c93286afd5 [v4] rename .gitea to .github
All checks were successful
syncpull / pull (push) Successful in 10s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 14:10:57 +03:00
vtolstov
6bf118d978 Apply Code Coverage Badge 2025-04-27 11:05:07 +00:00
7493de1168 move hooks (#398)
All checks were successful
coverage / build (push) Successful in 1m18s
test / test (push) Successful in 2m5s
## Pull Request template
Please, go through these steps before clicking submit on this PR.

1. Give a descriptive title to your PR.
2. Provide a description of your changes.
3. Make sure you have some relevant tests.
4. Put `closes #XXXX` in your comment to auto-close the issue that your PR fixes (if applicable).

**PLEASE REMOVE THIS TEMPLATE BEFORE SUBMITTING**

Reviewed-on: #398
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2025-04-27 14:04:33 +03:00
vtolstov
212a685b50 Apply Code Coverage Badge 2025-04-27 10:58:10 +00:00
3f21bafc2f fixup lint
All checks were successful
coverage / build (push) Successful in 1m17s
test / test (push) Successful in 2m51s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 13:57:42 +03:00
a9ed8b16c1 skip on needed changes
All checks were successful
syncpull / pull (push) Successful in 10s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 13:40:33 +03:00
vtolstov
740cd5931d Apply Code Coverage Badge 2025-04-27 10:39:03 +00:00
85a78063d0 fix panic on shutdown caused by double channel close (#209)
All checks were successful
coverage / build (push) Successful in 2m35s
test / test (push) Successful in 3m43s
2025-04-27 13:37:18 +03:00
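Closing an already-closed channel panics in Go, so shutdown paths that may run twice are usually guarded. A minimal sketch of one common guard (sync.Once); this is illustrative and not necessarily the exact fix from the commit.

```go
package main

import "sync"

// closer wraps a done channel so Close can be called from multiple
// shutdown paths without panicking on a second close.
type closer struct {
	once sync.Once
	done chan struct{}
}

func newCloser() *closer { return &closer{done: make(chan struct{})} }

func (c *closer) Close() {
	c.once.Do(func() { close(c.done) }) // only the first call closes the channel
}

func main() {
	c := newCloser()
	c.Close()
	c.Close() // safe: no "close of closed channel" panic
	<-c.done
}
```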
604ad9cd9d check sync action
All checks were successful
syncpull / pull (push) Successful in 18s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 13:36:11 +03:00
91137537a2 check sync action
All checks were successful
syncpull / pull (push) Successful in 11s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 13:23:33 +03:00
950e2352fd check sync action
Some checks failed
syncpull / pull (push) Failing after 10s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 13:17:27 +03:00
0bb29b29cf check sync action
Some checks failed
syncpull / pull (push) Failing after 5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 12:38:02 +03:00
17bcd0b0ab check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 12:37:17 +03:00
20f9f4da3b check sync action
Some checks failed
syncpull / pull (push) Failing after 8s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 12:34:28 +03:00
66fa04b8dc check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 12:33:09 +03:00
1ef3ad6531 check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 12:32:22 +03:00
c95a91349d check sync action
Some checks failed
syncpull / pull (push) Failing after 5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 12:30:52 +03:00
fdcf8e6ca4 check sync action
Some checks failed
syncpull / pull (push) Failing after 5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 12:25:27 +03:00
8cb2d9db4a check sync action
Some checks failed
syncpull / pull (push) Failing after 5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:28:50 +03:00
04da4388ac check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:27:26 +03:00
79fb23e644 check sync action
Some checks failed
syncpull / pull (push) Failing after 5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:24:40 +03:00
f8fe923ab1 check sync action
Some checks failed
syncpull / pull (push) Failing after 5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:21:25 +03:00
105f56dbfe check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:20:21 +03:00
9fed5a368b check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:19:58 +03:00
7374d41cf8 check sync action
Some checks failed
syncpull / pull (push) Failing after 6s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:15:32 +03:00
a4a8935c1f check sync action
Some checks failed
syncpull / pull (push) Failing after 6s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:11:39 +03:00
5f498c8232 check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:10:06 +03:00
a00fdf679b check sync action
Some checks failed
syncpull / pull (push) Failing after 6s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 11:05:22 +03:00
dc9ebe4155 check sync action
Some checks failed
syncpull / pull (push) Failing after 7s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 10:57:51 +03:00
87ced484b7 check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 10:56:34 +03:00
af99b11a59 check sync action
Some checks failed
syncpull / pull (push) Failing after 7s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 10:52:20 +03:00
2724b51f7c check sync action
Some checks failed
syncpull / pull (push) Failing after 4s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 10:41:22 +03:00
5b5d0e02b9 check sync action
Some checks failed
syncpull / pull (push) Failing after 7s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 10:38:36 +03:00
afc2de6819 check sync action
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 10:37:01 +03:00
32a8ab9c05 check sync action
Some checks failed
syncpull / pull (push) Failing after 5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 10:34:25 +03:00
vtolstov
7e5401bded Apply Code Coverage Badge 2025-04-27 06:30:32 +00:00
64b91cea06 check sync action
All checks were successful
coverage / build (push) Successful in 1m18s
test / test (push) Successful in 2m5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 09:29:33 +03:00
vtolstov
0f59fdcbde Apply Code Coverage Badge 2025-04-27 06:29:19 +00:00
50979e6708 check sync action
Some checks failed
coverage / build (push) Has started running
test / test (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 09:28:17 +03:00
46f3108870 check sync action
Some checks failed
coverage / build (push) Has been cancelled
test / test (push) Has started running
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 09:27:35 +03:00
vtolstov
5fed91a65f Apply Code Coverage Badge 2025-04-27 06:25:16 +00:00
1c5bba908d check sync action
All checks were successful
coverage / build (push) Successful in 1m14s
test / test (push) Successful in 2m3s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 09:24:41 +03:00
vtolstov
bc8ebdcad5 Apply Code Coverage Badge 2025-04-24 11:55:11 +00:00
fc24f3af92 metadata: add AsMap func
All checks were successful
coverage / build (push) Successful in 1m18s
test / test (push) Successful in 2m3s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-24 14:54:37 +03:00
1058177d1c Delete SECURITY.md 2025-04-22 15:54:19 +03:00
vtolstov
fa53fac085 Apply Code Coverage Badge 2025-04-13 21:03:53 +00:00
8c060df5e3 tracer: add IsRecording to span interface
All checks were successful
coverage / build (push) Successful in 2m0s
test / test (push) Successful in 4m29s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-14 00:02:45 +03:00
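An IsRecording check typically lets callers skip building expensive span attributes when the span is a no-op. A hypothetical usage sketch; the span interface and method names below are assumptions, not the package's API.

```go
package main

import "fmt"

// Span is a minimal stand-in for a tracer span that can report whether it
// records data; the method names here are illustrative only.
type Span interface {
	IsRecording() bool
	AddLabels(kv ...interface{})
}

type noopSpan struct{}

func (noopSpan) IsRecording() bool          { return false }
func (noopSpan) AddLabels(_ ...interface{}) {}

func annotate(sp Span, payload []byte) {
	if !sp.IsRecording() {
		return // skip building attributes for no-op spans
	}
	sp.AddLabels("payload_size", len(payload))
}

func main() {
	annotate(noopSpan{}, []byte("hello"))
	fmt.Println("labels added only when the span records")
}
```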
e1f8c62685 broker: add SetPublishOption
All checks were successful
coverage / build (push) Successful in 1m33s
test / test (push) Successful in 2m15s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-03-07 18:21:57 +03:00
562b1ab9b7 broker: simplify handler check
All checks were successful
coverage / build (push) Successful in 1m19s
test / test (push) Successful in 2m5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-03-07 15:26:20 +03:00
vtolstov
f3c877a37b Apply Code Coverage Badge 2025-03-06 19:19:52 +00:00
0999b2ad78 remove debug
All checks were successful
coverage / build (push) Successful in 1m31s
test / test (push) Successful in 4m2s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-03-06 22:18:50 +03:00
a365513177 logger: fixup WithAddFields
All checks were successful
coverage / build (push) Successful in 1m43s
test / test (push) Successful in 4m9s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-02-21 18:12:44 +03:00
vtolstov
d1e3f3cab2 Apply Code Coverage Badge 2025-02-20 06:12:44 +00:00
ec94a09417 fixup old deps
All checks were successful
coverage / build (push) Successful in 58s
test / test (push) Successful in 4m16s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-02-20 09:08:46 +03:00
1728b88e06 logger/slog: fixup stacktrace
Some checks failed
coverage / build (push) Failing after 1m11s
test / test (push) Successful in 3m25s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-02-06 16:22:11 +03:00
d3c31da9db util/buffer: rework
Some checks failed
coverage / build (push) Failing after 1m13s
test / test (push) Successful in 3m51s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-02-06 13:46:06 +03:00
59095681be import flow
Some checks failed
coverage / build (push) Failing after 33s
test / test (push) Successful in 3m4s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-01-31 18:47:17 +03:00
f11ceba225 options: improve options handling
Some checks failed
coverage / build (push) Failing after 1m5s
test / test (push) Successful in 2m2s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-01-30 23:35:06 +03:00
ffa01de78f broker: refactor (#396)
All checks were successful
coverage / build (push) Successful in 1m6s
test / test (push) Successful in 2m2s
* remove subscribe from server
* remove publish from client
* broker package refactoring

Co-authored-by: vtolstov <vtolstov@users.noreply.github.com>
Reviewed-on: #396
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-01-30 23:26:45 +03:00
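The refactor moves publishing onto the broker itself; the broker.go diff further down shows the new interface, where a message is built with NewMessage and sent via a variadic Publish. A rough usage sketch against that interface; the import path follows the diff, while the nil header, body value, and topic are placeholders and option handling is omitted.

```go
package example

import (
	"context"

	"go.unistack.org/micro/v4/broker" // import path as shown in the broker.go diff below
)

// pub sketches publishing after the refactor: build a Message with NewMessage,
// then send it with the variadic Publish. Whether a nil header is accepted is
// an assumption made for brevity.
func pub(ctx context.Context, b broker.Broker) error {
	msg, err := b.NewMessage(ctx, nil, map[string]string{"hello": "world"})
	if err != nil {
		return err
	}
	return b.Publish(ctx, "events", msg)
}
```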
816abc2bbc add copy metadata from grpc-go (#386)
Some checks failed
coverage / build (push) Failing after 1m5s
test / test (push) Successful in 2m10s
Co-authored-by: Василий Толстов <v.tolstov@unistack.org>
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-authored-by: vtolstov <vtolstov@users.noreply.github.com>
Reviewed-on: #386
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2025-01-25 15:57:55 +03:00
f3f2a9b737 move to v4
Some checks failed
coverage / build (push) Failing after 56s
test / test (push) Successful in 2m30s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-01-25 15:48:10 +03:00
3f82cb3ba4 Update README.md
All checks were successful
coverage / build (push) Successful in 1m31s
test / test (push) Successful in 2m34s
2025-01-18 15:35:52 +03:00
vtolstov
306b7a3962 Apply Code Coverage Badge 2025-01-17 12:58:03 +00:00
a8eda9d58d Merge pull request 'move set content-type in client publish' (#394) from devstigneev/micro:v3_publish_bug into v3
All checks were successful
coverage / build (push) Successful in 1m19s
test / test (push) Successful in 2m13s
Reviewed-on: #394
2025-01-17 15:57:30 +03:00
7e4477dcb4 move set content-type in client publish
Some checks failed
test / test (pull_request) Successful in 3m40s
lint / lint (pull_request) Successful in 45s
coverage / build (pull_request) Failing after 26s
2025-01-17 15:38:53 +03:00
vtolstov
d846044fc6 Apply Code Coverage Badge 2025-01-04 16:10:26 +00:00
29d956e74e fix readme
All checks were successful
coverage / build (push) Successful in 59s
test / test (push) Successful in 3m27s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-01-04 19:09:50 +03:00
fcc4faff8a fix godoc link
All checks were successful
coverage / build (push) Successful in 56s
test / test (push) Successful in 3m25s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-01-04 18:57:02 +03:00
5df8f83f45 badges (#392)
Some checks failed
coverage / build (push) Successful in 57s
test / test (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-authored-by: vtolstov <vtolstov@users.noreply.github.com>
Reviewed-on: #392
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-01-04 18:53:57 +03:00
vtolstov
27fa6e9173 Apply Code Coverage Badge 2024-12-28 22:58:19 +00:00
bd55a35dc3 logger/slog: add delayed buffer test
All checks were successful
test / test (push) Successful in 3m33s
coverage / build (push) Successful in 8m22s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-29 01:57:41 +03:00
653bd386cc util/buffer: add DelayedBuffer
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-29 01:57:41 +03:00
vtolstov
558c6f4d7c Apply Code Coverage Badge 2024-12-28 11:56:07 +00:00
d7dd6fbeb2 register/memory: fix build
All checks were successful
test / test (push) Successful in 3m35s
coverage / build (push) Successful in 8m22s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-28 14:55:20 +03:00
a00cf2c8d9 register: watcher fixes
Some checks failed
coverage / build (push) Failing after 55s
test / test (push) Successful in 3m39s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-28 14:51:10 +03:00
vtolstov
a3e8ab2492 Apply Code Coverage Badge 2024-12-27 20:57:08 +00:00
06da500ef4 register: cleanup
All checks were successful
test / test (push) Successful in 3m33s
coverage / build (push) Successful in 9m11s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-27 23:56:27 +03:00
277f04ba19 register: add Codec option
All checks were successful
coverage / build (push) Successful in 1m3s
test / test (push) Successful in 3m31s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-27 19:33:50 +03:00
vtolstov
470263ff5f Apply Code Coverage Badge 2024-12-27 16:14:00 +00:00
b8232e02be register: add ListName option
All checks were successful
test / test (push) Successful in 3m50s
coverage / build (push) Successful in 8m15s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-27 19:12:57 +03:00
vtolstov
f8c5e10c1d Apply Code Coverage Badge 2024-12-26 22:21:35 +00:00
397e71f815 Merge pull request 'register: improvements' (#390) from register into v3
All checks were successful
test / test (push) Successful in 3m50s
coverage / build (push) Successful in 13m40s
Reviewed-on: #390
2024-12-27 01:20:49 +03:00
74e31d99f6 fixup
Some checks failed
lint / lint (pull_request) Successful in 1m10s
test / test (pull_request) Successful in 3m45s
coverage / build (pull_request) Failing after 12m5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-27 01:16:22 +03:00
f39de15d93 fixup
Some checks failed
test / test (pull_request) Failing after 55s
coverage / build (pull_request) Failing after 1m1s
lint / lint (pull_request) Successful in 1m4s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-27 01:12:29 +03:00
d291102877 register: improvements
Some checks failed
coverage / build (pull_request) Failing after 1m33s
lint / lint (pull_request) Successful in 1m52s
test / test (pull_request) Successful in 4m11s
* change domain to namespace

* lower go.mod deps

Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-27 01:08:00 +03:00
37ffbb18d8 lower go.deps
Some checks failed
coverage / build (push) Failing after 30s
test / test (push) Successful in 4m5s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-26 08:35:58 +03:00
9a85dead86 lower go.deps
Some checks failed
coverage / build (push) Failing after 23s
test / test (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-26 08:32:59 +03:00
a489aab1c3 Merge pull request 'logger/slog: fixed time len field' (#389) from logger-slog into v3
Some checks failed
coverage / build (push) Failing after 47s
test / test (push) Successful in 7m10s
Reviewed-on: #389
2024-12-24 20:51:47 +03:00
d160664ef1 fixup test
Some checks failed
lint / lint (pull_request) Successful in 1m25s
coverage / build (pull_request) Failing after 1m25s
test / test (pull_request) Successful in 5m30s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-24 20:45:53 +03:00
fa868edcaa logger/slog: fixed time len field
Some checks failed
lint / lint (pull_request) Successful in 1m22s
test / test (pull_request) Successful in 4m50s
coverage / build (pull_request) Failing after 29s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-24 20:36:32 +03:00
vtolstov
6ed0b0e090 Apply Code Coverage Badge 2024-12-23 18:18:20 +00:00
533b265d19 add codec.RawMessage support
All checks were successful
test / test (push) Successful in 3m41s
coverage / build (push) Successful in 8m22s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-23 21:17:32 +03:00
1ace2631a4 Merge pull request 'codec: add yaml support' (#388) from codec-yaml into v3
Some checks failed
test / test (push) Failing after 13m17s
coverage / build (push) Failing after 13m26s
Reviewed-on: #388
2024-12-23 19:08:47 +03:00
3dd5ca68d1 codec: add yaml support
Some checks failed
lint / lint (pull_request) Successful in 1m38s
test / test (pull_request) Successful in 4m17s
coverage / build (pull_request) Failing after 8m42s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-23 19:08:21 +03:00
66ccd6021f Merge pull request 'codec: add yaml support' (#387) from codec-yaml into v3
Some checks failed
test / test (push) Successful in 2m28s
coverage / build (push) Failing after 13m22s
Reviewed-on: #387
2024-12-23 18:39:03 +03:00
e5346f7e4f codec: add yaml support
All checks were successful
lint / lint (pull_request) Successful in 1m31s
coverage / build (pull_request) Successful in 3m10s
test / test (pull_request) Successful in 4m1s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-23 18:38:01 +03:00
vtolstov
daf19f031a Apply Code Coverage Badge 2024-12-23 08:01:49 +00:00
5989fd54ca Merge pull request 'util/id: add uuid helper func' (#385) from uuid into v3
Some checks failed
coverage / build (push) Successful in 9m9s
test / test (push) Failing after 12m2s
Reviewed-on: #385
2024-12-23 11:00:04 +03:00
ed30c26324 util/id: add uuid helper func
Some checks failed
lint / lint (pull_request) Successful in 1m26s
test / test (pull_request) Successful in 4m2s
coverage / build (pull_request) Failing after 9m29s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-23 10:59:29 +03:00
0f8f93d09a Update README.md
Some checks failed
coverage / build (push) Failing after 1m1s
test / test (push) Successful in 3m46s
2024-12-22 23:42:47 +03:00
vtolstov
f460e2f8dd Apply Code Coverage Badge 2024-12-22 20:39:05 +00:00
70d6a79274 add coverage badge (#383)
Some checks failed
test / test (push) Successful in 3m37s
coverage / build (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #383
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-22 23:38:28 +03:00
664b1586af util/id: add uuid v8 (#382)
All checks were successful
test / test (push) Successful in 3m25s
* util/id: add ability to specify what kind of id generate (nanoid/uuid v8)
* logger/slog: write stacktrace always on fatal
* logger/slog: try to close Out and sleep 1s

Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #382
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-22 22:23:00 +03:00
8d747c64a8 Merge pull request 'tracer: minimize overhead on noop tracer usage' (#381) from tr into v3
All checks were successful
test / test (push) Successful in 4m20s
Reviewed-on: #381
2024-12-19 19:08:22 +03:00
94beb5ed3b tracer: minimize overhead on noop tracer usage
All checks were successful
lint / lint (pull_request) Successful in 1m5s
test / test (pull_request) Successful in 3m29s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-19 18:47:03 +03:00
98981ba86c Merge pull request 'metadata: add Copy method, fix old methods' (#379) from md into v3
All checks were successful
test / test (push) Successful in 3m23s
Reviewed-on: #379
2024-12-19 16:21:09 +03:00
1013f50d0e metadata: add Copy method, fix old methods
All checks were successful
lint / lint (pull_request) Successful in 1m7s
test / test (pull_request) Successful in 3m30s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-19 16:16:07 +03:00
0b190997b1 metadata: add Copy method, fix old methods
Some checks failed
lint / lint (pull_request) Failing after 55s
test / test (pull_request) Successful in 3m33s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-19 16:03:23 +03:00
69a44eb190 correcting hooks calling (#376)
All checks were successful
test / test (push) Successful in 3m30s
Reviewed-on: #376
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-12-18 20:31:07 +03:00
0476028f69 Merge pull request 'add context Must methods' (#377) from context into v3
All checks were successful
test / test (push) Successful in 3m46s
Reviewed-on: #377
2024-12-18 01:38:25 +03:00
330d8b149a add context Must methods
All checks were successful
lint / lint (pull_request) Successful in 1m39s
test / test (pull_request) Successful in 4m12s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-18 01:31:21 +03:00
19b04fe070 metadata: add MustGet func
All checks were successful
test / test (push) Successful in 1h34m51s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-15 23:45:17 +03:00
4cd55875c6 add Must*Context methods
All checks were successful
test / test (push) Successful in 11m55s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-13 17:02:57 +03:00
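Must-style helpers, as in the MustGet and Must*Context commits above, conventionally wrap a (value, ok) lookup and panic when the value is absent, trading an error check for a hard failure at call sites where absence is a programming error. A generic sketch of the pattern; the names and signatures below are not the package's actual ones.

```go
package example

import "context"

type ctxKey struct{}

// FromContext is the usual two-value accessor.
func FromContext(ctx context.Context) (string, bool) {
	v, ok := ctx.Value(ctxKey{}).(string)
	return v, ok
}

// MustFromContext follows the Must* convention: return the value or panic
// when it is missing.
func MustFromContext(ctx context.Context) string {
	v, ok := FromContext(ctx)
	if !ok {
		panic("value not set in context")
	}
	return v
}
```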
a7896cc728 Merge pull request 'logger/slog: fix dedup keys' (#374) from loggerfix into v3
All checks were successful
test / test (push) Successful in 11m49s
Reviewed-on: #374
2024-12-13 01:05:22 +03:00
ff991bf49c logger/slog: fix dedup keys
All checks were successful
lint / lint (pull_request) Successful in 1m32s
test / test (pull_request) Successful in 12m16s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-13 01:04:55 +03:00
5a6551b703 Merge pull request 'logger: improvements' (#373) from logger-dedup into v3
All checks were successful
test / test (push) Successful in 12m27s
Reviewed-on: #373
2024-12-13 00:28:28 +03:00
9406a33d60 logger: improvements
All checks were successful
lint / lint (pull_request) Successful in 1m51s
test / test (pull_request) Successful in 12m57s
* logger: add WithDedupKeys option

Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-13 00:24:11 +03:00
8f185abd9d Update README.md
All checks were successful
test / test (push) Successful in 1m49s
2024-12-11 00:25:16 +03:00
86492e0644 Update README.md
All checks were successful
test / test (push) Successful in 1m42s
2024-12-11 00:23:14 +03:00
b21972964a Merge pull request 'micro-tests' (#372) from micro-tests into v3
All checks were successful
test / test (push) Successful in 1m42s
Reviewed-on: #372
2024-12-10 23:45:54 +03:00
f5ee065d09 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 1m7s
test / test (pull_request) Successful in 2m47s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:21:45 +03:00
8cb02f2b08 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 51s
test / test (pull_request) Failing after 1m55s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:17:51 +03:00
bc926cd6bd add micro-tests trigger
All checks were successful
test / test (pull_request) Successful in 39s
lint / lint (pull_request) Successful in 43s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:15:40 +03:00
356abfd818 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 48s
test / test (pull_request) Failing after 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:08:10 +03:00
18444d3f98 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Failing after 36s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 20:05:10 +03:00
d5f07922e8 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 1m19s
test / test (pull_request) Successful in 1m16s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 19:57:27 +03:00
675e717410 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Successful in 40s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 18:06:17 +03:00
7b6aea235a add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 46s
test / test (pull_request) Successful in 43s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:39:36 +03:00
2cb7200467 add micro-tests trigger
All checks were successful
test / test (pull_request) Successful in 36s
lint / lint (pull_request) Successful in 44s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:36:23 +03:00
f430f97a97 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Successful in 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:30:03 +03:00
0060c4377a add micro-tests trigger
All checks were successful
test / test (pull_request) Successful in 39s
lint / lint (pull_request) Successful in 45s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:27:28 +03:00
e46579fe9a add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 44s
test / test (pull_request) Failing after 37s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:14:54 +03:00
ca52973194 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Successful in 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 17:08:14 +03:00
5bb33c7e1d add micro-tests trigger
Some checks failed
test / test (pull_request) Failing after 33s
lint / lint (pull_request) Successful in 44s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:52:15 +03:00
b71fc25328 add micro-tests trigger
All checks were successful
lint / lint (pull_request) Successful in 44s
test / test (pull_request) Successful in 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:46:40 +03:00
9345dd075a add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 46s
test / test (pull_request) Failing after 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:16:16 +03:00
1c1b9c0a28 add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 1m9s
test / test (pull_request) Failing after 1m38s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 16:02:43 +03:00
2969494c5a Merge branch 'v3' into micro-tests
Some checks failed
test / test (pull_request) Failing after 43s
lint / lint (pull_request) Failing after 14m11s
2024-12-10 15:59:30 +03:00
cbd3fa38ba add micro-tests trigger
Some checks failed
lint / lint (pull_request) Successful in 43s
test / test (pull_request) Failing after 58s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 15:58:29 +03:00
569a36383d workflow fix (#371)
All checks were successful
test / test (push) Successful in 30s
Reviewed-on: #371
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 01:40:56 +03:00
90bed77526 workflow improve
All checks were successful
test / test (pull_request) Successful in 33s
lint / lint (pull_request) Successful in 1m11s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 01:38:30 +03:00
ba4478a5e0 workflow improve
Some checks failed
lint / lint (pull_request) Failing after 18m59s
test / test (pull_request) Successful in 1m20s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 00:59:54 +03:00
6dc76cdfea workflow improve
Some checks failed
lint / lint (pull_request) Has been cancelled
test / test (pull_request) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-10 00:56:52 +03:00
e266683d96 Merge branch 'v3' of https://git.unistack.org/unistack-org/micro into v3 2024-12-10 00:51:37 +03:00
2b62ad04f2 metadata: fix for grpc case (#370)
All checks were successful
test / test (push) Successful in 42s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #370
2024-12-09 19:06:49 +03:00
275b0a64e5 metadata: fix for grpc case
Some checks failed
test / test (pull_request) Successful in 46s
lint / lint (pull_request) Failing after 10m4s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-09 18:02:09 +03:00
38c5fe8b5a fixed struct alignment && refactor linter (#369)
All checks were successful
test / test (push) Successful in 42s
## Pull Request template
Please, go through these steps before clicking submit on this PR.

1. Give a descriptive title to your PR.
2. Provide a description of your changes.
3. Make sure you have some relevant tests.
4. Put `closes #XXXX` in your comment to auto-close the issue that your PR fixes (if applicable).

**PLEASE REMOVE THIS TEMPLATE BEFORE SUBMITTING**

Reviewed-on: #369
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-12-09 16:23:25 +03:00
b6a0e4d983 add metrics for dns
All checks were successful
test / test (push) Successful in 46s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-09 00:41:08 +03:00
d9b822deff logger/slog: add ability to pass func that creates slog.Handler compatible interface
All checks were successful
test / test (push) Successful in 42s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-07 16:16:45 +03:00
0e66688f8f logger/slog: add option to pass slog.Handler
All checks were successful
test / test (push) Successful in 45s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-07 14:19:29 +03:00
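Both of these logger/slog commits revolve around the standard library's log/slog handlers. Below is a stdlib-only example of a factory function that returns an slog.Handler, which is the kind of value a "pass a func that creates a slog.Handler" option would accept; the option itself and its name are not shown here.

```go
package main

import (
	"io"
	"log/slog"
	"os"
)

// newHandler builds an slog.Handler; a handler-factory option would take
// something of roughly this shape.
func newHandler(w io.Writer) slog.Handler {
	return slog.NewJSONHandler(w, &slog.HandlerOptions{Level: slog.LevelInfo})
}

func main() {
	logger := slog.New(newHandler(os.Stderr))
	logger.Info("handler injected", "component", "logger/slog")
}
```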
9213fd212f Update .gitea/workflows/job_lint.yml
All checks were successful
test / test (push) Successful in 51s
2024-12-06 23:13:24 +03:00
aa360dcf51 Update .gitea/workflows/job_lint.yml
All checks were successful
test / test (push) Successful in 50s
2024-12-06 23:11:48 +03:00
2df259b5b8 Update .gitea/workflows/job_test.yml
Some checks failed
test / test (push) Has been cancelled
2024-12-06 23:11:32 +03:00
15e9310368 Merge pull request 'Update actions' (#368) from atolstikhin/micro:v3 into v3
All checks were successful
test / test (push) Successful in 1m1s
Reviewed-on: #368
2024-12-06 23:08:50 +03:00
Aleksandr Tolstikhin
16d8cf3434 Update actions
All checks were successful
test / test (pull_request) Successful in 1m4s
lint / lint (pull_request) Successful in 10m11s
2024-12-07 02:37:12 +07:00
9704ef2e5e fix pipeline (#365)
Co-authored-by: Aleksandr Tolstikhin <atolstikhin@mtsbank.ru>
Reviewed-on: #365
Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Co-committed-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-06 19:05:27 +03:00
94e8f90f00 Merge pull request 'removed create empty out/ingoing metadata' (#364) from devstigneev/micro:v3 into v3
Reviewed-on: #364
Reviewed-by: Василий Толстов <v.tolstov@unistack.org>
2024-12-06 12:49:25 +03:00
34d1587881 removed create empty out/ingoing metadata
Some checks failed
lint / lint (pull_request) Has been cancelled
pr / test (pull_request) Has been cancelled
2024-12-06 00:08:03 +03:00
bf4143cde5 replace default go resolver with caching resolver
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-03 01:11:08 +03:00
36b7b9f5fb add Live/Ready/Health methods
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-02 13:20:13 +03:00
ae97023092 store: updates for Watcher
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-01 19:54:38 +03:00
115ca6a018 logger: add WithAddFields option
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-29 15:34:02 +03:00
89cf4ef8af store: add missing LazyConnect option
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-26 17:48:09 +03:00
2a6ce6d4da add using lazy connect (#361)
#357

Co-authored-by: Василий Толстов <v.tolstov@unistack.org>
Reviewed-on: #361
Reviewed-by: Василий Толстов <v.tolstov@unistack.org>
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-11-26 12:18:17 +03:00
ad19fe2b90 logger/slog: fix race condition with Enabled and Level
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-24 23:40:54 +03:00
49055a28ea logger/slog: wrap handler
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-24 23:28:15 +03:00
d1c6e121c1 logger/slog: fix Clone and Fields methods
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-24 15:31:40 +03:00
7cd7fb0c0a disable logging for automaxprocs
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-20 22:35:36 +03:00
77eb5b5264 add yaml support
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-01 11:23:29 +03:00
929e46c087 improve slog
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-01 00:56:40 +03:00
1fb5673d27 fixup graceful stop
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-10-25 17:21:54 +03:00
3bbb0cbc72 update slog/logger (#351)
Changed (logger methods without formatting):

- Added preparation and alignment of attributes for the logger
- Alignment is achieved by adding !BADKEY before log/slog processing
- The Log method is now reused
- Removed the [Logf, Infof, Debugf, Errorf, Warnf, Fatalf, Tracef] methods
- Updated unit tests
- Removed the wrapper in the logger package
- Changed the logger interface
- Refactored logger calls throughout micro

Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #351
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2024-10-12 12:37:43 +03:00
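The !BADKEY mention above refers to standard log/slog behavior: when variadic attributes do not pair up as key/value, slog reports the stray value under the key !BADKEY. A small stdlib-only illustration of that behavior:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))

	// Properly paired attributes: key "user" with value "alice".
	logger.Info("login", "user", "alice")

	// An unpaired trailing argument is reported as !BADKEY=alice, which is
	// what aligning attributes before slog processing avoids.
	logger.Info("login", "alice")
}
```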
71fe0df73f use automaxprocs and automemlimit
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-10-06 13:50:59 +03:00
f1b8ecbdb3 store: add new ErrNotConnected error
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-10-05 14:46:22 +03:00
fd2b2762e9 fixup missing xpool dep
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-30 09:57:07 +03:00
82d269cfb4 xpool: add metrics
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-29 22:58:53 +03:00
6641463eed util/reflect: add ability to merge maps
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-20 19:22:20 +03:00
faf2454f0a cleanup
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-20 17:54:17 +03:00
de9e4d73f5 change semconv metric names to include micro_ prefix
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-20 08:38:36 +03:00
4ae7277140 meter: remove prefix options
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-20 08:27:25 +03:00
a98618ed5b add codec.Flatten option
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-16 23:10:43 +03:00
3aaf1182cb add codec option
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-16 23:02:45 +03:00
eb1482d789 codec: simplify codec interface
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-16 22:41:47 +03:00
a305f7553f Merge pull request '#347 add test' (#349) from kgorbunov/micro:#347-v3 into v3
Reviewed-on: #349
2024-09-16 14:59:58 +03:00
Gorbunov Kirill Andreevich
d9b2f2a45d #347 add test
Some checks failed
pr / test (pull_request) Failing after 0s
lint / lint (pull_request) Failing after 1s
2024-09-16 14:48:47 +03:00
3ace7657dc codec: RawMessage Marshal fix
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-10 10:43:45 +03:00
53b40617e2 fixup util/xpool
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-04 23:06:40 +03:00
1a9236caad update meter options
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-04 22:41:10 +03:00
6c68d39081 errors: add RFC9457 problem type
closes #297

Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-08-01 01:06:02 +03:00
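RFC 9457 ("Problem Details for HTTP APIs") standardizes a JSON error body with type, title, status, detail, and instance members. The struct below is a generic sketch of such a payload per the RFC, not the package's actual type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Problem mirrors the standard RFC 9457 members; extension members are
// permitted by the RFC but omitted here.
type Problem struct {
	Type     string `json:"type,omitempty"`
	Title    string `json:"title,omitempty"`
	Status   int    `json:"status,omitempty"`
	Detail   string `json:"detail,omitempty"`
	Instance string `json:"instance,omitempty"`
}

func main() {
	p := Problem{
		Type:   "https://example.com/probs/out-of-credit",
		Title:  "You do not have enough credit.",
		Status: 403,
		Detail: "Your current balance is 30, but that costs 50.",
	}
	buf, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(buf)) // served with Content-Type: application/problem+json
}
```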
35e62fbeb0 tracer: add default context attr funcs option
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-07-06 00:09:27 +03:00
00b3ceb468 semconv: fix naming
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-07-04 14:56:48 +03:00
238 changed files with 29058 additions and 7864 deletions


@@ -1,24 +0,0 @@
name: lint
on:
pull_request:
branches:
- master
- v3
jobs:
lint:
name: lint
runs-on: ubuntu-latest
steps:
- name: setup-go
uses: actions/setup-go@v3
with:
go-version: 1.21
- name: checkout
uses: actions/checkout@v3
- name: deps
run: go get -v -d ./...
- name: lint
uses: https://github.com/golangci/golangci-lint-action@v3.4.0
continue-on-error: true
with:
version: v1.52


@@ -1,23 +0,0 @@
name: pr
on:
pull_request:
branches:
- master
- v3
jobs:
test:
name: test
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v3
- name: setup-go
uses: actions/setup-go@v3
with:
go-version: 1.21
- name: deps
run: go get -v -t -d ./...
- name: test
env:
INTEGRATION_TESTS: yes
run: go test -mod readonly -v ./...

.github/workflows/job_coverage.yml (new file, 53 lines)

@@ -0,0 +1,53 @@
name: coverage
on:
push:
branches: [ main, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
pull_request:
branches: [ main, v3, v4 ]
jobs:
build:
if: github.server_url != 'https://github.com'
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: test coverage
run: |
go test -v -cover ./... -covermode=count -coverprofile coverage.out -coverpkg ./...
go tool cover -func coverage.out -o coverage.out
- name: coverage badge
uses: tj-actions/coverage-badge-go@v2
with:
green: 80
filename: coverage.out
- uses: stefanzweifel/git-auto-commit-action@v4
name: autocommit
with:
commit_message: Apply Code Coverage Badge
skip_fetch: false
skip_checkout: false
file_pattern: ./README.md
- name: push
if: steps.auto-commit-action.outputs.changes_detected == 'true'
uses: ad-m/github-push-action@master
with:
github_token: ${{ github.token }}
branch: ${{ github.ref }}

.github/workflows/job_lint.yml (new file, 29 lines)

@@ -0,0 +1,29 @@
name: lint
on:
pull_request:
types: [opened, reopened, synchronize]
branches: [ master, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup deps
run: go get -v ./...
- name: run lint
uses: golangci/golangci-lint-action@v6
with:
version: 'latest'

.github/workflows/job_sync.yml (new file, 94 lines)

@@ -0,0 +1,94 @@
name: sync
on:
schedule:
- cron: '*/5 * * * *'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
sync:
if: github.server_url != 'https://github.com'
runs-on: ubuntu-latest
steps:
- name: init
run: |
git config --global user.email "vtolstov <vtolstov@users.noreply.github.com>"
git config --global user.name "github-actions[bot]"
echo "machine git.unistack.org login vtolstov password ${{ secrets.TOKEN_GITEA }}" >> /root/.netrc
echo "machine github.com login vtolstov password ${{ secrets.TOKEN_GITHUB }}" >> /root/.netrc
- name: check master
id: check_master
run: |
src_hash=$(git ls-remote https://github.com/${GITHUB_REPOSITORY} refs/heads/master | cut -f1)
dst_hash=$(git ls-remote ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} refs/heads/master | cut -f1)
echo "src_hash=$src_hash"
echo "dst_hash=$dst_hash"
if [ "$src_hash" != "$dst_hash" ]; then
echo "sync_needed=true" >> $GITHUB_OUTPUT
else
echo "sync_needed=false" >> $GITHUB_OUTPUT
fi
- name: sync master
if: steps.check_master.outputs.sync_needed == 'true'
run: |
git clone --filter=blob:none --filter=tree:0 --branch master --single-branch ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} repo
cd repo
git remote add --no-tags --fetch --track master upstream https://github.com/${GITHUB_REPOSITORY}
git pull --rebase upstream master
git push upstream master --progress
git push origin master --progress
cd ../
rm -rf repo
- name: check v3
id: check_v3
run: |
src_hash=$(git ls-remote https://github.com/${GITHUB_REPOSITORY} refs/heads/v3 | cut -f1)
dst_hash=$(git ls-remote ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} refs/heads/v3 | cut -f1)
echo "src_hash=$src_hash"
echo "dst_hash=$dst_hash"
if [ "$src_hash" != "$dst_hash" ]; then
echo "sync_needed=true" >> $GITHUB_OUTPUT
else
echo "sync_needed=false" >> $GITHUB_OUTPUT
fi
- name: sync v3
if: steps.check_v3.outputs.sync_needed == 'true'
run: |
git clone --filter=blob:none --filter=tree:0 --branch v3 --single-branch ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} repo
cd repo
git remote add --no-tags --fetch --track v3 upstream https://github.com/${GITHUB_REPOSITORY}
git pull --rebase upstream v3
git push upstream v3 --progress
git push origin v3 --progress
cd ../
rm -rf repo
- name: check v4
id: check_v4
run: |
src_hash=$(git ls-remote https://github.com/${GITHUB_REPOSITORY} refs/heads/v4 | cut -f1)
dst_hash=$(git ls-remote ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} refs/heads/v4 | cut -f1)
echo "src_hash=$src_hash"
echo "dst_hash=$dst_hash"
if [ "$src_hash" != "$dst_hash" ]; then
echo "sync_needed=true" >> $GITHUB_OUTPUT
else
echo "sync_needed=false" >> $GITHUB_OUTPUT
fi
- name: sync v4
if: steps.check_v4.outputs.sync_needed == 'true'
run: |
git clone --filter=blob:none --filter=tree:0 --branch v4 --single-branch ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} repo
cd repo
git remote add --no-tags --fetch --track v4 upstream https://github.com/${GITHUB_REPOSITORY}
git pull --rebase upstream v4
git push upstream v4 --progress
git push origin v4 --progress
cd ../
rm -rf repo

.github/workflows/job_test.yml (new file, 31 lines)

@@ -0,0 +1,31 @@
name: test
on:
pull_request:
types: [opened, reopened, synchronize]
branches: [ master, v3, v4 ]
push:
branches: [ master, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup deps
run: go get -v ./...
- name: run test
env:
INTEGRATION_TESTS: yes
run: go test -mod readonly -v ./...

.github/workflows/job_tests.yml (new file, 50 lines)

@@ -0,0 +1,50 @@
name: test
on:
pull_request:
types: [opened, reopened, synchronize]
branches: [ master, v3, v4 ]
push:
branches: [ master, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: checkout tests
uses: actions/checkout@v4
with:
ref: master
filter: 'blob:none'
repository: unistack-org/micro-tests
path: micro-tests
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup go work
env:
GOWORK: ${{ github.workspace }}/go.work
run: |
go work init
go work use .
go work use micro-tests
- name: setup deps
env:
GOWORK: ${{ github.workspace }}/go.work
run: go get -v ./...
- name: run tests
env:
INTEGRATION_TESTS: yes
GOWORK: ${{ github.workspace }}/go.work
run: |
cd micro-tests
go test -mod readonly -v ./... || true


@@ -1,44 +1,5 @@
run:
concurrency: 4
deadline: 5m
concurrency: 8
timeout: 5m
issues-exit-code: 1
tests: true
linters-settings:
govet:
check-shadowing: true
enable:
- fieldalignment
linters:
enable:
- govet
- deadcode
- errcheck
- govet
- ineffassign
- staticcheck
- structcheck
- typecheck
- unused
- varcheck
- bodyclose
- gci
- goconst
- gocritic
- gosimple
- gofmt
- gofumpt
- goimports
- revive
- gosec
- makezero
- misspell
- nakedret
- nestif
- nilerr
- noctx
- prealloc
- unconvert
- unparam
disable-all: false


@@ -1,4 +1,9 @@
# Micro [![License](https://img.shields.io/:license-apache-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Doc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/unistack-org/micro/v3?tab=overview) [![Status](https://github.com/unistack-org/micro/workflows/build/badge.svg?branch=master)](https://github.com/unistack-org/micro/actions?query=workflow%3Abuild+branch%3Amaster+event%3Apush) [![Lint](https://goreportcard.com/badge/go.unistack.org/micro/v3)](https://goreportcard.com/report/go.unistack.org/micro/v3) [![Coverage](https://codecov.io/gh/unistack-org/micro/branch/v3/graph/badge.svg?token=OZPO2LP7VS)](https://codecov.io/gh/unistack-org/micro)
# Micro
![Coverage](https://img.shields.io/badge/Coverage-33.7%25-yellow)
[![License](https://img.shields.io/:license-apache-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Doc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/go.unistack.org/micro/v4?tab=overview)
[![Status](https://git.unistack.org/unistack-org/micro/actions/workflows/job_tests.yml/badge.svg?branch=v4)](https://git.unistack.org/unistack-org/micro/actions?query=workflow%3Abuild+branch%3Av4+event%3Apush)
[![Lint](https://goreportcard.com/badge/go.unistack.org/micro/v4)](https://goreportcard.com/report/go.unistack.org/micro/v4)
Micro is a standard library for microservices.
@@ -10,30 +15,20 @@ Micro provides the core requirements for distributed systems development includi
Micro abstracts away the details of distributed systems. Here are the main features.
- **Authentication** - Auth is built in as a first class citizen. Authentication and authorization enable secure
zero trust networking by providing every service an identity and certificates. This additionally includes rule
based access control.
- **Dynamic Config** - Load and hot reload dynamic config from anywhere. The config interface provides a way to load application
level config from any source such as env vars, file, etcd. You can merge the sources and even define fallbacks.
level config from any source such as env vars, cmdline, file, consul, vault... You can merge the sources and even define fallbacks.
- **Data Storage** - A simple data store interface to read, write and delete records. It includes support for memory, file and
CockroachDB by default. State and persistence becomes a core requirement beyond prototyping and Micro looks to build that into the framework.
s3. State and persistence becomes a core requirement beyond prototyping and Micro looks to build that into the framework.
- **Service Discovery** - Automatic service registration and name resolution. Service discovery is at the core of micro service
development. When service A needs to speak to service B it needs the location of that service.
- **Load Balancing** - Client side load balancing built on service discovery. Once we have the addresses of any number of instances
of a service we now need a way to decide which node to route to. We use random hashed load balancing to provide even distribution
across the services and retry a different node if there's a problem.
- **Message Encoding** - Dynamic message encoding based on content-type. The client and server will use codecs along with content-type
to seamlessly encode and decode Go types for you. Any variety of messages could be encoded and sent from different clients. The client
and server handle this by default.
- **Transport** - gRPC or http based request/response with support for bidirectional streaming. We provide an abstraction for synchronous communication. A request made to a service will be automatically resolved, load balanced, dialled and streamed.
- **Async Messaging** - PubSub is built in as a first class citizen for asynchronous communication and event driven architectures.
- **Async Messaging** - Pub/Sub is built in as a first class citizen for asynchronous communication and event driven architectures.
Event notifications are a core pattern in micro service development.
- **Synchronization** - Distributed systems are often built in an eventually consistent manner. Support for distributed locking and
@@ -42,10 +37,6 @@ leadership are built in as a Sync interface. When using an eventually consistent
- **Pluggable Interfaces** - Micro makes use of Go interfaces for each system abstraction. Because of this, these interfaces
are pluggable and allow Micro to be runtime agnostic.
## Getting Started
To be created.
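In the meantime, here is a minimal sketch of the Pub/Sub flow using the in-memory broker and the v4 API shown in the diffs below. The memory broker import path, topic name, and payload are assumptions for illustration, not documented usage:

```go
package main

import (
	"context"
	"fmt"

	"go.unistack.org/micro/v4/broker"
	memory "go.unistack.org/micro/v4/broker/memory" // import path assumed
	"go.unistack.org/micro/v4/codec"
	"go.unistack.org/micro/v4/metadata"
)

func main() {
	ctx := context.Background()

	// In-memory broker with a codec registered for one content-type,
	// mirroring the memory broker test further down in this compare.
	b := memory.NewBroker(broker.Codec("application/octet-stream", codec.NewCodec()))
	if err := b.Init(); err != nil {
		panic(err)
	}
	if err := b.Connect(ctx); err != nil {
		panic(err)
	}
	defer b.Disconnect(ctx)

	// v4 handler signature: func(broker.Message) error.
	sub, err := b.Subscribe(ctx, "test", func(m broker.Message) error {
		fmt.Printf("received %s\n", m.Body())
		return m.Ack()
	})
	if err != nil {
		panic(err)
	}
	defer sub.Unsubscribe(ctx)

	// Messages are built by the broker: headers, body and publish options.
	msg, err := b.NewMessage(ctx,
		metadata.Pairs("id", "1"),
		[]byte(`"hello world"`),
		broker.PublishContentType("application/octet-stream"),
	)
	if err != nil {
		panic(err)
	}
	if err := b.Publish(ctx, "test", msg); err != nil {
		panic(err)
	}
}
```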
## License
Micro is Apache 2.0 licensed.

View File

@@ -1,15 +0,0 @@
# Security Policy
## Supported Versions
Use this section to tell people about which versions of your project are
currently being supported with security updates.
| Version | Supported |
| ------- | ------------------ |
| 3.7.x | :white_check_mark: |
| < 3.7.0 | :x: |
## Reporting a Vulnerability
If you find any issue, please create github issue in this repo

View File

@@ -1,13 +1,13 @@
// Package broker is an interface used for asynchronous messaging
package broker // import "go.unistack.org/micro/v3/broker"
package broker
import (
"context"
"errors"
"time"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/metadata"
)
// DefaultBroker default memory broker
@@ -18,6 +18,10 @@ var (
ErrNotConnected = errors.New("broker not connected")
// ErrDisconnected returns when broker disconnected
ErrDisconnected = errors.New("broker disconnected")
// ErrInvalidMessage returns when invalid Message passed
ErrInvalidMessage = errors.New("invalid message")
// ErrInvalidHandler returns when subscriber passed to Subscribe
ErrInvalidHandler = errors.New("invalid handler, only func(Message) error and func([]Message) error supported")
// DefaultGracefulTimeout default time to wait for in-flight requests to finish
DefaultGracefulTimeout = 5 * time.Second
)
@@ -36,85 +40,44 @@ type Broker interface {
Connect(ctx context.Context) error
// Disconnect disconnect from broker
Disconnect(ctx context.Context) error
// NewMessage create new broker message to publish.
NewMessage(ctx context.Context, hdr metadata.Metadata, body interface{}, opts ...PublishOption) (Message, error)
// Publish message to broker topic
Publish(ctx context.Context, topic string, msg *Message, opts ...PublishOption) error
Publish(ctx context.Context, topic string, messages ...Message) error
// Subscribe subscribes to topic message via handler
Subscribe(ctx context.Context, topic string, h Handler, opts ...SubscribeOption) (Subscriber, error)
// BatchPublish messages to broker with multiple topics
BatchPublish(ctx context.Context, msgs []*Message, opts ...PublishOption) error
// BatchSubscribe subscribes to topic messages via handler
BatchSubscribe(ctx context.Context, topic string, h BatchHandler, opts ...SubscribeOption) (Subscriber, error)
Subscribe(ctx context.Context, topic string, handler interface{}, opts ...SubscribeOption) (Subscriber, error)
// String type of broker
String() string
// Live returns broker liveness
Live() bool
// Ready returns broker readiness
Ready() bool
// Health returns broker health
Health() bool
}
type (
FuncPublish func(ctx context.Context, topic string, msg *Message, opts ...PublishOption) error
HookPublish func(next FuncPublish) FuncPublish
FuncBatchPublish func(ctx context.Context, msgs []*Message, opts ...PublishOption) error
HookBatchPublish func(next FuncBatchPublish) FuncBatchPublish
FuncSubscribe func(ctx context.Context, topic string, h Handler, opts ...SubscribeOption) (Subscriber, error)
HookSubscribe func(next FuncSubscribe) FuncSubscribe
FuncBatchSubscribe func(ctx context.Context, topic string, h BatchHandler, opts ...SubscribeOption) (Subscriber, error)
HookBatchSubscribe func(next FuncBatchSubscribe) FuncBatchSubscribe
FuncPublish func(ctx context.Context, topic string, messages ...Message) error
HookPublish func(next FuncPublish) FuncPublish
FuncSubscribe func(ctx context.Context, topic string, handler interface{}, opts ...SubscribeOption) (Subscriber, error)
HookSubscribe func(next FuncSubscribe) FuncSubscribe
)
// Handler is used to process messages via a subscription of a topic.
type Handler func(Event) error
// Events contains multiple events
type Events []Event
// Ack try to ack all events and return
func (evs Events) Ack() error {
var err error
for _, ev := range evs {
if err = ev.Ack(); err != nil {
return err
}
}
return nil
}
// SetError sets error on event
func (evs Events) SetError(err error) {
for _, ev := range evs {
ev.SetError(err)
}
}
// BatchHandler is used to process messages in batches via a subscription of a topic.
type BatchHandler func(Events) error
// Event is given to a subscription handler for processing
type Event interface {
// Context return context.Context for event
// Message is given to a subscription handler for processing
type Message interface {
// Context for the message.
Context() context.Context
// Topic returns event topic
// Topic returns message destination topic.
Topic() string
// Message returns broker message
Message() *Message
// Ack acknowledge message
// Header returns message headers.
Header() metadata.Metadata
// Body returns broker message []byte slice
Body() []byte
// Unmarshal try to decode message body to dst.
// This is helper method that uses codec.Unmarshal.
Unmarshal(dst interface{}, opts ...codec.Option) error
// Ack acknowledge message if supported.
Ack() error
// Error returns message error (like decoding errors or some other)
Error() error
// SetError set event processing error
SetError(err error)
}
// Message is used to transfer data
type Message struct {
// Header contains message metadata
Header metadata.Metadata
// Body contains message body
Body codec.RawMessage
}
// NewMessage create broker message with topic filled
func NewMessage(topic string) *Message {
m := &Message{Header: metadata.New(2)}
m.Header.Set(metadata.HeaderTopic, topic)
return m
}
// Subscriber is a convenience return type for the Subscribe method
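As a rough sketch of how a subscriber handler looks against the Message interface above (the topic, payload shape, and handler name are hypothetical):

```go
// Hypothetical v4 subscriber handler: one of the two signatures accepted
// by Subscribe (func(Message) error or func([]Message) error).
func handleOrder(m broker.Message) error {
	var order map[string]interface{}
	// Unmarshal decodes the body with the codec selected by the message content-type.
	if err := m.Unmarshal(&order); err != nil {
		return err
	}
	_ = m.Header() // transport headers as metadata.Metadata
	_ = m.Topic()  // destination topic
	return m.Ack() // explicit ack; SubscribeAutoAck can do this automatically
}
```

Such a handler would then be passed to Subscribe, e.g. `b.Subscribe(ctx, "orders", handleOrder)`.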

View File

@@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Broker, bool) {
return c, ok
}
// MustContext returns broker from passed context
func MustContext(ctx context.Context) Broker {
b, ok := FromContext(ctx)
if !ok {
panic("missing broker")
}
return b
}
// NewContext saves broker in context
func NewContext(ctx context.Context, s Broker) context.Context {
if ctx == nil {
@@ -33,16 +42,6 @@ func SetSubscribeOption(k, v interface{}) SubscribeOption {
}
}
// SetOption returns a function to setup a context with given value
func SetOption(k, v interface{}) Option {
return func(o *Options) {
if o.Context == nil {
o.Context = context.Background()
}
o.Context = context.WithValue(o.Context, k, v)
}
}
// SetPublishOption returns a function to setup a context with given value
func SetPublishOption(k, v interface{}) PublishOption {
return func(o *PublishOptions) {
@@ -52,3 +51,13 @@ func SetPublishOption(k, v interface{}) PublishOption {
o.Context = context.WithValue(o.Context, k, v)
}
}
// SetOption returns a function to setup a context with given value
func SetOption(k, v interface{}) Option {
return func(o *Options) {
if o.Context == nil {
o.Context = context.Background()
}
o.Context = context.WithValue(o.Context, k, v)
}
}
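A small sketch of these context helpers together (the wrapper function is made up for illustration):

```go
// Sketch: carry a broker through a context and retrieve it downstream.
func withBroker(ctx context.Context, b broker.Broker) {
	ctx = broker.NewContext(ctx, b)

	// Safe lookup when the broker may be absent.
	if br, ok := broker.FromContext(ctx); ok {
		_ = br.Ready()
	}

	// MustContext panics when no broker was stored, for code that
	// cannot proceed without one.
	_ = broker.MustContext(ctx)
}
```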

View File

@@ -49,17 +49,6 @@ func TestSetSubscribeOption(t *testing.T) {
}
}
func TestSetPublishOption(t *testing.T) {
type key struct{}
o := SetPublishOption(key{}, "test")
opts := &PublishOptions{}
o(opts)
if v, ok := opts.Context.Value(key{}).(string); !ok || v == "" {
t.Fatal("SetPublishOption not works")
}
}
func TestSetOption(t *testing.T) {
type key struct{}
o := SetOption(key{}, "test")

View File

@@ -2,66 +2,104 @@ package broker
import (
"context"
"strings"
"sync"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/options"
maddr "go.unistack.org/micro/v3/util/addr"
"go.unistack.org/micro/v3/util/id"
mnet "go.unistack.org/micro/v3/util/net"
"go.unistack.org/micro/v3/util/rand"
"go.unistack.org/micro/v4/broker"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/options"
maddr "go.unistack.org/micro/v4/util/addr"
"go.unistack.org/micro/v4/util/id"
mnet "go.unistack.org/micro/v4/util/net"
"go.unistack.org/micro/v4/util/rand"
)
type memoryBroker struct {
funcPublish broker.FuncPublish
funcBatchPublish broker.FuncBatchPublish
funcSubscribe broker.FuncSubscribe
funcBatchSubscribe broker.FuncBatchSubscribe
subscribers map[string][]*memorySubscriber
addr string
opts broker.Options
type Broker struct {
funcPublish broker.FuncPublish
funcSubscribe broker.FuncSubscribe
subscribers map[string][]*Subscriber
addr string
opts broker.Options
sync.RWMutex
connected bool
}
type memoryEvent struct {
err error
message interface{}
type memoryMessage struct {
c codec.Codec
topic string
ctx context.Context
body []byte
hdr metadata.Metadata
opts broker.PublishOptions
}
func (m *memoryMessage) Ack() error {
return nil
}
func (m *memoryMessage) Body() []byte {
return m.body
}
func (m *memoryMessage) Header() metadata.Metadata {
return m.hdr
}
func (m *memoryMessage) Context() context.Context {
return m.ctx
}
func (m *memoryMessage) Topic() string {
return m.topic
}
func (m *memoryMessage) Unmarshal(dst interface{}, opts ...codec.Option) error {
return m.c.Unmarshal(m.body, dst)
}
type Subscriber struct {
ctx context.Context
exit chan bool
handler interface{}
id string
topic string
opts broker.Options
opts broker.SubscribeOptions
}
type memorySubscriber struct {
ctx context.Context
exit chan bool
handler broker.Handler
batchhandler broker.BatchHandler
id string
topic string
opts broker.SubscribeOptions
func (b *Broker) newCodec(ct string) (codec.Codec, error) {
if idx := strings.IndexRune(ct, ';'); idx >= 0 {
ct = ct[:idx]
}
b.RLock()
c, ok := b.opts.Codecs[ct]
b.RUnlock()
if ok {
return c, nil
}
return nil, codec.ErrUnknownContentType
}
func (m *memoryBroker) Options() broker.Options {
return m.opts
func (b *Broker) Options() broker.Options {
return b.opts
}
func (m *memoryBroker) Address() string {
return m.addr
func (b *Broker) Address() string {
return b.addr
}
func (m *memoryBroker) Connect(ctx context.Context) error {
func (b *Broker) Connect(ctx context.Context) error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
m.Lock()
defer m.Unlock()
b.Lock()
defer b.Unlock()
if m.connected {
if b.connected {
return nil
}
@@ -75,154 +113,129 @@ func (m *memoryBroker) Connect(ctx context.Context) error {
// set addr with port
addr = mnet.HostPort(addr, 10000+i)
m.addr = addr
m.connected = true
b.addr = addr
b.connected = true
return nil
}
func (m *memoryBroker) Disconnect(ctx context.Context) error {
func (b *Broker) Disconnect(ctx context.Context) error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
m.Lock()
defer m.Unlock()
b.Lock()
defer b.Unlock()
if !m.connected {
if !b.connected {
return nil
}
m.connected = false
b.connected = false
return nil
}
func (m *memoryBroker) Init(opts ...broker.Option) error {
func (b *Broker) Init(opts ...broker.Option) error {
for _, o := range opts {
o(&m.opts)
o(&b.opts)
}
m.funcPublish = m.fnPublish
m.funcBatchPublish = m.fnBatchPublish
m.funcSubscribe = m.fnSubscribe
m.funcBatchSubscribe = m.fnBatchSubscribe
b.funcPublish = b.fnPublish
b.funcSubscribe = b.fnSubscribe
m.opts.Hooks.EachNext(func(hook options.Hook) {
b.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case broker.HookPublish:
m.funcPublish = h(m.funcPublish)
case broker.HookBatchPublish:
m.funcBatchPublish = h(m.funcBatchPublish)
b.funcPublish = h(b.funcPublish)
case broker.HookSubscribe:
m.funcSubscribe = h(m.funcSubscribe)
case broker.HookBatchSubscribe:
m.funcBatchSubscribe = h(m.funcBatchSubscribe)
b.funcSubscribe = h(b.funcSubscribe)
}
})
return nil
}
func (m *memoryBroker) Publish(ctx context.Context, topic string, msg *broker.Message, opts ...broker.PublishOption) error {
return m.funcPublish(ctx, topic, msg, opts...)
func (b *Broker) NewMessage(ctx context.Context, hdr metadata.Metadata, body interface{}, opts ...broker.PublishOption) (broker.Message, error) {
options := broker.NewPublishOptions(opts...)
if options.ContentType == "" {
options.ContentType = b.opts.ContentType
}
m := &memoryMessage{ctx: ctx, hdr: hdr, opts: options}
c, err := b.newCodec(m.opts.ContentType)
if err == nil {
m.body, err = c.Marshal(body)
}
if err != nil {
return nil, err
}
return m, nil
}
func (m *memoryBroker) fnPublish(ctx context.Context, topic string, msg *broker.Message, opts ...broker.PublishOption) error {
msg.Header.Set(metadata.HeaderTopic, topic)
return m.publish(ctx, []*broker.Message{msg}, opts...)
func (b *Broker) Publish(ctx context.Context, topic string, messages ...broker.Message) error {
return b.funcPublish(ctx, topic, messages...)
}
func (m *memoryBroker) BatchPublish(ctx context.Context, msgs []*broker.Message, opts ...broker.PublishOption) error {
return m.funcBatchPublish(ctx, msgs, opts...)
func (b *Broker) fnPublish(ctx context.Context, topic string, messages ...broker.Message) error {
return b.publish(ctx, topic, messages...)
}
func (m *memoryBroker) fnBatchPublish(ctx context.Context, msgs []*broker.Message, opts ...broker.PublishOption) error {
return m.publish(ctx, msgs, opts...)
}
func (m *memoryBroker) publish(ctx context.Context, msgs []*broker.Message, opts ...broker.PublishOption) error {
m.RLock()
if !m.connected {
m.RUnlock()
func (b *Broker) publish(ctx context.Context, topic string, messages ...broker.Message) error {
b.RLock()
if !b.connected {
b.RUnlock()
return broker.ErrNotConnected
}
m.RUnlock()
var err error
b.RUnlock()
select {
case <-ctx.Done():
return ctx.Err()
default:
options := broker.NewPublishOptions(opts...)
}
msgTopicMap := make(map[string]broker.Events)
for _, v := range msgs {
p := &memoryEvent{opts: m.opts}
b.RLock()
subs, ok := b.subscribers[topic]
b.RUnlock()
if !ok {
return nil
}
if m.opts.Codec == nil || options.BodyOnly {
p.topic, _ = v.Header.Get(metadata.HeaderTopic)
p.message = v.Body
} else {
p.topic, _ = v.Header.Get(metadata.HeaderTopic)
p.message, err = m.opts.Codec.Marshal(v)
if err != nil {
return err
}
var err error
for _, sub := range subs {
switch s := sub.handler.(type) {
default:
if b.opts.Logger.V(logger.ErrorLevel) {
b.opts.Logger.Error(ctx, "broker handler error", broker.ErrInvalidHandler)
}
msgTopicMap[p.topic] = append(msgTopicMap[p.topic], p)
}
beh := m.opts.BatchErrorHandler
eh := m.opts.ErrorHandler
for t, ms := range msgTopicMap {
m.RLock()
subs, ok := m.subscribers[t]
m.RUnlock()
if !ok {
continue
}
for _, sub := range subs {
if sub.opts.BatchErrorHandler != nil {
beh = sub.opts.BatchErrorHandler
}
if sub.opts.ErrorHandler != nil {
eh = sub.opts.ErrorHandler
}
switch {
// batch processing
case sub.batchhandler != nil:
if err = sub.batchhandler(ms); err != nil {
ms.SetError(err)
if beh != nil {
_ = beh(ms)
} else if m.opts.Logger.V(logger.ErrorLevel) {
m.opts.Logger.Error(m.opts.Context, err.Error())
}
} else if sub.opts.AutoAck {
if err = ms.Ack(); err != nil {
m.opts.Logger.Errorf(m.opts.Context, "ack failed: %v", err)
}
case func(broker.Message) error:
for _, message := range messages {
msg, ok := message.(*memoryMessage)
if !ok {
if b.opts.Logger.V(logger.ErrorLevel) {
b.opts.Logger.Error(ctx, "broker handler error", broker.ErrInvalidMessage)
}
// single processing
case sub.handler != nil:
for _, p := range ms {
if err = sub.handler(p); err != nil {
p.SetError(err)
if eh != nil {
_ = eh(p)
} else if m.opts.Logger.V(logger.ErrorLevel) {
m.opts.Logger.Error(m.opts.Context, err.Error())
}
} else if sub.opts.AutoAck {
if err = p.Ack(); err != nil {
m.opts.Logger.Errorf(m.opts.Context, "ack failed: %v", err)
}
}
msg.topic = topic
if err = s(msg); err == nil && sub.opts.AutoAck {
err = msg.Ack()
}
if err != nil {
if b.opts.Logger.V(logger.ErrorLevel) {
b.opts.Logger.Error(ctx, "broker handler error", err)
}
}
}
case func([]broker.Message) error:
if err = s(messages); err == nil && sub.opts.AutoAck {
for _, message := range messages {
err = message.Ack()
if err != nil {
if b.opts.Logger.V(logger.ErrorLevel) {
b.opts.Logger.Error(ctx, "broker handler error", err)
}
}
}
@@ -233,17 +246,21 @@ func (m *memoryBroker) publish(ctx context.Context, msgs []*broker.Message, opts
return nil
}
func (m *memoryBroker) BatchSubscribe(ctx context.Context, topic string, handler broker.BatchHandler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
return m.funcBatchSubscribe(ctx, topic, handler, opts...)
func (b *Broker) Subscribe(ctx context.Context, topic string, handler interface{}, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
return b.funcSubscribe(ctx, topic, handler, opts...)
}
func (m *memoryBroker) fnBatchSubscribe(ctx context.Context, topic string, handler broker.BatchHandler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
m.RLock()
if !m.connected {
m.RUnlock()
func (b *Broker) fnSubscribe(ctx context.Context, topic string, handler interface{}, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
if err := broker.IsValidHandler(handler); err != nil {
return nil, err
}
b.RLock()
if !b.connected {
b.RUnlock()
return nil, broker.ErrNotConnected
}
m.RUnlock()
b.RUnlock()
sid, err := id.New()
if err != nil {
@@ -252,56 +269,7 @@ func (m *memoryBroker) fnBatchSubscribe(ctx context.Context, topic string, handl
options := broker.NewSubscribeOptions(opts...)
sub := &memorySubscriber{
exit: make(chan bool, 1),
id: sid,
topic: topic,
batchhandler: handler,
opts: options,
ctx: ctx,
}
m.Lock()
m.subscribers[topic] = append(m.subscribers[topic], sub)
m.Unlock()
go func() {
<-sub.exit
m.Lock()
newSubscribers := make([]*memorySubscriber, 0, len(m.subscribers)-1)
for _, sb := range m.subscribers[topic] {
if sb.id == sub.id {
continue
}
newSubscribers = append(newSubscribers, sb)
}
m.subscribers[topic] = newSubscribers
m.Unlock()
}()
return sub, nil
}
func (m *memoryBroker) Subscribe(ctx context.Context, topic string, handler broker.Handler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
return m.funcSubscribe(ctx, topic, handler, opts...)
}
func (m *memoryBroker) fnSubscribe(ctx context.Context, topic string, handler broker.Handler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
m.RLock()
if !m.connected {
m.RUnlock()
return nil, broker.ErrNotConnected
}
m.RUnlock()
sid, err := id.New()
if err != nil {
return nil, err
}
options := broker.NewSubscribeOptions(opts...)
sub := &memorySubscriber{
sub := &Subscriber{
exit: make(chan bool, 1),
id: sid,
topic: topic,
@@ -310,90 +278,64 @@ func (m *memoryBroker) fnSubscribe(ctx context.Context, topic string, handler br
ctx: ctx,
}
m.Lock()
m.subscribers[topic] = append(m.subscribers[topic], sub)
m.Unlock()
b.Lock()
b.subscribers[topic] = append(b.subscribers[topic], sub)
b.Unlock()
go func() {
<-sub.exit
m.Lock()
newSubscribers := make([]*memorySubscriber, 0, len(m.subscribers)-1)
for _, sb := range m.subscribers[topic] {
b.Lock()
newSubscribers := make([]*Subscriber, 0, len(b.subscribers)-1)
for _, sb := range b.subscribers[topic] {
if sb.id == sub.id {
continue
}
newSubscribers = append(newSubscribers, sb)
}
m.subscribers[topic] = newSubscribers
m.Unlock()
b.subscribers[topic] = newSubscribers
b.Unlock()
}()
return sub, nil
}
func (m *memoryBroker) String() string {
func (b *Broker) String() string {
return "memory"
}
func (m *memoryBroker) Name() string {
return m.opts.Name
func (b *Broker) Name() string {
return b.opts.Name
}
func (m *memoryEvent) Topic() string {
return m.topic
func (b *Broker) Live() bool {
return true
}
func (m *memoryEvent) Message() *broker.Message {
switch v := m.message.(type) {
case *broker.Message:
return v
case []byte:
msg := &broker.Message{}
if err := m.opts.Codec.Unmarshal(v, msg); err != nil {
if m.opts.Logger.V(logger.ErrorLevel) {
m.opts.Logger.Error(m.opts.Context, "[memory]: failed to unmarshal: %v", err)
}
return nil
}
return msg
}
return nil
func (b *Broker) Ready() bool {
return true
}
func (m *memoryEvent) Ack() error {
return nil
func (b *Broker) Health() bool {
return true
}
func (m *memoryEvent) Error() error {
return m.err
}
func (m *memoryEvent) SetError(err error) {
m.err = err
}
func (m *memoryEvent) Context() context.Context {
return m.opts.Context
}
func (m *memorySubscriber) Options() broker.SubscribeOptions {
func (m *Subscriber) Options() broker.SubscribeOptions {
return m.opts
}
func (m *memorySubscriber) Topic() string {
func (m *Subscriber) Topic() string {
return m.topic
}
func (m *memorySubscriber) Unsubscribe(ctx context.Context) error {
func (m *Subscriber) Unsubscribe(ctx context.Context) error {
m.exit <- true
return nil
}
// NewBroker return new memory broker
func NewBroker(opts ...broker.Option) broker.Broker {
return &memoryBroker{
return &Broker{
opts: broker.NewOptions(opts...),
subscribers: make(map[string][]*memorySubscriber),
subscribers: make(map[string][]*Subscriber),
}
}
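As the handler-type switch in publish above implies, a `func([]broker.Message) error` handler receives every message passed to a single Publish call. A hedged sketch of such a subscription (the topic name is illustrative and `b` is assumed to be a connected broker):

```go
func subscribeBatch(ctx context.Context, b broker.Broker) (broker.Subscriber, error) {
	// With AutoAck disabled, each message is acknowledged manually.
	return b.Subscribe(ctx, "events", func(msgs []broker.Message) error {
		for _, m := range msgs {
			// inspect m.Header() / m.Body() here
			if err := m.Ack(); err != nil {
				return err
			}
		}
		return nil
	}, broker.SubscribeAutoAck(false))
}
```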

View File

@@ -5,62 +5,23 @@ import (
"fmt"
"testing"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/metadata"
"go.uber.org/atomic"
"go.unistack.org/micro/v4/broker"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/metadata"
)
func TestMemoryBatchBroker(t *testing.T) {
b := NewBroker()
ctx := context.Background()
type hldr struct {
c atomic.Int64
}
if err := b.Init(); err != nil {
t.Fatalf("Unexpected init error %v", err)
}
if err := b.Connect(ctx); err != nil {
t.Fatalf("Unexpected connect error %v", err)
}
topic := "test"
count := 10
fn := func(evts broker.Events) error {
return evts.Ack()
}
sub, err := b.BatchSubscribe(ctx, topic, fn)
if err != nil {
t.Fatalf("Unexpected error subscribing %v", err)
}
msgs := make([]*broker.Message, 0, count)
for i := 0; i < count; i++ {
message := &broker.Message{
Header: map[string]string{
metadata.HeaderTopic: topic,
"foo": "bar",
"id": fmt.Sprintf("%d", i),
},
Body: []byte(`"hello world"`),
}
msgs = append(msgs, message)
}
if err := b.BatchPublish(ctx, msgs); err != nil {
t.Fatalf("Unexpected error publishing %v", err)
}
if err := sub.Unsubscribe(ctx); err != nil {
t.Fatalf("Unexpected error unsubscribing from %s: %v", topic, err)
}
if err := b.Disconnect(ctx); err != nil {
t.Fatalf("Unexpected connect error %v", err)
}
func (h *hldr) Handler(m broker.Message) error {
h.c.Add(1)
return nil
}
func TestMemoryBroker(t *testing.T) {
b := NewBroker()
b := NewBroker(broker.Codec("application/octet-stream", codec.NewCodec()))
ctx := context.Background()
if err := b.Init(); err != nil {
@@ -72,38 +33,33 @@ func TestMemoryBroker(t *testing.T) {
}
topic := "test"
count := 10
count := int64(10)
fn := func(p broker.Event) error {
return nil
}
h := &hldr{}
sub, err := b.Subscribe(ctx, topic, fn)
sub, err := b.Subscribe(ctx, topic, h.Handler)
if err != nil {
t.Fatalf("Unexpected error subscribing %v", err)
}
msgs := make([]*broker.Message, 0, count)
for i := 0; i < count; i++ {
message := &broker.Message{
Header: map[string]string{
metadata.HeaderTopic: topic,
"foo": "bar",
"id": fmt.Sprintf("%d", i),
},
Body: []byte(`"hello world"`),
for i := int64(0); i < count; i++ {
message, err := b.NewMessage(ctx,
metadata.Pairs(
"foo", "bar",
"id", fmt.Sprintf("%d", i),
),
[]byte(`"hello world"`),
broker.PublishContentType("application/octet-stream"),
)
if err != nil {
t.Fatal(err)
}
msgs = append(msgs, message)
if err := b.Publish(ctx, topic, message); err != nil {
t.Fatalf("Unexpected error publishing %d err: %v", i, err)
}
}
if err := b.BatchPublish(ctx, msgs); err != nil {
t.Fatalf("Unexpected error publishing %v", err)
}
if err := sub.Unsubscribe(ctx); err != nil {
t.Fatalf("Unexpected error unsubscribing from %s: %v", topic, err)
}
@@ -111,4 +67,8 @@ func TestMemoryBroker(t *testing.T) {
if err := b.Disconnect(ctx); err != nil {
t.Fatalf("Unexpected connect error %v", err)
}
if h.c.Load() != count {
t.Fatal("invalid messages count received")
}
}

View File

@@ -3,28 +3,53 @@ package broker
import (
"context"
"strings"
"sync"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/options"
)
type NoopBroker struct {
funcPublish FuncPublish
funcBatchPublish FuncBatchPublish
funcSubscribe FuncSubscribe
funcBatchSubscribe FuncBatchSubscribe
opts Options
funcPublish FuncPublish
funcSubscribe FuncSubscribe
opts Options
sync.RWMutex
}
func (b *NoopBroker) newCodec(ct string) (codec.Codec, error) {
if idx := strings.IndexRune(ct, ';'); idx >= 0 {
ct = ct[:idx]
}
b.RLock()
c, ok := b.opts.Codecs[ct]
b.RUnlock()
if ok {
return c, nil
}
return nil, codec.ErrUnknownContentType
}
func NewBroker(opts ...Option) *NoopBroker {
b := &NoopBroker{opts: NewOptions(opts...)}
b.funcPublish = b.fnPublish
b.funcBatchPublish = b.fnBatchPublish
b.funcSubscribe = b.fnSubscribe
b.funcBatchSubscribe = b.fnBatchSubscribe
return b
}
func (b *NoopBroker) Health() bool {
return true
}
func (b *NoopBroker) Live() bool {
return true
}
func (b *NoopBroker) Ready() bool {
return true
}
func (b *NoopBroker) Name() string {
return b.opts.Name
}
@@ -43,20 +68,14 @@ func (b *NoopBroker) Init(opts ...Option) error {
}
b.funcPublish = b.fnPublish
b.funcBatchPublish = b.fnBatchPublish
b.funcSubscribe = b.fnSubscribe
b.funcBatchSubscribe = b.fnBatchSubscribe
b.opts.Hooks.EachNext(func(hook options.Hook) {
b.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case HookPublish:
b.funcPublish = h(b.funcPublish)
case HookBatchPublish:
b.funcBatchPublish = h(b.funcBatchPublish)
case HookSubscribe:
b.funcSubscribe = h(b.funcSubscribe)
case HookBatchSubscribe:
b.funcBatchSubscribe = h(b.funcBatchSubscribe)
}
})
@@ -75,43 +94,75 @@ func (b *NoopBroker) Address() string {
return strings.Join(b.opts.Addrs, ",")
}
func (b *NoopBroker) fnBatchPublish(_ context.Context, _ []*Message, _ ...PublishOption) error {
type noopMessage struct {
c codec.Codec
ctx context.Context
body []byte
hdr metadata.Metadata
opts PublishOptions
}
func (m *noopMessage) Ack() error {
return nil
}
func (b *NoopBroker) BatchPublish(ctx context.Context, msgs []*Message, opts ...PublishOption) error {
return b.funcBatchPublish(ctx, msgs, opts...)
func (m *noopMessage) Body() []byte {
return m.body
}
func (b *NoopBroker) fnPublish(_ context.Context, _ string, _ *Message, _ ...PublishOption) error {
func (m *noopMessage) Header() metadata.Metadata {
return m.hdr
}
func (m *noopMessage) Context() context.Context {
return m.ctx
}
func (m *noopMessage) Topic() string {
return ""
}
func (m *noopMessage) Unmarshal(dst interface{}, opts ...codec.Option) error {
return m.c.Unmarshal(m.body, dst)
}
func (b *NoopBroker) NewMessage(ctx context.Context, hdr metadata.Metadata, body interface{}, opts ...PublishOption) (Message, error) {
options := NewPublishOptions(opts...)
if options.ContentType == "" {
options.ContentType = b.opts.ContentType
}
m := &noopMessage{ctx: ctx, hdr: hdr, opts: options}
c, err := b.newCodec(m.opts.ContentType)
if err == nil {
m.body, err = c.Marshal(body)
}
if err != nil {
return nil, err
}
return m, nil
}
func (b *NoopBroker) fnPublish(_ context.Context, _ string, _ ...Message) error {
return nil
}
func (b *NoopBroker) Publish(ctx context.Context, topic string, msg *Message, opts ...PublishOption) error {
return b.funcPublish(ctx, topic, msg, opts...)
func (b *NoopBroker) Publish(ctx context.Context, topic string, msg ...Message) error {
return b.funcPublish(ctx, topic, msg...)
}
type NoopSubscriber struct {
ctx context.Context
topic string
handler Handler
batchHandler BatchHandler
opts SubscribeOptions
ctx context.Context
topic string
handler interface{}
opts SubscribeOptions
}
func (b *NoopBroker) fnBatchSubscribe(ctx context.Context, topic string, handler BatchHandler, opts ...SubscribeOption) (Subscriber, error) {
return &NoopSubscriber{ctx: ctx, topic: topic, opts: NewSubscribeOptions(opts...), batchHandler: handler}, nil
}
func (b *NoopBroker) BatchSubscribe(ctx context.Context, topic string, handler BatchHandler, opts ...SubscribeOption) (Subscriber, error) {
return b.funcBatchSubscribe(ctx, topic, handler, opts...)
}
func (b *NoopBroker) fnSubscribe(ctx context.Context, topic string, handler Handler, opts ...SubscribeOption) (Subscriber, error) {
func (b *NoopBroker) fnSubscribe(ctx context.Context, topic string, handler interface{}, opts ...SubscribeOption) (Subscriber, error) {
return &NoopSubscriber{ctx: ctx, topic: topic, opts: NewSubscribeOptions(opts...), handler: handler}, nil
}
func (b *NoopBroker) Subscribe(ctx context.Context, topic string, handler Handler, opts ...SubscribeOption) (Subscriber, error) {
func (b *NoopBroker) Subscribe(ctx context.Context, topic string, handler interface{}, opts ...SubscribeOption) (Subscriber, error) {
return b.funcSubscribe(ctx, topic, handler, opts...)
}

View File

@@ -10,9 +10,9 @@ type testHook struct {
}
func (t *testHook) Publish1(fn FuncPublish) FuncPublish {
return func(ctx context.Context, topic string, msg *Message, opts ...PublishOption) error {
return func(ctx context.Context, topic string, messages ...Message) error {
t.f = true
return fn(ctx, topic, msg, opts...)
return fn(ctx, topic, messages...)
}
}
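The test above only flips a flag; a slightly fuller, still hypothetical hook might count published messages. Only the FuncPublish/HookPublish shapes are taken from the diff; how the hook is registered (a broker Hooks option) is an assumption:

```go
// Hypothetical publish hook: wraps FuncPublish to count outgoing messages.
type countingHook struct{ published int }

func (h *countingHook) Publish(next broker.FuncPublish) broker.FuncPublish {
	return func(ctx context.Context, topic string, messages ...broker.Message) error {
		h.published += len(messages) // side effect before the real publish
		return next(ctx, topic, messages...)
	}
}
```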

View File

@@ -5,46 +5,49 @@ import (
"crypto/tls"
"time"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/register"
"go.unistack.org/micro/v3/sync"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/meter"
"go.unistack.org/micro/v4/options"
"go.unistack.org/micro/v4/register"
"go.unistack.org/micro/v4/sync"
"go.unistack.org/micro/v4/tracer"
)
// Options struct
type Options struct {
// Name holds the broker name
Name string
// Tracer used for tracing
Tracer tracer.Tracer
// Register can be used for clustering
Register register.Register
// Codec holds the codec for marshal/unmarshal
Codec codec.Codec
// Codecs holds the codecs for marshal/unmarshal based on content-type
Codecs map[string]codec.Codec
// Logger used for logging
Logger logger.Logger
// Meter used for metrics
Meter meter.Meter
// Context holds external options
Context context.Context
// TLSConfig holds tls.TLSConfig options
TLSConfig *tls.Config
// ErrorHandler used when broker can't unmarshal incoming message
ErrorHandler Handler
// BatchErrorHandler used when broker can't unmashal incoming messages
BatchErrorHandler BatchHandler
// Name holds the broker name
Name string
// Addrs holds the broker address
Addrs []string
// Wait waits for a collection of goroutines to finish
Wait *sync.WaitGroup
// GracefulTimeout contains time to wait to finish in flight requests
GracefulTimeout time.Duration
// TLSConfig holds tls.TLSConfig options
TLSConfig *tls.Config
// Addrs holds the broker address
Addrs []string
// Hooks can be run before broker Publish and Subscribe methods
Hooks options.Hooks
// GracefulTimeout contains time to wait to finish in flight requests
GracefulTimeout time.Duration
// ContentType will be used if no content-type set when creating message
ContentType string
}
// NewOptions create new Options
@@ -54,16 +57,22 @@ func NewOptions(opts ...Option) Options {
Logger: logger.DefaultLogger,
Context: context.Background(),
Meter: meter.DefaultMeter,
Codec: codec.DefaultCodec,
Codecs: make(map[string]codec.Codec),
Tracer: tracer.DefaultTracer,
GracefulTimeout: DefaultGracefulTimeout,
ContentType: DefaultContentType,
}
for _, o := range opts {
o(&options)
}
return options
}
// DefaultContentType is the default content-type if not specified
var DefaultContentType = ""
// Context sets the context option
func Context(ctx context.Context) Option {
return func(o *Options) {
@@ -71,12 +80,22 @@ func Context(ctx context.Context) Option {
}
}
// ContentType used by default if not specified
func ContentType(ct string) Option {
return func(o *Options) {
o.ContentType = ct
}
}
// PublishOptions struct
type PublishOptions struct {
// Context holds external options
Context context.Context
// BodyOnly flag says the message contains raw body bytes
// ContentType for message body
ContentType string
// BodyOnly flag says the message contains raw body bytes and don't need
// codec Marshal method
BodyOnly bool
// Context holds custom options
Context context.Context
}
// NewPublishOptions creates PublishOptions struct
@@ -94,10 +113,6 @@ func NewPublishOptions(opts ...PublishOption) PublishOptions {
type SubscribeOptions struct {
// Context holds external options
Context context.Context
// ErrorHandler used when broker can't unmarshal incoming message
ErrorHandler Handler
// BatchErrorHandler used when broker can't unmashal incoming messages
BatchErrorHandler BatchHandler
// Group holds consumer group
Group string
// AutoAck flag specifies auto ack of incoming message when no error happens
@@ -116,6 +131,13 @@ type Option func(*Options)
// PublishOption func
type PublishOption func(*PublishOptions)
// PublishContentType sets the message content-type used to Marshal the body
func PublishContentType(ct string) PublishOption {
return func(o *PublishOptions) {
o.ContentType = ct
}
}
// PublishBodyOnly publish only body of the message
func PublishBodyOnly(b bool) PublishOption {
return func(o *PublishOptions) {
@@ -123,13 +145,6 @@ func PublishBodyOnly(b bool) PublishOption {
}
}
// PublishContext sets the context
func PublishContext(ctx context.Context) PublishOption {
return func(o *PublishOptions) {
o.Context = ctx
}
}
// Addrs sets the host addresses to be used by the broker
func Addrs(addrs ...string) Option {
return func(o *Options) {
@@ -137,51 +152,10 @@ func Addrs(addrs ...string) Option {
}
}
// Codec sets the codec used for encoding/decoding used where
// a broker does not support headers
func Codec(c codec.Codec) Option {
// Codec sets the codec used for encoding/decoding messages
func Codec(ct string, c codec.Codec) Option {
return func(o *Options) {
o.Codec = c
}
}
// ErrorHandler will catch all broker errors that cant be handled
// in normal way, for example Codec errors
func ErrorHandler(h Handler) Option {
return func(o *Options) {
o.ErrorHandler = h
}
}
// BatchErrorHandler will catch all broker errors that cant be handled
// in normal way, for example Codec errors
func BatchErrorHandler(h BatchHandler) Option {
return func(o *Options) {
o.BatchErrorHandler = h
}
}
// SubscribeErrorHandler will catch all broker errors that cant be handled
// in normal way, for example Codec errors
func SubscribeErrorHandler(h Handler) SubscribeOption {
return func(o *SubscribeOptions) {
o.ErrorHandler = h
}
}
// SubscribeBatchErrorHandler will catch all broker errors that cant be handled
// in normal way, for example Codec errors
func SubscribeBatchErrorHandler(h BatchHandler) SubscribeOption {
return func(o *SubscribeOptions) {
o.BatchErrorHandler = h
}
}
// Queue sets the subscribers queue
// Deprecated
func Queue(name string) SubscribeOption {
return func(o *SubscribeOptions) {
o.Group = name
o.Codecs[ct] = c
}
}
@@ -248,14 +222,6 @@ func SubscribeContext(ctx context.Context) SubscribeOption {
}
}
// DisableAutoAck disables auto ack
// Deprecated
func DisableAutoAck() SubscribeOption {
return func(o *SubscribeOptions) {
o.AutoAck = false
}
}
// SubscribeAutoAck controls auto acking of messages
// after they have been handled.
func SubscribeAutoAck(b bool) SubscribeOption {
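To summarize the option changes: the single `Codec(c)` option becomes a per-content-type registration, `ContentType` sets the broker-wide default, and `PublishContentType` overrides it per message. A hedged sketch (`codec.NewCodec()` is only a placeholder codec implementation):

```go
func newConfiguredBroker(ctx context.Context) (broker.Message, error) {
	b := broker.NewBroker(
		broker.Codec("application/octet-stream", codec.NewCodec()), // codec registered per content-type
		broker.ContentType("application/octet-stream"),             // broker-wide default
	)
	// NewMessage selects the codec by the effective content-type.
	return b.NewMessage(ctx, metadata.Pairs("id", "1"), []byte(`"hello"`),
		broker.PublishContentType("application/octet-stream"))
}
```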

14
broker/subscriber.go Normal file
View File

@@ -0,0 +1,14 @@
package broker
// IsValidHandler checks that the subscribe handler has a supported func signature
func IsValidHandler(sub interface{}) error {
switch sub.(type) {
default:
return ErrInvalidHandler
case func(Message) error:
break
case func([]Message) error:
break
}
return nil
}
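In other words, only the two handler shapes below pass the check; anything else is rejected before a subscription is created (a small sketch):

```go
func checkHandlers() {
	single := func(m broker.Message) error { return m.Ack() }
	batch := func(ms []broker.Message) error { return nil }
	bad := func(s string) error { return nil }

	_ = broker.IsValidHandler(single) // nil
	_ = broker.IsValidHandler(batch)  // nil
	_ = broker.IsValidHandler(bad)    // broker.ErrInvalidHandler
}
```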

View File

@@ -5,7 +5,7 @@ import (
"math"
"time"
"go.unistack.org/micro/v3/util/backoff"
"go.unistack.org/micro/v4/util/backoff"
)
// BackoffFunc is the backoff call func
@@ -17,13 +17,13 @@ func BackoffExp(_ context.Context, _ Request, attempts int) (time.Duration, erro
}
// BackoffInterval specifies randomization interval for backoff func
func BackoffInterval(min time.Duration, max time.Duration) BackoffFunc {
func BackoffInterval(minTime time.Duration, maxTime time.Duration) BackoffFunc {
return func(_ context.Context, _ Request, attempts int) (time.Duration, error) {
td := time.Duration(math.Pow(float64(attempts), math.E)) * time.Millisecond * 100
if td < min {
return min, nil
} else if td > max {
return max, nil
if td < minTime {
return minTime, nil
} else if td > maxTime {
return maxTime, nil
}
return td, nil
}
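For intuition on the curve above: td = attempts^e × 100 ms, so attempt 1 yields 100 ms, attempt 2 roughly 0.66 s, attempt 3 roughly 1.98 s, and attempt 4 roughly 4.33 s, each value then clamped to the [minTime, maxTime] window (attempt 0 clamps up to minTime). Despite the doc comment, the interval acts as a clamp rather than a randomization range.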

View File

@@ -34,23 +34,23 @@ func TestBackoffExp(t *testing.T) {
}
func TestBackoffInterval(t *testing.T) {
min := 100 * time.Millisecond
max := 300 * time.Millisecond
minTime := 100 * time.Millisecond
maxTime := 300 * time.Millisecond
r := &testRequest{
service: "test",
method: "test",
}
fn := BackoffInterval(min, max)
fn := BackoffInterval(minTime, maxTime)
for i := 0; i < 5; i++ {
d, err := fn(context.TODO(), r, i)
if err != nil {
t.Fatal(err)
}
if d < min || d > max {
t.Fatalf("Expected %v < %v < %v", min, d, max)
if d < minTime || d > maxTime {
t.Fatalf("Expected %v < %v < %v", minTime, d, maxTime)
}
}
}

View File

@@ -1,12 +1,12 @@
// Package client is an interface for an RPC client
package client // import "go.unistack.org/micro/v3/client"
package client
import (
"context"
"time"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/metadata"
)
var (
@@ -29,40 +29,24 @@ var (
)
// Client is the interface used to make requests to services.
// It supports Request/Response via Transport and Publishing via the Broker.
// It also supports bidirectional streaming of requests.
type Client interface {
Name() string
Init(opts ...Option) error
Options() Options
NewMessage(topic string, msg interface{}, opts ...MessageOption) Message
NewRequest(service string, endpoint string, req interface{}, opts ...RequestOption) Request
Call(ctx context.Context, req Request, rsp interface{}, opts ...CallOption) error
Stream(ctx context.Context, req Request, opts ...CallOption) (Stream, error)
Publish(ctx context.Context, msg Message, opts ...PublishOption) error
BatchPublish(ctx context.Context, msg []Message, opts ...PublishOption) error
String() string
}
type (
FuncCall func(ctx context.Context, req Request, rsp interface{}, opts ...CallOption) error
HookCall func(next FuncCall) FuncCall
FuncStream func(ctx context.Context, req Request, opts ...CallOption) (Stream, error)
HookStream func(next FuncStream) FuncStream
FuncPublish func(ctx context.Context, msg Message, opts ...PublishOption) error
HookPublish func(next FuncPublish) FuncPublish
FuncBatchPublish func(ctx context.Context, msg []Message, opts ...PublishOption) error
HookBatchPublish func(next FuncBatchPublish) FuncBatchPublish
FuncCall func(ctx context.Context, req Request, rsp interface{}, opts ...CallOption) error
HookCall func(next FuncCall) FuncCall
FuncStream func(ctx context.Context, req Request, opts ...CallOption) (Stream, error)
HookStream func(next FuncStream) FuncStream
)
// Message is the interface for publishing asynchronously
type Message interface {
Topic() string
Payload() interface{}
ContentType() string
Metadata() metadata.Metadata
}
// Request is the interface for a synchronous request used by Call or Stream
type Request interface {
// The service to call
@@ -121,11 +105,5 @@ type Option func(*Options)
// CallOption used by Call or Stream
type CallOption func(*CallOptions)
// PublishOption used by Publish
type PublishOption func(*PublishOptions)
// MessageOption used by NewMessage
type MessageOption func(*MessageOptions)
// RequestOption used by NewRequest
type RequestOption func(*RequestOptions)
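With the publish-side types removed, client hooks now wrap only calls and streams. A hedged sketch of a call hook (the timing logic is illustrative; registration through the client's Hooks option is an assumption):

```go
// Hypothetical call hook measuring request latency; matches the HookCall shape.
func timingHook(next client.FuncCall) client.FuncCall {
	return func(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
		start := time.Now()
		err := next(ctx, req, rsp, opts...)
		_ = time.Since(start) // record latency, e.g. tagged with req.Endpoint()
		return err
	}
}
```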

View File

@@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Client, bool) {
return c, ok
}
// MustContext get client from context
func MustContext(ctx context.Context) Client {
c, ok := FromContext(ctx)
if !ok {
panic("missing client")
}
return c
}
// NewContext put client in context
func NewContext(ctx context.Context, c Client) context.Context {
if ctx == nil {
@@ -23,16 +32,6 @@ func NewContext(ctx context.Context, c Client) context.Context {
return context.WithValue(ctx, clientKey{}, c)
}
// SetPublishOption returns a function to setup a context with given value
func SetPublishOption(k, v interface{}) PublishOption {
return func(o *PublishOptions) {
if o.Context == nil {
o.Context = context.Background()
}
o.Context = context.WithValue(o.Context, k, v)
}
}
// SetCallOption returns a function to setup a context with given value
func SetCallOption(k, v interface{}) CallOption {
return func(o *CallOptions) {

View File

@@ -39,17 +39,6 @@ func TestNewNilContext(t *testing.T) {
}
}
func TestSetPublishOption(t *testing.T) {
type key struct{}
o := SetPublishOption(key{}, "test")
opts := &PublishOptions{}
o(opts)
if v, ok := opts.Context.Value(key{}).(string); !ok || v == "" {
t.Fatal("SetPublishOption not works")
}
}
func TestSetCallOption(t *testing.T) {
type key struct{}
o := SetCallOption(key{}, "test")

View File

@@ -4,8 +4,8 @@ import (
"context"
"sort"
"go.unistack.org/micro/v3/errors"
"go.unistack.org/micro/v3/router"
"go.unistack.org/micro/v4/errors"
"go.unistack.org/micro/v4/router"
)
// LookupFunc is used to lookup routes for a service

View File

@@ -3,18 +3,16 @@ package client
import (
"context"
"fmt"
"os"
"strconv"
"time"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/errors"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/selector"
"go.unistack.org/micro/v3/semconv"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/errors"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/options"
"go.unistack.org/micro/v4/selector"
"go.unistack.org/micro/v4/semconv"
"go.unistack.org/micro/v4/tracer"
)
// DefaultCodecs will be used to encode/decode data
@@ -23,17 +21,9 @@ var DefaultCodecs = map[string]codec.Codec{
}
type noopClient struct {
funcPublish FuncPublish
funcBatchPublish FuncBatchPublish
funcCall FuncCall
funcStream FuncStream
opts Options
}
type noopMessage struct {
topic string
payload interface{}
opts MessageOptions
funcCall FuncCall
funcStream FuncStream
opts Options
}
type noopRequest struct {
@@ -52,8 +42,6 @@ func NewClient(opts ...Option) Client {
n.funcCall = n.fnCall
n.funcStream = n.fnStream
n.funcPublish = n.fnPublish
n.funcBatchPublish = n.fnBatchPublish
return n
}
@@ -158,32 +146,6 @@ func (n *noopStream) CloseSend() error {
return n.err
}
func (n *noopMessage) Topic() string {
return n.topic
}
func (n *noopMessage) Payload() interface{} {
return n.payload
}
func (n *noopMessage) ContentType() string {
return n.opts.ContentType
}
func (n *noopMessage) Metadata() metadata.Metadata {
return n.opts.Metadata
}
func (n *noopClient) newCodec(contentType string) (codec.Codec, error) {
if cf, ok := n.opts.Codecs[contentType]; ok {
return cf, nil
}
if cf, ok := DefaultCodecs[contentType]; ok {
return cf, nil
}
return nil, codec.ErrUnknownContentType
}
func (n *noopClient) Init(opts ...Option) error {
for _, o := range opts {
o(&n.opts)
@@ -191,19 +153,13 @@ func (n *noopClient) Init(opts ...Option) error {
n.funcCall = n.fnCall
n.funcStream = n.fnStream
n.funcPublish = n.fnPublish
n.funcBatchPublish = n.fnBatchPublish
n.opts.Hooks.EachNext(func(hook options.Hook) {
n.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case HookCall:
n.funcCall = h(n.funcCall)
case HookStream:
n.funcStream = h(n.funcStream)
case HookPublish:
n.funcPublish = h(n.funcPublish)
case HookBatchPublish:
n.funcBatchPublish = h(n.funcBatchPublish)
}
})
@@ -222,7 +178,7 @@ func (n *noopClient) Call(ctx context.Context, req Request, rsp interface{}, opt
ts := time.Now()
n.opts.Meter.Counter(semconv.ClientRequestInflight, "endpoint", req.Endpoint()).Inc()
var sp tracer.Span
ctx, sp = n.opts.Tracer.Start(ctx, req.Endpoint()+" rpc-client",
ctx, sp = n.opts.Tracer.Start(ctx, "rpc-client",
tracer.WithSpanKind(tracer.SpanKindClient),
tracer.WithSpanLabels("endpoint", req.Endpoint()),
)
@@ -298,7 +254,7 @@ func (n *noopClient) fnCall(ctx context.Context, req Request, rsp interface{}, o
// call backoff first. Someone may want an initial start delay
t, err := callOpts.Backoff(ctx, req, i)
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
return errors.InternalServerError("go.micro.client", "%s", err)
}
// only sleep if greater than 0
@@ -312,7 +268,7 @@ func (n *noopClient) fnCall(ctx context.Context, req Request, rsp interface{}, o
// TODO apply any filtering here
routes, err = n.opts.Lookup(ctx, req, callOpts)
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
return errors.InternalServerError("go.micro.client", "%s", err)
}
// balance the list of nodes
@@ -372,20 +328,15 @@ func (n *noopClient) fnCall(ctx context.Context, req Request, rsp interface{}, o
return gerr
}
func (n *noopClient) NewRequest(service, endpoint string, req interface{}, opts ...RequestOption) Request {
func (n *noopClient) NewRequest(service, endpoint string, _ interface{}, _ ...RequestOption) Request {
return &noopRequest{service: service, endpoint: endpoint}
}
func (n *noopClient) NewMessage(topic string, msg interface{}, opts ...MessageOption) Message {
options := NewMessageOptions(append([]MessageOption{MessageContentType(n.opts.ContentType)}, opts...)...)
return &noopMessage{topic: topic, payload: msg, opts: options}
}
func (n *noopClient) Stream(ctx context.Context, req Request, opts ...CallOption) (Stream, error) {
ts := time.Now()
n.opts.Meter.Counter(semconv.ClientRequestInflight, "endpoint", req.Endpoint()).Inc()
var sp tracer.Span
ctx, sp = n.opts.Tracer.Start(ctx, req.Endpoint()+" rpc-client",
ctx, sp = n.opts.Tracer.Start(ctx, "rpc-client",
tracer.WithSpanKind(tracer.SpanKindClient),
tracer.WithSpanLabels("endpoint", req.Endpoint()),
)
@@ -466,7 +417,7 @@ func (n *noopClient) fnStream(ctx context.Context, req Request, opts ...CallOpti
// call backoff first. Someone may want an initial start delay
t, cerr := callOpts.Backoff(ctx, req, i)
if cerr != nil {
return nil, errors.InternalServerError("go.micro.client", cerr.Error())
return nil, errors.InternalServerError("go.micro.client", "%s", cerr)
}
// only sleep if greater than 0
@@ -480,7 +431,7 @@ func (n *noopClient) fnStream(ctx context.Context, req Request, opts ...CallOpti
// TODO apply any filtering here
routes, err = n.opts.Lookup(ctx, req, callOpts)
if err != nil {
return nil, errors.InternalServerError("go.micro.client", err.Error())
return nil, errors.InternalServerError("go.micro.client", "%s", err)
}
// balance the list of nodes
@@ -546,92 +497,6 @@ func (n *noopClient) fnStream(ctx context.Context, req Request, opts ...CallOpti
return nil, grr
}
func (n *noopClient) stream(ctx context.Context, addr string, req Request, opts CallOptions) (Stream, error) {
func (n *noopClient) stream(ctx context.Context, _ string, _ Request, _ CallOptions) (Stream, error) {
return &noopStream{ctx: ctx}, nil
}
func (n *noopClient) BatchPublish(ctx context.Context, ps []Message, opts ...PublishOption) error {
return n.funcBatchPublish(ctx, ps, opts...)
}
func (n *noopClient) fnBatchPublish(ctx context.Context, ps []Message, opts ...PublishOption) error {
return n.publish(ctx, ps, opts...)
}
func (n *noopClient) Publish(ctx context.Context, p Message, opts ...PublishOption) error {
return n.funcPublish(ctx, p, opts...)
}
func (n *noopClient) fnPublish(ctx context.Context, p Message, opts ...PublishOption) error {
return n.publish(ctx, []Message{p}, opts...)
}
func (n *noopClient) publish(ctx context.Context, ps []Message, opts ...PublishOption) error {
options := NewPublishOptions(opts...)
msgs := make([]*broker.Message, 0, len(ps))
// get proxy
exchange := ""
if v, ok := os.LookupEnv("MICRO_PROXY"); ok {
exchange = v
}
// get the exchange
if len(options.Exchange) > 0 {
exchange = options.Exchange
}
omd, ok := metadata.FromOutgoingContext(ctx)
if !ok {
omd = metadata.New(0)
}
for _, p := range ps {
md := metadata.Copy(omd)
md[metadata.HeaderContentType] = p.ContentType()
topic := p.Topic()
if len(exchange) > 0 {
topic = exchange
}
md[metadata.HeaderTopic] = topic
iter := p.Metadata().Iterator()
var k, v string
for iter.Next(&k, &v) {
md.Set(k, v)
}
var body []byte
// passed in raw data
if d, ok := p.Payload().(*codec.Frame); ok {
body = d.Data
} else {
// use codec for payload
cf, err := n.newCodec(p.ContentType())
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
}
// set the body
b, err := cf.Marshal(p.Payload())
if err != nil {
return errors.InternalServerError("go.micro.client", err.Error())
}
body = b
}
msgs = append(msgs, &broker.Message{Header: md, Body: body})
}
if len(msgs) == 1 {
return n.opts.Broker.Publish(ctx, msgs[0].Header[metadata.HeaderTopic], msgs[0],
broker.PublishContext(options.Context),
broker.PublishBodyOnly(options.BodyOnly),
)
}
return n.opts.Broker.BatchPublish(ctx, msgs,
broker.PublishContext(options.Context),
broker.PublishBodyOnly(options.BodyOnly),
)
}

View File

@@ -1,35 +0,0 @@
package client
import (
"context"
"testing"
)
type testHook struct {
f bool
}
func (t *testHook) Publish(fn FuncPublish) FuncPublish {
return func(ctx context.Context, msg Message, opts ...PublishOption) error {
t.f = true
return fn(ctx, msg, opts...)
}
}
func TestNoopHook(t *testing.T) {
h := &testHook{}
c := NewClient(Hooks(HookPublish(h.Publish)))
if err := c.Init(); err != nil {
t.Fatal(err)
}
if err := c.Publish(context.TODO(), c.NewMessage("", nil, MessageContentType("application/octet-stream"))); err != nil {
t.Fatal(err)
}
if !h.f {
t.Fatal("hook not works")
}
}

View File

@@ -6,24 +6,31 @@ import (
"net"
"time"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/network/transport"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/register"
"go.unistack.org/micro/v3/router"
"go.unistack.org/micro/v3/selector"
"go.unistack.org/micro/v3/selector/random"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/broker"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/meter"
"go.unistack.org/micro/v4/options"
"go.unistack.org/micro/v4/register"
"go.unistack.org/micro/v4/router"
"go.unistack.org/micro/v4/selector"
"go.unistack.org/micro/v4/selector/random"
"go.unistack.org/micro/v4/tracer"
)
// Options holds client options
type Options struct {
// Transport used for transfer messages
Transport transport.Transport
// Codecs map
Codecs map[string]codec.Codec
// Proxy is used for proxy requests
Proxy string
// ContentType is used to select codec
ContentType string
// Name is the client name
Name string
// Selector used to select needed address
Selector selector.Selector
// Logger used to log messages
@@ -38,31 +45,28 @@ type Options struct {
Context context.Context
// Router used to get route
Router router.Router
// TLSConfig specifies tls.Config for secure connection
TLSConfig *tls.Config
// Codecs map
Codecs map[string]codec.Codec
// Lookup func used to get destination addr
Lookup LookupFunc
// Proxy is used for proxy requests
Proxy string
// ContentType is used to select codec
ContentType string
// Name is the client name
Name string
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
// Wrappers contains wrappers
Wrappers []Wrapper
// Hooks can be run before broker Publish/BatchPublish and
// Subscribe/BatchSubscribe methods
Hooks options.Hooks
// CallOptions contains default CallOptions
CallOptions CallOptions
// PoolSize connection pool size
PoolSize int
// PoolTTL connection pool ttl
PoolTTL time.Duration
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
// Hooks can be run before client Call/Stream methods
Hooks options.Hooks
}
// NewCallOptions creates new call options struct
@@ -76,6 +80,16 @@ func NewCallOptions(opts ...CallOption) CallOptions {
// CallOptions holds client call options
type CallOptions struct {
// RequestMetadata holds additional metadata for call
RequestMetadata metadata.Metadata
// Network name
Network string
// Content-Type
ContentType string
// AuthToken string
AuthToken string
// Selector selects addr
Selector selector.Selector
// Context used for deadline
@@ -83,33 +97,30 @@ type CallOptions struct {
// Router used for route
Router router.Router
// Retry func used for retries
// ResponseMetadata holds additional metadata from call
ResponseMetadata *metadata.Metadata
Retry RetryFunc
// Backoff func used for backoff when retry
Backoff BackoffFunc
// Network name
Network string
// Content-Type
ContentType string
// AuthToken string
AuthToken string
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
// Address specifies static addr list
Address []string
// SelectOptions selector options
SelectOptions []selector.SelectOption
// StreamTimeout stream timeout
StreamTimeout time.Duration
// RequestTimeout request timeout
RequestTimeout time.Duration
// RequestMetadata holds additional metadata for call
RequestMetadata metadata.Metadata
// ResponseMetadata holds additional metadata from call
ResponseMetadata *metadata.Metadata
// DialTimeout dial timeout
DialTimeout time.Duration
// Retries specifies retries num
Retries int
// ContextDialer used to connect
ContextDialer func(context.Context, string) (net.Conn, error)
}
// ContextDialer pass ContextDialer to client
@@ -126,43 +137,6 @@ func Context(ctx context.Context) Option {
}
}
// NewPublishOptions create new PublishOptions struct from option
func NewPublishOptions(opts ...PublishOption) PublishOptions {
options := PublishOptions{}
for _, o := range opts {
o(&options)
}
return options
}
// PublishOptions holds publish options
type PublishOptions struct {
// Context used for external options
Context context.Context
// Exchange topic exchange name
Exchange string
// BodyOnly will publish only message body
BodyOnly bool
}
// NewMessageOptions creates message options struct
func NewMessageOptions(opts ...MessageOption) MessageOptions {
options := MessageOptions{Metadata: metadata.New(1)}
for _, o := range opts {
o(&options)
}
return options
}
// MessageOptions holds client message options
type MessageOptions struct {
// Metadata additional metadata
Metadata metadata.Metadata
// ContentType specify content-type of message
// deprecated
ContentType string
}
// NewRequestOptions creates new RequestOptions struct
func NewRequestOptions(opts ...RequestOption) RequestOptions {
options := RequestOptions{}
@@ -194,18 +168,16 @@ func NewOptions(opts ...Option) Options {
Retry: DefaultRetry,
Retries: DefaultRetries,
RequestTimeout: DefaultRequestTimeout,
DialTimeout: transport.DefaultDialTimeout,
},
Lookup: LookupRoute,
PoolSize: DefaultPoolSize,
PoolTTL: DefaultPoolTTL,
Selector: random.NewSelector(),
Logger: logger.DefaultLogger,
Broker: broker.DefaultBroker,
Meter: meter.DefaultMeter,
Tracer: tracer.DefaultTracer,
Router: router.DefaultRouter,
Transport: transport.DefaultTransport,
Lookup: LookupRoute,
PoolSize: DefaultPoolSize,
PoolTTL: DefaultPoolTTL,
Selector: random.NewSelector(),
Logger: logger.DefaultLogger,
Broker: broker.DefaultBroker,
Meter: meter.DefaultMeter,
Tracer: tracer.DefaultTracer,
Router: router.DefaultRouter,
}
for _, o := range opts {
@@ -278,13 +250,6 @@ func PoolTTL(d time.Duration) Option {
}
}
// Transport to use for communication e.g http, rabbitmq, etc
func Transport(t transport.Transport) Option {
return func(o *Options) {
o.Transport = t
}
}
// Register sets the routers register
func Register(r register.Register) Option {
return func(o *Options) {
@@ -334,14 +299,6 @@ func TLSConfig(t *tls.Config) Option {
return func(o *Options) {
// set the internal tls
o.TLSConfig = t
// set the default transport if one is not
// already set. Required for Init call below.
// set the transport tls
_ = o.Transport.Init(
transport.TLSConfig(t),
)
}
}
@@ -380,43 +337,6 @@ func DialTimeout(d time.Duration) Option {
}
}
// WithExchange sets the exchange to route a message through
// Deprecated
func WithExchange(e string) PublishOption {
return func(o *PublishOptions) {
o.Exchange = e
}
}
// PublishExchange sets the exchange to route a message through
func PublishExchange(e string) PublishOption {
return func(o *PublishOptions) {
o.Exchange = e
}
}
// WithBodyOnly publish only message body
// DERECATED
func WithBodyOnly(b bool) PublishOption {
return func(o *PublishOptions) {
o.BodyOnly = b
}
}
// PublishBodyOnly publishes only the message body
func PublishBodyOnly(b bool) PublishOption {
return func(o *PublishOptions) {
o.BodyOnly = b
}
}
// PublishContext sets the context in publish options
func PublishContext(ctx context.Context) PublishOption {
return func(o *PublishOptions) {
o.Context = ctx
}
}
// WithContextDialer passes a ContextDialer to the client call
func WithContextDialer(fn func(context.Context, string) (net.Conn, error)) CallOption {
return func(o *CallOptions) {
@@ -507,13 +427,6 @@ func WithAuthToken(t string) CallOption {
}
}
// WithNetwork is a CallOption which sets the network attribute
func WithNetwork(n string) CallOption {
return func(o *CallOptions) {
o.Network = n
}
}
// WithRouter sets the router to use for this call
func WithRouter(r router.Router) CallOption {
return func(o *CallOptions) {
@@ -535,30 +448,6 @@ func WithSelectOptions(sops ...selector.SelectOption) CallOption {
}
}
// WithMessageContentType sets the message content type
// Deprecated
func WithMessageContentType(ct string) MessageOption {
return func(o *MessageOptions) {
o.Metadata.Set(metadata.HeaderContentType, ct)
o.ContentType = ct
}
}
// MessageContentType sets the message content type
func MessageContentType(ct string) MessageOption {
return func(o *MessageOptions) {
o.Metadata.Set(metadata.HeaderContentType, ct)
o.ContentType = ct
}
}
// MessageMetadata sets the message metadata
func MessageMetadata(k, v string) MessageOption {
return func(o *MessageOptions) {
o.Metadata.Set(k, v)
}
}
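As a quick reference for how the publish and message option constructors shown here compose, a minimal sketch follows; it assumes the package is imported as client from go.unistack.org/micro/v4/client, and the exchange name, content type and metadata key/value are illustrative placeholders only.

func exampleOptions() {
	popts := client.NewPublishOptions(
		client.PublishExchange("events"), // route the message through the "events" exchange
		client.PublishBodyOnly(true),     // publish only the message body
	)
	mopts := client.NewMessageOptions(
		client.MessageContentType("application/json"),
		client.MessageMetadata("x-example-key", "example-value"),
	)
	_, _ = popts, mopts
}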
// StreamingRequest specifies that request is streaming
func StreamingRequest(b bool) RequestOption {
return func(o *RequestOptions) {

View File

@@ -3,7 +3,7 @@ package client
import (
"context"
"go.unistack.org/micro/v3/errors"
"go.unistack.org/micro/v4/errors"
)
// RetryFunc is the retry decision function: returning either false or a non-nil error will result in the call not being retried

View File

@@ -5,7 +5,7 @@ import (
"fmt"
"testing"
"go.unistack.org/micro/v3/errors"
"go.unistack.org/micro/v4/errors"
)
func TestRetryAlways(t *testing.T) {

View File

@@ -1,7 +1,7 @@
package client
import (
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v4/codec"
)
type testRequest struct {

View File

@@ -3,7 +3,7 @@ package cluster
import (
"context"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v4/metadata"
)
// Message sent to member in cluster
@@ -38,4 +38,10 @@ type Cluster interface {
Broadcast(ctx context.Context, msg Message, filter ...string) error
// Unicast send message to single member in cluster
Unicast(ctx context.Context, node Node, msg Message) error
// Live returns cluster liveness
Live() bool
// Ready returns cluster readiness
Ready() bool
// Health returns cluster health
Health() bool
}
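A small hedged sketch of how the new Live, Ready and Health accessors might be consumed, for example by a probe handler; the helper name and the cluster value c are assumptions, not part of this change.

// probeOK reports whether the cluster member should be considered healthy.
func probeOK(c cluster.Cluster) bool {
	// a member is considered healthy only when all three signals agree
	return c.Live() && c.Ready() && c.Health()
}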

View File

@@ -1,19 +1,8 @@
// Package codec is an interface for encoding messages
package codec // import "go.unistack.org/micro/v3/codec"
package codec
import (
"errors"
"io"
"go.unistack.org/micro/v3/metadata"
)
// Message types
const (
Error MessageType = iota
Request
Response
Event
)
var (
@@ -24,65 +13,23 @@ var (
)
var (
// DefaultMaxMsgSize specifies how much data codec can handle
DefaultMaxMsgSize = 1024 * 1024 * 4 // 4Mb
// DefaultCodec is the global default codec
DefaultCodec = NewCodec()
// DefaultTagName specifies struct tag name to control codec Marshal/Unmarshal
DefaultTagName = "codec"
)
// MessageType specifies message type for codec
type MessageType int
// Codec encodes/decodes various types of messages used within micro.
// ReadHeader and ReadBody are called in pairs to read requests/responses
// from the connection. Close is called when finished with the
// connection. ReadBody may be called with a nil argument to force the
// body to be read and discarded.
// Codec encodes/decodes various types of messages.
type Codec interface {
ReadHeader(r io.Reader, m *Message, mt MessageType) error
ReadBody(r io.Reader, v interface{}) error
Write(w io.Writer, m *Message, v interface{}) error
Marshal(v interface{}, opts ...Option) ([]byte, error)
Unmarshal(b []byte, v interface{}, opts ...Option) error
String() string
}
// Message represents detailed information about
// the communication, likely followed by the body.
// In the case of an error, body may be nil.
type Message struct {
Header metadata.Metadata
Target string
Method string
Endpoint string
Error string
ID string
Body []byte
Type MessageType
}
// NewMessage creates new codec message
func NewMessage(t MessageType) *Message {
return &Message{Type: t, Header: metadata.New(0)}
}
// MarshalAppend calls codec.Marshal(v) and returns the data appended to buf.
// If codec implements MarshalAppend, that is called instead.
func MarshalAppend(buf []byte, c Codec, v interface{}, opts ...Option) ([]byte, error) {
if nc, ok := c.(interface {
MarshalAppend([]byte, interface{}, ...Option) ([]byte, error)
}); ok {
return nc.MarshalAppend(buf, v, opts...)
}
mbuf, err := c.Marshal(v, opts...)
if err != nil {
return nil, err
}
return append(buf, mbuf...), nil
type CodecV2 interface {
Marshal(buf []byte, v interface{}, opts ...Option) ([]byte, error)
Unmarshal(buf []byte, v interface{}, opts ...Option) error
String() string
}
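The V2 interface takes the destination buffer as the first argument so callers can reuse one allocation across messages. A minimal sketch under the assumption that Marshal returns the encoded data appended to the supplied buffer; the helper and the send callback are illustrative.

func encodeAll(c codec.CodecV2, msgs []interface{}, send func([]byte) error) error {
	buf := make([]byte, 0, 4096) // reused for every message to avoid per-call allocations
	for _, msg := range msgs {
		var err error
		if buf, err = c.Marshal(buf[:0], msg); err != nil {
			return err
		}
		if err = send(buf); err != nil {
			return err
		}
	}
	return nil
}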
// RawMessage is a raw encoded JSON value.
@@ -93,6 +40,8 @@ type RawMessage []byte
func (m *RawMessage) MarshalJSON() ([]byte, error) {
if m == nil {
return []byte("null"), nil
} else if len(*m) == 0 {
return []byte("null"), nil
}
return *m, nil
}
@@ -105,3 +54,22 @@ func (m *RawMessage) UnmarshalJSON(data []byte) error {
*m = append((*m)[0:0], data...)
return nil
}
// MarshalYAML returns m as the YAML encoding of m.
func (m *RawMessage) MarshalYAML() ([]byte, error) {
if m == nil {
return []byte("null"), nil
} else if len(*m) == 0 {
return []byte("null"), nil
}
return *m, nil
}
// UnmarshalYAML sets *m to a copy of data.
func (m *RawMessage) UnmarshalYAML(data []byte) error {
if m == nil {
return errors.New("RawMessage UnmarshalYAML on nil pointer")
}
*m = append((*m)[0:0], data...)
return nil
}
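RawMessage behaves like json.RawMessage: it captures the raw bytes so decoding can be deferred until the concrete type is known. A short sketch, assuming encoding/json is imported and with a purely illustrative envelope layout.

func decodeEnvelope(b []byte) (codec.RawMessage, error) {
	var env struct {
		Kind string           `json:"kind"`
		Data codec.RawMessage `json:"data"`
	}
	if err := json.Unmarshal(b, &env); err != nil {
		return nil, err
	}
	// env.Data still holds the undecoded bytes and can be unmarshaled later
	// into whatever concrete type env.Kind selects.
	return env.Data, nil
}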

View File

@@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Codec, bool) {
return c, ok
}
// MustContext returns codec from context
func MustContext(ctx context.Context) Codec {
c, ok := FromContext(ctx)
if !ok {
panic("missing codec")
}
return c
}
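A quick sketch of the context helpers here (FromContext, MustContext and NewContext below); MustContext is only safe once NewContext has stored a codec somewhere up the call chain, otherwise it panics.

func withCodec(ctx context.Context) codec.Codec {
	ctx = codec.NewContext(ctx, codec.NewCodec()) // stash a codec in the context
	return codec.MustContext(ctx)                 // retrieve it, panicking with "missing codec" if absent
}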
// NewContext puts codec in context
func NewContext(ctx context.Context, c Codec) context.Context {
if ctx == nil {

View File

@@ -20,6 +20,17 @@ func (m *Frame) UnmarshalJSON(data []byte) error {
return m.Unmarshal(data)
}
// MarshalYAML returns frame data
func (m *Frame) MarshalYAML() ([]byte, error) {
return m.Marshal()
}
// UnmarshalYAML sets frame data
func (m *Frame) UnmarshalYAML(data []byte) error {
m.Data = append((m.Data)[0:0], data...)
return nil
}
// ProtoMessage noop func
func (m *Frame) ProtoMessage() {}

View File

@@ -17,7 +17,7 @@ syntax = "proto3";
package micro.codec;
option cc_enable_arenas = true;
option go_package = "go.unistack.org/micro/v3/codec;codec";
option go_package = "go.unistack.org/micro/v4/codec;codec";
option java_multiple_files = true;
option java_outer_classname = "MicroCodec";
option java_package = "micro.codec";

View File

@@ -2,70 +2,14 @@ package codec
import (
"encoding/json"
"io"
codecpb "go.unistack.org/micro-proto/v4/codec"
)
type noopCodec struct {
opts Options
}
func (c *noopCodec) ReadHeader(conn io.Reader, m *Message, t MessageType) error {
return nil
}
func (c *noopCodec) ReadBody(conn io.Reader, b interface{}) error {
// read bytes
buf, err := io.ReadAll(conn)
if err != nil {
return err
}
if b == nil {
return nil
}
switch v := b.(type) {
case *string:
*v = string(buf)
case *[]byte:
*v = buf
case *Frame:
v.Data = buf
default:
return json.Unmarshal(buf, v)
}
return nil
}
func (c *noopCodec) Write(conn io.Writer, m *Message, b interface{}) error {
if b == nil {
return nil
}
var v []byte
switch vb := b.(type) {
case *Frame:
v = vb.Data
case string:
v = []byte(vb)
case *string:
v = []byte(*vb)
case *[]byte:
v = *vb
case []byte:
v = vb
default:
var err error
v, err = json.Marshal(vb)
if err != nil {
return err
}
}
_, err := conn.Write(v)
return err
}
func (c *noopCodec) String() string {
return "noop"
}
@@ -91,8 +35,8 @@ func (c *noopCodec) Marshal(v interface{}, opts ...Option) ([]byte, error) {
return ve, nil
case *Frame:
return ve.Data, nil
case *Message:
return ve.Body, nil
case *codecpb.Frame:
return ve.Data, nil
}
return json.Marshal(v)
@@ -115,8 +59,8 @@ func (c *noopCodec) Unmarshal(d []byte, v interface{}, opts ...Option) error {
case *Frame:
ve.Data = d
return nil
case *Message:
ve.Body = d
case *codecpb.Frame:
ve.Data = d
return nil
}
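The noop codec is essentially a pass-through: Frame values keep their raw bytes while anything else falls back to encoding/json. A hedged sketch, assuming NewCodec constructs this noop codec (as the DefaultCodec assignment earlier suggests).

func roundTripFrame() {
	c := codec.NewCodec()
	raw, _ := c.Marshal(&codec.Frame{Data: []byte(`{"already":"encoded"}`)}) // Frame bytes are returned as-is
	var f codec.Frame
	_ = c.Unmarshal(raw, &f) // f.Data now holds the same raw bytes
}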

View File

@@ -3,9 +3,9 @@ package codec
import (
"context"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/meter"
"go.unistack.org/micro/v4/tracer"
)
// Option func
@@ -23,15 +23,8 @@ type Options struct {
Context context.Context
// TagName specifies tag name in struct to control codec
TagName string
// MaxMsgSize specifies max messages size that reads by codec
MaxMsgSize int
}
// MaxMsgSize sets the max message size
func MaxMsgSize(n int) Option {
return func(o *Options) {
o.MaxMsgSize = n
}
// Flatten specifies that struct must be analyzed for flatten tag
Flatten bool
}
// TagName sets the codec tag name in struct
@@ -41,6 +34,13 @@ func TagName(n string) Option {
}
}
// Flatten enables checking for flatten tag name
func Flatten(b bool) Option {
return func(o *Options) {
o.Flatten = b
}
}
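For completeness, a short sketch of composing codec options after this change, now that MaxMsgSize is gone and TagName and Flatten remain; the tag name value is illustrative.

func exampleCodecOptions() codec.Options {
	return codec.NewOptions(
		codec.TagName("codec"), // struct tag consulted during Marshal/Unmarshal
		codec.Flatten(true),    // also honor the flatten tag
	)
}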
// Logger sets the logger
func Logger(l logger.Logger) Option {
return func(o *Options) {
@@ -65,12 +65,12 @@ func Meter(m meter.Meter) Option {
// NewOptions returns new options
func NewOptions(opts ...Option) Options {
options := Options{
Context: context.Background(),
Logger: logger.DefaultLogger,
Meter: meter.DefaultMeter,
Tracer: tracer.DefaultTracer,
MaxMsgSize: DefaultMaxMsgSize,
TagName: DefaultTagName,
Context: context.Background(),
Logger: logger.DefaultLogger,
Meter: meter.DefaultMeter,
Tracer: tracer.DefaultTracer,
TagName: DefaultTagName,
Flatten: false,
}
for _, o := range opts {

View File

@@ -1,5 +1,5 @@
// Package config is an interface for dynamic configuration.
package config // import "go.unistack.org/micro/v3/config"
package config
import (
"context"
@@ -13,7 +13,7 @@ type Validator interface {
}
// DefaultConfig default config
var DefaultConfig Config = NewConfig()
var DefaultConfig = NewConfig()
// DefaultWatcherMinInterval default min interval for poll changes
var DefaultWatcherMinInterval = 5 * time.Second
@@ -138,7 +138,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s BeforeLoad err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" BeforeLoad error", err)
if !c.Options().AllowFail {
return err
}
@@ -153,7 +153,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s AfterLoad err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" AfterLoad error", err)
if !c.Options().AllowFail {
return err
}
@@ -168,7 +168,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s BeforeSave err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" BeforeSave error", err)
if !c.Options().AllowFail {
return err
}
@@ -183,7 +183,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s AfterSave err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" AfterSave error", err)
if !c.Options().AllowFail {
return err
}
@@ -198,7 +198,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s BeforeInit err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" BeforeInit error", err)
if !c.Options().AllowFail {
return err
}
@@ -213,7 +213,7 @@ var (
return nil
}
if err := fn(ctx, c); err != nil {
c.Options().Logger.Errorf(ctx, "%s AfterInit err: %v", c.String(), err)
c.Options().Logger.Error(ctx, c.String()+" AfterInit error", err)
if !c.Options().AllowFail {
return err
}

View File

@@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Config, bool) {
return c, ok
}
// MustContext returns config from context
func MustContext(ctx context.Context) Config {
c, ok := FromContext(ctx)
if !ok {
panic("missing config")
}
return c
}
// NewContext puts config in context
func NewContext(ctx context.Context, c Config) context.Context {
if ctx == nil {

View File

@@ -9,10 +9,10 @@ import (
"dario.cat/mergo"
"github.com/google/uuid"
"go.unistack.org/micro/v3/options"
mid "go.unistack.org/micro/v3/util/id"
rutil "go.unistack.org/micro/v3/util/reflect"
mtime "go.unistack.org/micro/v3/util/time"
"go.unistack.org/micro/v4/options"
mid "go.unistack.org/micro/v4/util/id"
rutil "go.unistack.org/micro/v4/util/reflect"
mtime "go.unistack.org/micro/v4/util/time"
)
type defaultConfig struct {
@@ -37,7 +37,7 @@ func (c *defaultConfig) Init(opts ...Option) error {
c.funcLoad = c.fnLoad
c.funcSave = c.fnSave
c.opts.Hooks.EachNext(func(hook options.Hook) {
c.opts.Hooks.EachPrev(func(hook options.Hook) {
switch h := hook.(type) {
case HookLoad:
c.funcLoad = h(c.funcLoad)
@@ -210,7 +210,7 @@ func fillValue(value reflect.Value, val string) error {
return err
}
value.Set(reflect.ValueOf(v))
case value.Type().String() == "time.Duration" && value.Type().PkgPath() == "go.unistack.org/micro/v3/util/time":
case value.Type().String() == "time.Duration" && value.Type().PkgPath() == "go.unistack.org/micro/v4/util/time":
v, err := mtime.ParseDuration(val)
if err != nil {
return err

View File

@@ -3,24 +3,26 @@ package config_test
import (
"context"
"fmt"
"reflect"
"testing"
"time"
"go.unistack.org/micro/v3/config"
mid "go.unistack.org/micro/v3/util/id"
mtime "go.unistack.org/micro/v3/util/time"
"go.unistack.org/micro/v4/config"
mtime "go.unistack.org/micro/v4/util/time"
)
type cfg struct {
StringValue string `default:"string_value"`
IgnoreValue string `json:"-"`
StructValue *cfgStructValue
IntValue int `default:"99"`
DurationValue time.Duration `default:"10s"`
MDurationValue mtime.Duration `default:"10s"`
MapValue map[string]bool `default:"key1=true,key2=false"`
UUIDValue string `default:"micro:generate uuid"`
IDValue string `default:"micro:generate id"`
MapValue map[string]bool `default:"key1=true,key2=false"`
StructValue *cfgStructValue
StringValue string `default:"string_value"`
IgnoreValue string `json:"-"`
UUIDValue string `default:"micro:generate uuid"`
IDValue string `default:"micro:generate id"`
DurationValue time.Duration `default:"10s"`
MDurationValue mtime.Duration `default:"10s"`
IntValue int `default:"99"`
}
type cfgStructValue struct {
@@ -112,8 +114,6 @@ func TestDefault(t *testing.T) {
if conf.IDValue == "" {
t.Fatalf("id value empty")
} else if len(conf.IDValue) != mid.DefaultSize {
t.Fatalf("id value invalid: %s", conf.IDValue)
}
_ = conf
// t.Logf("%#+v\n", conf)
@@ -134,3 +134,13 @@ func TestValidate(t *testing.T) {
t.Fatal(err)
}
}
func Test_SizeOf(t *testing.T) {
st := cfg{}
tVal := reflect.TypeOf(st)
for i := 0; i < tVal.NumField(); i++ {
field := tVal.Field(i)
fmt.Printf("Field: %s, Offset: %d, Size: %d\n", field.Name, field.Offset, field.Type.Size())
}
}

View File

@@ -4,11 +4,11 @@ import (
"context"
"time"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/options"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/meter"
"go.unistack.org/micro/v4/options"
"go.unistack.org/micro/v4/tracer"
)
// Options hold the config options
@@ -41,14 +41,16 @@ type Options struct {
BeforeInit []func(context.Context, Config) error
// AfterInit contains slice of funcs that runs after Init
AfterInit []func(context.Context, Config) error
// AllowFail flag to allow fail in config source
AllowFail bool
// SkipLoad runs only if condition returns true
SkipLoad func(context.Context, Config) bool
// SkipSave runs only if condition returns true
SkipSave func(context.Context, Config) bool
// Hooks can be run before/after config Save/Load
Hooks options.Hooks
// AllowFail flag to allow fail in config source
AllowFail bool
}
// Option function signature
@@ -278,10 +280,10 @@ func WatchCoalesce(b bool) WatchOption {
}
// WatchInterval specifies min and max time.Duration for pulling changes
func WatchInterval(min, max time.Duration) WatchOption {
func WatchInterval(minTime, maxTime time.Duration) WatchOption {
return func(o *WatchOptions) {
o.MinInterval = min
o.MaxInterval = max
o.MinInterval = minTime
o.MaxInterval = maxTime
}
}

View File

@@ -1,6 +1,6 @@
// Package errors provides a way to return detailed information
// for an RPC request error. The error is normally JSON encoded.
package errors // import "go.unistack.org/micro/v3/errors"
package errors
import (
"bytes"
@@ -44,6 +44,20 @@ var (
ErrGatewayTimeout = &Error{Code: 504}
)
const ProblemContentType = "application/problem+json"
type Problem struct {
Type string `json:"type,omitempty"`
Title string `json:"title,omitempty"`
Detail string `json:"detail,omitempty"`
Instance string `json:"instance,omitempty"`
Errors []struct {
Title string `json:"title,omitempty"`
Detail string `json:"detail,omitempty"`
} `json:"errors,omitempty"`
Status int `json:"status,omitempty"`
}
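Problem models an RFC 7807 style payload to be served with the application/problem+json content type declared above. A hedged sketch of producing one; the net/http and encoding/json imports and all field values are assumptions made for illustration.

func writeProblem(w http.ResponseWriter) {
	p := &errors.Problem{
		Type:   "https://example.com/problems/invalid-request",
		Title:  "Invalid request",
		Detail: "field user_id must not be empty",
		Status: 400,
	}
	buf, _ := json.Marshal(p)
	w.Header().Set("Content-Type", errors.ProblemContentType)
	w.WriteHeader(p.Status)
	_, _ = w.Write(buf)
}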
// Error type
type Error struct {
// ID holds error id or service, usually something like my_service or id

View File

@@ -17,7 +17,7 @@ syntax = "proto3";
package micro.errors;
option cc_enable_arenas = true;
option go_package = "go.unistack.org/micro/v3/errors;errors";
option go_package = "go.unistack.org/micro/v4/errors;errors";
option java_multiple_files = true;
option java_outer_classname = "MicroErrors";
option java_package = "micro.errors";

View File

@@ -2,7 +2,7 @@ package errors
import (
"encoding/json"
er "errors"
"errors"
"fmt"
"net/http"
"testing"
@@ -26,7 +26,7 @@ func TestMarshalJSON(t *testing.T) {
func TestEmpty(t *testing.T) {
msg := "test"
var err *Error
err = FromError(fmt.Errorf(msg))
err = FromError(errors.New(msg))
if err.Detail != msg {
t.Fatalf("invalid error %v", err)
}
@@ -42,7 +42,7 @@ func TestFromError(t *testing.T) {
if merr.ID != "go.micro.test" || merr.Code != 404 {
t.Fatalf("invalid conversation %v != %v", err, merr)
}
err = er.New(err.Error())
err = errors.New(err.Error())
merr = FromError(err)
if merr.ID != "go.micro.test" || merr.Code != 404 {
t.Fatalf("invalid conversation %v != %v", err, merr)
@@ -57,7 +57,7 @@ func TestEqual(t *testing.T) {
t.Fatal("errors must be equal")
}
err3 := er.New("my test err")
err3 := errors.New("my test err")
if Equal(err1, err3) {
t.Fatal("errors must be not equal")
}

View File

@@ -1,27 +0,0 @@
package micro
import (
"context"
"go.unistack.org/micro/v3/client"
)
// Event is used to publish messages to a topic
type Event interface {
// Publish publishes a message to the event topic
Publish(ctx context.Context, msg interface{}, opts ...client.PublishOption) error
}
type event struct {
c client.Client
topic string
}
// NewEvent creates a new event publisher
func NewEvent(topic string, c client.Client) Event {
return &event{c, topic}
}
func (e *event) Publish(ctx context.Context, msg interface{}, opts ...client.PublishOption) error {
return e.c.Publish(ctx, e.c.NewMessage(e.topic, msg), opts...)
}

View File

@@ -1,3 +1,5 @@
//go:build ignore
package flow
import (

View File

@@ -1,17 +1,19 @@
//go:build ignore
package flow
import (
"context"
"fmt"
"path/filepath"
"sync"
"github.com/silas/dag"
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/store"
"go.unistack.org/micro/v3/util/id"
"github.com/heimdalr/dag"
"go.unistack.org/micro/v4/client"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/store"
"go.unistack.org/micro/v4/util/id"
)
type microFlow struct {
@@ -20,7 +22,7 @@ type microFlow struct {
type microWorkflow struct {
opts Options
g *dag.AcyclicGraph
g *dag.DAG
steps map[string]Step
id string
status Status
@@ -32,20 +34,20 @@ func (w *microWorkflow) ID() string {
return w.id
}
func (w *microWorkflow) Steps() ([][]Step, error) {
return w.getSteps("", false)
}
func (w *microWorkflow) Status() Status {
return w.status
}
func (w *microWorkflow) AppendSteps(steps ...Step) error {
var err error
w.Lock()
defer w.Unlock()
for _, s := range steps {
w.steps[s.String()] = s
w.g.Add(s)
if _, err = w.g.AddVertex(s); err != nil {
return err
}
}
for _, dst := range steps {
@@ -54,18 +56,13 @@ func (w *microWorkflow) AppendSteps(steps ...Step) error {
if !ok {
return ErrStepNotExists
}
w.g.Connect(dag.BasicEdge(src, dst))
if err = w.g.AddEdge(src.String(), dst.String()); err != nil {
return err
}
}
}
if err := w.g.Validate(); err != nil {
w.Unlock()
return err
}
w.g.TransitiveReduction()
w.Unlock()
w.g.ReduceTransitively()
return nil
}
@@ -74,10 +71,11 @@ func (w *microWorkflow) RemoveSteps(steps ...Step) error {
// TODO: handle case when some step requires or required by removed step
w.Lock()
defer w.Unlock()
for _, s := range steps {
delete(w.steps, s.String())
w.g.Remove(s)
w.g.DeleteVertex(s.String())
}
for _, dst := range steps {
@@ -86,91 +84,34 @@ func (w *microWorkflow) RemoveSteps(steps ...Step) error {
if !ok {
return ErrStepNotExists
}
w.g.Connect(dag.BasicEdge(src, dst))
w.g.AddEdge(src.String(), dst.String())
}
}
if err := w.g.Validate(); err != nil {
w.Unlock()
return err
}
w.g.TransitiveReduction()
w.Unlock()
w.g.ReduceTransitively()
return nil
}
func (w *microWorkflow) getSteps(start string, reverse bool) ([][]Step, error) {
var steps [][]Step
var root dag.Vertex
var err error
fn := func(n dag.Vertex, idx int) error {
if idx == 0 {
steps = make([][]Step, 1)
steps[0] = make([]Step, 0, 1)
} else if idx >= len(steps) {
tsteps := make([][]Step, idx+1)
copy(tsteps, steps)
steps = tsteps
steps[idx] = make([]Step, 0, 1)
}
steps[idx] = append(steps[idx], n.(Step))
return nil
}
if start != "" {
var ok bool
w.RLock()
root, ok = w.steps[start]
w.RUnlock()
if !ok {
return nil, ErrStepNotExists
}
} else {
root, err = w.g.Root()
if err != nil {
return nil, err
}
}
if reverse {
err = w.g.SortedReverseDepthFirstWalk([]dag.Vertex{root}, fn)
} else {
err = w.g.SortedDepthFirstWalk([]dag.Vertex{root}, fn)
}
if err != nil {
return nil, err
}
return steps, nil
}
func (w *microWorkflow) Abort(ctx context.Context, id string) error {
workflowStore := store.NewNamespaceStore(w.opts.Store, "workflows"+w.opts.Store.Options().Separator+id)
workflowStore := store.NewNamespaceStore(w.opts.Store, filepath.Join("workflows", id))
return workflowStore.Write(ctx, "status", &codec.Frame{Data: []byte(StatusAborted.String())})
}
func (w *microWorkflow) Suspend(ctx context.Context, id string) error {
workflowStore := store.NewNamespaceStore(w.opts.Store, "workflows"+w.opts.Store.Options().Separator+id)
workflowStore := store.NewNamespaceStore(w.opts.Store, filepath.Join("workflows", id))
return workflowStore.Write(ctx, "status", &codec.Frame{Data: []byte(StatusSuspend.String())})
}
func (w *microWorkflow) Resume(ctx context.Context, id string) error {
workflowStore := store.NewNamespaceStore(w.opts.Store, "workflows"+w.opts.Store.Options().Separator+id)
workflowStore := store.NewNamespaceStore(w.opts.Store, filepath.Join("workflows", id))
return workflowStore.Write(ctx, "status", &codec.Frame{Data: []byte(StatusRunning.String())})
}
func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...ExecuteOption) (string, error) {
w.Lock()
if !w.init {
if err := w.g.Validate(); err != nil {
w.Unlock()
return "", err
}
w.g.TransitiveReduction()
w.g.ReduceTransitively()
w.init = true
}
w.Unlock()
@@ -180,26 +121,11 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
return "", err
}
stepStore := store.NewNamespaceStore(w.opts.Store, "steps"+w.opts.Store.Options().Separator+eid)
workflowStore := store.NewNamespaceStore(w.opts.Store, "workflows"+w.opts.Store.Options().Separator+eid)
// stepStore := store.NewNamespaceStore(w.opts.Store, filepath.Join("steps", eid))
workflowStore := store.NewNamespaceStore(w.opts.Store, filepath.Join("workflows", eid))
options := NewExecuteOptions(opts...)
steps, err := w.getSteps(options.Start, options.Reverse)
if err != nil {
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusPending.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
}
return "", err
}
var wg sync.WaitGroup
cherr := make(chan error, 1)
chstatus := make(chan Status, 1)
nctx, cancel := context.WithCancel(ctx)
defer cancel()
nopts := make([]ExecuteOption, 0, len(opts)+5)
nopts = append(nopts,
@@ -209,143 +135,274 @@ func (w *microWorkflow) Execute(ctx context.Context, req *Message, opts ...Execu
ExecuteMeter(w.opts.Meter),
)
nopts = append(nopts, opts...)
done := make(chan struct{})
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusRunning.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
if werr := workflowStore.Write(ctx, "status", &codec.Frame{Data: []byte(StatusRunning.String())}); werr != nil {
w.opts.Logger.Error(ctx, "store error: %v", werr)
return eid, werr
}
for idx := range steps {
for nidx := range steps[idx] {
cstep := steps[idx][nidx]
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusPending.String())}); werr != nil {
return eid, werr
var startID string
if options.Start == "" {
mp := w.g.GetRoots()
if len(mp) != 1 {
return eid, ErrStepNotExists
}
for k := range mp {
startID = k
}
} else {
for k, v := range w.g.GetVertices() {
if v == options.Start {
startID = k
}
}
}
go func() {
for idx := range steps {
for nidx := range steps[idx] {
wStatus := &codec.Frame{}
if werr := workflowStore.Read(w.opts.Context, "status", wStatus); werr != nil {
cherr <- werr
return
if startID == "" {
return eid, ErrStepNotExists
}
if options.Async {
go w.handleWorkflow(startID, nopts...)
return eid, nil
}
return eid, w.handleWorkflow(startID, nopts...)
}
func (w *microWorkflow) handleWorkflow(startID string, opts ...ExecuteOption) error {
w.RLock()
defer w.RUnlock()
// stepStore := store.NewNamespaceStore(w.opts.Store, filepath.Join("steps", eid))
// workflowStore := store.NewNamespaceStore(w.opts.Store, filepath.Join("workflows", eid))
// Get IDs of all descendant vertices.
flowIDs, errDes := w.g.GetDescendants(startID)
if errDes != nil {
return errDes
}
// inputChannels provides an input channel for each of the descendant vertices (plus the start vertex).
inputChannels := make(map[string]chan FlowResult, len(flowIDs)+1)
// Iterate vertex IDs and create an input channel for each of them and a single
// output channel for leaves. Note, this "pre-flight" is needed to ensure we
// really have an input channel regardless of how we traverse the tree and spawn
// workers.
leafCount := 0
for id := range flowIDs {
// Get all parents of this vertex.
parents, errPar := w.g.GetParents(id)
if errPar != nil {
return errPar
}
// Create a buffered input channel that has capacity for all parent results.
inputChannels[id] = make(chan FlowResult, len(parents))
if ok, err := w.g.IsLeaf(id); ok && err == nil {
leafCount += 1
}
}
// outputChannel carries the results of leaf vertices.
outputChannel := make(chan FlowResult, leafCount)
// To also process the start vertex and to have its results being passed to its
// children, add it to the vertex IDs. Also add an input channel for the start
// vertex and feed the inputs to this channel.
flowIDs[startID] = struct{}{}
inputChannels[startID] = make(chan FlowResult, len(inputs))
for _, i := range inputs {
inputChannels[startID] <- i
}
wg := sync.WaitGroup{}
// Iterate all vertex IDs (now incl. start vertex) and handle each worker (incl.
// inputs and outputs) in a separate goroutine.
for id := range flowIDs {
// Get all children of this vertex that later need to be notified. Note, we
// collect all children before the goroutine to be able to release the read
// lock as early as possible.
children, errChildren := w.g.GetChildren(id)
if errChildren != nil {
return errChildren
}
// Remember to wait for this goroutine.
wg.Add(1)
go func(id string) {
// Get this vertex's input channel.
// Note, only concurrent read here, which is fine.
c := inputChannels[id]
// Await all parent inputs and stuff them into a slice.
parentCount := cap(c)
parentResults := make([]FlowResult, parentCount)
for i := 0; i < parentCount; i++ {
parentResults[i] = <-c
}
// Execute the worker.
errWorker := callback(w.g, id, parentResults)
if errWorker != nil {
return errWorker
}
// Send this worker's FlowResult onto all children's input channels or, if it is
// a leaf (i.e. no children), send the result onto the output channel.
if len(children) > 0 {
for child := range children {
inputChannels[child] <- flowResult
}
if status := StringStatus[string(wStatus.Data)]; status != StatusRunning {
chstatus <- status
return
}
if w.opts.Logger.V(logger.TraceLevel) {
w.opts.Logger.Tracef(nctx, "will be executed %v", steps[idx][nidx])
}
cstep := steps[idx][nidx]
// nolint: nestif
if len(cstep.Requires()) == 0 {
wg.Add(1)
go func(step Step) {
defer wg.Done()
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"req", req); werr != nil {
} else {
outputChannel <- flowResult
}
// "Sign off".
wg.Done()
}(id)
}
// Wait for all go routines to finish.
wg.Wait()
// Await all leaf vertex results and stuff them into a slice.
resultCount := cap(outputChannel)
results := make([]FlowResult, resultCount)
for i := 0; i < resultCount; i++ {
results[i] = <-outputChannel
}
/*
go func() {
for idx := range steps {
for nidx := range steps[idx] {
wStatus := &codec.Frame{}
if werr := workflowStore.Read(w.opts.Context, "status", wStatus); werr != nil {
cherr <- werr
return
}
if status := StringStatus[string(wStatus.Data)]; status != StatusRunning {
chstatus <- status
return
}
if w.opts.Logger.V(logger.TraceLevel) {
w.opts.Logger.Tracef(nctx, "will be executed %v", steps[idx][nidx])
}
cstep := steps[idx][nidx]
// nolint: nestif
if len(cstep.Requires()) == 0 {
wg.Add(1)
go func(step Step) {
defer wg.Done()
if werr := stepStore.Write(ctx, filepath.Join(step.ID(), "req"), req); werr != nil {
cherr <- werr
return
}
if werr := stepStore.Write(ctx, filepath.Join(step.ID(), "status"), &codec.Frame{Data: []byte(StatusRunning.String())}); werr != nil {
cherr <- werr
return
}
rsp, serr := step.Execute(nctx, req, nopts...)
if serr != nil {
step.SetStatus(StatusFailure)
if werr := stepStore.Write(ctx, filepath.Join(step.ID(), "rsp"), serr); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
}
if werr := stepStore.Write(ctx, filepath.Join(step.ID(), "status"), &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
}
cherr <- serr
return
}
if werr := stepStore.Write(ctx, filepath.Join(step.ID(), "rsp"), rsp); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
cherr <- werr
return
}
if werr := stepStore.Write(ctx, filepath.Join(step.ID(), "status"), &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
cherr <- werr
return
}
}(cstep)
wg.Wait()
} else {
if werr := stepStore.Write(ctx, filepath.Join(cstep.ID(), "req"), req); werr != nil {
cherr <- werr
return
}
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusRunning.String())}); werr != nil {
if werr := stepStore.Write(ctx, filepath.Join(cstep.ID(), "status"), &codec.Frame{Data: []byte(StatusRunning.String())}); werr != nil {
cherr <- werr
return
}
rsp, serr := step.Execute(nctx, req, nopts...)
rsp, serr := cstep.Execute(nctx, req, nopts...)
if serr != nil {
step.SetStatus(StatusFailure)
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"rsp", serr); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
cstep.SetStatus(StatusFailure)
if werr := stepStore.Write(ctx, filepath.Join(cstep.ID(), "rsp"), serr); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
}
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
if werr := stepStore.Write(ctx, filepath.Join(cstep.ID(), "status"), &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
}
cherr <- serr
return
}
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"rsp", rsp); werr != nil {
if werr := stepStore.Write(ctx, filepath.Join(cstep.ID(), "rsp"), rsp); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
cherr <- werr
return
}
if werr := stepStore.Write(ctx, step.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
if werr := stepStore.Write(ctx, filepath.Join(cstep.ID(), "status"), &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
cherr <- werr
return
}
}(cstep)
wg.Wait()
} else {
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"req", req); werr != nil {
cherr <- werr
return
}
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusRunning.String())}); werr != nil {
cherr <- werr
return
}
rsp, serr := cstep.Execute(nctx, req, nopts...)
if serr != nil {
cstep.SetStatus(StatusFailure)
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"rsp", serr); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
}
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil && w.opts.Logger.V(logger.ErrorLevel) {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
}
cherr <- serr
return
}
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"rsp", rsp); werr != nil {
w.opts.Logger.Errorf(ctx, "store write error: %v", werr)
cherr <- werr
return
}
if werr := stepStore.Write(ctx, cstep.ID()+w.opts.Store.Options().Separator+"status", &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
cherr <- werr
return
}
}
}
}
close(done)
}()
close(done)
}()
if options.Async {
return eid, nil
}
logger.Tracef(ctx, "wait for finish or error")
select {
case <-nctx.Done():
err = nctx.Err()
case cerr := <-cherr:
err = cerr
case <-done:
close(cherr)
case <-chstatus:
close(chstatus)
return eid, nil
}
switch {
case nctx.Err() != nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusAborted.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
if options.Async {
return eid, nil
}
case err == nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
}
case err != nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
}
}
return eid, err
logger.Tracef(ctx, "wait for finish or error")
select {
case <-nctx.Done():
err = nctx.Err()
case cerr := <-cherr:
err = cerr
case <-done:
close(cherr)
case <-chstatus:
close(chstatus)
return eid, nil
}
switch {
case nctx.Err() != nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusAborted.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
}
case err == nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusSuccess.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
}
case err != nil:
if werr := workflowStore.Write(w.opts.Context, "status", &codec.Frame{Data: []byte(StatusFailure.String())}); werr != nil {
w.opts.Logger.Errorf(w.opts.Context, "store error: %v", werr)
}
}
*/
return err
}
// NewFlow create new flow
@@ -385,11 +442,11 @@ func (f *microFlow) WorkflowList(ctx context.Context) ([]Workflow, error) {
}
func (f *microFlow) WorkflowCreate(ctx context.Context, id string, steps ...Step) (Workflow, error) {
w := &microWorkflow{opts: f.opts, id: id, g: &dag.AcyclicGraph{}, steps: make(map[string]Step, len(steps))}
w := &microWorkflow{opts: f.opts, id: id, g: &dag.DAG{}, steps: make(map[string]Step, len(steps))}
for _, s := range steps {
w.steps[s.String()] = s
w.g.Add(s)
w.g.AddVertex(s)
}
for _, dst := range steps {
@@ -398,14 +455,11 @@ func (f *microFlow) WorkflowCreate(ctx context.Context, id string, steps ...Step
if !ok {
return nil, ErrStepNotExists
}
w.g.Connect(dag.BasicEdge(src, dst))
w.g.AddEdge(src.String(), dst.String())
}
}
if err := w.g.Validate(); err != nil {
return nil, err
}
w.g.TransitiveReduction()
w.g.ReduceTransitively()
w.init = true

View File

@@ -1,5 +1,5 @@
// Package flow is an interface used for saga pattern microservice workflow
package flow // import "go.unistack.org/micro/v3/flow"
package flow // import "go.unistack.org/micro/v4/flow"
import (
"context"
@@ -7,7 +7,7 @@ import (
"sync"
"sync/atomic"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v4/metadata"
)
var (
@@ -125,8 +125,6 @@ type Workflow interface {
AppendSteps(steps ...Step) error
// Status returns workflow status
Status() Status
// Steps returns a slice of steps where parallel steps are returned on the same level
Steps() ([][]Step, error)
// Suspend suspends execution
Suspend(ctx context.Context, id string) error
// Resume resumes execution

View File

@@ -4,11 +4,11 @@ import (
"context"
"time"
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/store"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/client"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/meter"
"go.unistack.org/micro/v4/store"
"go.unistack.org/micro/v4/tracer"
)
// Option func
@@ -123,8 +123,6 @@ type ExecuteOptions struct {
Start string
// Timeout for execution
Timeout time.Duration
// Reverse execution
Reverse bool
// Async enables async execution
Async bool
}
@@ -167,13 +165,6 @@ func ExecuteContext(ctx context.Context) ExecuteOption {
}
}
// ExecuteReverse says that dag must be run in reverse order
func ExecuteReverse(b bool) ExecuteOption {
return func(o *ExecuteOptions) {
o.Reverse = b
}
}
// ExecuteTimeout pass timeout time.Duration for execution
func ExecuteTimeout(td time.Duration) ExecuteOption {
return func(o *ExecuteOptions) {

View File

@@ -32,7 +32,7 @@ type fsm struct {
// NewFSM creates a new finite state machine having the specified initial state
// with specified options
func NewFSM(opts ...Option) *fsm {
func NewFSM(opts ...Option) FSM {
return &fsm{
statesMap: map[string]StateFunc{},
opts: NewOptions(opts...),

View File

@@ -1,4 +1,4 @@
package fsm // import "go.unistack.org/micro/v3/fsm"
package fsm
import (
"context"

View File

@@ -5,7 +5,7 @@ import (
"fmt"
"testing"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v4/logger"
)
func TestFSMStart(t *testing.T) {
@@ -17,7 +17,7 @@ func TestFSMStart(t *testing.T) {
wrapper := func(next StateFunc) StateFunc {
return func(sctx context.Context, s State, opts ...StateOption) (State, error) {
sctx = logger.NewContext(sctx, logger.Fields("state", s.Name()))
sctx = logger.NewContext(sctx, logger.DefaultLogger.Fields("state", s.Name()))
return next(sctx, s, opts...)
}
}

go.mod
View File

@@ -1,20 +1,34 @@
module go.unistack.org/micro/v3
module go.unistack.org/micro/v4
go 1.20
go 1.22.0
require (
dario.cat/mergo v1.0.0
github.com/DATA-DOG/go-sqlmock v1.5.0
github.com/google/uuid v1.3.0
dario.cat/mergo v1.0.1
github.com/DATA-DOG/go-sqlmock v1.5.2
github.com/KimMachineGun/automemlimit v0.7.0
github.com/goccy/go-yaml v1.17.1
github.com/google/uuid v1.6.0
github.com/matoous/go-nanoid v1.5.1
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/silas/dag v0.0.0-20220518035006-a7e85ada93c5
golang.org/x/sync v0.3.0
google.golang.org/grpc v1.57.0
google.golang.org/protobuf v1.31.0
github.com/spf13/cast v1.7.1
github.com/stretchr/testify v1.10.0
go.uber.org/atomic v1.11.0
go.uber.org/automaxprocs v1.6.0
go.unistack.org/micro-proto/v4 v4.1.0
golang.org/x/sync v0.10.0
google.golang.org/grpc v1.69.4
google.golang.org/protobuf v1.36.3
)
require (
github.com/golang/protobuf v1.5.3 // indirect
golang.org/x/net v0.14.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230526203410-71b5a4ffd15e // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
golang.org/x/net v0.34.0 // indirect
golang.org/x/sys v0.29.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241216192217-9240e9c98484 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

go.sum
View File

@@ -1,33 +1,69 @@
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60=
github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/KimMachineGun/automemlimit v0.7.0 h1:7G06p/dMSf7G8E6oq+f2uOPuVncFyIlDI/pBWK49u88=
github.com/KimMachineGun/automemlimit v0.7.0/go.mod h1:QZxpHaGOQoYvFhv/r4u3U0JTC2ZcOwbSr11UZF46UBM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/goccy/go-yaml v1.17.1 h1:LI34wktB2xEE3ONG/2Ar54+/HJVBriAGJ55PHls4YuY=
github.com/goccy/go-yaml v1.17.1/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/kisielk/sqlstruct v0.0.0-20201105191214-5f3e10d3ab46/go.mod h1:yyMNCyc/Ib3bDTKd379tNMpB/7/H5TjM2Y9QJ5THLbE=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/matoous/go-nanoid v1.5.1 h1:aCjdvTyO9LLnTIi0fgdXhOPPvOHjpXN6Ik9DaNjIct4=
github.com/matoous/go-nanoid v1.5.1/go.mod h1:zyD2a71IubI24efhpvkJz+ZwfwagzgSO6UNiFsZKN7U=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/silas/dag v0.0.0-20220518035006-a7e85ada93c5 h1:G/FZtUu7a6NTWl3KUHMV9jkLAh/Rvtf03NWMHaEDl+E=
github.com/silas/dag v0.0.0-20220518035006-a7e85ada93c5/go.mod h1:7RTUFBdIRC9nZ7/3RyRNH1bdqIShrDejd1YbLwgPS+I=
golang.org/x/net v0.14.0 h1:BONx9s002vGdD9umnlX1Po8vOZmrgH34qlHcD1MfK14=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
golang.org/x/text v0.12.0 h1:k+n5B8goJNdU7hSvEtMUz3d1Q6D/XW4COJSJR6fN0mc=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230526203410-71b5a4ffd15e h1:NumxXLPfHSndr3wBBdeKiVHjGVFzi9RX2HwwQke94iY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230526203410-71b5a4ffd15e/go.mod h1:66JfowdXAEgad5O9NnYcsNPLCPZJD++2L9X0PCMODrA=
google.golang.org/grpc v1.57.0 h1:kfzNeI/klCGD2YPMUlaGNT3pxvYfga7smW3Vth8Zsiw=
google.golang.org/grpc v1.57.0/go.mod h1:Sd+9RMTACXwmub0zcNY2c4arhtrbBYD1AUHI/dt16Mo=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y=
github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.unistack.org/micro-proto/v4 v4.1.0 h1:qPwL2n/oqh9RE3RTTDgt28XK3QzV597VugQPaw9lKUk=
go.unistack.org/micro-proto/v4 v4.1.0/go.mod h1:ArmK7o+uFvxSY3dbJhKBBX4Pm1rhWdLEFf3LxBrMtec=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241216192217-9240e9c98484 h1:Z7FRVJPSMaHQxD0uXU8WdgFh8PseLM8Q8NzhnpMrBhQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241216192217-9240e9c98484/go.mod h1:lcTa1sDdWEIHMWlITnIczmw5w60CF9ffkb8Z+DVmmjA=
google.golang.org/grpc v1.69.4 h1:MF5TftSMkd8GLw/m0KM6V8CMOCY6NZ1NQDPGFgbTt4A=
google.golang.org/grpc v1.69.4/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
google.golang.org/protobuf v1.36.3 h1:82DV7MYdb8anAVi3qge1wSnMDrnKK7ebr+I0hHRN1BU=
google.golang.org/protobuf v1.36.3/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

hooks/metadata/metadata.go Normal file
View File

@@ -0,0 +1,117 @@
package metadata
import (
"context"
"go.unistack.org/micro/v4/client"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/server"
)
type wrapper struct {
keys []string
client.Client
}
func NewClientWrapper(keys ...string) client.Wrapper {
return func(c client.Client) client.Client {
handler := &wrapper{
Client: c,
keys: keys,
}
return handler
}
}
func NewClientCallWrapper(keys ...string) client.CallWrapper {
return func(fn client.CallFunc) client.CallFunc {
return func(ctx context.Context, addr string, req client.Request, rsp interface{}, opts client.CallOptions) error {
if keys == nil {
return fn(ctx, addr, req, rsp, opts)
}
if imd, iok := metadata.FromIncomingContext(ctx); iok && imd != nil {
omd, ook := metadata.FromOutgoingContext(ctx)
if !ook || omd == nil {
omd = metadata.New(len(imd))
}
for _, k := range keys {
if v := imd.Get(k); v != nil {
omd.Set(k, v...)
}
}
if !ook {
ctx = metadata.NewOutgoingContext(ctx, omd)
}
}
return fn(ctx, addr, req, rsp, opts)
}
}
}
func (w *wrapper) Call(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
if w.keys == nil {
return w.Client.Call(ctx, req, rsp, opts...)
}
if imd, iok := metadata.FromIncomingContext(ctx); iok && imd != nil {
omd, ook := metadata.FromOutgoingContext(ctx)
if !ook || omd == nil {
omd = metadata.New(len(imd))
}
for _, k := range w.keys {
if v := imd.Get(k); v != nil {
omd.Set(k, v...)
}
}
if !ook {
ctx = metadata.NewOutgoingContext(ctx, omd)
}
}
return w.Client.Call(ctx, req, rsp, opts...)
}
func (w *wrapper) Stream(ctx context.Context, req client.Request, opts ...client.CallOption) (client.Stream, error) {
if w.keys == nil {
return w.Client.Stream(ctx, req, opts...)
}
if imd, iok := metadata.FromIncomingContext(ctx); iok && imd != nil {
omd, ook := metadata.FromOutgoingContext(ctx)
if !ook || omd == nil {
omd = metadata.New(len(imd))
}
for _, k := range w.keys {
if v := imd.Get(k); v != nil {
omd.Set(k, v...)
}
}
if !ook {
ctx = metadata.NewOutgoingContext(ctx, omd)
}
}
return w.Client.Stream(ctx, req, opts...)
}
func NewServerHandlerWrapper(keys ...string) server.HandlerWrapper {
return func(fn server.HandlerFunc) server.HandlerFunc {
return func(ctx context.Context, req server.Request, rsp interface{}) error {
if keys == nil {
return fn(ctx, req, rsp)
}
if imd, iok := metadata.FromIncomingContext(ctx); iok && imd != nil {
omd, ook := metadata.FromOutgoingContext(ctx)
if !ook || omd == nil {
omd = metadata.New(len(imd))
}
for _, k := range keys {
if v := imd.Get(k); v != nil {
omd.Set(k, v...)
}
}
if !ook {
ctx = metadata.NewOutgoingContext(ctx, omd)
}
}
return fn(ctx, req, rsp)
}
}
}
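A hedged usage sketch for the wrappers above: NewClientWrapper returns an ordinary client.Wrapper (a func from client.Client to client.Client), so it can be applied directly to an existing client. It assumes this hooks package is imported, possibly under an alias, as metadata; the key names below are illustrative only.

func wireClient(c client.Client) client.Client {
	wrap := metadata.NewClientWrapper("X-Request-Id", "X-Tenant-Id")
	// calls on the returned client copy these keys from the incoming
	// to the outgoing metadata whenever they are present
	return wrap(c)
}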

View File

@@ -0,0 +1,63 @@
package recovery
import (
"context"
"fmt"
"go.unistack.org/micro/v4/errors"
"go.unistack.org/micro/v4/server"
)
func NewOptions(opts ...Option) Options {
options := Options{
ServerHandlerFn: DefaultServerHandlerFn,
}
for _, o := range opts {
o(&options)
}
return options
}
type Options struct {
ServerHandlerFn func(context.Context, server.Request, interface{}, error) error
}
type Option func(*Options)
func ServerHandlerFunc(fn func(context.Context, server.Request, interface{}, error) error) Option {
return func(o *Options) {
o.ServerHandlerFn = fn
}
}
var DefaultServerHandlerFn = func(ctx context.Context, req server.Request, rsp interface{}, err error) error {
return errors.BadRequest("", "%v", err)
}
var Hook = NewHook()
type hook struct {
opts Options
}
func NewHook(opts ...Option) *hook {
return &hook{opts: NewOptions(opts...)}
}
func (w *hook) ServerHandler(next server.FuncHandler) server.FuncHandler {
return func(ctx context.Context, req server.Request, rsp interface{}) (err error) {
defer func() {
r := recover()
switch verr := r.(type) {
case nil:
return
case error:
err = w.opts.ServerHandlerFn(ctx, req, rsp, verr)
default:
err = w.opts.ServerHandlerFn(ctx, req, rsp, fmt.Errorf("%v", r))
}
}()
err = next(ctx, req, rsp)
return err
}
}
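A minimal sketch of what the hook does once wired in: it wraps a server.FuncHandler so that a panic is recovered and converted into the error produced by ServerHandlerFn (the default maps it to errors.BadRequest). The handler body is a placeholder.

func wrapWithRecovery() server.FuncHandler {
	h := recovery.NewHook() // DefaultServerHandlerFn turns the panic value into errors.BadRequest
	return h.ServerHandler(func(ctx context.Context, req server.Request, rsp interface{}) error {
		panic("boom") // recovered by the deferred block above; the caller receives an error instead of a crash
	})
}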

View File

@@ -0,0 +1,103 @@
package requestid
import (
"context"
"net/textproto"
"go.unistack.org/micro/v4/client"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/server"
"go.unistack.org/micro/v4/util/id"
)
type XRequestIDKey struct{}
// DefaultMetadataKey contains metadata key
var DefaultMetadataKey = textproto.CanonicalMIMEHeaderKey("x-request-id")
// DefaultMetadataFunc will be used if the user does not provide their own func to fill metadata
var DefaultMetadataFunc = func(ctx context.Context) (context.Context, error) {
var xid string
cid, cok := ctx.Value(XRequestIDKey{}).(string)
if cok && cid != "" {
xid = cid
}
imd, iok := metadata.FromIncomingContext(ctx)
if !iok || imd == nil {
imd = metadata.New(1)
ctx = metadata.NewIncomingContext(ctx, imd)
}
omd, ook := metadata.FromOutgoingContext(ctx)
if !ook || omd == nil {
omd = metadata.New(1)
ctx = metadata.NewOutgoingContext(ctx, omd)
}
if xid == "" {
xid = imd.GetJoined(DefaultMetadataKey)
if xid == "" {
xid = omd.GetJoined(DefaultMetadataKey)
}
}
if xid == "" {
var err error
xid, err = id.New()
if err != nil {
return ctx, err
}
}
if !cok {
ctx = context.WithValue(ctx, XRequestIDKey{}, xid)
}
if !iok {
imd.Set(DefaultMetadataKey, xid)
}
if !ook {
omd.Set(DefaultMetadataKey, xid)
}
return ctx, nil
}
type hook struct{}
func NewHook() *hook {
return &hook{}
}
func (w *hook) ServerHandler(next server.FuncHandler) server.FuncHandler {
return func(ctx context.Context, req server.Request, rsp interface{}) error {
var err error
if ctx, err = DefaultMetadataFunc(ctx); err != nil {
return err
}
return next(ctx, req, rsp)
}
}
func (w *hook) ClientCall(next client.FuncCall) client.FuncCall {
return func(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
var err error
if ctx, err = DefaultMetadataFunc(ctx); err != nil {
return err
}
return next(ctx, req, rsp, opts...)
}
}
func (w *hook) ClientStream(next client.FuncStream) client.FuncStream {
return func(ctx context.Context, req client.Request, opts ...client.CallOption) (client.Stream, error) {
var err error
if ctx, err = DefaultMetadataFunc(ctx); err != nil {
return nil, err
}
return next(ctx, req, opts...)
}
}

View File

@@ -0,0 +1,34 @@
package requestid
import (
"context"
"slices"
"testing"
"go.unistack.org/micro/v4/metadata"
)
func TestDefaultMetadataFunc(t *testing.T) {
ctx := context.TODO()
nctx, err := DefaultMetadataFunc(ctx)
if err != nil {
t.Fatalf("%v", err)
}
imd, ok := metadata.FromIncomingContext(nctx)
if !ok {
t.Fatalf("md missing in incoming context")
}
omd, ok := metadata.FromOutgoingContext(nctx)
if !ok {
t.Fatalf("md missing in outgoing context")
}
iv := imd.Get(DefaultMetadataKey)
ov := omd.Get(DefaultMetadataKey)
if !slices.Equal(iv, ov) {
t.Fatalf("incoming and outgoing metadata values mismatch: %v != %v", iv, ov)
}
}

51
hooks/sql/common.go Normal file
View File

@@ -0,0 +1,51 @@
package sql
import (
"database/sql/driver"
"errors"
"fmt"
"runtime"
)
//go:generate sh -c "go run gen.go > wrap_gen.go"
// namedValueToValue converts driver arguments of NamedValue format to Value format. Implemented in the same way as in
// database/sql ctxutil.go.
func namedValueToValue(named []driver.NamedValue) ([]driver.Value, error) {
dargs := make([]driver.Value, len(named))
for n, param := range named {
if len(param.Name) > 0 {
return nil, errors.New("sql: driver does not support the use of Named Parameters")
}
dargs[n] = param.Value
}
return dargs, nil
}
// namedValueToLabels converts driver arguments to a slice of interface{} labels
func namedValueToLabels(named []driver.NamedValue) []interface{} {
largs := make([]interface{}, 0, len(named)*2)
var name string
for _, param := range named {
if param.Name != "" {
name = param.Name
} else {
name = fmt.Sprintf("$%d", param.Ordinal)
}
largs = append(largs, fmt.Sprintf("%s=%v", name, param.Value))
}
return largs
}
// getCallerName returns the name of the function A in a call chain A() -> B() -> getCallerName()
func getCallerName() string {
pc, _, _, ok := runtime.Caller(3)
details := runtime.FuncForPC(pc)
var callerName string
if ok && details != nil {
callerName = details.Name()
} else {
callerName = labelUnknown
}
return callerName
}
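To make the label format concrete, a small illustrative call (argument values are arbitrary): named parameters keep their name, positional ones are rendered as $ordinal.

args := []driver.NamedValue{
	{Ordinal: 1, Value: 42},
	{Name: "uid", Value: "abc"},
}
labels := namedValueToLabels(args) // -> []interface{}{"$1=42", "uid=abc"}
_ = labels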

467
hooks/sql/conn.go Normal file
View File

@@ -0,0 +1,467 @@
package sql
import (
"context"
"database/sql/driver"
"fmt"
"time"
"go.unistack.org/micro/v4/hooks/requestid"
"go.unistack.org/micro/v4/tracer"
)
var (
_ driver.Conn = (*wrapperConn)(nil)
_ driver.ConnBeginTx = (*wrapperConn)(nil)
_ driver.ConnPrepareContext = (*wrapperConn)(nil)
_ driver.Pinger = (*wrapperConn)(nil)
_ driver.Validator = (*wrapperConn)(nil)
_ driver.Queryer = (*wrapperConn)(nil) // nolint:staticcheck
_ driver.QueryerContext = (*wrapperConn)(nil)
_ driver.Execer = (*wrapperConn)(nil) // nolint:staticcheck
_ driver.ExecerContext = (*wrapperConn)(nil)
// _ driver.Connector
// _ driver.Driver
// _ driver.DriverContext
)
// wrapperConn defines a wrapper for driver.Conn
type wrapperConn struct {
d *wrapperDriver
dname string
conn driver.Conn
opts Options
ctx context.Context
//span tracer.Span
}
// Close implements driver.Conn Close
func (w *wrapperConn) Close() error {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
_ = ctx
labels := []string{labelMethod, "Close"}
ts := time.Now()
err := w.conn.Close()
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Close", getCallerName(), td, err)...)
}
*/
return err
}
// Begin implements driver.Conn Begin
func (w *wrapperConn) Begin() (driver.Tx, error) {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
labels := []string{labelMethod, "Begin"}
ts := time.Now()
tx, err := w.conn.Begin() // nolint:staticcheck
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Begin", getCallerName(), td, err)...)
}
*/
return nil, err
}
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Begin", getCallerName(), td, err)...)
}
*/
return &wrapperTx{tx: tx, opts: w.opts, ctx: ctx}, nil
}
// BeginTx implements driver.ConnBeginTx BeginTx
func (w *wrapperConn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) {
name := getQueryName(ctx)
nctx, span := w.opts.Tracer.Start(ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
span.AddLabels("db.method", "BeginTx")
span.AddLabels("db.statement", name)
if id, ok := ctx.Value(requestid.XRequestIDKey{}).(string); ok {
span.AddLabels("x-request-id", id)
}
labels := []string{labelMethod, "BeginTx", labelQuery, name}
connBeginTx, ok := w.conn.(driver.ConnBeginTx)
if !ok {
return w.Begin()
}
ts := time.Now()
tx, err := connBeginTx.BeginTx(nctx, opts)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
span.SetStatus(tracer.SpanStatusError, err.Error())
span.Finish()
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "BeginTx", getCallerName(), td, err)...)
}
*/
return nil, err
}
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "BeginTx", getCallerName(), td, err)...)
}
*/
return &wrapperTx{tx: tx, opts: w.opts, ctx: ctx, span: span}, nil
}
// Prepare implements driver.Conn Prepare
func (w *wrapperConn) Prepare(query string) (driver.Stmt, error) {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
_ = ctx
labels := []string{labelMethod, "Prepare", labelQuery, getCallerName()}
ts := time.Now()
stmt, err := w.conn.Prepare(query)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Prepare", getCallerName(), td, err)...)
}
*/
return nil, err
}
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Prepare", getCallerName(), td, err)...)
}
*/
return wrapStmt(stmt, query, w.opts), nil
}
// PrepareContext implements driver.ConnPrepareContext PrepareContext
func (w *wrapperConn) PrepareContext(ctx context.Context, query string) (driver.Stmt, error) {
var nctx context.Context
var span tracer.Span
name := getQueryName(ctx)
if w.ctx != nil {
nctx, span = w.opts.Tracer.Start(w.ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
} else {
nctx, span = w.opts.Tracer.Start(ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
}
span.AddLabels("db.method", "PrepareContext")
span.AddLabels("db.statement", name)
if id, ok := ctx.Value(requestid.XRequestIDKey{}).(string); ok {
span.AddLabels("x-request-id", id)
}
defer span.Finish()
labels := []string{labelMethod, "PrepareContext", labelQuery, name}
conn, ok := w.conn.(driver.ConnPrepareContext)
if !ok {
return w.Prepare(query)
}
ts := time.Now()
stmt, err := conn.PrepareContext(nctx, query)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
span.SetStatus(tracer.SpanStatusError, err.Error())
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "PrepareContext", getCallerName(), td, err)...)
}
*/
return nil, err
}
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "PrepareContext", getCallerName(), td, err)...)
}
*/
return wrapStmt(stmt, query, w.opts), nil
}
// Exec implements driver.Execer Exec
func (w *wrapperConn) Exec(query string, args []driver.Value) (driver.Result, error) {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
_ = ctx
labels := []string{labelMethod, "Exec", labelQuery, getCallerName()}
// nolint:staticcheck
conn, ok := w.conn.(driver.Execer)
if !ok {
return nil, driver.ErrSkip
}
ts := time.Now()
res, err := conn.Exec(query, args)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Exec", getCallerName(), td, err)...)
}
*/
return res, err
}
// ExecContext implements driver.ExecerContext ExecContext
func (w *wrapperConn) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error) {
var nctx context.Context
var span tracer.Span
name := getQueryName(ctx)
if w.ctx != nil {
nctx, span = w.opts.Tracer.Start(w.ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
} else {
nctx, span = w.opts.Tracer.Start(ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
}
span.AddLabels("db.method", "ExecContext")
span.AddLabels("db.statement", name)
if id, ok := ctx.Value(requestid.XRequestIDKey{}).(string); ok {
span.AddLabels("x-request-id", id)
}
defer span.Finish()
if len(args) > 0 {
span.AddLabels("db.args", fmt.Sprintf("%v", namedValueToLabels(args)))
}
labels := []string{labelMethod, "ExecContext", labelQuery, name}
conn, ok := w.conn.(driver.ExecerContext)
if !ok {
// nolint:staticcheck
return nil, driver.ErrSkip
}
ts := time.Now()
res, err := conn.ExecContext(nctx, query, args)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "ExecContext", getCallerName(), td, err)...)
}
*/
return res, err
}
// Ping implements driver.Pinger Ping
func (w *wrapperConn) Ping(ctx context.Context) error {
conn, ok := w.conn.(driver.Pinger)
if !ok {
// fallback path to check db alive
pc, err := w.d.Open(w.dname)
if err != nil {
return err
}
return pc.Close()
}
var nctx context.Context //nolint:gosimple
nctx = ctx
/*
var span tracer.Span
if w.ctx != nil {
nctx, span = w.opts.Tracer.Start(w.ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
} else {
nctx, span = w.opts.Tracer.Start(ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
}
span.AddLabels("db.method", "Ping")
defer span.Finish()
*/
labels := []string{labelMethod, "Ping"}
ts := time.Now()
err := conn.Ping(nctx)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
// span.SetStatus(tracer.SpanStatusError, err.Error())
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Ping", getCallerName(), td, err)...)
}
*/
return err
}
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
return nil
}
// Query implements driver.Queryer Query
func (w *wrapperConn) Query(query string, args []driver.Value) (driver.Rows, error) {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
_ = ctx
// nolint:staticcheck
conn, ok := w.conn.(driver.Queryer)
if !ok {
return nil, driver.ErrSkip
}
labels := []string{labelMethod, "Query", labelQuery, getCallerName()}
ts := time.Now()
rows, err := conn.Query(query, args)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Query", getCallerName(), td, err)...)
}
*/
return rows, err
}
// QueryContext implements driver.QueryerContext QueryContext
func (w *wrapperConn) QueryContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Rows, error) {
var nctx context.Context
var span tracer.Span
name := getQueryName(ctx)
if w.ctx != nil {
nctx, span = w.opts.Tracer.Start(w.ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
} else {
nctx, span = w.opts.Tracer.Start(ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
}
span.AddLabels("db.method", "QueryContext")
span.AddLabels("db.statement", name)
if id, ok := ctx.Value(requestid.XRequestIDKey{}).(string); ok {
span.AddLabels("x-request-id", id)
}
defer span.Finish()
if len(args) > 0 {
span.AddLabels("db.args", fmt.Sprintf("%v", namedValueToLabels(args)))
}
labels := []string{labelMethod, "QueryContext", labelQuery, name}
conn, ok := w.conn.(driver.QueryerContext)
if !ok {
return nil, driver.ErrSkip
}
ts := time.Now()
rows, err := conn.QueryContext(nctx, query, args)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "QueryContext", getCallerName(), td, err)...)
}
*/
return rows, err
}
// CheckNamedValue implements driver.NamedValueChecker
func (w *wrapperConn) CheckNamedValue(v *driver.NamedValue) error {
s, ok := w.conn.(driver.NamedValueChecker)
if !ok {
return driver.ErrSkip
}
return s.CheckNamedValue(v)
}
// IsValid implements driver.Validator
func (w *wrapperConn) IsValid() bool {
v, ok := w.conn.(driver.Validator)
if !ok {
return w.conn != nil
}
return v.IsValid()
}
func (w *wrapperConn) ResetSession(ctx context.Context) error {
s, ok := w.conn.(driver.SessionResetter)
if !ok {
return driver.ErrSkip
}
return s.ResetSession(ctx)
}

94
hooks/sql/driver.go Normal file
View File

@@ -0,0 +1,94 @@
package sql
import (
"context"
"database/sql/driver"
"time"
)
var (
// _ driver.DriverContext = (*wrapperDriver)(nil)
// _ driver.Connector = (*wrapperDriver)(nil)
)
/*
type conn interface {
driver.Pinger
driver.Execer
driver.ExecerContext
driver.Queryer
driver.QueryerContext
driver.Conn
driver.ConnPrepareContext
driver.ConnBeginTx
}
*/
// wrapperDriver defines a wrapper for driver.Driver
type wrapperDriver struct {
driver driver.Driver
opts Options
ctx context.Context
}
// NewWrapper creates and returns a new SQL driver with passed capabilities
func NewWrapper(d driver.Driver, opts ...Option) driver.Driver {
return &wrapperDriver{driver: d, opts: NewOptions(opts...), ctx: context.Background()}
}
type wrappedConnector struct {
connector driver.Connector
// name string
opts Options
ctx context.Context
}
func NewWrapperConnector(c driver.Connector, opts ...Option) driver.Connector {
return &wrappedConnector{connector: c, opts: NewOptions(opts...), ctx: context.Background()}
}
// Connect implements driver.Connector Connect
func (w *wrappedConnector) Connect(ctx context.Context) (driver.Conn, error) {
return w.connector.Connect(ctx)
}
// Driver implements driver.Connector Driver
func (w *wrappedConnector) Driver() driver.Driver {
return w.connector.Driver()
}
/*
// Connect implements driver.Driver OpenConnector
func (w *wrapperDriver) OpenConnector(name string) (driver.Conn, error) {
return &wrapperConnector{driver: w.driver, name: name, opts: w.opts}, nil
}
*/
// Open implements driver.Driver Open
func (w *wrapperDriver) Open(name string) (driver.Conn, error) {
// ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second) // Ensure eventual timeout
// defer cancel()
/*
connector, err := w.OpenConnector(name)
if err != nil {
return nil, err
}
return connector.Connect(ctx)
*/
ts := time.Now()
c, err := w.driver.Open(name)
td := time.Since(ts)
/*
if w.opts.LoggerEnabled {
w.opts.Logger.Log(w.ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(w.ctx, "Open", getCallerName(), td, err)...)
}
*/
_ = td
if err != nil {
return nil, err
}
return wrapConn(c, w.opts), nil
}
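A usage sketch for the wrapper; the lib/pq driver, the registered name and the DSN are illustrative assumptions, not part of this package. The wrapped driver is registered under a new name and database/sql then opens instrumented connections through it:

package main

import (
	"database/sql"

	"github.com/lib/pq" // assumed underlying driver
	wrapper "go.unistack.org/micro/v4/hooks/sql"
)

func main() {
	sql.Register("postgres-wrapped", wrapper.NewWrapper(&pq.Driver{},
		wrapper.DatabaseHost("db.local"), // illustrative metric labels
		wrapper.DatabaseName("orders"),
	))
	db, err := sql.Open("postgres-wrapped", "postgres://user:pass@db.local/orders?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()
}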

167
hooks/sql/gen.go Normal file
View File

@@ -0,0 +1,167 @@
//go:build ignore
package main
import (
"bytes"
"crypto/md5"
"fmt"
"io"
"sort"
"strings"
)
var connIfaces = []string{
"driver.ConnBeginTx",
"driver.ConnPrepareContext",
"driver.Execer",
"driver.ExecerContext",
"driver.NamedValueChecker",
"driver.Pinger",
"driver.Queryer",
"driver.QueryerContext",
"driver.SessionResetter",
"driver.Validator",
}
var stmtIfaces = []string{
"driver.StmtExecContext",
"driver.StmtQueryContext",
"driver.ColumnConverter",
"driver.NamedValueChecker",
}
func getHash(s []string) string {
h := md5.New()
io.WriteString(h, strings.Join(s, "|"))
return fmt.Sprintf("%x", h.Sum(nil))
}
func main() {
comboConn := all(connIfaces)
sort.Slice(comboConn, func(i, j int) bool {
return len(comboConn[i]) < len(comboConn[j])
})
comboStmt := all(stmtIfaces)
sort.Slice(comboStmt, func(i, j int) bool {
return len(comboStmt[i]) < len(comboStmt[j])
})
b := bytes.NewBuffer(nil)
b.WriteString("// Code generated. DO NOT EDIT.\n\n")
b.WriteString("package sql\n\n")
b.WriteString(`import "database/sql/driver"`)
b.WriteString("\n\n")
b.WriteString("func wrapConn(dc driver.Conn, opts Options) driver.Conn {\n")
b.WriteString("\tc := &wrapperConn{conn: dc, opts: opts}\n")
for idx := len(comboConn) - 1; idx >= 0; idx-- {
ifaces := comboConn[idx]
n := len(ifaces)
if n == 0 {
continue
}
h := getHash(ifaces)
b.WriteString(fmt.Sprintf("\tif _, ok := dc.(wrapConn%04d_%s); ok {\n", n, h))
b.WriteString("\treturn struct {\n")
b.WriteString("\t\tdriver.Conn\n")
b.WriteString(fmt.Sprintf("\t\t\t%s", strings.Join(ifaces, "\n\t\t\t")))
b.WriteString("\t\t\n}{")
for idx := range ifaces {
if idx > 0 {
b.WriteString(", ")
}
b.WriteString("c")
}
b.WriteString(", c}\n")
b.WriteString("}\n\n")
}
b.WriteString("return c\n")
b.WriteString("}\n")
for idx := len(comboConn) - 1; idx >= 0; idx-- {
ifaces := comboConn[idx]
n := len(ifaces)
if n == 0 {
continue
}
h := getHash(ifaces)
b.WriteString(fmt.Sprintf("// %s\n", strings.Join(ifaces, "|")))
b.WriteString(fmt.Sprintf("type wrapConn%04d_%s interface {\n", n, h))
for _, iface := range ifaces {
b.WriteString(fmt.Sprintf("\t%s\n", iface))
}
b.WriteString("}\n\n")
}
b.WriteString("func wrapStmt(stmt driver.Stmt, query string, opts Options) driver.Stmt {\n")
b.WriteString("\tc := &wrapperStmt{stmt: stmt, query: query, opts: opts}\n")
for idx := len(comboStmt) - 1; idx >= 0; idx-- {
ifaces := comboStmt[idx]
n := len(ifaces)
if n == 0 {
continue
}
h := getHash(ifaces)
b.WriteString(fmt.Sprintf("\tif _, ok := stmt.(wrapStmt%04d_%s); ok {\n", n, h))
b.WriteString("\treturn struct {\n")
b.WriteString("\t\tdriver.Stmt\n")
b.WriteString(fmt.Sprintf("\t\t\t%s", strings.Join(ifaces, "\n\t\t\t")))
b.WriteString("\t\t\n}{")
for idx := range ifaces {
if idx > 0 {
b.WriteString(", ")
}
b.WriteString("c")
}
b.WriteString(", c}\n")
b.WriteString("}\n\n")
}
b.WriteString("return c\n")
b.WriteString("}\n")
for idx := len(comboStmt) - 1; idx >= 0; idx-- {
ifaces := comboStmt[idx]
n := len(ifaces)
if n == 0 {
continue
}
h := getHash(ifaces)
b.WriteString(fmt.Sprintf("// %s\n", strings.Join(ifaces, "|")))
b.WriteString(fmt.Sprintf("type wrapStmt%04d_%s interface {\n", n, h))
for _, iface := range ifaces {
b.WriteString(fmt.Sprintf("\t%s\n", iface))
}
b.WriteString("}\n\n")
}
fmt.Printf("%s\n", b.String())
}
// all returns all non-empty subsets of the given slice.
func all[T any](set []T) (subsets [][]T) {
length := uint(len(set))
for subsetBits := 1; subsetBits < (1 << length); subsetBits++ {
var subset []T
for object := uint(0); object < length; object++ {
if (subsetBits>>object)&1 == 1 {
subset = append(subset, set[object])
}
}
subsets = append(subsets, subset)
}
return subsets
}
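For orientation, the generated wrap_gen.go (suppressed further below because of its size) consists of interface-combination types and type switches shaped roughly like the following; the combination and the hash are abbreviated and illustrative:

// driver.Pinger|driver.QueryerContext
type wrapConn0002_ab12cd34ef56ab12cd34ef56ab12cd34 interface {
	driver.Pinger
	driver.QueryerContext
}

// inside wrapConn: expose only the optional interfaces the wrapped conn implements
if _, ok := dc.(wrapConn0002_ab12cd34ef56ab12cd34ef56ab12cd34); ok {
	return struct {
		driver.Conn
		driver.Pinger
		driver.QueryerContext
	}{c, c, c}
}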

172
hooks/sql/options.go Normal file
View File

@@ -0,0 +1,172 @@
package sql
import (
"context"
"fmt"
"time"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/meter"
"go.unistack.org/micro/v4/tracer"
)
var (
// DefaultMeterStatsInterval holds default stats interval
DefaultMeterStatsInterval = 5 * time.Second
// DefaultLoggerObserver used to prepare labels for logger
DefaultLoggerObserver = func(ctx context.Context, method string, query string, td time.Duration, err error) []interface{} {
labels := []interface{}{"db.method", method, "took", fmt.Sprintf("%v", td)}
if err != nil {
labels = append(labels, "error", err.Error())
}
if query != labelUnknown {
labels = append(labels, "query", query)
}
return labels
}
)
var (
MaxOpenConnections = "micro_sql_max_open_conn"
OpenConnections = "micro_sql_open_conn"
InuseConnections = "micro_sql_inuse_conn"
IdleConnections = "micro_sql_idle_conn"
WaitConnections = "micro_sql_waited_conn"
BlockedSeconds = "micro_sql_blocked_seconds"
MaxIdleClosed = "micro_sql_max_idle_closed"
MaxIdletimeClosed = "micro_sql_closed_max_idle"
MaxLifetimeClosed = "micro_sql_closed_max_lifetime"
meterRequestTotal = "micro_sql_request_total"
meterRequestLatencyMicroseconds = "micro_sql_latency_microseconds"
meterRequestDurationSeconds = "micro_sql_request_duration_seconds"
labelUnknown = "unknown"
labelQuery = "db_statement"
labelMethod = "db_method"
labelStatus = "status"
labelSuccess = "success"
labelFailure = "failure"
labelHost = "db_host"
labelDatabase = "db_name"
)
// Options struct holds wrapper options
type Options struct {
Logger logger.Logger
Meter meter.Meter
Tracer tracer.Tracer
DatabaseHost string
DatabaseName string
MeterStatsInterval time.Duration
LoggerLevel logger.Level
LoggerEnabled bool
LoggerObserver func(ctx context.Context, method string, name string, td time.Duration, err error) []interface{}
}
// Option func signature
type Option func(*Options)
// NewOptions create new Options struct from provided option slice
func NewOptions(opts ...Option) Options {
options := Options{
Logger: logger.DefaultLogger,
Meter: meter.DefaultMeter,
Tracer: tracer.DefaultTracer,
MeterStatsInterval: DefaultMeterStatsInterval,
LoggerLevel: logger.ErrorLevel,
LoggerObserver: DefaultLoggerObserver,
}
for _, o := range opts {
o(&options)
}
options.Meter = options.Meter.Clone(
meter.Labels(
labelHost, options.DatabaseHost,
labelDatabase, options.DatabaseName,
),
)
options.Logger = options.Logger.Clone(logger.WithAddCallerSkipCount(1))
return options
}
// MetricInterval specifies stats interval for *sql.DB
func MetricInterval(td time.Duration) Option {
return func(o *Options) {
o.MeterStatsInterval = td
}
}
// DatabaseHost passes the database host used as a metric label
func DatabaseHost(host string) Option {
return func(o *Options) {
o.DatabaseHost = host
}
}
// DatabaseName passes the database name used as a metric label
func DatabaseName(name string) Option {
return func(o *Options) {
o.DatabaseName = name
}
}
// Meter passes meter.Meter to wrapper
func Meter(m meter.Meter) Option {
return func(o *Options) {
o.Meter = m
}
}
// Logger passes logger.Logger to wrapper
func Logger(l logger.Logger) Option {
return func(o *Options) {
o.Logger = l
}
}
// LoggerEnabled enables sql logging
func LoggerEnabled(b bool) Option {
return func(o *Options) {
o.LoggerEnabled = b
}
}
// LoggerLevel passes logger.Level option
func LoggerLevel(lvl logger.Level) Option {
return func(o *Options) {
o.LoggerLevel = lvl
}
}
// LoggerObserver passes observer to fill logger fields
func LoggerObserver(obs func(context.Context, string, string, time.Duration, error) []interface{}) Option {
return func(o *Options) {
o.LoggerObserver = obs
}
}
// Tracer passes tracer.Tracer to wrapper
func Tracer(t tracer.Tracer) Option {
return func(o *Options) {
o.Tracer = t
}
}
type queryNameKey struct{}
// QueryName passes query name to wrapper func
func QueryName(ctx context.Context, name string) context.Context {
if ctx == nil {
ctx = context.Background()
}
return context.WithValue(ctx, queryNameKey{}, name)
}
func getQueryName(ctx context.Context) string {
if v, ok := ctx.Value(queryNameKey{}).(string); ok && v != labelUnknown {
return v
}
return getCallerName()
}
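QueryName is how callers attach a stable statement name to the span and metric labels instead of the runtime caller name; a minimal sketch reusing the db and the wrapper alias from the earlier sketch (ctx and id are assumed to be in scope, query text and name are illustrative):

ctx = wrapper.QueryName(ctx, "users.select_by_id")
row := db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = $1", id)
_ = row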

41
hooks/sql/stats.go Normal file
View File

@@ -0,0 +1,41 @@
package sql
import (
"context"
"database/sql"
"time"
)
// Statser exposes sql.DBStats; *sql.DB satisfies it
type Statser interface {
Stats() sql.DBStats
}
// NewStatsMeter periodically exports the pool statistics of the passed Statser until ctx is done
func NewStatsMeter(ctx context.Context, db Statser, opts ...Option) {
options := NewOptions(opts...)
go func() {
ticker := time.NewTicker(options.MeterStatsInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if db == nil {
return
}
stats := db.Stats()
options.Meter.Counter(MaxOpenConnections).Set(uint64(stats.MaxOpenConnections))
options.Meter.Counter(OpenConnections).Set(uint64(stats.OpenConnections))
options.Meter.Counter(InuseConnections).Set(uint64(stats.InUse))
options.Meter.Counter(IdleConnections).Set(uint64(stats.Idle))
options.Meter.Counter(WaitConnections).Set(uint64(stats.WaitCount))
options.Meter.FloatCounter(BlockedSeconds).Set(stats.WaitDuration.Seconds())
options.Meter.Counter(MaxIdleClosed).Set(uint64(stats.MaxIdleClosed))
options.Meter.Counter(MaxIdletimeClosed).Set(uint64(stats.MaxIdleTimeClosed))
options.Meter.Counter(MaxLifetimeClosed).Set(uint64(stats.MaxLifetimeClosed))
}
}
}()
}
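Since *sql.DB satisfies Statser, pool statistics can be exported on the configured interval; a sketch reusing db and the wrapper alias from the earlier example (imports omitted, the interval is arbitrary):

statsCtx, cancel := context.WithCancel(context.Background())
defer cancel()
wrapper.NewStatsMeter(statsCtx, db, wrapper.MetricInterval(15*time.Second))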

287
hooks/sql/stmt.go Normal file
View File

@@ -0,0 +1,287 @@
package sql
import (
"context"
"database/sql/driver"
"fmt"
"time"
requestid "go.unistack.org/micro/v4/hooks/requestid"
"go.unistack.org/micro/v4/tracer"
)
var (
_ driver.Stmt = (*wrapperStmt)(nil)
_ driver.StmtQueryContext = (*wrapperStmt)(nil)
_ driver.StmtExecContext = (*wrapperStmt)(nil)
_ driver.NamedValueChecker = (*wrapperStmt)(nil)
)
// wrapperStmt defines a wrapper for driver.Stmt
type wrapperStmt struct {
stmt driver.Stmt
opts Options
query string
ctx context.Context
}
// Close implements driver.Stmt Close
func (w *wrapperStmt) Close() error {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
_ = ctx
labels := []string{labelMethod, "Close"}
ts := time.Now()
err := w.stmt.Close()
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Close", getCallerName(), td, err)...)
}
*/
return err
}
// NumInput implements driver.Stmt NumInput
func (w *wrapperStmt) NumInput() int {
return w.stmt.NumInput()
}
// CheckNamedValue implements driver.NamedValueChecker
func (w *wrapperStmt) CheckNamedValue(v *driver.NamedValue) error {
s, ok := w.stmt.(driver.NamedValueChecker)
if !ok {
return driver.ErrSkip
}
return s.CheckNamedValue(v)
}
// Exec implements driver.Stmt Exec
func (w *wrapperStmt) Exec(args []driver.Value) (driver.Result, error) {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
_ = ctx
labels := []string{labelMethod, "Exec"}
ts := time.Now()
res, err := w.stmt.Exec(args) // nolint:staticcheck
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Exec", getCallerName(), td, err)...)
}
*/
return res, err
}
// Query implements driver.Stmt Query
func (w *wrapperStmt) Query(args []driver.Value) (driver.Rows, error) {
var ctx context.Context
if w.ctx != nil {
ctx = w.ctx
} else {
ctx = context.Background()
}
_ = ctx
labels := []string{labelMethod, "Query"}
ts := time.Now()
rows, err := w.stmt.Query(args) // nolint:staticcheck
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "Query", getCallerName(), td, err)...)
}
*/
return rows, err
}
// ColumnConverter implements driver.ColumnConverter
func (w *wrapperStmt) ColumnConverter(idx int) driver.ValueConverter {
s, ok := w.stmt.(driver.ColumnConverter) // nolint:staticcheck
if !ok {
// fall back to the default converter instead of returning a nil ValueConverter
return driver.DefaultParameterConverter
}
return s.ColumnConverter(idx)
}
// ExecContext implements driver.StmtExecContext ExecContext
func (w *wrapperStmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {
var nctx context.Context
var span tracer.Span
name := getQueryName(ctx)
if w.ctx != nil {
nctx, span = w.opts.Tracer.Start(w.ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
} else {
nctx, span = w.opts.Tracer.Start(ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
}
span.AddLabels("db.method", "ExecContext")
span.AddLabels("db.statement", name)
defer span.Finish()
if len(args) > 0 {
span.AddLabels("db.args", fmt.Sprintf("%v", namedValueToLabels(args)))
}
if id, ok := ctx.Value(requestid.XRequestIDKey{}).(string); ok {
span.AddLabels("x-request-id", id)
}
labels := []string{labelMethod, "ExecContext", labelQuery, name}
if conn, ok := w.stmt.(driver.StmtExecContext); ok {
ts := time.Now()
res, err := conn.ExecContext(nctx, args)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "ExecContext", name, td, err)...)
}
*/
return res, err
}
values, err := namedValueToValue(args)
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "ExecContext", name, 0, err)...)
}
*/
return nil, err
}
ts := time.Now()
res, err := w.Exec(values) // nolint:staticcheck
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "ExecContext", name, td, err)...)
}
*/
return res, err
}
// QueryContext implements driver.StmtQueryContext QueryContext
func (w *wrapperStmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) {
var nctx context.Context
var span tracer.Span
name := getQueryName(ctx)
if w.ctx != nil {
nctx, span = w.opts.Tracer.Start(w.ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
} else {
nctx, span = w.opts.Tracer.Start(ctx, "sdk.database", tracer.WithSpanKind(tracer.SpanKindClient))
}
span.AddLabels("db.method", "QueryContext")
span.AddLabels("db.statement", name)
defer span.Finish()
if len(args) > 0 {
span.AddLabels("db.args", fmt.Sprintf("%v", namedValueToLabels(args)))
}
if id, ok := ctx.Value(requestid.XRequestIDKey{}).(string); ok {
span.AddLabels("x-request-id", id)
}
labels := []string{labelMethod, "QueryContext", labelQuery, name}
if conn, ok := w.stmt.(driver.StmtQueryContext); ok {
ts := time.Now()
rows, err := conn.QueryContext(nctx, args)
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "QueryContext", name, td, err)...)
}
*/
return rows, err
}
values, err := namedValueToValue(args)
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "QueryContext", name, 0, err)...)
}
*/
return nil, err
}
ts := time.Now()
rows, err := w.Query(values) // nolint:staticcheck
td := time.Since(ts)
te := td.Seconds()
if err != nil {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelFailure)...).Inc()
span.SetStatus(tracer.SpanStatusError, err.Error())
} else {
w.opts.Meter.Counter(meterRequestTotal, append(labels, labelStatus, labelSuccess)...).Inc()
}
w.opts.Meter.Summary(meterRequestLatencyMicroseconds, labels...).Update(te)
w.opts.Meter.Histogram(meterRequestDurationSeconds, labels...).Update(te)
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(ctx, "QueryContext", name, td, err)...)
}
*/
return rows, err
}

63
hooks/sql/tx.go Normal file
View File

@@ -0,0 +1,63 @@
package sql
import (
"context"
"database/sql/driver"
"time"
"go.unistack.org/micro/v4/tracer"
)
var _ driver.Tx = (*wrapperTx)(nil)
// wrapperTx defines a wrapper for driver.Tx
type wrapperTx struct {
tx driver.Tx
span tracer.Span
opts Options
ctx context.Context
}
// Commit implements driver.Tx Commit
func (w *wrapperTx) Commit() error {
ts := time.Now()
err := w.tx.Commit()
td := time.Since(ts)
_ = td
if w.span != nil {
if err != nil {
w.span.SetStatus(tracer.SpanStatusError, err.Error())
}
w.span.Finish()
}
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(w.ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(w.ctx, "Commit", getCallerName(), td, err)...)
}
*/
w.ctx = nil
return err
}
// Rollback implements driver.Tx Rollback
func (w *wrapperTx) Rollback() error {
ts := time.Now()
err := w.tx.Rollback()
td := time.Since(ts)
_ = td
if w.span != nil {
if err != nil {
w.span.SetStatus(tracer.SpanStatusError, err.Error())
}
w.span.Finish()
}
/*
if w.opts.LoggerEnabled && w.opts.Logger.V(w.opts.LoggerLevel) {
w.opts.Logger.Log(w.ctx, w.opts.LoggerLevel, w.opts.LoggerObserver(w.ctx, "Rollback", getCallerName(), td, err)...)
}
*/
w.ctx = nil
return err
}

19
hooks/sql/wrap.go Normal file
View File

@@ -0,0 +1,19 @@
package sql
import (
"database/sql/driver"
)
/*
func wrapDriver(d driver.Driver, opts Options) driver.Driver {
if _, ok := d.(driver.DriverContext); ok {
return &wrapperDriver{driver: d, opts: opts}
}
return struct{ driver.Driver }{&wrapperDriver{driver: d, opts: opts}}
}
*/
// WrapConn allows an existing driver.Conn to be wrapped.
func WrapConn(c driver.Conn, opts ...Option) driver.Conn {
return wrapConn(c, NewOptions(opts...))
}

20699
hooks/sql/wrap_gen.go Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,133 @@
package validator
import (
"context"
"go.unistack.org/micro/v4/client"
"go.unistack.org/micro/v4/errors"
"go.unistack.org/micro/v4/server"
)
var (
DefaultClientErrorFunc = func(req client.Request, rsp interface{}, err error) error {
if rsp != nil {
return errors.BadGateway(req.Service(), "%v", err)
}
return errors.BadRequest(req.Service(), "%v", err)
}
DefaultServerErrorFunc = func(req server.Request, rsp interface{}, err error) error {
if rsp != nil {
return errors.BadGateway(req.Service(), "%v", err)
}
return errors.BadRequest(req.Service(), "%v", err)
}
)
type (
ClientErrorFunc func(client.Request, interface{}, error) error
ServerErrorFunc func(server.Request, interface{}, error) error
)
// Options struct holds wrapper options
type Options struct {
ClientErrorFn ClientErrorFunc
ServerErrorFn ServerErrorFunc
ClientValidateResponse bool
ServerValidateResponse bool
}
// Option func signature
type Option func(*Options)
func ClientValidateResponse(b bool) Option {
return func(o *Options) {
o.ClientValidateResponse = b
}
}
func ServerValidateResponse(b bool) Option {
return func(o *Options) {
o.ServerValidateResponse = b
}
}
func ClientReqErrorFn(fn ClientErrorFunc) Option {
return func(o *Options) {
o.ClientErrorFn = fn
}
}
func ServerErrorFn(fn ServerErrorFunc) Option {
return func(o *Options) {
o.ServerErrorFn = fn
}
}
func NewOptions(opts ...Option) Options {
options := Options{
ClientErrorFn: DefaultClientErrorFunc,
ServerErrorFn: DefaultServerErrorFunc,
}
for _, o := range opts {
o(&options)
}
return options
}
func NewHook(opts ...Option) *hook {
return &hook{opts: NewOptions(opts...)}
}
type validator interface {
Validate() error
}
type hook struct {
opts Options
}
func (w *hook) ClientCall(next client.FuncCall) client.FuncCall {
return func(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
if v, ok := req.Body().(validator); ok {
if err := v.Validate(); err != nil {
return w.opts.ClientErrorFn(req, nil, err)
}
}
err := next(ctx, req, rsp, opts...)
if v, ok := rsp.(validator); ok && w.opts.ClientValidateResponse {
if verr := v.Validate(); verr != nil {
return w.opts.ClientErrorFn(req, rsp, verr)
}
}
return err
}
}
func (w *hook) ClientStream(next client.FuncStream) client.FuncStream {
return func(ctx context.Context, req client.Request, opts ...client.CallOption) (client.Stream, error) {
if v, ok := req.Body().(validator); ok {
if err := v.Validate(); err != nil {
return nil, w.opts.ClientErrorFn(req, nil, err)
}
}
return next(ctx, req, opts...)
}
}
func (w *hook) ServerHandler(next server.FuncHandler) server.FuncHandler {
return func(ctx context.Context, req server.Request, rsp interface{}) error {
if v, ok := req.Body().(validator); ok {
if err := v.Validate(); err != nil {
return w.opts.ServerErrorFn(req, nil, err)
}
}
err := next(ctx, req, rsp)
if v, ok := rsp.(validator); ok && w.opts.ServerValidateResponse {
if verr := v.Validate(); verr != nil {
return w.opts.ServerErrorFn(req, rsp, verr)
}
}
return err
}
}
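The hook only requires request (and optionally response) bodies to satisfy interface{ Validate() error }, which protoc-gen-validate generated types already provide; a hand-written sketch of such a type (struct, field and rule are illustrative, fmt import assumed):

type CreateUserRequest struct {
	Name string
}

// Validate makes the type acceptable to the hook above.
func (r *CreateUserRequest) Validate() error {
	if r.Name == "" {
		return fmt.Errorf("name is required")
	}
	return nil
}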

View File

@@ -4,17 +4,6 @@ import "context"
type loggerKey struct{}
// MustContext returns logger from passed context or DefaultLogger if empty
func MustContext(ctx context.Context) Logger {
if ctx == nil {
return DefaultLogger
}
if l, ok := ctx.Value(loggerKey{}).(Logger); ok && l != nil {
return l
}
return DefaultLogger
}
// FromContext returns logger from passed context
func FromContext(ctx context.Context) (Logger, bool) {
if ctx == nil {
@@ -24,6 +13,15 @@ func FromContext(ctx context.Context) (Logger, bool) {
return l, ok
}
// MustContext returns logger from passed context or panics if it is not set
func MustContext(ctx context.Context) Logger {
l, ok := FromContext(ctx)
if !ok {
panic("missing logger")
}
return l
}
// NewContext stores logger into passed context
func NewContext(ctx context.Context, l Logger) context.Context {
if ctx == nil {

View File

@@ -4,18 +4,20 @@ package logger
type Level int8
const (
// TraceLevel level usually used to find bugs, very verbose
// TraceLevel usually used to find bugs, very verbose
TraceLevel Level = iota - 2
// DebugLevel level used only when enabled debugging
// DebugLevel used only when enabled debugging
DebugLevel
// InfoLevel level used for general info about what's going on inside the application
// InfoLevel used for general info about what's going on inside the application
InfoLevel
// WarnLevel level used for non-critical entries
// WarnLevel used for non-critical entries
WarnLevel
// ErrorLevel level used for errors that should definitely be noted
// ErrorLevel used for errors that should definitely be noted
ErrorLevel
// FatalLevel level used for critical errors and then calls `os.Exit(1)`
// FatalLevel used for critical errors and then calls `os.Exit(1)`
FatalLevel
// NoneLevel used to disable logging
NoneLevel
)
// String returns logger level string representation
@@ -33,6 +35,8 @@ func (l Level) String() string {
return "error"
case FatalLevel:
return "fatal"
case NoneLevel:
return "none"
}
return "info"
}
@@ -58,6 +62,8 @@ func ParseLevel(lvl string) Level {
return ErrorLevel
case FatalLevel.String():
return FatalLevel
case NoneLevel.String():
return NoneLevel
}
return InfoLevel
}
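A small sketch of how the new level could be selected at startup (the environment variable name is an assumption; unknown values fall back to InfoLevel):

// "none" now disables log output entirely
logger.DefaultLevel = logger.ParseLevel(os.Getenv("LOG_LEVEL"))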

View File

@@ -1,5 +1,5 @@
// Package logger provides a log interface
package logger // import "go.unistack.org/micro/v3/logger"
package logger
import (
"context"
@@ -11,11 +11,9 @@ var DefaultContextAttrFuncs []ContextAttrFunc
var (
// DefaultLogger variable
DefaultLogger Logger = NewLogger()
DefaultLogger = NewLogger()
// DefaultLevel used by logger
DefaultLevel = InfoLevel
// DefaultCallerSkipCount used by logger
DefaultCallerSkipCount = 2
)
// Logger is a generic logging interface
@@ -33,33 +31,19 @@ type Logger interface {
// Fields set fields to always be logged with keyval pairs
Fields(fields ...interface{}) Logger
// Info level message
Info(ctx context.Context, args ...interface{})
Info(ctx context.Context, msg string, args ...interface{})
// Trace level message
Trace(ctx context.Context, args ...interface{})
Trace(ctx context.Context, msg string, args ...interface{})
// Debug level message
Debug(ctx context.Context, args ...interface{})
Debug(ctx context.Context, msg string, args ...interface{})
// Warn level message
Warn(ctx context.Context, args ...interface{})
Warn(ctx context.Context, msg string, args ...interface{})
// Error level message
Error(ctx context.Context, args ...interface{})
Error(ctx context.Context, msg string, args ...interface{})
// Fatal level message
Fatal(ctx context.Context, args ...interface{})
// Infof level message
Infof(ctx context.Context, msg string, args ...interface{})
// Tracef level message
Tracef(ctx context.Context, msg string, args ...interface{})
// Debug level message
Debugf(ctx context.Context, msg string, args ...interface{})
// Warn level message
Warnf(ctx context.Context, msg string, args ...interface{})
// Error level message
Errorf(ctx context.Context, msg string, args ...interface{})
// Fatal level message
Fatalf(ctx context.Context, msg string, args ...interface{})
Fatal(ctx context.Context, msg string, args ...interface{})
// Log logs message with needed level
Log(ctx context.Context, level Level, args ...interface{})
// Logf logs message with needed level
Logf(ctx context.Context, level Level, msg string, args ...interface{})
Log(ctx context.Context, level Level, msg string, args ...interface{})
// Name returns broker instance name
Name() string
// String returns the type of logger
@@ -68,108 +52,3 @@ type Logger interface {
// Field contains keyval pair
type Field interface{}
// Info writes msg to default logger on info level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Info(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Info(ctx, args...)
}
// Error writes msg to default logger on error level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Error(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Error(ctx, args...)
}
// Debug writes msg to default logger on debug level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Debug(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Debug(ctx, args...)
}
// Warn writes msg to default logger on warn level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Warn(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Warn(ctx, args...)
}
// Trace writes msg to default logger on trace level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Trace(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Trace(ctx, args...)
}
// Fatal writes msg to default logger on fatal level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Fatal(ctx context.Context, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Fatal(ctx, args...)
}
// Infof writes formatted msg to default logger on info level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Infof(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Infof(ctx, msg, args...)
}
// Errorf writes formatted msg to default logger on error level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Errorf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Errorf(ctx, msg, args...)
}
// Debugf writes formatted msg to default logger on debug level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Debugf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Debugf(ctx, msg, args...)
}
// Warnf writes formatted msg to default logger on warn level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Warnf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Warnf(ctx, msg, args...)
}
// Tracef writes formatted msg to default logger on trace level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Tracef(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Tracef(ctx, msg, args...)
}
// Fatalf writes formatted msg to default logger on fatal level
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Fatalf(ctx context.Context, msg string, args ...interface{}) {
DefaultLogger.Clone(WithCallerSkipCount(DefaultCallerSkipCount+1)).Fatalf(ctx, msg, args...)
}
// V returns true if passed level enabled in default logger
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func V(level Level) bool {
return DefaultLogger.V(level)
}
// Init initialize logger
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Init(opts ...Option) error {
return DefaultLogger.Init(opts...)
}
// Fields create logger with specific fields
//
// Deprecated: Dont use logger methods directly, use instance of logger to avoid additional allocations
func Fields(fields ...interface{}) Logger {
return DefaultLogger.Fields(fields...)
}
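The interface change drops the formatted ...f variants in favor of a message plus key/value attributes; a before/after sketch of a call site (ctx and id are assumed to be in scope):

// before (v3, now removed):
//   logger.Infof(ctx, "user %d created", id)
// after (v4): message plus key/value attributes
logger.DefaultLogger.Info(ctx, "user created", "user_id", id)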

View File

@@ -4,12 +4,17 @@ import (
"context"
)
const (
defaultCallerSkipCount = 2
)
type noopLogger struct {
opts Options
}
func NewLogger(opts ...Option) Logger {
options := NewOptions(opts...)
options.CallerSkipCount = defaultCallerSkipCount
return &noopLogger{opts: options}
}
@@ -51,44 +56,23 @@ func (l *noopLogger) String() string {
return "noop"
}
func (l *noopLogger) Log(ctx context.Context, lvl Level, attrs ...interface{}) {
func (l *noopLogger) Log(ctx context.Context, lvl Level, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Info(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Info(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Debug(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Debug(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Error(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Error(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Trace(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Trace(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Warn(ctx context.Context, attrs ...interface{}) {
func (l *noopLogger) Warn(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Fatal(ctx context.Context, attrs ...interface{}) {
}
func (l *noopLogger) Logf(ctx context.Context, lvl Level, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Infof(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Debugf(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Errorf(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Tracef(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Warnf(ctx context.Context, msg string, attrs ...interface{}) {
}
func (l *noopLogger) Fatalf(ctx context.Context, msg string, attrs ...interface{}) {
func (l *noopLogger) Fatal(ctx context.Context, msg string, attrs ...interface{}) {
}

View File

@@ -5,9 +5,10 @@ import (
"io"
"log/slog"
"os"
"slices"
"time"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v4/meter"
)
// Option func signature
@@ -15,18 +16,6 @@ type Option func(*Options)
// Options holds logger options
type Options struct {
// Out holds the output writer
Out io.Writer
// Context holds exernal options
Context context.Context
// Name holds the logger name
Name string
// Fields holds additional metadata
Fields []interface{}
// CallerSkipCount number of frmaes to skip
CallerSkipCount int
// ContextAttrFuncs contains funcs that executed before log func on context
ContextAttrFuncs []ContextAttrFunc
// TimeKey is the key used for the time of the log call
TimeKey string
// LevelKey is the key used for the level of the log call
@@ -39,16 +28,30 @@ type Options struct {
SourceKey string
// StacktraceKey is the key used for the stacktrace
StacktraceKey string
// AddStacktrace controls writing of stacktaces on error
AddStacktrace bool
// AddSource enabled writing source file and position in log
AddSource bool
// The logging level the logger should log
Level Level
// TimeFunc used to obtain current time
TimeFunc func() time.Time
// Name holds the logger name
Name string
// Out holds the output writer
Out io.Writer
// Context holds external options
Context context.Context
// Meter used to count logs for specific level
Meter meter.Meter
// TimeFunc used to obtain current time
TimeFunc func() time.Time
// Fields holds additional metadata
Fields []interface{}
// ContextAttrFuncs contains funcs that executed before log func on context
ContextAttrFuncs []ContextAttrFunc
// CallerSkipCount number of frames to skip
CallerSkipCount int
// The logging level the logger should log
Level Level
// AddSource enabled writing source file and position in log
AddSource bool
// AddStacktrace controls writing of stacktraces on error
AddStacktrace bool
// DedupKeys deduplicate keys in log output
DedupKeys bool
}
// NewOptions creates new options struct
@@ -57,7 +60,6 @@ func NewOptions(opts ...Option) Options {
Level: DefaultLevel,
Fields: make([]interface{}, 0, 6),
Out: os.Stderr,
CallerSkipCount: DefaultCallerSkipCount,
Context: context.Background(),
ContextAttrFuncs: DefaultContextAttrFuncs,
AddSource: true,
@@ -74,13 +76,43 @@ func NewOptions(opts ...Option) Options {
return options
}
// WithContextAttrFuncs appends default funcs for the context arrts filler
// WithContextAttrFuncs appends default funcs for the context attrs filler
func WithContextAttrFuncs(fncs ...ContextAttrFunc) Option {
return func(o *Options) {
o.ContextAttrFuncs = append(o.ContextAttrFuncs, fncs...)
}
}
// WithDedupKeys dont log duplicate keys
func WithDedupKeys(b bool) Option {
return func(o *Options) {
o.DedupKeys = b
}
}
// WithAddFields add fields for the logger
func WithAddFields(fields ...interface{}) Option {
return func(o *Options) {
if o.DedupKeys {
for i := 0; i < len(o.Fields); i += 2 {
for j := 0; j < len(fields); j += 2 {
iv, iok := o.Fields[i].(string)
jv, jok := fields[j].(string)
if iok && jok && iv == jv {
o.Fields[i+1] = fields[j+1]
fields = slices.Delete(fields, j, j+2)
}
}
}
if len(fields) > 0 {
o.Fields = append(o.Fields, fields...)
}
} else {
o.Fields = append(o.Fields, fields...)
}
}
}
// WithFields set default fields for the logger
func WithFields(fields ...interface{}) Option {
return func(o *Options) {
@@ -102,27 +134,20 @@ func WithOutput(out io.Writer) Option {
}
}
// WitAddStacktrace controls writing stacktrace on error
// WithAddStacktrace controls writing stacktrace on error
func WithAddStacktrace(v bool) Option {
return func(o *Options) {
o.AddStacktrace = v
}
}
// WitAddSource controls writing source file and pos in log
// WithAddSource controls writing source file and pos in log
func WithAddSource(v bool) Option {
return func(o *Options) {
o.AddSource = v
}
}
// WithCallerSkipCount set frame count to skip
func WithCallerSkipCount(c int) Option {
return func(o *Options) {
o.CallerSkipCount = c
}
}
// WithContext set context
func WithContext(ctx context.Context) Option {
return func(o *Options) {
@@ -137,6 +162,13 @@ func WithName(n string) Option {
}
}
// WithMeter sets the meter
func WithMeter(m meter.Meter) Option {
return func(o *Options) {
o.Meter = m
}
}
// WithTimeFunc sets the func to obtain current time
func WithTimeFunc(fn func() time.Time) Option {
return func(o *Options) {
@@ -147,8 +179,8 @@ func WithTimeFunc(fn func() time.Time) Option {
func WithZapKeys() Option {
return func(o *Options) {
o.TimeKey = "@timestamp"
o.LevelKey = "level"
o.MessageKey = "msg"
o.LevelKey = slog.LevelKey
o.MessageKey = slog.MessageKey
o.SourceKey = "caller"
o.StacktraceKey = "stacktrace"
o.ErrorKey = "error"
@@ -157,8 +189,8 @@ func WithZapKeys() Option {
func WithZerologKeys() Option {
return func(o *Options) {
o.TimeKey = "time"
o.LevelKey = "level"
o.TimeKey = slog.TimeKey
o.LevelKey = slog.LevelKey
o.MessageKey = "message"
o.SourceKey = "caller"
o.StacktraceKey = "stacktrace"
@@ -180,8 +212,8 @@ func WithSlogKeys() Option {
func WithMicroKeys() Option {
return func(o *Options) {
o.TimeKey = "timestamp"
o.LevelKey = "level"
o.MessageKey = "msg"
o.LevelKey = slog.LevelKey
o.MessageKey = slog.MessageKey
o.SourceKey = "caller"
o.StacktraceKey = "stacktrace"
o.ErrorKey = "error"
@@ -191,6 +223,8 @@ func WithMicroKeys() Option {
// WithAddCallerSkipCount adds to the caller skip count of a copied logger
func WithAddCallerSkipCount(n int) Option {
return func(o *Options) {
o.CallerSkipCount += n
if n > 0 {
o.CallerSkipCount += n
}
}
}

View File

@@ -2,17 +2,27 @@ package slog
import (
"context"
"fmt"
"io"
"log/slog"
"os"
"reflect"
"regexp"
"runtime"
"strconv"
"sync"
"sync/atomic"
"time"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/semconv"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/semconv"
"go.unistack.org/micro/v4/tracer"
)
const (
badKey = "!BADKEY"
// defaultCallerSkipCount used by logger
defaultCallerSkipCount = 3
timeFormat = "2006-01-02T15:04:05.000000000Z07:00"
)
var reTrace = regexp.MustCompile(`.*/slog/logger\.go.*\n`)
@@ -24,8 +34,30 @@ var (
warnValue = slog.StringValue("warn")
errorValue = slog.StringValue("error")
fatalValue = slog.StringValue("fatal")
noneValue = slog.StringValue("none")
)
type wrapper struct {
h slog.Handler
level atomic.Int64
}
func (h *wrapper) Enabled(ctx context.Context, level slog.Level) bool {
return level >= slog.Level(int(h.level.Load()))
}
func (h *wrapper) Handle(ctx context.Context, rec slog.Record) error {
return h.h.Handle(ctx, rec)
}
func (h *wrapper) WithAttrs(attrs []slog.Attr) slog.Handler {
return h.h.WithAttrs(attrs)
}
func (h *wrapper) WithGroup(name string) slog.Handler {
return h.h.WithGroup(name)
}
func (s *slogLogger) renameAttr(_ []string, a slog.Attr) slog.Attr {
switch a.Key {
case slog.SourceKey:
@@ -34,6 +66,7 @@ func (s *slogLogger) renameAttr(_ []string, a slog.Attr) slog.Attr {
a.Key = s.opts.SourceKey
case slog.TimeKey:
a.Key = s.opts.TimeKey
a.Value = slog.StringValue(a.Value.Time().Format(timeFormat))
case slog.MessageKey:
a.Key = s.opts.MessageKey
case slog.LevelKey:
@@ -53,6 +86,8 @@ func (s *slogLogger) renameAttr(_ []string, a slog.Attr) slog.Attr {
a.Value = errorValue
case lvl >= logger.FatalLevel:
a.Value = fatalValue
case lvl >= logger.NoneLevel:
a.Value = noneValue
default:
a.Value = infoValue
}
@@ -62,8 +97,7 @@ func (s *slogLogger) renameAttr(_ []string, a slog.Attr) slog.Attr {
}
type slogLogger struct {
leveler *slog.LevelVar
handler slog.Handler
handler *wrapper
opts logger.Options
mu sync.RWMutex
}
@@ -77,51 +111,53 @@ func (s *slogLogger) Clone(opts ...logger.Option) logger.Logger {
o(&options)
}
l := &slogLogger{
opts: options,
if len(options.ContextAttrFuncs) == 0 {
options.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
l.leveler = new(slog.LevelVar)
handleOpt := &slog.HandlerOptions{
ReplaceAttr: l.renameAttr,
Level: l.leveler,
AddSource: l.opts.AddSource,
attrs, _ := s.argsAttrs(options.Fields)
l := &slogLogger{
handler: &wrapper{h: s.handler.h.WithAttrs(attrs)},
opts: options,
}
l.leveler.Set(loggerToSlogLevel(l.opts.Level))
l.handler = slog.New(slog.NewJSONHandler(options.Out, handleOpt)).With(options.Fields...).Handler()
l.handler.level.Store(int64(loggerToSlogLevel(options.Level)))
return l
}
func (s *slogLogger) V(level logger.Level) bool {
return s.opts.Level.Enabled(level)
s.mu.Lock()
v := s.opts.Level.Enabled(level)
s.mu.Unlock()
return v
}
func (s *slogLogger) Level(level logger.Level) {
s.leveler.Set(loggerToSlogLevel(level))
s.mu.Lock()
s.opts.Level = level
s.handler.level.Store(int64(loggerToSlogLevel(level)))
s.mu.Unlock()
}
func (s *slogLogger) Options() logger.Options {
return s.opts
}
func (s *slogLogger) Fields(attrs ...interface{}) logger.Logger {
func (s *slogLogger) Fields(fields ...interface{}) logger.Logger {
s.mu.RLock()
level := s.leveler.Level()
options := s.opts
s.mu.RUnlock()
l := &slogLogger{opts: options}
l.leveler = new(slog.LevelVar)
l.leveler.Set(level)
logger.WithAddFields(fields...)(&l.opts)
handleOpt := &slog.HandlerOptions{
ReplaceAttr: l.renameAttr,
Level: l.leveler,
AddSource: l.opts.AddSource,
if len(options.ContextAttrFuncs) == 0 {
options.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
l.handler = slog.New(slog.NewJSONHandler(l.opts.Out, handleOpt)).With(attrs...).Handler()
attrs, _ := s.argsAttrs(fields)
l.handler = &wrapper{h: s.handler.h.WithAttrs(attrs)}
l.handler.level.Store(int64(loggerToSlogLevel(l.opts.Level)))
return l
}
@@ -129,407 +165,81 @@ func (s *slogLogger) Fields(attrs ...interface{}) logger.Logger {
func (s *slogLogger) Init(opts ...logger.Option) error {
s.mu.Lock()
if len(s.opts.ContextAttrFuncs) == 0 {
s.opts.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
for _, o := range opts {
o(&s.opts)
}
s.leveler = new(slog.LevelVar)
if len(s.opts.ContextAttrFuncs) == 0 {
s.opts.ContextAttrFuncs = logger.DefaultContextAttrFuncs
}
handleOpt := &slog.HandlerOptions{
ReplaceAttr: s.renameAttr,
Level: s.leveler,
Level: loggerToSlogLevel(logger.TraceLevel),
AddSource: s.opts.AddSource,
}
s.leveler.Set(loggerToSlogLevel(s.opts.Level))
s.handler = slog.New(slog.NewJSONHandler(s.opts.Out, handleOpt)).With(s.opts.Fields...).Handler()
attrs, _ := s.argsAttrs(s.opts.Fields)
var h slog.Handler
if s.opts.Context != nil {
if v, ok := s.opts.Context.Value(handlerKey{}).(slog.Handler); ok && v != nil {
h = v
}
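// handlerFnKey is expected to hold a handler constructor such as slog.NewTextHandler or
// slog.NewJSONHandler, i.e. func(io.Writer, *slog.HandlerOptions) slog.Handler: it is
// invoked via reflection with the configured output writer and the handler options above.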
if fn := s.opts.Context.Value(handlerFnKey{}); fn != nil {
if rfn := reflect.ValueOf(fn); rfn.Kind() == reflect.Func {
if ret := rfn.Call([]reflect.Value{reflect.ValueOf(s.opts.Out), reflect.ValueOf(handleOpt)}); len(ret) == 1 {
if iface, ok := ret[0].Interface().(slog.Handler); ok && iface != nil {
h = iface
}
}
}
}
}
if h == nil {
h = slog.NewJSONHandler(s.opts.Out, handleOpt)
}
s.handler = &wrapper{h: h.WithAttrs(attrs)}
s.handler.level.Store(int64(loggerToSlogLevel(s.opts.Level)))
s.mu.Unlock()
return nil
}
func (s *slogLogger) Log(ctx context.Context, lvl logger.Level, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", lvl.String()).Inc()
if !s.V(lvl) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), loggerToSlogLevel(lvl), fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace && lvl == logger.ErrorLevel {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String(s.opts.StacktraceKey, traceLines[len(traceLines)-1]))
}
}
}
r.Add(attrs[1:]...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Log(ctx context.Context, lvl logger.Level, msg string, attrs ...interface{}) {
s.printLog(ctx, lvl, msg, attrs...)
}
func (s *slogLogger) Logf(ctx context.Context, lvl logger.Level, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", lvl.String()).Inc()
if !s.V(lvl) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), loggerToSlogLevel(lvl), msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace && lvl == logger.ErrorLevel {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, (slog.String(s.opts.StacktraceKey, traceLines[len(traceLines)-1])))
}
}
}
r.Add(attrs[1:]...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Info(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.InfoLevel, msg, attrs...)
}
func (s *slogLogger) Info(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.InfoLevel.String()).Inc()
if !s.V(logger.InfoLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelInfo, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Debug(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.DebugLevel, msg, attrs...)
}
func (s *slogLogger) Infof(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.InfoLevel.String()).Inc()
if !s.V(logger.InfoLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelInfo, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Trace(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.TraceLevel, msg, attrs...)
}
func (s *slogLogger) Debug(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.DebugLevel.String()).Inc()
if !s.V(logger.DebugLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Error(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.ErrorLevel, msg, attrs...)
}
func (s *slogLogger) Debugf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.DebugLevel.String()).Inc()
if !s.V(logger.DebugLevel) {
return
func (s *slogLogger) Fatal(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.FatalLevel, msg, attrs...)
if closer, ok := s.opts.Out.(io.Closer); ok {
closer.Close()
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Trace(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.TraceLevel.String()).Inc()
if !s.V(logger.TraceLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug-1, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Tracef(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.TraceLevel.String()).Inc()
if !s.V(logger.TraceLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelDebug-1, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Error(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.ErrorLevel.String()).Inc()
if !s.V(logger.ErrorLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String("stacktrace", traceLines[len(traceLines)-1]))
}
}
}
r.Add(attrs[1:]...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Errorf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.ErrorLevel.String()).Inc()
if !s.V(logger.ErrorLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
if s.opts.AddStacktrace {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String("stacktrace", traceLines[len(traceLines)-1]))
}
}
}
r.Add(attrs...)
r.Attrs(func(a slog.Attr) bool {
if a.Key == s.opts.ErrorKey {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, a.Value.String())
return false
}
}
return true
})
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Fatal(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.FatalLevel.String()).Inc()
if !s.V(logger.FatalLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError+1, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
time.Sleep(1 * time.Second)
os.Exit(1)
}
func (s *slogLogger) Fatalf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.FatalLevel.String()).Inc()
if !s.V(logger.FatalLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelError+1, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs...)
_ = s.handler.Handle(ctx, r)
os.Exit(1)
}
func (s *slogLogger) Warn(ctx context.Context, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.WarnLevel.String()).Inc()
if !s.V(logger.WarnLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelWarn, fmt.Sprintf("%s", attrs[0]), pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
}
func (s *slogLogger) Warnf(ctx context.Context, msg string, attrs ...interface{}) {
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", logger.WarnLevel.String()).Inc()
if !s.V(logger.WarnLevel) {
return
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, Infof]
r := slog.NewRecord(s.opts.TimeFunc(), slog.LevelWarn, msg, pcs[0])
for _, fn := range s.opts.ContextAttrFuncs {
attrs = append(attrs, fn(ctx)...)
}
for idx, attr := range attrs {
if ve, ok := attr.(error); ok && ve != nil {
attrs[idx] = slog.String(s.opts.ErrorKey, ve.Error())
break
}
}
r.Add(attrs[1:]...)
_ = s.handler.Handle(ctx, r)
func (s *slogLogger) Warn(ctx context.Context, msg string, attrs ...interface{}) {
s.printLog(ctx, logger.WarnLevel, msg, attrs...)
}
func (s *slogLogger) Name() string {
@@ -540,10 +250,59 @@ func (s *slogLogger) String() string {
return "slog"
}
func (s *slogLogger) printLog(ctx context.Context, lvl logger.Level, msg string, args ...interface{}) {
if !s.V(lvl) {
return
}
var argError error
s.opts.Meter.Counter(semconv.LoggerMessageTotal, "level", lvl.String()).Inc()
attrs, err := s.argsAttrs(args)
if err != nil {
argError = err
}
if argError != nil {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, argError.Error())
}
}
for _, fn := range s.opts.ContextAttrFuncs {
ctxAttrs, err := s.argsAttrs(fn(ctx))
if err != nil {
argError = err
}
attrs = append(attrs, ctxAttrs...)
}
if argError != nil {
if span, ok := tracer.SpanFromContext(ctx); ok {
span.SetStatus(tracer.SpanStatusError, argError.Error())
}
}
if s.opts.AddStacktrace && (lvl == logger.FatalLevel || lvl == logger.ErrorLevel) {
stackInfo := make([]byte, 1024*1024)
if stackSize := runtime.Stack(stackInfo, false); stackSize > 0 {
traceLines := reTrace.Split(string(stackInfo[:stackSize]), -1)
if len(traceLines) != 0 {
attrs = append(attrs, slog.String(s.opts.StacktraceKey, traceLines[len(traceLines)-1]))
}
}
}
var pcs [1]uintptr
runtime.Callers(s.opts.CallerSkipCount, pcs[:]) // skip [Callers, printLog, LogLvlMethod]
r := slog.NewRecord(s.opts.TimeFunc(), loggerToSlogLevel(lvl), msg, pcs[0])
r.AddAttrs(attrs...)
_ = s.handler.Handle(ctx, r)
}
func NewLogger(opts ...logger.Option) logger.Logger {
s := &slogLogger{
opts: logger.NewOptions(opts...),
}
s.opts.CallerSkipCount = defaultCallerSkipCount
return s
}
@@ -560,6 +319,8 @@ func loggerToSlogLevel(level logger.Level) slog.Level {
return slog.LevelDebug - 1
case logger.FatalLevel:
return slog.LevelError + 1
case logger.NoneLevel:
return slog.LevelError + 2
default:
return slog.LevelInfo
}
@@ -577,7 +338,45 @@ func slogToLoggerLevel(level slog.Level) logger.Level {
return logger.TraceLevel
case slog.LevelError + 1:
return logger.FatalLevel
case slog.LevelError + 2:
return logger.NoneLevel
default:
return logger.InfoLevel
}
}
func (s *slogLogger) argsAttrs(args []interface{}) ([]slog.Attr, error) {
attrs := make([]slog.Attr, 0, len(args))
var err error
for idx := 0; idx < len(args); idx++ {
switch arg := args[idx].(type) {
case slog.Attr:
attrs = append(attrs, arg)
case string:
if idx+1 < len(args) {
attrs = append(attrs, slog.Any(arg, args[idx+1]))
idx++
} else {
attrs = append(attrs, slog.String(badKey, arg))
}
case error:
attrs = append(attrs, slog.String(s.opts.ErrorKey, arg.Error()))
err = arg
}
}
return attrs, err
}
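A hypothetical call showing the conversion rules above (attribute names are illustrative and the default micro keys are assumed): string arguments act as keys paired with the following value, ready-made slog.Attr values pass through, errors map to ErrorKey, and a trailing unpaired string falls back to badKey.

l.Info(ctx, "msg", "key", "val", slog.Int("n", 1), errors.New("boom"), "dangling")
// with a text handler this yields: key=val n=1 error=boom !BADKEY=dangling (plus msg and the standard keys)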
type handlerKey struct{}
func WithHandler(h slog.Handler) logger.Option {
return logger.SetOption(handlerKey{}, h)
}
type handlerFnKey struct{}
func WithHandlerFunc(fn any) logger.Option {
return logger.SetOption(handlerFnKey{}, fn)
}
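A minimal sketch of wiring a custom handler through these two options (the text and JSON handler choices are just examples):

// pass a constructor; Init calls it with the configured output and handler options
l := NewLogger(logger.WithOutput(os.Stderr), WithHandlerFunc(slog.NewTextHandler))
_ = l.Init()

// or pass an already built handler; its own writer is used instead of logger.WithOutput
lj := NewLogger(WithHandler(slog.NewJSONHandler(os.Stdout, nil)))
_ = lj.Init()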

View File

@@ -3,13 +3,245 @@ package slog
import (
"bytes"
"context"
"errors"
"fmt"
"log"
"log/slog"
"strings"
"testing"
"time"
"go.unistack.org/micro/v3/logger"
"github.com/google/uuid"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/util/buffer"
)
// keep this test first: the assertion below checks a hard-coded line number in the stacktrace
func TestStacktrace(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.DebugLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithAddStacktrace(true),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Error(ctx, "msg1", errors.New("err"))
if !bytes.Contains(buf.Bytes(), []byte(`slog_test.go:32`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestNoneLevel(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.NoneLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithAddStacktrace(true),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Error(ctx, "msg1", errors.New("err"))
if buf.Len() != 0 {
t.Fatalf("logger none level not works, buf contains: %s", buf.Bytes())
}
}
func TestDelayedBuffer(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
dbuf := buffer.NewDelayedBuffer(100, 100*time.Millisecond, buf)
l := NewLogger(logger.WithLevel(logger.ErrorLevel), logger.WithOutput(dbuf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithAddStacktrace(true),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Error(ctx, "msg1", errors.New("err"))
time.Sleep(120 * time.Millisecond)
if !bytes.Contains(buf.Bytes(), []byte(`key1=val1`)) {
t.Fatalf("logger delayed buffer not works, buf contains: %s", buf.Bytes())
}
}
func TestTime(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.ErrorLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithAddStacktrace(true),
logger.WithTimeFunc(func() time.Time {
return time.Unix(0, 0)
}),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Error(ctx, "msg1", errors.New("err"))
if !bytes.Contains(buf.Bytes(), []byte(`timestamp=1970-01-01T03:00:00.000000000+03:00`)) &&
!bytes.Contains(buf.Bytes(), []byte(`timestamp=1970-01-01T00:00:00.000000000Z`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestWithFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithDedupKeys(true),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
l = l.Fields("key1", "val2")
l.Info(ctx, "msg2")
if !bytes.Contains(buf.Bytes(), []byte(`msg=msg2 key1=val1`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestWithDedupKeysWithAddFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
logger.WithDedupKeys(true),
)
if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
if err := l.Init(logger.WithAddFields("key2", "val2")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg2")
if err := l.Init(logger.WithAddFields("key2", "val3", "key1", "val4")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg3")
if !bytes.Contains(buf.Bytes(), []byte(`msg=msg3 key1=val4 key2=val3`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestWithHandlerFunc(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf),
WithHandlerFunc(slog.NewTextHandler),
)
if err := l.Init(); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
if !bytes.Contains(buf.Bytes(), []byte(`msg=msg1`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestWithAddFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg1")
if err := l.Init(logger.WithAddFields("key1", "val1")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg2")
if err := l.Init(logger.WithAddFields("key2", "val2")); err != nil {
t.Fatal(err)
}
l.Info(ctx, "msg3")
if !bytes.Contains(buf.Bytes(), []byte(`"key1"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"key2"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestMultipleFieldsWithLevel(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l = l.Fields("key", "val")
l.Info(ctx, "msg1")
nl := l.Clone(logger.WithLevel(logger.DebugLevel))
nl.Debug(ctx, "msg2")
l.Debug(ctx, "msg3")
if !bytes.Contains(buf.Bytes(), []byte(`"key":"val"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"msg1"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"msg2"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if bytes.Contains(buf.Bytes(), []byte(`"msg3"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestMultipleFields(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.InfoLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l = l.Fields("key", "val")
l = l.Fields("key1", "val1")
l.Info(ctx, "msg")
if !bytes.Contains(buf.Bytes(), []byte(`"key":"val"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"key1":"val1"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
}
func TestError(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
@@ -29,13 +261,22 @@ func TestError(t *testing.T) {
func TestErrorf(t *testing.T) {
ctx := context.TODO()
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.ErrorLevel), logger.WithOutput(buf), logger.WithAddStacktrace(true))
if err := l.Init(); err != nil {
if err := l.Init(logger.WithContextAttrFuncs(func(_ context.Context) []interface{} {
return nil
})); err != nil {
t.Fatal(err)
}
l.Errorf(ctx, "message", fmt.Errorf("error message"))
l.Log(ctx, logger.ErrorLevel, "message", errors.New("error msg"))
l.Log(ctx, logger.ErrorLevel, "", errors.New("error msg"))
if !bytes.Contains(buf.Bytes(), []byte(`"error":"error msg"`)) {
t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
}
if !bytes.Contains(buf.Bytes(), []byte(`"stacktrace":"`)) {
t.Fatalf("logger stacktrace not works, buf contains: %s", buf.Bytes())
}
@@ -99,6 +340,11 @@ func TestFromContextWithFields(t *testing.T) {
if !bytes.Contains(buf.Bytes(), []byte(`"key":"val"`)) {
t.Fatalf("logger fields not works, buf contains: %s", buf.Bytes())
}
l.Info(ctx, "test", "uncorrected number attributes")
if !bytes.Contains(buf.Bytes(), []byte(`"!BADKEY":"uncorrected number attributes"`)) {
t.Fatalf("logger fields not works, buf contains: %s", buf.Bytes())
}
}
func TestClone(t *testing.T) {
@@ -174,3 +420,53 @@ func TestLogger(t *testing.T) {
t.Fatalf("logger warn, buf %s", buf.Bytes())
}
}
func Test_WithContextAttrFunc(t *testing.T) {
loggerContextAttrFuncs := []logger.ContextAttrFunc{
func(ctx context.Context) []interface{} {
md, ok := metadata.FromOutgoingContext(ctx)
if !ok {
return nil
}
attrs := make([]interface{}, 0, 10)
for k, v := range md {
key := strings.ToLower(k)
switch key {
case "x-request-id", "phone", "external-Id", "source-service", "x-app-install-id", "client-id", "client-ip":
attrs = append(attrs, key, v[0])
}
}
return attrs
},
}
logger.DefaultContextAttrFuncs = append(logger.DefaultContextAttrFuncs, loggerContextAttrFuncs...)
ctx := context.TODO()
ctx = metadata.AppendOutgoingContext(ctx, "X-Request-Id", uuid.New().String(),
"Source-Service", "Test-System")
buf := bytes.NewBuffer(nil)
l := NewLogger(logger.WithLevel(logger.TraceLevel), logger.WithOutput(buf))
if err := l.Init(); err != nil {
t.Fatal(err)
}
l.Info(ctx, "test message")
if !(bytes.Contains(buf.Bytes(), []byte(`"level":"info"`)) && bytes.Contains(buf.Bytes(), []byte(`"msg":"test message"`))) {
t.Fatalf("logger info, buf %s", buf.Bytes())
}
if !(bytes.Contains(buf.Bytes(), []byte(`"x-request-id":"`))) {
t.Fatalf("logger info, buf %s", buf.Bytes())
}
if !(bytes.Contains(buf.Bytes(), []byte(`"source-service":"Test-System"`))) {
t.Fatalf("logger info, buf %s", buf.Bytes())
}
buf.Reset()
omd, _ := metadata.FromOutgoingContext(ctx)
l.Info(ctx, "test message1")
omd.Set("Source-Service", "Test-System2")
l.Info(ctx, "test message2")
// t.Logf("xxx %s", buf.Bytes())
}

View File

@@ -8,7 +8,7 @@ import (
"strconv"
"strings"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v4/codec"
)
const sf = "0-+# "
@@ -36,14 +36,14 @@ var (
circularShortBytes = []byte("<shown>")
invalidAngleBytes = []byte("<invalid>")
filteredBytes = []byte("<filtered>")
openBracketBytes = []byte("[")
closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("{")
closeMapBytes = []byte("}")
// openBracketBytes = []byte("[")
// closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("{")
closeMapBytes = []byte("}")
)
type protoMessage interface {
@@ -52,13 +52,15 @@ type protoMessage interface {
}
type Wrapper struct {
val interface{}
s fmt.State
pointers map[uintptr]int
opts *Options
pointers map[uintptr]int
takeMap map[int]bool
val interface{}
s fmt.State
opts *Options
depth int
ignoreNextType bool
takeMap map[int]bool
protoWrapperType bool
sqlWrapperType bool
}

View File

@@ -5,7 +5,7 @@ import (
"strings"
"testing"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v4/codec"
)
func TestUnwrap(t *testing.T) {
@@ -82,12 +82,12 @@ func TestTagged(t *testing.T) {
func TestTaggedNested(t *testing.T) {
type val struct {
key string `logger:"take"`
val string `logger:"omit"`
// val string `logger:"omit"`
unk string
}
type str struct {
key string `logger:"omit"`
val *val `logger:"take"`
// key string `logger:"omit"`
val *val `logger:"take"`
}
var iface interface{}

View File

@@ -1,399 +0,0 @@
// Package wrapper provides wrapper for Logger
package wrapper
import (
"context"
"fmt"
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/server"
)
var (
// DefaultClientCallObserver called by wrapper in client Call
DefaultClientCallObserver = func(ctx context.Context, req client.Request, rsp interface{}, opts []client.CallOption, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultClientStreamObserver called by wrapper in client Stream
DefaultClientStreamObserver = func(ctx context.Context, req client.Request, opts []client.CallOption, stream client.Stream, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultClientPublishObserver called by wrapper in client Publish
DefaultClientPublishObserver = func(ctx context.Context, msg client.Message, opts []client.PublishOption, err error) []string {
labels := []string{"endpoint", msg.Topic()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultServerHandlerObserver called by wrapper in server Handler
DefaultServerHandlerObserver = func(ctx context.Context, req server.Request, rsp interface{}, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultServerSubscriberObserver called by wrapper in server Subscriber
DefaultServerSubscriberObserver = func(ctx context.Context, msg server.Message, err error) []string {
labels := []string{"endpoint", msg.Topic()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultClientCallFuncObserver called by wrapper in client CallFunc
DefaultClientCallFuncObserver = func(ctx context.Context, addr string, req client.Request, rsp interface{}, opts client.CallOptions, err error) []string {
labels := []string{"service", req.Service(), "endpoint", req.Endpoint()}
if err != nil {
labels = append(labels, "error", err.Error())
}
return labels
}
// DefaultSkipEndpoints wrapper not called for these endpoints
DefaultSkipEndpoints = []string{"Meter.Metrics", "Health.Live", "Health.Ready", "Health.Version"}
)
type lWrapper struct {
client.Client
serverHandler server.HandlerFunc
serverSubscriber server.SubscriberFunc
clientCallFunc client.CallFunc
opts Options
}
type (
// ClientCallObserver func signature
ClientCallObserver func(context.Context, client.Request, interface{}, []client.CallOption, error) []string
// ClientStreamObserver func signature
ClientStreamObserver func(context.Context, client.Request, []client.CallOption, client.Stream, error) []string
// ClientPublishObserver func signature
ClientPublishObserver func(context.Context, client.Message, []client.PublishOption, error) []string
// ClientCallFuncObserver func signature
ClientCallFuncObserver func(context.Context, string, client.Request, interface{}, client.CallOptions, error) []string
// ServerHandlerObserver func signature
ServerHandlerObserver func(context.Context, server.Request, interface{}, error) []string
// ServerSubscriberObserver func signature
ServerSubscriberObserver func(context.Context, server.Message, error) []string
)
// Options struct for wrapper
type Options struct {
// Logger that used for log
Logger logger.Logger
// ServerHandlerObservers funcs
ServerHandlerObservers []ServerHandlerObserver
// ServerSubscriberObservers funcs
ServerSubscriberObservers []ServerSubscriberObserver
// ClientCallObservers funcs
ClientCallObservers []ClientCallObserver
// ClientStreamObservers funcs
ClientStreamObservers []ClientStreamObserver
// ClientPublishObservers funcs
ClientPublishObservers []ClientPublishObserver
// ClientCallFuncObservers funcs
ClientCallFuncObservers []ClientCallFuncObserver
// SkipEndpoints
SkipEndpoints []string
// Level for logger
Level logger.Level
// Enabled flag
Enabled bool
}
// Option func signature
type Option func(*Options)
// NewOptions creates Options from Option slice
func NewOptions(opts ...Option) Options {
options := Options{
Logger: logger.DefaultLogger,
Level: logger.TraceLevel,
ClientCallObservers: []ClientCallObserver{DefaultClientCallObserver},
ClientStreamObservers: []ClientStreamObserver{DefaultClientStreamObserver},
ClientPublishObservers: []ClientPublishObserver{DefaultClientPublishObserver},
ClientCallFuncObservers: []ClientCallFuncObserver{DefaultClientCallFuncObserver},
ServerHandlerObservers: []ServerHandlerObserver{DefaultServerHandlerObserver},
ServerSubscriberObservers: []ServerSubscriberObserver{DefaultServerSubscriberObserver},
SkipEndpoints: DefaultSkipEndpoints,
}
for _, o := range opts {
o(&options)
}
return options
}
// WithEnabled enable/disable flag
func WithEnabled(b bool) Option {
return func(o *Options) {
o.Enabled = b
}
}
// WithLevel log level
func WithLevel(l logger.Level) Option {
return func(o *Options) {
o.Level = l
}
}
// WithLogger logger
func WithLogger(l logger.Logger) Option {
return func(o *Options) {
o.Logger = l
}
}
// WithClientCallObservers funcs
func WithClientCallObservers(ob ...ClientCallObserver) Option {
return func(o *Options) {
o.ClientCallObservers = ob
}
}
// WithClientStreamObservers funcs
func WithClientStreamObservers(ob ...ClientStreamObserver) Option {
return func(o *Options) {
o.ClientStreamObservers = ob
}
}
// WithClientPublishObservers funcs
func WithClientPublishObservers(ob ...ClientPublishObserver) Option {
return func(o *Options) {
o.ClientPublishObservers = ob
}
}
// WithClientCallFuncObservers funcs
func WithClientCallFuncObservers(ob ...ClientCallFuncObserver) Option {
return func(o *Options) {
o.ClientCallFuncObservers = ob
}
}
// WithServerHandlerObservers funcs
func WithServerHandlerObservers(ob ...ServerHandlerObserver) Option {
return func(o *Options) {
o.ServerHandlerObservers = ob
}
}
// WithServerSubscriberObservers funcs
func WithServerSubscriberObservers(ob ...ServerSubscriberObserver) Option {
return func(o *Options) {
o.ServerSubscriberObservers = ob
}
}
// SkipEndpoints sets the endpoints to skip
func SkipEndpoints(eps ...string) Option {
return func(o *Options) {
o.SkipEndpoints = append(o.SkipEndpoints, eps...)
}
}
func (l *lWrapper) Call(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
err := l.Client.Call(ctx, req, rsp, opts...)
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ClientCallObservers {
labels = append(labels, o(ctx, req, rsp, opts, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
func (l *lWrapper) Stream(ctx context.Context, req client.Request, opts ...client.CallOption) (client.Stream, error) {
stream, err := l.Client.Stream(ctx, req, opts...)
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return stream, err
}
}
if !l.opts.Enabled {
return stream, err
}
var labels []string
for _, o := range l.opts.ClientStreamObservers {
labels = append(labels, o(ctx, req, opts, stream, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return stream, err
}
func (l *lWrapper) Publish(ctx context.Context, msg client.Message, opts ...client.PublishOption) error {
err := l.Client.Publish(ctx, msg, opts...)
endpoint := msg.Topic()
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ClientPublishObservers {
labels = append(labels, o(ctx, msg, opts, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
func (l *lWrapper) ServerHandler(ctx context.Context, req server.Request, rsp interface{}) error {
err := l.serverHandler(ctx, req, rsp)
endpoint := req.Endpoint()
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ServerHandlerObservers {
labels = append(labels, o(ctx, req, rsp, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
func (l *lWrapper) ServerSubscriber(ctx context.Context, msg server.Message) error {
err := l.serverSubscriber(ctx, msg)
endpoint := msg.Topic()
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ServerSubscriberObservers {
labels = append(labels, o(ctx, msg, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
// NewClientWrapper accepts an open options and returns a Client Wrapper
func NewClientWrapper(opts ...Option) client.Wrapper {
return func(c client.Client) client.Client {
options := NewOptions()
for _, o := range opts {
o(&options)
}
return &lWrapper{opts: options, Client: c}
}
}
// NewClientCallWrapper accepts an options and returns a Call Wrapper
func NewClientCallWrapper(opts ...Option) client.CallWrapper {
return func(h client.CallFunc) client.CallFunc {
options := NewOptions()
for _, o := range opts {
o(&options)
}
l := &lWrapper{opts: options, clientCallFunc: h}
return l.ClientCallFunc
}
}
func (l *lWrapper) ClientCallFunc(ctx context.Context, addr string, req client.Request, rsp interface{}, opts client.CallOptions) error {
err := l.clientCallFunc(ctx, addr, req, rsp, opts)
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range l.opts.SkipEndpoints {
if ep == endpoint {
return err
}
}
if !l.opts.Enabled {
return err
}
var labels []string
for _, o := range l.opts.ClientCallFuncObservers {
labels = append(labels, o(ctx, addr, req, rsp, opts, err)...)
}
l.opts.Logger.Fields(labels).Log(ctx, l.opts.Level)
return err
}
// NewServerHandlerWrapper accepts an options and returns a Handler Wrapper
func NewServerHandlerWrapper(opts ...Option) server.HandlerWrapper {
return func(h server.HandlerFunc) server.HandlerFunc {
options := NewOptions()
for _, o := range opts {
o(&options)
}
l := &lWrapper{opts: options, serverHandler: h}
return l.ServerHandler
}
}
// NewServerSubscriberWrapper accepts an options and returns a Subscriber Wrapper
func NewServerSubscriberWrapper(opts ...Option) server.SubscriberWrapper {
return func(h server.SubscriberFunc) server.SubscriberFunc {
options := NewOptions()
for _, o := range opts {
o(&options)
}
l := &lWrapper{opts: options, serverSubscriber: h}
return l.ServerSubscriber
}
}

View File

@@ -1,142 +1,294 @@
// Package metadata is a way of defining message headers
package metadata
import (
"context"
"fmt"
"strings"
)
// In the metadata package, context and metadata are treated as immutable.
// Deep copies of metadata are made to keep things safe and correct.
// If a caller takes a returned map and mutates it across goroutines, that is the caller's responsibility.
//
// 1. Incoming Context
//
// This context is provided by an external system and populated by the server or broker of the micro framework.
// It should not be modified. The idea is to extract all necessary data from it,
// validate the data, and transfer it into the current context.
// After that, only the current context should be used throughout the code.
//
// 2. Current Context
//
// This is the context used during the execution flow.
// You can add any needed metadata to it and pass it through your code.
//
// 3. Outgoing Context
//
// This context is for sending data to external systems.
// You can add what you need before sending it out.
// But it's usually better to build and prepare this context right before making the external call,
// instead of changing it in many places.
//
// Execution Flow:
//
// [External System]
// ↓
// [Incoming Context]
// ↓
// [Extract & Validate Metadata from Incoming Context]
// ↓
// [Prepare Current Context]
// ↓
// [Enrich Current Context]
// ↓
// [Business Logic]
// ↓
// [Prepare Outgoing Context]
// ↓
// [External System Call]
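//
// A minimal sketch of this flow using the helpers below (the handler and the
// downstream call are placeholders, not part of this package):
//
//	func handle(ctx context.Context) error {
//		// extract and validate what is needed from the incoming context
//		ids := ValueFromIncomingContext(ctx, "x-request-id")
//		if len(ids) == 0 {
//			return fmt.Errorf("missing request id")
//		}
//		// enrich the current context and pass it through the business logic
//		ctx = AppendContext(ctx, "x-request-id", ids[0])
//		// prepare the outgoing context right before the external call
//		ctx = AppendOutgoingContext(ctx, "x-request-id", ids[0])
//		return callExternal(ctx) // placeholder for the external system call
//	}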
type (
mdIncomingKey struct{}
mdOutgoingKey struct{}
mdKey struct{}
metadataCurrentKey struct{}
metadataIncomingKey struct{}
metadataOutgoingKey struct{}
rawMetadata struct {
md Metadata
added [][]string
}
)
// FromIncomingContext returns metadata from incoming ctx
// returned metadata should not be modified or a race condition happens
func FromIncomingContext(ctx context.Context) (Metadata, bool) {
if ctx == nil {
return nil, false
}
md, ok := ctx.Value(mdIncomingKey{}).(*rawMetadata)
if !ok || md.md == nil {
return nil, false
}
return md.md, ok
}
// FromOutgoingContext returns metadata from outgoing ctx
// returned metadata should not be modified or a race condition happens
func FromOutgoingContext(ctx context.Context) (Metadata, bool) {
if ctx == nil {
return nil, false
}
md, ok := ctx.Value(mdOutgoingKey{}).(*rawMetadata)
if !ok || md.md == nil {
return nil, false
}
return md.md, ok
}
// FromContext returns metadata from the given context
// returned metadata should not be modified or a race condition happens
func FromContext(ctx context.Context) (Metadata, bool) {
if ctx == nil {
return nil, false
}
md, ok := ctx.Value(mdKey{}).(*rawMetadata)
if !ok || md.md == nil {
return nil, false
}
return md.md, ok
}
// NewContext creates a new context with the given metadata
// NewContext creates a new context with the provided Metadata attached.
// The Metadata must not be modified after calling this function.
func NewContext(ctx context.Context, md Metadata) context.Context {
if ctx == nil {
ctx = context.Background()
}
ctx = context.WithValue(ctx, mdKey{}, &rawMetadata{md})
ctx = context.WithValue(ctx, mdIncomingKey{}, &rawMetadata{})
ctx = context.WithValue(ctx, mdOutgoingKey{}, &rawMetadata{})
return ctx
return context.WithValue(ctx, metadataCurrentKey{}, rawMetadata{md: md})
}
// SetOutgoingContext modify outgoing context with given metadata
func SetOutgoingContext(ctx context.Context, md Metadata) bool {
if ctx == nil {
return false
}
if omd, ok := ctx.Value(mdOutgoingKey{}).(*rawMetadata); ok {
omd.md = md
return true
}
return false
}
// SetIncomingContext modify incoming context with given metadata
func SetIncomingContext(ctx context.Context, md Metadata) bool {
if ctx == nil {
return false
}
if omd, ok := ctx.Value(mdIncomingKey{}).(*rawMetadata); ok {
omd.md = md
return true
}
return false
}
// NewIncomingContext creates a new context with incoming metadata attached
// NewIncomingContext creates a new context with the provided incoming Metadata attached.
// The Metadata must not be modified after calling this function.
func NewIncomingContext(ctx context.Context, md Metadata) context.Context {
if ctx == nil {
ctx = context.Background()
}
ctx = context.WithValue(ctx, mdIncomingKey{}, &rawMetadata{md})
if v, ok := ctx.Value(mdOutgoingKey{}).(*rawMetadata); !ok || v == nil {
ctx = context.WithValue(ctx, mdOutgoingKey{}, &rawMetadata{})
}
return ctx
return context.WithValue(ctx, metadataIncomingKey{}, rawMetadata{md: md})
}
// NewOutgoingContext creates a new context with outgoing metadata attached
// NewOutgoingContext creates a new context with the provided outgoing Metadata attached.
// The Metadata must not be modified after calling this function.
func NewOutgoingContext(ctx context.Context, md Metadata) context.Context {
if ctx == nil {
ctx = context.Background()
}
ctx = context.WithValue(ctx, mdOutgoingKey{}, &rawMetadata{md})
if v, ok := ctx.Value(mdIncomingKey{}).(*rawMetadata); !ok || v == nil {
ctx = context.WithValue(ctx, mdIncomingKey{}, &rawMetadata{})
}
return ctx
return context.WithValue(ctx, metadataOutgoingKey{}, rawMetadata{md: md})
}
// AppendOutgoingContext appends new md to context
// AppendContext returns a new context with the provided key-value pairs (kv)
// merged with any existing metadata in the context. For a description of kv,
// please refer to the Pairs documentation.
func AppendContext(ctx context.Context, kv ...string) context.Context {
if len(kv)%2 == 1 {
panic(fmt.Sprintf("metadata: AppendContext got an odd number of input pairs for metadata: %d", len(kv)))
}
md, _ := ctx.Value(metadataCurrentKey{}).(rawMetadata)
added := make([][]string, len(md.added)+1)
copy(added, md.added)
kvCopy := make([]string, 0, len(kv))
for i := 0; i < len(kv); i += 2 {
kvCopy = append(kvCopy, strings.ToLower(kv[i]), kv[i+1])
}
added[len(added)-1] = kvCopy
return context.WithValue(ctx, metadataCurrentKey{}, rawMetadata{md: md.md, added: added})
}
// AppendOutgoingContext returns a new context with the provided key-value pairs (kv)
// merged with any existing metadata in the context. For a description of kv,
// please refer to the Pairs documentation.
func AppendOutgoingContext(ctx context.Context, kv ...string) context.Context {
md, ok := Pairs(kv...)
if !ok {
return ctx
if len(kv)%2 == 1 {
panic(fmt.Sprintf("metadata: AppendOutgoingContext got an odd number of input pairs for metadata: %d", len(kv)))
}
omd, ok := FromOutgoingContext(ctx)
if !ok {
return NewOutgoingContext(ctx, md)
md, _ := ctx.Value(metadataOutgoingKey{}).(rawMetadata)
added := make([][]string, len(md.added)+1)
copy(added, md.added)
kvCopy := make([]string, 0, len(kv))
for i := 0; i < len(kv); i += 2 {
kvCopy = append(kvCopy, strings.ToLower(kv[i]), kv[i+1])
}
for k, v := range md {
omd.Set(k, v)
}
return NewOutgoingContext(ctx, omd)
added[len(added)-1] = kvCopy
return context.WithValue(ctx, metadataOutgoingKey{}, rawMetadata{md: md.md, added: added})
}
// AppendIncomingContext appends new md to context
func AppendIncomingContext(ctx context.Context, kv ...string) context.Context {
md, ok := Pairs(kv...)
// FromContext retrieves a deep copy of the metadata from the context and returns it
// with a boolean indicating if it was found.
func FromContext(ctx context.Context) (Metadata, bool) {
raw, ok := ctx.Value(metadataCurrentKey{}).(rawMetadata)
if !ok {
return ctx
return nil, false
}
omd, ok := FromIncomingContext(ctx)
if !ok {
return NewIncomingContext(ctx, md)
metadataSize := len(raw.md)
for i := range raw.added {
metadataSize += len(raw.added[i]) / 2
}
for k, v := range md {
omd.Set(k, v)
out := make(Metadata, metadataSize)
for k, v := range raw.md {
out[k] = copyOf(v)
}
return NewIncomingContext(ctx, omd)
for _, added := range raw.added {
if len(added)%2 == 1 {
panic(fmt.Sprintf("metadata: FromContext got an odd number of input pairs for metadata: %d", len(added)))
}
for i := 0; i < len(added); i += 2 {
out[added[i]] = append(out[added[i]], added[i+1])
}
}
return out, true
}
// MustContext retrieves a deep copy of the metadata from the context and panics
// if the metadata is not found.
func MustContext(ctx context.Context) Metadata {
md, ok := FromContext(ctx)
if !ok {
panic("missing metadata")
}
return md
}
// FromIncomingContext retrieves a deep copy of the metadata from the context and returns it
// with a boolean indicating if it was found.
func FromIncomingContext(ctx context.Context) (Metadata, bool) {
raw, ok := ctx.Value(metadataIncomingKey{}).(rawMetadata)
if !ok {
return nil, false
}
metadataSize := len(raw.md)
for i := range raw.added {
metadataSize += len(raw.added[i]) / 2
}
out := make(Metadata, metadataSize)
for k, v := range raw.md {
out[k] = copyOf(v)
}
for _, added := range raw.added {
if len(added)%2 == 1 {
panic(fmt.Sprintf("metadata: FromIncomingContext got an odd number of input pairs for metadata: %d", len(added)))
}
for i := 0; i < len(added); i += 2 {
out[added[i]] = append(out[added[i]], added[i+1])
}
}
return out, true
}
// MustIncomingContext retrieves a deep copy of the metadata from the context and panics
// if the metadata is not found.
func MustIncomingContext(ctx context.Context) Metadata {
md, ok := FromIncomingContext(ctx)
if !ok {
panic("missing metadata")
}
return md
}
// FromOutgoingContext retrieves a deep copy of the metadata from the context and returns it
// with a boolean indicating if it was found.
func FromOutgoingContext(ctx context.Context) (Metadata, bool) {
raw, ok := ctx.Value(metadataOutgoingKey{}).(rawMetadata)
if !ok {
return nil, false
}
metadataSize := len(raw.md)
for i := range raw.added {
metadataSize += len(raw.added[i]) / 2
}
out := make(Metadata, metadataSize)
for k, v := range raw.md {
out[k] = copyOf(v)
}
for _, added := range raw.added {
if len(added)%2 == 1 {
panic(fmt.Sprintf("metadata: FromOutgoingContext got an odd number of input pairs for metadata: %d", len(added)))
}
for i := 0; i < len(added); i += 2 {
out[added[i]] = append(out[added[i]], added[i+1])
}
}
return out, ok
}
// MustOutgoingContext retrieves a deep copy of the metadata from the context and panics
// if the metadata is not found.
func MustOutgoingContext(ctx context.Context) Metadata {
md, ok := FromOutgoingContext(ctx)
if !ok {
panic("missing metadata")
}
return md
}
// ValueFromCurrentContext retrieves a deep copy of the metadata for the given key
// from the context, performing a case-insensitive search if needed. Returns nil if not found.
func ValueFromCurrentContext(ctx context.Context, key string) []string {
md, ok := ctx.Value(metadataCurrentKey{}).(rawMetadata)
if !ok {
return nil
}
if v, ok := md.md[key]; ok {
return copyOf(v)
}
for k, v := range md.md {
// Case-insensitive comparison: Metadata is a map, and there's no guarantee
// that the Metadata attached to the context is created using our helper
// functions.
if strings.EqualFold(k, key) {
return copyOf(v)
}
}
return nil
}
// ValueFromIncomingContext retrieves a deep copy of the metadata for the given key
// from the context, performing a case-insensitive search if needed. Returns nil if not found.
func ValueFromIncomingContext(ctx context.Context, key string) []string {
raw, ok := ctx.Value(metadataIncomingKey{}).(rawMetadata)
if !ok {
return nil
}
if v, ok := raw.md[key]; ok {
return copyOf(v)
}
for k, v := range raw.md {
// Case-insensitive comparison: Metadata is a map, and there's no guarantee
// that the Metadata attached to the context is created using our helper
// functions.
if strings.EqualFold(k, key) {
return copyOf(v)
}
}
return nil
}
// ValueFromOutgoingContext retrieves a deep copy of the metadata for the given key
// from the context, performing a case-insensitive search if needed. Returns nil if not found.
func ValueFromOutgoingContext(ctx context.Context, key string) []string {
md, ok := ctx.Value(metadataOutgoingKey{}).(rawMetadata)
if !ok {
return nil
}
if v, ok := md.md[key]; ok {
return copyOf(v)
}
for k, v := range md.md {
// Case-insensitive comparison: Metadata is a map, and there's no guarantee
// that the Metadata attached to the context is created using our helper
// functions.
if strings.EqualFold(k, key) {
return copyOf(v)
}
}
return nil
}

View File

@@ -1,140 +0,0 @@
package metadata
import (
"context"
"testing"
)
func TestFromNilContext(t *testing.T) {
// nolint: staticcheck
c, ok := FromContext(nil)
if ok || c != nil {
t.Fatal("FromContext not works")
}
}
func TestNewNilContext(t *testing.T) {
// nolint: staticcheck
ctx := NewContext(nil, New(0))
c, ok := FromContext(ctx)
if c == nil || !ok {
t.Fatal("NewContext not works")
}
}
func TestFromContext(t *testing.T) {
ctx := context.WithValue(context.TODO(), mdKey{}, &rawMetadata{New(0)})
c, ok := FromContext(ctx)
if c == nil || !ok {
t.Fatal("FromContext not works")
}
}
func TestNewContext(t *testing.T) {
ctx := NewContext(context.TODO(), New(0))
c, ok := FromContext(ctx)
if c == nil || !ok {
t.Fatal("NewContext not works")
}
}
func TestFromIncomingContext(t *testing.T) {
ctx := context.WithValue(context.TODO(), mdIncomingKey{}, &rawMetadata{New(0)})
c, ok := FromIncomingContext(ctx)
if c == nil || !ok {
t.Fatal("FromIncomingContext not works")
}
}
func TestFromOutgoingContext(t *testing.T) {
ctx := context.WithValue(context.TODO(), mdOutgoingKey{}, &rawMetadata{New(0)})
c, ok := FromOutgoingContext(ctx)
if c == nil || !ok {
t.Fatal("FromOutgoingContext not works")
}
}
func TestSetIncomingContext(t *testing.T) {
md := New(1)
md.Set("key", "val")
ctx := context.WithValue(context.TODO(), mdIncomingKey{}, &rawMetadata{})
if !SetIncomingContext(ctx, md) {
t.Fatal("SetIncomingContext not works")
}
md, ok := FromIncomingContext(ctx)
if md == nil || !ok {
t.Fatal("SetIncomingContext not works")
} else if v, ok := md.Get("key"); !ok || v != "val" {
t.Fatal("SetIncomingContext not works")
}
}
func TestSetOutgoingContext(t *testing.T) {
md := New(1)
md.Set("key", "val")
ctx := context.WithValue(context.TODO(), mdOutgoingKey{}, &rawMetadata{})
if !SetOutgoingContext(ctx, md) {
t.Fatal("SetOutgoingContext not works")
}
md, ok := FromOutgoingContext(ctx)
if md == nil || !ok {
t.Fatal("SetOutgoingContext not works")
} else if v, ok := md.Get("key"); !ok || v != "val" {
t.Fatal("SetOutgoingContext not works")
}
}
func TestNewIncomingContext(t *testing.T) {
md := New(1)
md.Set("key", "val")
ctx := NewIncomingContext(context.TODO(), md)
c, ok := FromIncomingContext(ctx)
if c == nil || !ok {
t.Fatal("NewIncomingContext not works")
}
}
func TestNewOutgoingContext(t *testing.T) {
md := New(1)
md.Set("key", "val")
ctx := NewOutgoingContext(context.TODO(), md)
c, ok := FromOutgoingContext(ctx)
if c == nil || !ok {
t.Fatal("NewOutgoingContext not works")
}
}
func TestAppendIncomingContext(t *testing.T) {
md := New(1)
md.Set("key1", "val1")
ctx := AppendIncomingContext(context.TODO(), "key2", "val2")
nmd, ok := FromIncomingContext(ctx)
if nmd == nil || !ok {
t.Fatal("AppendIncomingContext not works")
}
if v, ok := nmd.Get("key2"); !ok || v != "val2" {
t.Fatal("AppendIncomingContext not works")
}
}
func TestAppendOutgoingContext(t *testing.T) {
md := New(1)
md.Set("key1", "val1")
ctx := AppendOutgoingContext(context.TODO(), "key2", "val2")
nmd, ok := FromOutgoingContext(ctx)
if nmd == nil || !ok {
t.Fatal("AppendOutgoingContext not works")
}
if v, ok := nmd.Get("key2"); !ok || v != "val2" {
t.Fatal("AppendOutgoingContext not works")
}
}

metadata/headers.go Normal file

@@ -0,0 +1,19 @@
// Package metadata is a way of defining message headers
package metadata
var (
// HeaderTopic is the header name that contains topic name.
HeaderTopic = "Micro-Topic"
// HeaderContentType specifies content type of message.
HeaderContentType = "Content-Type"
// HeaderEndpoint specifies endpoint in service.
HeaderEndpoint = "Micro-Endpoint"
// HeaderService specifies service.
HeaderService = "Micro-Service"
// HeaderTimeout specifies timeout of operation.
HeaderTimeout = "Micro-Timeout"
// HeaderAuthorization specifies Authorization header.
HeaderAuthorization = "Authorization"
// HeaderXRequestID specifies request id.
HeaderXRequestID = "X-Request-Id"
)
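
As a usage sketch (assuming the Metadata API added elsewhere in this diff), the header names are plain strings that can be used directly as Metadata keys:

package main

import (
	"fmt"

	"go.unistack.org/micro/v4/metadata"
)

func main() {
	md := metadata.New(2)
	md.Set(metadata.HeaderXRequestID, "12345")
	md.Set(metadata.HeaderContentType, "application/json")

	// GetJoined falls back to lowercase and canonical MIME forms of the key.
	fmt.Println(md.GetJoined("x-request-id"))              // 12345
	fmt.Println(md.GetJoined(metadata.HeaderContentType)) // application/json
}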

metadata/helpers.go Normal file

@@ -0,0 +1,7 @@
package metadata
func copyOf(v []string) []string {
vals := make([]string, len(v))
copy(vals, v)
return vals
}
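
copyOf backs the deep-copy guarantees mentioned in the comments above; a small sketch of the observable effect through the exported Copy and Append helpers defined further down in this diff:

package main

import (
	"fmt"

	"go.unistack.org/micro/v4/metadata"
)

func main() {
	src := metadata.Pairs("key", "val")
	dst := src.Copy()

	// The copy owns its own value slices, so growing one side
	// never leaks into the other.
	dst.Append("key", "extra")
	fmt.Println(src.GetJoined("key")) // val
	fmt.Println(dst.GetJoined("key")) // val,extra
}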

metadata/iterator.go Normal file

@@ -0,0 +1,37 @@
package metadata
import "sort"
type Iterator struct {
md Metadata
keys []string
cur int
cnt int
}
// Next advances the iterator to the next element.
func (iter *Iterator) Next(k *string, v *[]string) bool {
if iter.cur+1 > iter.cnt {
return false
}
if k != nil && v != nil {
*k = iter.keys[iter.cur]
vv := iter.md[*k]
*v = make([]string, len(vv))
copy(*v, vv)
iter.cur++
}
return true
}
// Iterator returns an iterator for iterating over metadata in sorted order.
func (md Metadata) Iterator() *Iterator {
iter := &Iterator{md: md, cnt: len(md)}
iter.keys = make([]string, 0, iter.cnt)
for k := range md {
iter.keys = append(iter.keys, k)
}
sort.Strings(iter.keys)
return iter
}
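
A short sketch of the iterator (import path assumed as above): keys come back in sorted order and each value slice is copied before being handed out.

package main

import (
	"fmt"

	"go.unistack.org/micro/v4/metadata"
)

func main() {
	md := metadata.Pairs("b", "2", "a", "1", "c", "3")

	iter := md.Iterator()
	var k string
	var v []string
	for iter.Next(&k, &v) {
		fmt.Println(k, v) // a [1], then b [2], then c [3]
	}
}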


@@ -1,144 +1,160 @@
// Package metadata is a way of defining message headers
package metadata // import "go.unistack.org/micro/v3/metadata"
package metadata
import (
"fmt"
"net/textproto"
"sort"
"strings"
)
var (
// HeaderTopic is the header name that contains topic name
HeaderTopic = "Micro-Topic"
// HeaderContentType specifies content type of message
HeaderContentType = "Content-Type"
// HeaderEndpoint specifies endpoint in service
HeaderEndpoint = "Micro-Endpoint"
// HeaderService specifies service
HeaderService = "Micro-Service"
// HeaderTimeout specifies timeout of operation
HeaderTimeout = "Micro-Timeout"
// HeaderAuthorization specifies Authorization header
HeaderAuthorization = "Authorization"
// HeaderXRequestID specifies request id
HeaderXRequestID = "X-Request-Id"
)
// Metadata is our way of representing request headers internally.
// They're used at the RPC level and translate back and forth
// from Transport headers.
type Metadata map[string]string
type rawMetadata struct {
md Metadata
}
// defaultMetadataSize used when need to init new Metadata
// defaultMetadataSize is used when initializing new Metadata.
var defaultMetadataSize = 2
// Iterator used to iterate over metadata with order
type Iterator struct {
md Metadata
keys []string
cur int
cnt int
}
// Metadata maps keys to values. Use the New, NewWithMetadata and Pairs functions to create it.
type Metadata map[string][]string
// Next advance iterator to next element
func (iter *Iterator) Next(k, v *string) bool {
if iter.cur+1 > iter.cnt {
return false
// New creates an empty Metadata with the specified initial size.
func New(l int) Metadata {
if l == 0 {
l = defaultMetadataSize
}
*k = iter.keys[iter.cur]
*v = iter.md[*k]
iter.cur++
return true
md := make(Metadata, l)
return md
}
// Iterator returns the iterator for metadata in sorted order
func (md Metadata) Iterator() *Iterator {
iter := &Iterator{md: md, cnt: len(md)}
iter.keys = make([]string, 0, iter.cnt)
for k := range md {
iter.keys = append(iter.keys, k)
// NewWithMetadata creates a Metadata from the provided key-value map.
func NewWithMetadata(m map[string]string) Metadata {
md := make(Metadata, len(m))
for key, val := range m {
md[key] = append(md[key], val)
}
sort.Strings(iter.keys)
return iter
return md
}
// Get returns value from metadata by key
func (md Metadata) Get(key string) (string, bool) {
// fast path
val, ok := md[key]
if !ok {
// slow path
val, ok = md[textproto.CanonicalMIMEHeaderKey(key)]
}
return val, ok
}
// Set is used to store value in metadata
func (md Metadata) Set(kv ...string) {
// Pairs returns a Metadata formed from the key-value mapping. It panics if the length of kv is odd.
func Pairs(kv ...string) Metadata {
if len(kv)%2 == 1 {
kv = kv[:len(kv)-1]
panic(fmt.Sprintf("metadata: Pairs got the odd number of input pairs for metadata: %d", len(kv)))
}
for idx := 0; idx < len(kv); idx += 2 {
md[textproto.CanonicalMIMEHeaderKey(kv[idx])] = kv[idx+1]
md := make(Metadata, len(kv)/2)
for i := 0; i < len(kv); i += 2 {
md[kv[i]] = append(md[kv[i]], kv[i+1])
}
return md
}
// Del is used to remove value from metadata
func (md Metadata) Del(keys ...string) {
for _, key := range keys {
// fast path
delete(md, key)
// slow path
delete(md, textproto.CanonicalMIMEHeaderKey(key))
}
}
// Copy makes a copy of the metadata
func Copy(md Metadata, exclude ...string) Metadata {
nmd := New(len(md))
for key, val := range md {
nmd.Set(key, val)
}
nmd.Del(exclude...)
return nmd
}
// New return new sized metadata
func New(size int) Metadata {
if size == 0 {
size = defaultMetadataSize
}
return make(Metadata, size)
}
// Merge merges metadata to existing metadata, overwriting if specified
func Merge(omd Metadata, mmd Metadata, overwrite bool) Metadata {
var ok bool
nmd := Copy(omd)
for key, val := range mmd {
_, ok = nmd[key]
switch {
case ok && !overwrite:
continue
case val != "":
nmd.Set(key, val)
case ok && val == "":
nmd.Del(key)
// Join combines multiple Metadatas into a single Metadata.
// The order of values for each key is determined by the order in which the Metadatas are provided to Join.
func Join(mds ...Metadata) Metadata {
out := Metadata{}
for _, md := range mds {
for k, v := range md {
out[k] = append(out[k], v...)
}
}
return nmd
return out
}
// Pairs from which metadata created
func Pairs(kv ...string) (Metadata, bool) {
if len(kv)%2 == 1 {
return nil, false
// Copy returns a deep copy of Metadata.
func Copy(src Metadata) Metadata {
out := make(Metadata, len(src))
for k, v := range src {
out[k] = copyOf(v)
}
return out
}
// Copy returns a deep copy of Metadata.
func (md Metadata) Copy() Metadata {
out := make(Metadata, len(md))
for k, v := range md {
out[k] = copyOf(v)
}
return out
}
// CopyTo performs a deep copy of Metadata into out.
func (md Metadata) CopyTo(out Metadata) {
for k, v := range md {
out[k] = copyOf(v)
}
}
// Len returns the number of items in Metadata.
func (md Metadata) Len() int {
return len(md)
}
// AsMap returns Metadata as a map[string]string, joining multiple values for a key with a comma.
func (md Metadata) AsMap() map[string]string {
out := make(map[string]string, len(md))
for k, v := range md {
out[k] = strings.Join(v, ",")
}
return out
}
// AsHTTP1 returns a deep copy of Metadata with keys converted to canonical MIME header key format.
func (md Metadata) AsHTTP1() map[string][]string {
out := make(map[string][]string, len(md))
for k, v := range md {
out[textproto.CanonicalMIMEHeaderKey(k)] = copyOf(v)
}
return out
}
// AsHTTP2 returns a deep copy of Metadata with keys converted to lowercase.
func (md Metadata) AsHTTP2() map[string][]string {
out := make(map[string][]string, len(md))
for k, v := range md {
out[strings.ToLower(k)] = copyOf(v)
}
return out
}
// Get retrieves the values for a given key, checking the key in three formats:
// - exact case,
// - lower case,
// - canonical MIME header key format.
func (md Metadata) Get(k string) []string {
v, ok := md[k]
if !ok {
v, ok = md[strings.ToLower(k)]
}
if !ok {
v = md[textproto.CanonicalMIMEHeaderKey(k)]
}
return v
}
// GetJoined retrieves the values for a given key and joins them into a single string, separated by commas.
func (md Metadata) GetJoined(k string) string {
return strings.Join(md.Get(k), ",")
}
// Set assigns the values to the given key.
func (md Metadata) Set(key string, vals ...string) {
if len(vals) == 0 {
return
}
md[key] = vals
}
// Append adds values to the existing values for the given key.
func (md Metadata) Append(key string, vals ...string) {
if len(vals) == 0 {
return
}
md[key] = append(md[key], vals...)
}
// Del removes the values for the given keys. Each key is checked and removed in the following formats:
// - exact case,
// - lower case,
// - canonical MIME header key format.
func (md Metadata) Del(k ...string) {
for i := range k {
delete(md, k[i])
delete(md, strings.ToLower(k[i]))
delete(md, textproto.CanonicalMIMEHeaderKey(k[i]))
}
md := New(len(kv) / 2)
md.Set(kv...)
return md, true
}
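
Taken together, a minimal sketch of the new multi-value API that replaces the old map[string]string behaviour (import path and printed output are illustrative assumptions, not taken from the repository's own examples):

package main

import (
	"fmt"

	"go.unistack.org/micro/v4/metadata"
)

func main() {
	md := metadata.New(0)
	md.Set("Key1", "val1")
	md.Append("Key1", "val2")
	md.Set("Content-Type", "application/json")

	fmt.Println(md.Get("key1"))       // [val1 val2], lookup falls back to the canonical key form
	fmt.Println(md.GetJoined("Key1")) // val1,val2
	fmt.Println(md.AsMap())           // e.g. map[Content-Type:application/json Key1:val1,val2]
	fmt.Println(md.AsHTTP2())         // lowercase keys for HTTP/2 style headers
}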


@@ -5,50 +5,64 @@ import (
"testing"
)
func TestMetadataSetMultiple(t *testing.T) {
md := New(4)
md.Set("key1", "val1", "key2", "val2", "key3")
if v, ok := md.Get("key1"); !ok || v != "val1" {
t.Fatalf("invalid kv %#+v", md)
}
if v, ok := md.Get("key2"); !ok || v != "val2" {
t.Fatalf("invalid kv %#+v", md)
}
if _, ok := md.Get("key3"); ok {
t.Fatalf("invalid kv %#+v", md)
func TestSet(t *testing.T) {
md := Pairs("key1", "val1", "key2", "val2")
md.Set("key1", "val2", "val3")
v := md.GetJoined("key1")
if v != "val2,val3" {
t.Fatal("set not works")
}
}
func TestAppend(t *testing.T) {
ctx := context.Background()
ctx = AppendIncomingContext(ctx, "key1", "val1", "key2", "val2")
md, ok := FromIncomingContext(ctx)
if !ok {
t.Fatal("metadata empty")
}
if _, ok := md.Get("key1"); !ok {
t.Fatal("key1 not found")
/*
func TestAppendOutgoingContextModify(t *testing.T) {
md := Pairs("key1", "val1")
ctx := NewOutgoingContext(context.TODO(), md)
nctx := AppendOutgoingContext(ctx, "key1", "val3", "key2", "val2")
_ = nctx
omd := MustOutgoingContext(nctx)
fmt.Printf("%#+v\n", omd)
}
*/
func TestLowercase(t *testing.T) {
md := New(1)
md["x-request-id"] = []string{"12345"}
v := md.GetJoined("X-Request-Id")
if v == "" {
t.Fatalf("metadata invalid %#+v", md)
}
}
func TestMultipleUsage(t *testing.T) {
ctx := context.TODO()
md := New(0)
md.Set("key1_1", "val1_1", "key1_2", "val1_2", "key1_3", "val1_3")
ctx = NewIncomingContext(ctx, Copy(md))
ctx = NewOutgoingContext(ctx, Copy(md))
imd, _ := FromIncomingContext(ctx)
omd, _ := FromOutgoingContext(ctx)
_ = func(x context.Context) context.Context {
m, _ := FromIncomingContext(x)
m.Del("key1_2")
return ctx
}(ctx)
_ = func(x context.Context) context.Context {
m, _ := FromIncomingContext(x)
m.Del("key1_3")
return ctx
}(ctx)
_ = imd
_ = omd
}
func TestPairs(t *testing.T) {
md, ok := Pairs("key1", "val1", "key2", "val2")
if !ok {
t.Fatal("odd number of kv")
}
if _, ok = md.Get("key1"); !ok {
md := Pairs("key1", "val1", "key2", "val2")
if v := md.Get("key1"); v == nil {
t.Fatal("key1 not found")
}
}
func testCtx(ctx context.Context) {
md := New(2)
md.Set("Key1", "Val1_new")
md.Set("Key3", "Val3")
SetOutgoingContext(ctx, md)
}
func TestPassing(t *testing.T) {
ctx := context.TODO()
md1 := New(2)
@@ -56,63 +70,63 @@ func TestPassing(t *testing.T) {
md1.Set("Key2", "Val2")
ctx = NewIncomingContext(ctx, md1)
testCtx(ctx)
_, ok := FromOutgoingContext(ctx)
if ok {
t.Fatalf("create outgoing context")
}
ctx = NewOutgoingContext(ctx, md1)
md, ok := FromOutgoingContext(ctx)
if !ok {
t.Fatalf("missing metadata from outgoing context")
}
if v, ok := md.Get("Key1"); !ok || v != "Val1_new" {
if v := md.Get("Key1"); v == nil || v[0] != "Val1" {
t.Fatalf("invalid metadata value %#+v", md)
}
}
func TestMerge(t *testing.T) {
omd := Metadata{
"key1": "val1",
}
mmd := Metadata{
"key2": "val2",
}
nmd := Merge(omd, mmd, true)
if len(nmd) != 2 {
t.Fatalf("merge failed: %v", nmd)
}
}
func TestIterator(t *testing.T) {
md := Metadata{
"1Last": "last",
"2First": "first",
"3Second": "second",
}
md := Pairs(
"1Last", "last",
"2First", "first",
"3Second", "second",
)
iter := md.Iterator()
var k, v string
var k string
var v []string
chk := New(3)
for iter.Next(&k, &v) {
// fmt.Printf("k: %s, v: %s\n", k, v)
chk[k] = v
}
for k, v := range chk {
if cv, ok := md[k]; !ok || len(cv) != len(v) || cv[0] != v[0] {
t.Fatalf("XXXX %#+v %#+v", chk, md)
}
}
}
func TestMetadataCanonicalKey(t *testing.T) {
md := New(1)
md.Set("x-request-id", "12345")
v, ok := md.Get("x-request-id")
if !ok {
v := md.GetJoined("x-request-id")
if v == "" {
t.Fatalf("failed to get x-request-id")
} else if v != "12345" {
t.Fatalf("invalid metadata value: %s != %s", "12345", v)
}
v, ok = md.Get("X-Request-Id")
if !ok {
v = md.GetJoined("X-Request-Id")
if v == "" {
t.Fatalf("failed to get x-request-id")
} else if v != "12345" {
t.Fatalf("invalid metadata value: %s != %s", "12345", v)
}
v, ok = md.Get("X-Request-ID")
if !ok {
v = md.GetJoined("X-Request-ID")
if v == "" {
t.Fatalf("failed to get x-request-id")
} else if v != "12345" {
t.Fatalf("invalid metadata value: %s != %s", "12345", v)
@@ -124,8 +138,8 @@ func TestMetadataSet(t *testing.T) {
md.Set("Key", "val")
val, ok := md.Get("Key")
if !ok {
val := md.GetJoined("Key")
if val == "" {
t.Fatal("key Key not found")
}
if val != "val" {
@@ -135,36 +149,27 @@ func TestMetadataSet(t *testing.T) {
func TestMetadataDelete(t *testing.T) {
md := Metadata{
"Foo": "bar",
"Baz": "empty",
"Foo": []string{"bar"},
"Baz": []string{"empty"},
}
md.Del("Baz")
_, ok := md.Get("Baz")
if ok {
v := md.Get("Baz")
if v != nil {
t.Fatal("key Baz not deleted")
}
}
func TestNilContext(t *testing.T) {
var ctx context.Context
_, ok := FromContext(ctx)
if ok {
t.Fatal("nil context")
}
}
func TestMetadataCopy(t *testing.T) {
md := Metadata{
"Foo": "bar",
"Bar": "baz",
"Foo": []string{"bar"},
"Bar": []string{"baz"},
}
cp := Copy(md)
for k, v := range md {
if cv := cp[k]; cv != v {
if cv := cp[k]; cv[0] != v[0] {
t.Fatalf("Got %s:%s for %s:%s", k, cv, k, v)
}
}
@@ -172,7 +177,7 @@ func TestMetadataCopy(t *testing.T) {
func TestMetadataContext(t *testing.T) {
md := Metadata{
"Foo": "bar",
"Foo": []string{"bar"},
}
ctx := NewContext(context.TODO(), md)
@@ -182,7 +187,7 @@ func TestMetadataContext(t *testing.T) {
t.Errorf("Unexpected error retrieving metadata, got %t", ok)
}
if emd["Foo"] != md["Foo"] {
if emd["Foo"][0] != md["Foo"][0] {
t.Errorf("Expected key: %s val: %s, got key: %s val: %s", "Foo", md["Foo"], "Foo", emd["Foo"])
}
@@ -191,13 +196,74 @@ func TestMetadataContext(t *testing.T) {
}
}
func TestCopy(t *testing.T) {
md := New(2)
md.Set("key1", "val1", "key2", "val2")
nmd := Copy(md, "key2")
if len(nmd) != 1 {
t.Fatal("Copy exclude not works")
} else if nmd["Key1"] != "val1" {
t.Fatal("Copy exclude not works")
func TestFromContext(t *testing.T) {
ctx := context.WithValue(context.TODO(), metadataCurrentKey{}, rawMetadata{md: New(0)})
c, ok := FromContext(ctx)
if c == nil || !ok {
t.Fatal("FromContext not works")
}
}
func TestNewContext(t *testing.T) {
ctx := NewContext(context.TODO(), New(0))
c, ok := FromContext(ctx)
if c == nil || !ok {
t.Fatal("NewContext not works")
}
}
func TestFromIncomingContext(t *testing.T) {
ctx := context.WithValue(context.TODO(), metadataIncomingKey{}, rawMetadata{md: New(0)})
c, ok := FromIncomingContext(ctx)
if c == nil || !ok {
t.Fatal("FromIncomingContext not works")
}
}
func TestFromOutgoingContext(t *testing.T) {
ctx := context.WithValue(context.TODO(), metadataOutgoingKey{}, rawMetadata{md: New(0)})
c, ok := FromOutgoingContext(ctx)
if c == nil || !ok {
t.Fatal("FromOutgoingContext not works")
}
}
func TestNewIncomingContext(t *testing.T) {
md := New(1)
md.Set("key", "val")
ctx := NewIncomingContext(context.TODO(), md)
c, ok := FromIncomingContext(ctx)
if c == nil || !ok {
t.Fatal("NewIncomingContext not works")
}
}
func TestNewOutgoingContext(t *testing.T) {
md := New(1)
md.Set("key", "val")
ctx := NewOutgoingContext(context.TODO(), md)
c, ok := FromOutgoingContext(ctx)
if c == nil || !ok {
t.Fatal("NewOutgoingContext not works")
}
}
func TestAppendOutgoingContext(t *testing.T) {
md := New(1)
md.Set("key1", "val1")
ctx := AppendOutgoingContext(context.TODO(), "key2", "val2")
nmd, ok := FromOutgoingContext(ctx)
if nmd == nil || !ok {
t.Fatal("AppendOutgoingContext not works")
}
if v := nmd.GetJoined("key2"); v != "val2" {
t.Fatal("AppendOutgoingContext not works")
}
}


@@ -15,6 +15,15 @@ func FromContext(ctx context.Context) (Meter, bool) {
return c, ok
}
// MustContext returns the Meter from the context, panicking if it is not found
func MustContext(ctx context.Context) Meter {
m, ok := FromContext(ctx)
if !ok {
panic("missing meter")
}
return m
}
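
A sketch of the new MustContext helper paired with NewContext (the go.unistack.org/micro/v4/meter path is taken from the import changes later in this diff; the metric name is illustrative only):

package main

import (
	"context"

	"go.unistack.org/micro/v4/meter"
)

func main() {
	ctx := meter.NewContext(context.TODO(), meter.DefaultMeter)

	// MustContext panics if no Meter was attached, so it is only suitable
	// where the caller controls how the context was constructed.
	m := meter.MustContext(ctx)
	m.Counter("example_requests_total").Inc()
}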
// NewContext puts the meter in the context
func NewContext(ctx context.Context, c Meter) context.Context {
if ctx == nil {


@@ -11,15 +11,13 @@ import (
var (
// DefaultMeter is the default meter
DefaultMeter Meter = NewMeter()
DefaultMeter = NewMeter()
// DefaultAddress data will be made available on this host:port
DefaultAddress = ":9090"
// DefaultPath the meter endpoint where the Meter data will be made available
DefaultPath = "/metrics"
// DefaultMetricPrefix holds the string that prepends to all metrics
DefaultMetricPrefix = "micro_"
// DefaultLabelPrefix holds the string that prepends to all labels
DefaultLabelPrefix = "micro_"
// DefaultMeterStatsInterval specifies interval for meter updating
DefaultMeterStatsInterval = 5 * time.Second
// DefaultSummaryQuantiles is the default spread of stats for summary
DefaultSummaryQuantiles = []float64{0.5, 0.9, 0.97, 0.99, 1}
// DefaultSummaryWindow is the default window for summary

View File

@@ -37,32 +37,32 @@ func (r *noopMeter) Init(opts ...Option) error {
}
// Counter implements the Meter interface
func (r *noopMeter) Counter(name string, labels ...string) Counter {
func (r *noopMeter) Counter(_ string, labels ...string) Counter {
return &noopCounter{labels: labels}
}
// FloatCounter implements the Meter interface
func (r *noopMeter) FloatCounter(name string, labels ...string) FloatCounter {
func (r *noopMeter) FloatCounter(_ string, labels ...string) FloatCounter {
return &noopFloatCounter{labels: labels}
}
// Gauge implements the Meter interface
func (r *noopMeter) Gauge(name string, f func() float64, labels ...string) Gauge {
func (r *noopMeter) Gauge(_ string, _ func() float64, labels ...string) Gauge {
return &noopGauge{labels: labels}
}
// Summary implements the Meter interface
func (r *noopMeter) Summary(name string, labels ...string) Summary {
func (r *noopMeter) Summary(_ string, labels ...string) Summary {
return &noopSummary{labels: labels}
}
// SummaryExt implements the Meter interface
func (r *noopMeter) SummaryExt(name string, window time.Duration, quantiles []float64, labels ...string) Summary {
func (r *noopMeter) SummaryExt(_ string, _ time.Duration, _ []float64, labels ...string) Summary {
return &noopSummary{labels: labels}
}
// Histogram implements the Meter interface
func (r *noopMeter) Histogram(name string, labels ...string) Histogram {
func (r *noopMeter) Histogram(_ string, labels ...string) Histogram {
return &noopHistogram{labels: labels}
}
@@ -77,7 +77,7 @@ func (r *noopMeter) Set(opts ...Option) Meter {
return m
}
func (r *noopMeter) Write(w io.Writer, opts ...Option) error {
func (r *noopMeter) Write(_ io.Writer, _ ...Option) error {
return nil
}
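
Because every method on the noop meter is a valid no-op, instrumented code can run unchanged when no real backend is configured; a brief sketch (metric and label names are illustrative only):

package main

import "go.unistack.org/micro/v4/meter"

func main() {
	m := meter.NewMeter() // the noop implementation shown above

	// All calls compile and run but record nothing.
	m.Counter("requests_total", "endpoint", "Example.Call").Inc()
	m.Summary("request_latency_microseconds").Update(1.5)
	m.Histogram("request_duration_seconds").Update(0.25)
}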


@@ -17,10 +17,6 @@ type Options struct {
Address string
// Path holds the path for metrics
Path string
// MetricPrefix holds the prefix for all metrics
MetricPrefix string
// LabelPrefix holds the prefix for all labels
LabelPrefix string
// Labels holds the default labels
Labels []string
// WriteProcessMetrics flag to write process metrics
@@ -32,11 +28,9 @@ type Options struct {
// NewOptions prepares a set of options:
func NewOptions(opt ...Option) Options {
opts := Options{
Address: DefaultAddress,
Path: DefaultPath,
Context: context.Background(),
MetricPrefix: DefaultMetricPrefix,
LabelPrefix: DefaultLabelPrefix,
Address: DefaultAddress,
Path: DefaultPath,
Context: context.Background(),
}
for _, o := range opt {
@@ -46,20 +40,6 @@ func NewOptions(opt ...Option) Options {
return opts
}
// LabelPrefix sets the labels prefix
func LabelPrefix(pref string) Option {
return func(o *Options) {
o.LabelPrefix = pref
}
}
// MetricPrefix sets the metric prefix
func MetricPrefix(pref string) Option {
return func(o *Options) {
o.MetricPrefix = pref
}
}
// Context sets the metrics context
func Context(ctx context.Context) Option {
return func(o *Options) {
@@ -90,7 +70,7 @@ func TimingObjectives(value map[float64]float64) Option {
}
*/
// Labels sets the meter labels
// Labels appends the given meter labels
func Labels(ls ...string) Option {
return func(o *Options) {
o.Labels = append(o.Labels, ls...)


@@ -1,347 +0,0 @@
package wrapper // import "go.unistack.org/micro/v3/meter/wrapper"
import (
"context"
"fmt"
"time"
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/server"
)
var (
// ClientRequestDurationSeconds specifies meter metric name
ClientRequestDurationSeconds = "client_request_duration_seconds"
// ClientRequestLatencyMicroseconds specifies meter metric name
ClientRequestLatencyMicroseconds = "client_request_latency_microseconds"
// ClientRequestTotal specifies meter metric name
ClientRequestTotal = "client_request_total"
// ClientRequestInflight specifies meter metric name
ClientRequestInflight = "client_request_inflight"
// ServerRequestDurationSeconds specifies meter metric name
ServerRequestDurationSeconds = "server_request_duration_seconds"
// ServerRequestLatencyMicroseconds specifies meter metric name
ServerRequestLatencyMicroseconds = "server_request_latency_microseconds"
// ServerRequestTotal specifies meter metric name
ServerRequestTotal = "server_request_total"
// ServerRequestInflight specifies meter metric name
ServerRequestInflight = "server_request_inflight"
// PublishMessageDurationSeconds specifies meter metric name
PublishMessageDurationSeconds = "publish_message_duration_seconds"
// PublishMessageLatencyMicroseconds specifies meter metric name
PublishMessageLatencyMicroseconds = "publish_message_latency_microseconds"
// PublishMessageTotal specifies meter metric name
PublishMessageTotal = "publish_message_total"
// PublishMessageInflight specifies meter metric name
PublishMessageInflight = "publish_message_inflight"
// SubscribeMessageDurationSeconds specifies meter metric name
SubscribeMessageDurationSeconds = "subscribe_message_duration_seconds"
// SubscribeMessageLatencyMicroseconds specifies meter metric name
SubscribeMessageLatencyMicroseconds = "subscribe_message_latency_microseconds"
// SubscribeMessageTotal specifies meter metric name
SubscribeMessageTotal = "subscribe_message_total"
// SubscribeMessageInflight specifies meter metric name
SubscribeMessageInflight = "subscribe_message_inflight"
labelSuccess = "success"
labelFailure = "failure"
labelStatus = "status"
labelEndpoint = "endpoint"
// DefaultSkipEndpoints contains the list of endpoints that are not evaluated by the wrapper
DefaultSkipEndpoints = []string{"Meter.Metrics", "Health.Live", "Health.Ready", "Health.Version"}
)
// Options struct
type Options struct {
Meter meter.Meter
lopts []meter.Option
SkipEndpoints []string
}
// Option func signature
type Option func(*Options)
// NewOptions creates new Options struct
func NewOptions(opts ...Option) Options {
options := Options{
Meter: meter.DefaultMeter,
lopts: make([]meter.Option, 0, 5),
SkipEndpoints: DefaultSkipEndpoints,
}
for _, o := range opts {
o(&options)
}
return options
}
// ServiceName passes service name to meter label
func ServiceName(name string) Option {
return func(o *Options) {
o.lopts = append(o.lopts, meter.Labels("name", name))
}
}
// ServiceVersion passes service version to meter label
func ServiceVersion(version string) Option {
return func(o *Options) {
o.lopts = append(o.lopts, meter.Labels("version", version))
}
}
// ServiceID passes service id to meter label
func ServiceID(id string) Option {
return func(o *Options) {
o.lopts = append(o.lopts, meter.Labels("id", id))
}
}
// Meter passes meter
func Meter(m meter.Meter) Option {
return func(o *Options) {
o.Meter = m
}
}
// SkipEndoints adds endpoints to skip
func SkipEndoints(eps ...string) Option {
return func(o *Options) {
o.SkipEndpoints = append(o.SkipEndpoints, eps...)
}
}
type wrapper struct {
client.Client
callFunc client.CallFunc
opts Options
}
// NewClientWrapper create new client wrapper
func NewClientWrapper(opts ...Option) client.Wrapper {
return func(c client.Client) client.Client {
handler := &wrapper{
opts: NewOptions(opts...),
Client: c,
}
return handler
}
}
// NewCallWrapper create new call wrapper
func NewCallWrapper(opts ...Option) client.CallWrapper {
return func(fn client.CallFunc) client.CallFunc {
handler := &wrapper{
opts: NewOptions(opts...),
callFunc: fn,
}
return handler.CallFunc
}
}
func (w *wrapper) CallFunc(ctx context.Context, addr string, req client.Request, rsp interface{}, opts client.CallOptions) error {
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range w.opts.SkipEndpoints {
if ep == endpoint {
return w.callFunc(ctx, addr, req, rsp, opts)
}
}
labels := make([]string, 0, 4)
labels = append(labels, labelEndpoint, endpoint)
w.opts.Meter.Counter(ClientRequestInflight, labels...).Inc()
ts := time.Now()
err := w.callFunc(ctx, addr, req, rsp, opts)
te := time.Since(ts)
w.opts.Meter.Counter(ClientRequestInflight, labels...).Dec()
w.opts.Meter.Summary(ClientRequestLatencyMicroseconds, labels...).Update(te.Seconds())
w.opts.Meter.Histogram(ClientRequestDurationSeconds, labels...).Update(te.Seconds())
if err == nil {
labels = append(labels, labelStatus, labelSuccess)
} else {
labels = append(labels, labelStatus, labelFailure)
}
w.opts.Meter.Counter(ClientRequestTotal, labels...).Inc()
return err
}
func (w *wrapper) Call(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range w.opts.SkipEndpoints {
if ep == endpoint {
return w.Client.Call(ctx, req, rsp, opts...)
}
}
labels := make([]string, 0, 4)
labels = append(labels, labelEndpoint, endpoint)
w.opts.Meter.Counter(ClientRequestInflight, labels...).Inc()
ts := time.Now()
err := w.Client.Call(ctx, req, rsp, opts...)
te := time.Since(ts)
w.opts.Meter.Counter(ClientRequestInflight, labels...).Dec()
w.opts.Meter.Summary(ClientRequestLatencyMicroseconds, labels...).Update(te.Seconds())
w.opts.Meter.Histogram(ClientRequestDurationSeconds, labels...).Update(te.Seconds())
if err == nil {
labels = append(labels, labelStatus, labelSuccess)
} else {
labels = append(labels, labelStatus, labelFailure)
}
w.opts.Meter.Counter(ClientRequestTotal, labels...).Inc()
return err
}
func (w *wrapper) Stream(ctx context.Context, req client.Request, opts ...client.CallOption) (client.Stream, error) {
endpoint := fmt.Sprintf("%s.%s", req.Service(), req.Endpoint())
for _, ep := range w.opts.SkipEndpoints {
if ep == endpoint {
return w.Client.Stream(ctx, req, opts...)
}
}
labels := make([]string, 0, 4)
labels = append(labels, labelEndpoint, endpoint)
w.opts.Meter.Counter(ClientRequestInflight, labels...).Inc()
ts := time.Now()
stream, err := w.Client.Stream(ctx, req, opts...)
te := time.Since(ts)
w.opts.Meter.Counter(ClientRequestInflight, labels...).Dec()
w.opts.Meter.Summary(ClientRequestLatencyMicroseconds, labels...).Update(te.Seconds())
w.opts.Meter.Histogram(ClientRequestDurationSeconds, labels...).Update(te.Seconds())
if err == nil {
labels = append(labels, labelStatus, labelSuccess)
} else {
labels = append(labels, labelStatus, labelFailure)
}
w.opts.Meter.Counter(ClientRequestTotal, labels...).Inc()
return stream, err
}
func (w *wrapper) Publish(ctx context.Context, p client.Message, opts ...client.PublishOption) error {
endpoint := p.Topic()
labels := make([]string, 0, 4)
labels = append(labels, labelEndpoint, endpoint)
w.opts.Meter.Counter(PublishMessageInflight, labels...).Inc()
ts := time.Now()
err := w.Client.Publish(ctx, p, opts...)
te := time.Since(ts)
w.opts.Meter.Counter(PublishMessageInflight, labels...).Dec()
w.opts.Meter.Summary(PublishMessageLatencyMicroseconds, labels...).Update(te.Seconds())
w.opts.Meter.Histogram(PublishMessageDurationSeconds, labels...).Update(te.Seconds())
if err == nil {
labels = append(labels, labelStatus, labelSuccess)
} else {
labels = append(labels, labelStatus, labelFailure)
}
w.opts.Meter.Counter(PublishMessageTotal, labels...).Inc()
return err
}
// NewHandlerWrapper create new server handler wrapper
// deprecated
func NewHandlerWrapper(opts ...Option) server.HandlerWrapper {
handler := &wrapper{
opts: NewOptions(opts...),
}
return handler.HandlerFunc
}
// NewServerHandlerWrapper create new server handler wrapper
func NewServerHandlerWrapper(opts ...Option) server.HandlerWrapper {
handler := &wrapper{
opts: NewOptions(opts...),
}
return handler.HandlerFunc
}
func (w *wrapper) HandlerFunc(fn server.HandlerFunc) server.HandlerFunc {
return func(ctx context.Context, req server.Request, rsp interface{}) error {
endpoint := req.Service() + "." + req.Endpoint()
for _, ep := range w.opts.SkipEndpoints {
if ep == endpoint {
return fn(ctx, req, rsp)
}
}
labels := make([]string, 0, 4)
labels = append(labels, labelEndpoint, endpoint)
w.opts.Meter.Counter(ServerRequestInflight, labels...).Inc()
ts := time.Now()
err := fn(ctx, req, rsp)
te := time.Since(ts)
w.opts.Meter.Counter(ServerRequestInflight, labels...).Dec()
w.opts.Meter.Summary(ServerRequestLatencyMicroseconds, labels...).Update(te.Seconds())
w.opts.Meter.Histogram(ServerRequestDurationSeconds, labels...).Update(te.Seconds())
if err == nil {
labels = append(labels, labelStatus, labelSuccess)
} else {
labels = append(labels, labelStatus, labelFailure)
}
w.opts.Meter.Counter(ServerRequestTotal, labels...).Inc()
return err
}
}
// NewSubscriberWrapper create server subscribe wrapper
// deprecated
func NewSubscriberWrapper(opts ...Option) server.SubscriberWrapper {
handler := &wrapper{
opts: NewOptions(opts...),
}
return handler.SubscriberFunc
}
func NewServerSubscriberWrapper(opts ...Option) server.SubscriberWrapper {
handler := &wrapper{
opts: NewOptions(opts...),
}
return handler.SubscriberFunc
}
func (w *wrapper) SubscriberFunc(fn server.SubscriberFunc) server.SubscriberFunc {
return func(ctx context.Context, msg server.Message) error {
endpoint := msg.Topic()
labels := make([]string, 0, 4)
labels = append(labels, labelEndpoint, endpoint)
w.opts.Meter.Counter(SubscribeMessageInflight, labels...).Inc()
ts := time.Now()
err := fn(ctx, msg)
te := time.Since(ts)
w.opts.Meter.Counter(SubscribeMessageInflight, labels...).Dec()
w.opts.Meter.Summary(SubscribeMessageLatencyMicroseconds, labels...).Update(te.Seconds())
w.opts.Meter.Histogram(SubscribeMessageDurationSeconds, labels...).Update(te.Seconds())
if err == nil {
labels = append(labels, labelStatus, labelSuccess)
} else {
labels = append(labels, labelStatus, labelFailure)
}
w.opts.Meter.Counter(SubscribeMessageTotal, labels...).Inc()
return err
}
}


@@ -3,21 +3,21 @@ package micro
import (
"reflect"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/client"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/flow"
"go.unistack.org/micro/v3/fsm"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/meter"
"go.unistack.org/micro/v3/register"
"go.unistack.org/micro/v3/resolver"
"go.unistack.org/micro/v3/router"
"go.unistack.org/micro/v3/selector"
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v3/store"
"go.unistack.org/micro/v3/sync"
"go.unistack.org/micro/v3/tracer"
"go.unistack.org/micro/v4/broker"
"go.unistack.org/micro/v4/client"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/flow"
"go.unistack.org/micro/v4/fsm"
"go.unistack.org/micro/v4/logger"
"go.unistack.org/micro/v4/meter"
"go.unistack.org/micro/v4/register"
"go.unistack.org/micro/v4/resolver"
"go.unistack.org/micro/v4/router"
"go.unistack.org/micro/v4/selector"
"go.unistack.org/micro/v4/server"
"go.unistack.org/micro/v4/store"
"go.unistack.org/micro/v4/sync"
"go.unistack.org/micro/v4/tracer"
)
func As(b any, target any) bool {
@@ -65,6 +65,8 @@ func As(b any, target any) bool {
break
case targetType.Implements(routerType):
break
case targetType.Implements(tracerType):
break
default:
return false
}
@@ -76,19 +78,21 @@ func As(b any, target any) bool {
return false
}
var brokerType = reflect.TypeOf((*broker.Broker)(nil)).Elem()
var loggerType = reflect.TypeOf((*logger.Logger)(nil)).Elem()
var clientType = reflect.TypeOf((*client.Client)(nil)).Elem()
var serverType = reflect.TypeOf((*server.Server)(nil)).Elem()
var codecType = reflect.TypeOf((*codec.Codec)(nil)).Elem()
var flowType = reflect.TypeOf((*flow.Flow)(nil)).Elem()
var fsmType = reflect.TypeOf((*fsm.FSM)(nil)).Elem()
var meterType = reflect.TypeOf((*meter.Meter)(nil)).Elem()
var registerType = reflect.TypeOf((*register.Register)(nil)).Elem()
var resolverType = reflect.TypeOf((*resolver.Resolver)(nil)).Elem()
var routerType = reflect.TypeOf((*router.Router)(nil)).Elem()
var selectorType = reflect.TypeOf((*selector.Selector)(nil)).Elem()
var storeType = reflect.TypeOf((*store.Store)(nil)).Elem()
var syncType = reflect.TypeOf((*sync.Sync)(nil)).Elem()
var tracerType = reflect.TypeOf((*tracer.Tracer)(nil)).Elem()
var serviceType = reflect.TypeOf((*Service)(nil)).Elem()
var (
brokerType = reflect.TypeOf((*broker.Broker)(nil)).Elem()
loggerType = reflect.TypeOf((*logger.Logger)(nil)).Elem()
clientType = reflect.TypeOf((*client.Client)(nil)).Elem()
serverType = reflect.TypeOf((*server.Server)(nil)).Elem()
codecType = reflect.TypeOf((*codec.Codec)(nil)).Elem()
flowType = reflect.TypeOf((*flow.Flow)(nil)).Elem()
fsmType = reflect.TypeOf((*fsm.FSM)(nil)).Elem()
meterType = reflect.TypeOf((*meter.Meter)(nil)).Elem()
registerType = reflect.TypeOf((*register.Register)(nil)).Elem()
resolverType = reflect.TypeOf((*resolver.Resolver)(nil)).Elem()
routerType = reflect.TypeOf((*router.Router)(nil)).Elem()
selectorType = reflect.TypeOf((*selector.Selector)(nil)).Elem()
storeType = reflect.TypeOf((*store.Store)(nil)).Elem()
syncType = reflect.TypeOf((*sync.Sync)(nil)).Elem()
tracerType = reflect.TypeOf((*tracer.Tracer)(nil)).Elem()
serviceType = reflect.TypeOf((*Service)(nil)).Elem()
)
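
A sketch of calling As from outside the package (the go.unistack.org/micro/v4 root import is assumed from the module path used in this diff); the target must be a non-nil pointer to one of the supported interfaces, as exercised by the test below:

package main

import (
	"fmt"

	micro "go.unistack.org/micro/v4"
	"go.unistack.org/micro/v4/broker"
)

// extractBroker reports whether component carries a broker.Broker.
func extractBroker(component any) (broker.Broker, bool) {
	var b broker.Broker
	if micro.As(component, &b) {
		return b, true
	}
	return nil, false
}

func main() {
	// As with the nil test case below, a nil component never matches.
	if _, ok := extractBroker(nil); !ok {
		fmt.Println("no broker found")
	}
}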


@@ -6,8 +6,9 @@ import (
"reflect"
"testing"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/fsm"
"go.unistack.org/micro/v4/broker"
"go.unistack.org/micro/v4/fsm"
"go.unistack.org/micro/v4/metadata"
)
func TestAs(t *testing.T) {
@@ -18,26 +19,27 @@ func TestAs(t *testing.T) {
testCases := []struct {
b any
target any
match bool
want any
match bool
}{
{
broTarget,
&b,
true,
broTarget,
b: broTarget,
target: &b,
match: true,
want: broTarget,
},
{
nil,
&b,
false,
nil,
b: nil,
target: &b,
match: false,
want: nil,
},
{
fsmTarget,
&b,
false,
nil,
b: fsmTarget,
target: &b,
match: false,
want: nil,
},
}
for i, tc := range testCases {
@@ -60,13 +62,21 @@ func TestAs(t *testing.T) {
}
}
var _ broker.Broker = (*bro)(nil)
type bro struct {
name string
}
func (p *bro) Name() string { return p.name }
func (p *bro) Init(opts ...broker.Option) error { return nil }
func (p *bro) Live() bool { return true }
func (p *bro) Ready() bool { return true }
func (p *bro) Health() bool { return true }
func (p *bro) Init(_ ...broker.Option) error { return nil }
// Options returns broker options
func (p *bro) Options() broker.Options { return broker.Options{} }
@@ -75,28 +85,23 @@ func (p *bro) Options() broker.Options { return broker.Options{} }
func (p *bro) Address() string { return "" }
// Connect connects to broker
func (p *bro) Connect(ctx context.Context) error { return nil }
func (p *bro) Connect(_ context.Context) error { return nil }
// Disconnect disconnect from broker
func (p *bro) Disconnect(ctx context.Context) error { return nil }
func (p *bro) Disconnect(_ context.Context) error { return nil }
// Publish message, msg can be single broker.Message or []broker.Message
func (p *bro) Publish(ctx context.Context, topic string, msg *broker.Message, opts ...broker.PublishOption) error {
return nil
}
// BatchPublish messages to broker with multiple topics
func (p *bro) BatchPublish(ctx context.Context, msgs []*broker.Message, opts ...broker.PublishOption) error {
return nil
}
// BatchSubscribe subscribes to topic messages via handler
func (p *bro) BatchSubscribe(ctx context.Context, topic string, h broker.BatchHandler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
// NewMessage creates new message
func (p *bro) NewMessage(_ context.Context, _ metadata.Metadata, _ interface{}, _ ...broker.PublishOption) (broker.Message, error) {
return nil, nil
}
// Publish message, msg can be single broker.Message or []broker.Message
func (p *bro) Publish(_ context.Context, _ string, _ ...broker.Message) error {
return nil
}
// Subscribe subscribes to topic message via handler
func (p *bro) Subscribe(ctx context.Context, topic string, handler broker.Handler, opts ...broker.SubscribeOption) (broker.Subscriber, error) {
func (p *bro) Subscribe(_ context.Context, _ string, _ interface{}, _ ...broker.SubscribeOption) (broker.Subscriber, error) {
return nil, nil
}
@@ -107,9 +112,9 @@ type fsmT struct {
name string
}
func (f *fsmT) Start(ctx context.Context, a interface{}, o ...Option) (interface{}, error) {
func (f *fsmT) Start(_ context.Context, _ interface{}, _ ...Option) (interface{}, error) {
return nil, nil
}
func (f *fsmT) Current() string { return f.name }
func (f *fsmT) Reset() {}
func (f *fsmT) State(s string, sf fsm.StateFunc) {}
func (f *fsmT) Current() string { return f.name }
func (f *fsmT) Reset() {}
func (f *fsmT) State(_ string, _ fsm.StateFunc) {}

Some files were not shown because too many files have changed in this diff.