Compare commits

...

58 Commits

Author SHA1 Message Date
2b3c413adc fixup grpc error codes in unary and stream processing
All checks were successful
sync / sync (push) Successful in 26s
coverage / build (push) Successful in 1m51s
test / test (push) Successful in 2m40s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-06-08 11:16:43 +03:00
20fb19fee9 changed embedded mutex to private field (#278)
Some checks failed
coverage / build (push) Successful in 2m26s
test / test (push) Failing after 16m18s
sync / sync (push) Successful in 8s
2025-05-14 01:25:57 +03:00
8e3c56f4ed update ci (#277)
All checks were successful
sync / sync (push) Successful in 10s
2025-05-05 19:19:33 +03:00
fcbae6f94a added commit hash check to avoid unnecessary repository cloning (#275)
All checks were successful
sync / sync (push) Successful in 10s
2025-05-05 13:44:43 +03:00
2f818d389b Merge branch 'v4' of https://git.unistack.org/unistack-org/micro-server-grpc into v4
All checks were successful
sync / sync (push) Successful in 14s
2025-05-04 15:02:46 +03:00
d55cb59531 fixup sync
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-05-04 15:02:02 +03:00
8c416b10ef Update .github/workflows/job_sync.yml
Some checks failed
sync / sync (push) Has been cancelled
2025-05-02 18:55:11 +03:00
d3c2ae5f54 convert headers to HTTP/2 format before SendHeader() (#273)
All checks were successful
coverage / build (push) Has been skipped
test / test (push) Successful in 2m19s
2025-05-02 18:51:30 +03:00
3bb8ec0753 Update .github/workflows/job_sync.yml
All checks were successful
sync / sync (push) Has been skipped
2025-05-01 22:07:43 +03:00
27bca35eb6 Update .github/workflows/job_sync.yml
All checks were successful
sync / sync (push) Has been skipped
2025-05-01 22:01:33 +03:00
498218912c Update .github/workflows/job_sync.yml
All checks were successful
sync / sync (push) Successful in 23s
2025-05-01 21:59:18 +03:00
bd04f5b9cb Update .github/workflows/job_sync.yml 2025-05-01 21:58:56 +03:00
dc976006ad Update .github/workflows/job_sync.yml
All checks were successful
sync / sync (push) Has been skipped
2025-05-01 21:05:02 +03:00
934ebf6c0a fix uninitialized response metadata for incoming context (#264)
All checks were successful
coverage / build (push) Has been skipped
test / test (push) Successful in 2m8s
sync / sync (push) Successful in 18s
2025-05-01 20:42:59 +03:00
6d8fce53dd update ci (#265) 2025-05-01 20:42:52 +03:00
f12f3fb2c2 Update .github/workflows/job_coverage.yml
All checks were successful
sync / sync (push) Successful in 15s
2025-05-01 20:34:33 +03:00
vtolstov
5a755437c9 Apply Code Coverage Badge 2025-04-29 15:43:08 +00:00
05db1f3dae [v4] fix ci pipeline (#260)
All checks were successful
coverage / build (push) Successful in 1m54s
test / test (push) Successful in 2m42s
* attempt to fix coverage job

* Apply Code Coverage Badge

---------

Co-authored-by: pugnack <pugnack@users.noreply.github.com>
2025-04-29 18:39:27 +03:00
e4ba134fa6 [v4] breaking change: modify API for working with response metadata (#255)
Some checks failed
coverage / build (push) Failing after 40s
test / test (push) Successful in 3m3s
sync / sync (push) Successful in 17s
* implement functions to append/get metadata

* changed behavior to return nil instead of empty metadata for getResponseMetadata()

* removed metadata copy when passing to gRPC headers

---------

Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 18:34:11 +03:00
8a85989b79 update all
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 18:30:08 +03:00
56185faabe fixup coverage job
All checks were successful
sync / sync (push) Successful in 26s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 14:16:34 +03:00
823b2bdc52 fixup for latest micro
Some checks failed
coverage / build (push) Failing after 1m21s
test / test (push) Successful in 4m8s
sync / sync (push) Failing after 8s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-29 14:08:59 +03:00
48906c5612 improve sync
Some checks failed
coverage / build (push) Failing after 5m36s
test / test (push) Successful in 11m32s
sync / sync (push) Failing after 41s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2025-04-27 21:38:23 +03:00
23a1ff424e prepare v4 (need swap target!) (#195)
All checks were successful
test / test (push) Successful in 1m53s
move to v4 micro

Co-authored-by: Vasiliy Tolstov <v.tolstov@unistack.org>
Reviewed-on: #195
Co-authored-by: Evstigneev Denis <danteevstigneev@yandex.ru>
Co-committed-by: Evstigneev Denis <danteevstigneev@yandex.ru>
2025-03-02 21:13:25 +03:00
134f7374aa Merge pull request 'update for latest micro' (#194) from register into v3
All checks were successful
test / test (push) Successful in 2m59s
Reviewed-on: #194
2024-12-27 01:39:00 +03:00
828f211a4e Merge branch 'v3' into register
Some checks failed
lint / lint (pull_request) Failing after 9m10s
test / test (pull_request) Successful in 11m57s
2024-12-27 01:25:45 +03:00
75c9d467e7 update for latest micro
Some checks failed
prbuild / test (pull_request) Failing after 2m37s
prbuild / lint (pull_request) Successful in 5m19s
codeql / analyze (go) (pull_request) Failing after 10m32s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-27 01:25:15 +03:00
699358383d Merge pull request 'update call Hooks' (#193) from devstigneev/micro-server-grpc:v3 into v3
All checks were successful
test / test (push) Successful in 2m27s
Reviewed-on: #193
2024-12-22 22:53:44 +03:00
82b95f8605 update call Hooks
All checks were successful
test / test (pull_request) Successful in 1m48s
lint / lint (pull_request) Successful in 1m23s
2024-12-20 21:37:01 +03:00
8ec0a4ef77 Update actions (#191)
All checks were successful
test / test (push) Successful in 1m42s
Update actions:
- build -> job_test
- pr -> job_lint

Co-authored-by: Aleksandr Tolstikhin <atolstikhin@mtsbank.ru>
Co-authored-by: Василий Толстов <v.tolstov@unistack.org>
Reviewed-on: #191
Co-authored-by: Александр Толстихин <tolstihin1996@mail.ru>
Co-committed-by: Александр Толстихин <tolstihin1996@mail.ru>
2024-12-11 01:37:14 +03:00
fb8e9ccb75 fixup lint and tests
Some checks failed
build / test (push) Failing after 28s
build / lint (push) Failing after 14m27s
codeql / analyze (go) (push) Failing after 14m22s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-11 01:34:24 +03:00
129f02b3bc fixup lint and tests
Some checks failed
build / test (push) Failing after 1m33s
build / lint (push) Successful in 2m15s
codeql / analyze (go) (push) Failing after 3m38s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-11 01:29:39 +03:00
55cdcfb3af rename workflow dir
Some checks failed
build / test (push) Failing after 4m53s
build / lint (push) Successful in 9m29s
codeql / analyze (go) (push) Failing after 14m56s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-07 00:52:51 +03:00
ac5e6dffa0 create outgoing metadata automatic on request
Some checks failed
build / test (push) Failing after 1s
build / lint (push) Failing after 1s
codeql / analyze (go) (push) Failing after 0s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-04 11:32:23 +03:00
66756f74dd update micro, add Health/Live/Ready checks
Some checks failed
codeql / analyze (go) (push) Failing after 55s
build / test (push) Failing after 4m55s
build / lint (push) Successful in 9m33s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-12-02 15:55:55 +03:00
28234bcc3c add server type in metrics and tracing
Some checks failed
build / test (push) Failing after 4m56s
build / lint (push) Successful in 9m32s
codeql / analyze (go) (push) Failing after 1m6s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-10 20:16:08 +03:00
7723dcaddf fixup endpoint name in tracing and metrics
Some checks failed
codeql / analyze (go) (push) Failing after 43s
build / test (push) Failing after 4m54s
build / lint (push) Successful in 9m28s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-11-10 18:36:00 +03:00
f6ec5ae624 update for latest micro logger changes
Some checks failed
build / test (push) Failing after 4m56s
build / lint (push) Successful in 9m32s
codeql / analyze (go) (push) Failing after 1m27s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-10-12 13:18:36 +03:00
Кирилл Горбунов
295f26dd2c #348 add check method in should be skipped (#175)
Some checks failed
build / test (push) Failing after 7s
build / lint (push) Failing after 7s
codeql / analyze (go) (push) Failing after 13m47s
Co-authored-by: Gorbunov Kirill Andreevich <kgorbunov@mtsbank.ru>
Reviewed-on: #175
Reviewed-by: Василий Толстов <v.tolstov@unistack.org>
Co-authored-by: Кирилл Горбунов <kirya_gorbunov_2015@mail.ru>
Co-committed-by: Кирилл Горбунов <kirya_gorbunov_2015@mail.ru>
2024-09-20 17:43:08 +03:00
7b5ce8c49a update to latest micro
Some checks failed
build / test (push) Failing after 9s
build / lint (push) Failing after 9s
codeql / analyze (go) (push) Failing after 10s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-09-17 12:52:41 +03:00
3fc05ae291 fix metadata extract and trace span creating
Some checks failed
build / lint (push) Successful in 27s
build / test (push) Failing after 33s
codeql / analyze (go) (push) Failing after 10m56s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-07-18 11:09:05 +03:00
0b29668fe5 wrap stream for tracing
Some checks failed
build / test (push) Failing after 1m48s
build / lint (push) Successful in 9m12s
codeql / analyze (go) (push) Failing after 7m32s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-05-06 00:28:56 +03:00
eecc3854b2 add metrics and tracing
Some checks failed
build / test (push) Failing after 1m53s
build / lint (push) Successful in 9m16s
codeql / analyze (go) (push) Failing after 3m13s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-04-23 08:01:44 +03:00
e6e64ff070 improve meter
Some checks failed
build / test (push) Failing after 1m48s
build / lint (push) Successful in 9m15s
codeql / analyze (go) (push) Failing after 5m10s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-04-06 22:48:21 +03:00
5ec59f0989 add missing file
Some checks failed
build / test (push) Failing after 1m42s
codeql / analyze (go) (push) Failing after 1m44s
build / lint (push) Successful in 9m21s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-04-06 22:36:21 +03:00
d4a2dd918f meter support
Some checks failed
build / test (push) Failing after 1m37s
codeql / analyze (go) (push) Failing after 1m48s
build / lint (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-04-06 22:32:12 +03:00
8c42fbb18b update for latest micro
Some checks failed
build / test (push) Failing after 1m20s
build / lint (push) Successful in 9m16s
codeql / analyze (go) (push) Failing after 2m15s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-03-17 00:32:52 +03:00
f860254f7b Merge pull request 'use time option' (#174) from devstigneev/micro-server-grpc:issue_171 into v3
Some checks failed
build / test (push) Failing after 1m30s
build / lint (push) Successful in 9m19s
codeql / analyze (go) (push) Failing after 2m0s
Reviewed-on: #174
2024-03-07 23:22:25 +03:00
0c060a5868 use time option
Some checks failed
automerge / automerge (pull_request) Has been skipped
dependabot-automerge / automerge (pull_request) Has been skipped
autoapprove / autoapprove (pull_request) Successful in 10s
codeql / analyze (go) (pull_request) Has been cancelled
prbuild / test (pull_request) Has been cancelled
prbuild / lint (pull_request) Has been cancelled
2024-03-07 21:52:54 +03:00
c24f1f26f8 Merge pull request '#131 delete recover' (#172) from kgorbunov/micro-server-grpc:#131 into v3
Some checks failed
build / test (push) Failing after 1m47s
codeql / analyze (go) (push) Failing after 6m1s
build / lint (push) Failing after 18m28s
Reviewed-on: #172
2024-02-27 20:28:59 +03:00
Gorbunov Kirill Andreevich
566036802b #131 delete recover
Some checks failed
autoapprove / autoapprove (pull_request) Successful in 8s
automerge / automerge (pull_request) Has been skipped
dependabot-automerge / automerge (pull_request) Has been skipped
codeql / analyze (go) (pull_request) Has been cancelled
prbuild / test (pull_request) Has been cancelled
prbuild / lint (pull_request) Has been cancelled
2024-02-27 17:08:08 +03:00
f33595f72a dont log error in server
Some checks failed
build / test (push) Has been cancelled
build / lint (push) Has been cancelled
codeql / analyze (go) (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-02-19 02:11:29 +03:00
f94c265c7a optimize unknown handler
Some checks failed
build / test (push) Has been cancelled
build / lint (push) Has been cancelled
codeql / analyze (go) (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-02-11 22:17:55 +03:00
3036359547 reflection update
Some checks failed
build / test (push) Has been cancelled
build / lint (push) Has been cancelled
codeql / analyze (go) (push) Has been cancelled
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2024-01-22 09:14:50 +03:00
9e0a58405f Merge pull request 'copy incoming content-type' (#169) from ct into v3
Some checks failed
build / test (push) Failing after 1m27s
build / lint (push) Failing after 2m31s
codeql / analyze (go) (push) Failing after 2m38s
Reviewed-on: #169
2023-12-20 09:24:16 +03:00
ee3f978683 copy incoming content-type
Some checks failed
codeql / analyze (go) (pull_request) Failing after 2m46s
prbuild / test (pull_request) Failing after 1m28s
prbuild / lint (pull_request) Failing after 2m44s
autoapprove / autoapprove (pull_request) Failing after 1m24s
automerge / automerge (pull_request) Failing after 4s
dependabot-automerge / automerge (pull_request) Has been skipped
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2023-12-20 09:23:42 +03:00
46891c397f add more metadata
Some checks failed
build / test (push) Failing after 1m27s
build / lint (push) Failing after 2m38s
codeql / analyze (go) (push) Failing after 2m46s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2023-11-04 00:05:33 +03:00
6856038abe add Path metadata
Some checks failed
build / test (push) Failing after 1m28s
build / lint (push) Failing after 2m38s
codeql / analyze (go) (push) Failing after 2m44s
Signed-off-by: Vasiliy Tolstov <v.tolstov@unistack.org>
2023-11-03 19:33:49 +03:00
40 changed files with 1308 additions and 2607 deletions


@@ -1,6 +1,6 @@
---
name: Bug report
about: For reporting bugs in go-micro
about: For reporting bugs in micro
title: "[BUG]"
labels: ''
assignees: ''
@@ -16,9 +16,3 @@ assignees: ''
**How to reproduce the bug:**
If possible, please include a minimal code snippet here.
**Environment:**
Go Version: please paste `go version` output here
```
please paste `go env` output here
```


@@ -1,6 +1,6 @@
---
name: Feature request / Enhancement
about: If you have a need not served by go-micro
about: If you have a need not served by micro
title: "[FEATURE]"
labels: ''
assignees: ''


@@ -1,14 +1,8 @@
---
name: Question
about: Ask a question about go-micro
about: Ask a question about micro
title: ''
labels: ''
assignees: ''
---
Before asking, please check if your question has already been answered:
1. Check the documentation - https://micro.mu/docs/
2. Check the examples and plugins - https://github.com/micro/examples & https://github.com/micro/go-plugins
3. Search existing issues

28
.github/autoapprove.yml vendored Normal file

@@ -0,0 +1,28 @@
name: "autoapprove"
on:
pull_request_target:
types: [assigned, opened, synchronize, reopened]
workflow_run:
workflows: ["prbuild"]
types:
- completed
permissions:
pull-requests: write
contents: write
jobs:
autoapprove:
runs-on: ubuntu-latest
steps:
- name: approve
run: [ "curl -o tea https://dl.gitea.com/tea/main/tea-main-linux-amd64",
"chmod +x ./tea",
"./tea login add --name unistack --token ${{ secrets.GITHUB_TOKEN }} --url https://git.unistack.org",
"./tea pr --repo ${{ github.event.repository.name }}"
]
if: github.actor == 'vtolstov'
id: approve
with:
github-token: ${{ secrets.GITHUB_TOKEN }}


@@ -1,19 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
# Maintain dependencies for GitHub Actions
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
# Maintain dependencies for Golang
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "daily"


@@ -1,20 +0,0 @@
name: "autoapprove"
on:
pull_request_target:
types: [assigned, opened, synchronize, reopened]
permissions:
pull-requests: write
contents: write
jobs:
autoapprove:
runs-on: ubuntu-latest
steps:
- name: approve
uses: hmarr/auto-approve-action@v3
if: github.actor == 'vtolstov' || github.actor == 'dependabot[bot]'
id: approve
with:
github-token: ${{ secrets.GITHUB_TOKEN }}


@@ -1,21 +0,0 @@
name: "automerge"
on:
pull_request_target:
types: [assigned, opened, synchronize, reopened]
permissions:
pull-requests: write
contents: write
jobs:
automerge:
runs-on: ubuntu-latest
if: github.actor == 'vtolstov'
steps:
- name: merge
id: merge
run: gh pr merge --auto --merge "$PR_URL"
env:
PR_URL: ${{github.event.pull_request.html_url}}
GITHUB_TOKEN: ${{secrets.TOKEN}}


@@ -1,47 +0,0 @@
name: build
on:
push:
branches:
- master
- v3
jobs:
test:
name: test
runs-on: ubuntu-latest
steps:
- name: setup
uses: actions/setup-go@v3
with:
go-version: 1.17
- name: checkout
uses: actions/checkout@v3
- name: cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-go-
- name: deps
run: go get -v -t -d ./...
- name: test
env:
INTEGRATION_TESTS: yes
run: go test -mod readonly -v ./...
lint:
name: lint
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v3
- name: lint
uses: golangci/golangci-lint-action@v3.4.0
continue-on-error: true
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.30
# Optional: working directory, useful for monorepos
# working-directory: somedir
# Optional: golangci-lint command line arguments.
# args: --issues-exit-code=0
# Optional: show only new issues if it's a pull request. The default value is `false`.
# only-new-issues: true


@@ -1,78 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "codeql"
on:
workflow_run:
workflows: ["prbuild"]
types:
- completed
push:
branches: [ master, v3 ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ master, v3 ]
schedule:
- cron: '34 1 * * 0'
jobs:
analyze:
name: analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'go' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: checkout
uses: actions/checkout@v3
- name: setup
uses: actions/setup-go@v3
with:
go-version: 1.17
# Initializes the CodeQL tools for scanning.
- name: init
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: autobuild
uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: analyze
uses: github/codeql-action/analyze@v2


@@ -1,27 +0,0 @@
name: "dependabot-automerge"
on:
pull_request_target:
types: [assigned, opened, synchronize, reopened]
permissions:
pull-requests: write
contents: write
jobs:
automerge:
runs-on: ubuntu-latest
if: github.actor == 'dependabot[bot]'
steps:
- name: metadata
id: metadata
uses: dependabot/fetch-metadata@v1.3.6
with:
github-token: "${{ secrets.TOKEN }}"
- name: merge
id: merge
if: ${{contains(steps.metadata.outputs.dependency-names, 'go.unistack.org')}}
run: gh pr merge --auto --merge "$PR_URL"
env:
PR_URL: ${{github.event.pull_request.html_url}}
GITHUB_TOKEN: ${{secrets.TOKEN}}

53
.github/workflows/job_coverage.yml vendored Normal file

@@ -0,0 +1,53 @@
name: coverage
on:
push:
branches: [ main, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
pull_request:
branches: [ main, v3, v4 ]
jobs:
build:
if: github.server_url != 'https://github.com'
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: test coverage
run: |
go test -v -cover ./... -covermode=count -coverprofile coverage.out -coverpkg ./...
go tool cover -func coverage.out -o coverage.out
- name: coverage badge
uses: tj-actions/coverage-badge-go@v2
with:
green: 80
filename: coverage.out
- uses: stefanzweifel/git-auto-commit-action@v4
name: autocommit
with:
commit_message: Apply Code Coverage Badge
skip_fetch: false
skip_checkout: false
file_pattern: ./README.md
- name: push
if: steps.auto-commit-action.outputs.changes_detected == 'true'
uses: ad-m/github-push-action@master
with:
github_token: ${{ github.token }}
branch: ${{ github.ref }}

29
.github/workflows/job_lint.yml vendored Normal file

@@ -0,0 +1,29 @@
name: lint
on:
pull_request:
types: [opened, reopened, synchronize]
branches: [ master, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup deps
run: go get -v ./...
- name: run lint
uses: golangci/golangci-lint-action@v6
with:
version: 'latest'

94
.github/workflows/job_sync.yml vendored Normal file

@@ -0,0 +1,94 @@
name: sync
on:
schedule:
- cron: '*/5 * * * *'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
sync:
if: github.server_url != 'https://github.com'
runs-on: ubuntu-latest
steps:
- name: init
run: |
git config --global user.email "vtolstov <vtolstov@users.noreply.github.com>"
git config --global user.name "github-actions[bot]"
echo "machine git.unistack.org login vtolstov password ${{ secrets.TOKEN_GITEA }}" >> /root/.netrc
echo "machine github.com login vtolstov password ${{ secrets.TOKEN_GITHUB }}" >> /root/.netrc
- name: check master
id: check_master
run: |
src_hash=$(git ls-remote https://github.com/${GITHUB_REPOSITORY} refs/heads/master | cut -f1)
dst_hash=$(git ls-remote ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} refs/heads/master | cut -f1)
echo "src_hash=$src_hash"
echo "dst_hash=$dst_hash"
if [ "$src_hash" != "$dst_hash" ]; then
echo "sync_needed=true" >> $GITHUB_OUTPUT
else
echo "sync_needed=false" >> $GITHUB_OUTPUT
fi
- name: sync master
if: steps.check_master.outputs.sync_needed == 'true'
run: |
git clone --filter=blob:none --filter=tree:0 --branch master --single-branch ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} repo
cd repo
git remote add --no-tags --fetch --track master upstream https://github.com/${GITHUB_REPOSITORY}
git pull --rebase upstream master
git push upstream master --progress
git push origin master --progress
cd ../
rm -rf repo
- name: check v3
id: check_v3
run: |
src_hash=$(git ls-remote https://github.com/${GITHUB_REPOSITORY} refs/heads/v3 | cut -f1)
dst_hash=$(git ls-remote ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} refs/heads/v3 | cut -f1)
echo "src_hash=$src_hash"
echo "dst_hash=$dst_hash"
if [ "$src_hash" != "$dst_hash" ]; then
echo "sync_needed=true" >> $GITHUB_OUTPUT
else
echo "sync_needed=false" >> $GITHUB_OUTPUT
fi
- name: sync v3
if: steps.check_v3.outputs.sync_needed == 'true'
run: |
git clone --filter=blob:none --filter=tree:0 --branch v3 --single-branch ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} repo
cd repo
git remote add --no-tags --fetch --track v3 upstream https://github.com/${GITHUB_REPOSITORY}
git pull --rebase upstream v3
git push upstream v3 --progress
git push origin v3 --progress
cd ../
rm -rf repo
- name: check v4
id: check_v4
run: |
src_hash=$(git ls-remote https://github.com/${GITHUB_REPOSITORY} refs/heads/v4 | cut -f1)
dst_hash=$(git ls-remote ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} refs/heads/v4 | cut -f1)
echo "src_hash=$src_hash"
echo "dst_hash=$dst_hash"
if [ "$src_hash" != "$dst_hash" ]; then
echo "sync_needed=true" >> $GITHUB_OUTPUT
else
echo "sync_needed=false" >> $GITHUB_OUTPUT
fi
- name: sync v4
if: steps.check_v4.outputs.sync_needed == 'true'
run: |
git clone --filter=blob:none --filter=tree:0 --branch v4 --single-branch ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY} repo
cd repo
git remote add --no-tags --fetch --track v4 upstream https://github.com/${GITHUB_REPOSITORY}
git pull --rebase upstream v4
git push upstream v4 --progress
git push origin v4 --progress
cd ../
rm -rf repo

31
.github/workflows/job_test.yml vendored Normal file

@@ -0,0 +1,31 @@
name: test
on:
pull_request:
types: [opened, reopened, synchronize]
branches: [ master, v3, v4 ]
push:
branches: [ master, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup deps
run: go get -v ./...
- name: run test
env:
INTEGRATION_TESTS: yes
run: go test -mod readonly -v ./...

50
.github/workflows/job_tests.yml vendored Normal file

@@ -0,0 +1,50 @@
name: test
on:
pull_request:
types: [opened, reopened, synchronize]
branches: [ master, v3, v4 ]
push:
branches: [ master, v3, v4 ]
paths-ignore:
- '.github/**'
- '.gitea/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v4
with:
filter: 'blob:none'
- name: checkout tests
uses: actions/checkout@v4
with:
ref: master
filter: 'blob:none'
repository: unistack-org/micro-tests
path: micro-tests
- name: setup go
uses: actions/setup-go@v5
with:
cache-dependency-path: "**/*.sum"
go-version: 'stable'
- name: setup go work
env:
GOWORK: ${{ github.workspace }}/go.work
run: |
go work init
go work use .
go work use micro-tests
- name: setup deps
env:
GOWORK: ${{ github.workspace }}/go.work
run: go get -v ./...
- name: run tests
env:
INTEGRATION_TESTS: yes
GOWORK: ${{ github.workspace }}/go.work
run: |
cd micro-tests
go test -mod readonly -v ./... || true


@@ -1,47 +0,0 @@
name: prbuild
on:
pull_request:
branches:
- master
- v3
jobs:
test:
name: test
runs-on: ubuntu-latest
steps:
- name: setup
uses: actions/setup-go@v3
with:
go-version: 1.17
- name: checkout
uses: actions/checkout@v3
- name: cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-go-
- name: deps
run: go get -v -t -d ./...
- name: test
env:
INTEGRATION_TESTS: yes
run: go test -mod readonly -v ./...
lint:
name: lint
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v3
- name: lint
uses: golangci/golangci-lint-action@v3.4.0
continue-on-error: true
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.30
# Optional: working directory, useful for monorepos
# working-directory: somedir
# Optional: golangci-lint command line arguments.
# args: --issues-exit-code=0
# Optional: show only new issues if it's a pull request. The default value is `false`.
# only-new-issues: true

24
.gitignore vendored Normal file

@@ -0,0 +1,24 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
bin
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
# Go workspace file
go.work
# General
.DS_Store
.idea
.vscode


@@ -1,44 +1,5 @@
run:
concurrency: 4
deadline: 5m
concurrency: 8
timeout: 5m
issues-exit-code: 1
tests: true
linters-settings:
govet:
check-shadowing: true
enable:
- fieldalignment
linters:
enable:
- govet
- deadcode
- errcheck
- govet
- ineffassign
- staticcheck
- structcheck
- typecheck
- unused
- varcheck
- bodyclose
- gci
- goconst
- gocritic
- gosimple
- gofmt
- gofumpt
- goimports
- golint
- gosec
- makezero
- misspell
- nakedret
- nestif
- nilerr
- noctx
- prealloc
- unconvert
- unparam
disable-all: false

4
README.md Normal file

@@ -0,0 +1,4 @@
# GRPC Server
![Coverage](https://img.shields.io/badge/Coverage-3.5%25-red)
This plugin is a grpc server for micro.


@@ -1,9 +1,7 @@
package grpc
import (
"io"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v4/codec"
"google.golang.org/grpc/encoding"
)
@@ -49,29 +47,3 @@ func (w *wrapGrpcCodec) Unmarshal(d []byte, v interface{}, opts ...codec.Option)
}
return w.Codec.Unmarshal(d, v)
}
func (w *wrapGrpcCodec) ReadHeader(conn io.Reader, m *codec.Message, mt codec.MessageType) error {
return nil
}
func (w *wrapGrpcCodec) ReadBody(conn io.Reader, v interface{}) error {
if m, ok := v.(*codec.Frame); ok {
_, err := conn.Read(m.Data)
return err
}
return codec.ErrInvalidMessage
}
func (w *wrapGrpcCodec) Write(conn io.Writer, m *codec.Message, v interface{}) error {
// if we don't have a body
if v != nil {
b, err := w.Marshal(v)
if err != nil {
return err
}
m.Body = b
}
// write the body using the framing codec
_, err := conn.Write(m.Body)
return err
}


@@ -6,7 +6,7 @@ import (
"net/http"
"os"
"go.unistack.org/micro/v3/errors"
"go.unistack.org/micro/v4/errors"
"google.golang.org/grpc/codes"
)
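The imports above hint at what error.go does: it translates micro's HTTP-style error codes into gRPC status codes, which is what the "fixup grpc error codes in unary and stream processing" commit adjusts. The sketch below is only an illustration of that kind of mapping, built from the standard net/http constants and google.golang.org/grpc/codes; the function name httpToGRPCCode and the exact code choices are assumptions, not the plugin's actual implementation.

package example

import (
	"net/http"

	"google.golang.org/grpc/codes"
)

// httpToGRPCCode maps common HTTP-style error codes to gRPC status codes.
// The mapping used by the plugin may differ; this only shows the idea.
func httpToGRPCCode(code int32) codes.Code {
	switch code {
	case http.StatusBadRequest:
		return codes.InvalidArgument
	case http.StatusUnauthorized:
		return codes.Unauthenticated
	case http.StatusForbidden:
		return codes.PermissionDenied
	case http.StatusNotFound:
		return codes.NotFound
	case http.StatusConflict:
		return codes.AlreadyExists
	case http.StatusRequestTimeout:
		return codes.DeadlineExceeded
	case http.StatusInternalServerError:
		return codes.Internal
	default:
		return codes.Unknown
	}
}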

4
generate.go Normal file

@@ -0,0 +1,4 @@
package grpc
//go:generate go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
//go:generate sh -c "protoc -I./proto -I$(go list -f '{{ .Dir }}' -m go.unistack.org/micro-proto/v4) -I. --go-grpc_out=paths=source_relative:./proto --go_out=paths=source_relative:./proto proto/test.proto"

30
go.mod

@@ -1,11 +1,27 @@
module go.unistack.org/micro-server-grpc/v3
module go.unistack.org/micro-server-grpc/v4
go 1.16
go 1.23.0
toolchain go1.24.2
require (
github.com/golang/protobuf v1.5.2
go.unistack.org/micro/v3 v3.10.14
golang.org/x/net v0.5.0
google.golang.org/grpc v1.52.3
google.golang.org/protobuf v1.28.1
github.com/stretchr/testify v1.10.0
go.unistack.org/micro/v4 v4.1.8
golang.org/x/net v0.39.0
google.golang.org/grpc v1.72.0
google.golang.org/protobuf v1.36.6
)
require (
github.com/ash3in/uuidv8 v1.2.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/matoous/go-nanoid v1.5.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/spf13/cast v1.7.1 // indirect
go.unistack.org/micro-proto/v4 v4.1.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/text v0.24.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250428153025-10db94c68c34 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

1094
go.sum

File diff suppressed because it is too large

660
grpc.go

File diff suppressed because it is too large


@@ -3,43 +3,25 @@ package grpc
import (
"reflect"
"go.unistack.org/micro/v3/register"
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v4/server"
)
type rpcHandler struct {
opts server.HandlerOptions
handler interface{}
name string
endpoints []*register.Endpoint
opts server.HandlerOptions
handler interface{}
name string
}
func newRPCHandler(handler interface{}, opts ...server.HandlerOption) server.Handler {
options := server.NewHandlerOptions(opts...)
typ := reflect.TypeOf(handler)
hdlr := reflect.ValueOf(handler)
name := reflect.Indirect(hdlr).Type().Name()
var endpoints []*register.Endpoint
for m := 0; m < typ.NumMethod(); m++ {
if e := register.ExtractEndpoint(typ.Method(m)); e != nil {
e.Name = name + "." + e.Name
for k, v := range options.Metadata[e.Name] {
e.Metadata[k] = v
}
endpoints = append(endpoints, e)
}
}
return &rpcHandler{
name: name,
handler: handler,
endpoints: endpoints,
opts: options,
name: name,
handler: handler,
opts: options,
}
}
@@ -51,10 +33,6 @@ func (r *rpcHandler) Handler() interface{} {
return r.handler
}
func (r *rpcHandler) Endpoints() []*register.Endpoint {
return r.endpoints
}
func (r *rpcHandler) Options() server.HandlerOptions {
return r.opts
}

47
metadata.go Normal file

@@ -0,0 +1,47 @@
package grpc
import (
"context"
"go.unistack.org/micro/v4/metadata"
)
type (
rspMetadataKey struct{}
rspMetadataVal struct {
m metadata.Metadata
}
)
// AppendResponseMetadata adds metadata entries to metadata.Metadata stored in the context.
// It expects the context to contain a *rspMetadataVal value under the rspMetadataKey{} key.
// If the value is missing or invalid, the function does nothing.
//
// Note: this function is not thread-safe. Synchronization is required if used from multiple goroutines.
func AppendResponseMetadata(ctx context.Context, md metadata.Metadata) {
if md == nil {
return
}
val, ok := ctx.Value(rspMetadataKey{}).(*rspMetadataVal)
if !ok || val == nil || val.m == nil {
return
}
for key, values := range md {
val.m.Append(key, values...)
}
}
// getResponseMetadata retrieves the metadata.Metadata stored in the context.
//
// Note: this function is not thread-safe. Synchronization is required if used from multiple goroutines.
// If you plan to modify the returned metadata, make a full copy to avoid affecting shared state.
func getResponseMetadata(ctx context.Context) metadata.Metadata {
val, ok := ctx.Value(rspMetadataKey{}).(*rspMetadataVal)
if !ok || val == nil || val.m == nil {
return nil
}
return val.m
}
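As a brief usage illustration of the API above: a handler appends entries through the context, and the server later emits them as gRPC response headers. This is a sketch under assumptions, not code from the repository: the EchoReq, EchoRsp, and EchoHandler names are hypothetical, the (ctx, req, rsp) handler signature is the conventional micro shape, and the module path from go.mod is imported under the alias grpcsrv.

package example

import (
	"context"

	grpcsrv "go.unistack.org/micro-server-grpc/v4"
	"go.unistack.org/micro/v4/metadata"
)

// EchoReq and EchoRsp are stand-in message types for illustration only.
type EchoReq struct{ Data string }
type EchoRsp struct{ Data string }

type EchoHandler struct{}

// Call appends response metadata via the context; AppendResponseMetadata is a
// no-op if the server did not install the response-metadata container.
func (h *EchoHandler) Call(ctx context.Context, req *EchoReq, rsp *EchoRsp) error {
	grpcsrv.AppendResponseMetadata(ctx, metadata.Pairs("x-handled-by", "echo"))
	rsp.Data = req.Data
	return nil
}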

136
metadata_test.go Normal file

@@ -0,0 +1,136 @@
package grpc
import (
"context"
"testing"
"github.com/stretchr/testify/require"
"go.unistack.org/micro/v4/metadata"
)
func TestAppendResponseMetadata(t *testing.T) {
tests := []struct {
name string
ctx context.Context
md metadata.Metadata
expected context.Context
}{
{
name: "nil metadata",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: metadata.Metadata{}}),
md: nil,
expected: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: metadata.Metadata{}}),
},
{
name: "empty metadata",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: metadata.Metadata{}}),
md: metadata.Metadata{},
expected: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: metadata.Metadata{}}),
},
{
name: "context without response metadata key",
ctx: context.Background(),
md: metadata.Pairs("key1", "val1"),
expected: context.Background(),
},
{
name: "context with nil response metadata value",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, nil),
md: metadata.Pairs("key1", "val1"),
expected: context.WithValue(context.Background(), rspMetadataKey{}, nil),
},
{
name: "context with incorrect type in response metadata value",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, struct{}{}),
md: metadata.Pairs("key1", "val1"),
expected: context.WithValue(context.Background(), rspMetadataKey{}, struct{}{}),
},
{
name: "context with response metadata value, but nil metadata",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: nil}),
md: metadata.Pairs("key1", "val1"),
expected: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: nil}),
},
{
name: "basic metadata append",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: metadata.Metadata{}}),
md: metadata.Pairs("key1", "val1"),
expected: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{
m: metadata.Metadata{
"key1": []string{"val1"},
},
}),
},
{
name: "multiple values for same key",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: metadata.Metadata{}}),
md: metadata.Pairs("key1", "val1", "key1", "val2"),
expected: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{
m: metadata.Metadata{
"key1": []string{"val1", "val2"},
},
}),
},
{
name: "multiple values for different keys",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: metadata.Metadata{}}),
md: metadata.Pairs("key1", "val1", "key1", "val2", "key2", "val3", "key2", "val4", "key3", "val5"),
expected: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{
m: metadata.Metadata{
"key1": []string{"val1", "val2"},
"key2": []string{"val3", "val4"},
"key3": []string{"val5"},
},
}),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
AppendResponseMetadata(tt.ctx, tt.md)
require.Equal(t, tt.expected, tt.ctx)
})
}
}
func TestGetResponseMetadata(t *testing.T) {
tests := []struct {
name string
ctx context.Context
expected metadata.Metadata
}{
{
name: "context without response metadata key",
ctx: context.Background(),
expected: nil,
},
{
name: "context with nil response metadata value",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, nil),
expected: nil,
},
{
name: "context with incorrect type in response metadata value",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &struct{}{}),
expected: nil,
},
{
name: "context with response metadata value, but nil metadata",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{m: nil}),
expected: nil,
},
{
name: "valid metadata",
ctx: context.WithValue(context.Background(), rspMetadataKey{}, &rspMetadataVal{
m: metadata.Pairs("key1", "value1"),
}),
expected: metadata.Metadata{"key1": {"value1"}},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
require.Equal(t, tt.expected, getResponseMetadata(tt.ctx))
})
}
}


@@ -3,7 +3,7 @@ package grpc
import (
"context"
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v4/server"
"google.golang.org/grpc"
"google.golang.org/grpc/encoding"
)
@@ -36,7 +36,6 @@ func Options(opts ...grpc.ServerOption) server.Option {
return server.SetOption(grpcOptions{}, opts)
}
//
// MaxMsgSize set the maximum message in bytes the server can receive and
// send. Default maximum message size is 4 MB.
func MaxMsgSize(s int) server.Option {
@@ -44,8 +43,8 @@ func MaxMsgSize(s int) server.Option {
}
// Reflection enables reflection support in grpc server
func Reflection(b bool) server.Option {
return server.SetOption(reflectionKey{}, b)
func Reflection(r Reflector) server.Option {
return server.SetOption(reflectionKey{}, r)
}
// UnknownServiceHandler enables support for all services
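Since Reflection now takes a Reflector (defined in reflection.go further down as protodesc.Resolver plus the grpc reflection package's ServiceInfoProvider and ExtensionResolver), one speculative way to satisfy it is to compose the process-wide protobuf registries with anything that exposes GetServiceInfo. The compositeReflector and newReflector names below are hypothetical; this is only an illustration, not how the plugin wires reflection internally.

package example

import (
	"google.golang.org/grpc/reflection"
	"google.golang.org/protobuf/reflect/protoregistry"
)

// compositeReflector combines the pieces the Reflector interface asks for:
// *protoregistry.Files satisfies protodesc.Resolver, *protoregistry.Types
// satisfies reflection.ExtensionResolver, and any ServiceInfoProvider
// (for example a *grpc.Server) supplies GetServiceInfo.
type compositeReflector struct {
	*protoregistry.Files
	*protoregistry.Types
	reflection.ServiceInfoProvider
}

// newReflector builds a reflector backed by the global registries.
func newReflector(svc reflection.ServiceInfoProvider) *compositeReflector {
	return &compositeReflector{
		Files:               protoregistry.GlobalFiles,
		Types:               protoregistry.GlobalTypes,
		ServiceInfoProvider: svc,
	}
}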

208
proto/test.pb.go Normal file

@@ -0,0 +1,208 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v4.25.2
// source: test.proto
package testpb
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type CallReq struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Data string `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
}
func (x *CallReq) Reset() {
*x = CallReq{}
if protoimpl.UnsafeEnabled {
mi := &file_test_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CallReq) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CallReq) ProtoMessage() {}
func (x *CallReq) ProtoReflect() protoreflect.Message {
mi := &file_test_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CallReq.ProtoReflect.Descriptor instead.
func (*CallReq) Descriptor() ([]byte, []int) {
return file_test_proto_rawDescGZIP(), []int{0}
}
func (x *CallReq) GetData() string {
if x != nil {
return x.Data
}
return ""
}
type CallRsp struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Data string `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
}
func (x *CallRsp) Reset() {
*x = CallRsp{}
if protoimpl.UnsafeEnabled {
mi := &file_test_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CallRsp) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CallRsp) ProtoMessage() {}
func (x *CallRsp) ProtoReflect() protoreflect.Message {
mi := &file_test_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CallRsp.ProtoReflect.Descriptor instead.
func (*CallRsp) Descriptor() ([]byte, []int) {
return file_test_proto_rawDescGZIP(), []int{1}
}
func (x *CallRsp) GetData() string {
if x != nil {
return x.Data
}
return ""
}
var File_test_proto protoreflect.FileDescriptor
var file_test_proto_rawDesc = []byte{
0x0a, 0x0a, 0x74, 0x65, 0x73, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x04, 0x74, 0x65,
0x73, 0x74, 0x22, 0x1d, 0x0a, 0x07, 0x43, 0x61, 0x6c, 0x6c, 0x52, 0x65, 0x71, 0x12, 0x12, 0x0a,
0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x64, 0x61, 0x74,
0x61, 0x22, 0x1d, 0x0a, 0x07, 0x43, 0x61, 0x6c, 0x6c, 0x52, 0x73, 0x70, 0x12, 0x12, 0x0a, 0x04,
0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61,
0x32, 0x33, 0x0a, 0x0b, 0x54, 0x65, 0x73, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12,
0x24, 0x0a, 0x04, 0x43, 0x61, 0x6c, 0x6c, 0x12, 0x0d, 0x2e, 0x74, 0x65, 0x73, 0x74, 0x2e, 0x43,
0x61, 0x6c, 0x6c, 0x52, 0x65, 0x71, 0x1a, 0x0d, 0x2e, 0x74, 0x65, 0x73, 0x74, 0x2e, 0x43, 0x61,
0x6c, 0x6c, 0x52, 0x73, 0x70, 0x42, 0x0b, 0x5a, 0x09, 0x2e, 0x2f, 0x3b, 0x74, 0x65, 0x73, 0x74,
0x70, 0x62, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_test_proto_rawDescOnce sync.Once
file_test_proto_rawDescData = file_test_proto_rawDesc
)
func file_test_proto_rawDescGZIP() []byte {
file_test_proto_rawDescOnce.Do(func() {
file_test_proto_rawDescData = protoimpl.X.CompressGZIP(file_test_proto_rawDescData)
})
return file_test_proto_rawDescData
}
var file_test_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_test_proto_goTypes = []interface{}{
(*CallReq)(nil), // 0: test.CallReq
(*CallRsp)(nil), // 1: test.CallRsp
}
var file_test_proto_depIdxs = []int32{
0, // 0: test.TestService.Call:input_type -> test.CallReq
1, // 1: test.TestService.Call:output_type -> test.CallRsp
1, // [1:2] is the sub-list for method output_type
0, // [0:1] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_test_proto_init() }
func file_test_proto_init() {
if File_test_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_test_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CallReq); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_test_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CallRsp); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_test_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_test_proto_goTypes,
DependencyIndexes: file_test_proto_depIdxs,
MessageInfos: file_test_proto_msgTypes,
}.Build()
File_test_proto = out.File
file_test_proto_rawDesc = nil
file_test_proto_goTypes = nil
file_test_proto_depIdxs = nil
}

18
proto/test.proto Normal file

@@ -0,0 +1,18 @@
syntax = "proto3";
package test;
option go_package = "./;testpb";
service TestService {
rpc Call(CallReq) returns (CallRsp);
}
message CallReq {
string data = 1;
}
message CallRsp {
string data = 1;
}

109
proto/test_grpc.pb.go Normal file

@@ -0,0 +1,109 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc v4.25.2
// source: test.proto
package testpb
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
const (
TestService_Call_FullMethodName = "/test.TestService/Call"
)
// TestServiceClient is the client API for TestService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type TestServiceClient interface {
Call(ctx context.Context, in *CallReq, opts ...grpc.CallOption) (*CallRsp, error)
}
type testServiceClient struct {
cc grpc.ClientConnInterface
}
func NewTestServiceClient(cc grpc.ClientConnInterface) TestServiceClient {
return &testServiceClient{cc}
}
func (c *testServiceClient) Call(ctx context.Context, in *CallReq, opts ...grpc.CallOption) (*CallRsp, error) {
out := new(CallRsp)
err := c.cc.Invoke(ctx, TestService_Call_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// TestServiceServer is the server API for TestService service.
// All implementations must embed UnimplementedTestServiceServer
// for forward compatibility
type TestServiceServer interface {
Call(context.Context, *CallReq) (*CallRsp, error)
mustEmbedUnimplementedTestServiceServer()
}
// UnimplementedTestServiceServer must be embedded to have forward compatible implementations.
type UnimplementedTestServiceServer struct {
}
func (UnimplementedTestServiceServer) Call(context.Context, *CallReq) (*CallRsp, error) {
return nil, status.Errorf(codes.Unimplemented, "method Call not implemented")
}
func (UnimplementedTestServiceServer) mustEmbedUnimplementedTestServiceServer() {}
// UnsafeTestServiceServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to TestServiceServer will
// result in compilation errors.
type UnsafeTestServiceServer interface {
mustEmbedUnimplementedTestServiceServer()
}
func RegisterTestServiceServer(s grpc.ServiceRegistrar, srv TestServiceServer) {
s.RegisterService(&TestService_ServiceDesc, srv)
}
func _TestService_Call_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CallReq)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestServiceServer).Call(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestService_Call_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestServiceServer).Call(ctx, req.(*CallReq))
}
return interceptor(ctx, in, info, handler)
}
// TestService_ServiceDesc is the grpc.ServiceDesc for TestService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var TestService_ServiceDesc = grpc.ServiceDesc{
ServiceName: "test.TestService",
HandlerType: (*TestServiceServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Call",
Handler: _TestService_Call_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "test.proto",
}


@@ -1,488 +1,21 @@
// +build ignore
/*
*
* Copyright 2016 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
/*
Package reflection implements server reflection service.
The service implemented is defined in:
https://github.com/grpc/grpc/blob/master/src/proto/grpc/reflection/v1alpha/reflection.proto.
To register server reflection on a gRPC server:
import "google.golang.org/grpc/reflection"
s := grpc.NewServer()
pb.RegisterYourOwnServer(s, &server{})
// Register reflection service on gRPC server.
reflection.Register(s)
s.Serve(lis)
*/
package grpc
import (
"bytes"
"compress/gzip"
"fmt"
"io"
"io/ioutil"
"reflect"
"sort"
"sync"
"github.com/golang/protobuf/proto"
dpb "github.com/golang/protobuf/protoc-gen-go/descriptor"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
rpb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha"
"google.golang.org/grpc/status"
"google.golang.org/grpc/reflection"
"google.golang.org/protobuf/reflect/protodesc"
)
type serverReflectionServer struct {
rpb.UnimplementedServerReflectionServer
s *grpc.Server
initSymbols sync.Once
serviceNames []string
symbols map[string]*dpb.FileDescriptorProto // map of fully-qualified names to files
type Reflector interface {
protodesc.Resolver
reflection.ServiceInfoProvider
reflection.ExtensionResolver
}
// Register registers the server reflection service on the given gRPC server.
func Register(s *grpc.Server) {
rpb.RegisterServerReflectionServer(s, &serverReflectionServer{
s: s,
})
}
// protoMessage is used for type assertion on proto messages.
// Generated proto message implements function Descriptor(), but Descriptor()
// is not part of interface proto.Message. This interface is needed to
// call Descriptor().
type protoMessage interface {
Descriptor() ([]byte, []int)
}
func (s *serverReflectionServer) getSymbols() (svcNames []string, symbolIndex map[string]*dpb.FileDescriptorProto) {
s.initSymbols.Do(func() {
serviceInfo := s.s.GetServiceInfo()
s.symbols = map[string]*dpb.FileDescriptorProto{}
s.serviceNames = make([]string, 0, len(serviceInfo))
processed := map[string]struct{}{}
for svc, info := range serviceInfo {
s.serviceNames = append(s.serviceNames, svc)
fdenc, ok := parseMetadata(info.Metadata)
if !ok {
continue
}
fd, err := decodeFileDesc(fdenc)
if err != nil {
continue
}
s.processFile(fd, processed)
}
sort.Strings(s.serviceNames)
})
return s.serviceNames, s.symbols
}
func (s *serverReflectionServer) processFile(fd *dpb.FileDescriptorProto, processed map[string]struct{}) {
filename := fd.GetName()
if _, ok := processed[filename]; ok {
return
}
processed[filename] = struct{}{}
prefix := fd.GetPackage()
for _, msg := range fd.MessageType {
s.processMessage(fd, prefix, msg)
}
for _, en := range fd.EnumType {
s.processEnum(fd, prefix, en)
}
for _, ext := range fd.Extension {
s.processField(fd, prefix, ext)
}
for _, svc := range fd.Service {
svcName := fqn(prefix, svc.GetName())
s.symbols[svcName] = fd
for _, meth := range svc.Method {
name := fqn(svcName, meth.GetName())
s.symbols[name] = fd
}
}
for _, dep := range fd.Dependency {
fdenc := proto.FileDescriptor(dep)
fdDep, err := decodeFileDesc(fdenc)
if err != nil {
continue
}
s.processFile(fdDep, processed)
}
}
func (s *serverReflectionServer) processMessage(fd *dpb.FileDescriptorProto, prefix string, msg *dpb.DescriptorProto) {
msgName := fqn(prefix, msg.GetName())
s.symbols[msgName] = fd
for _, nested := range msg.NestedType {
s.processMessage(fd, msgName, nested)
}
for _, en := range msg.EnumType {
s.processEnum(fd, msgName, en)
}
for _, ext := range msg.Extension {
s.processField(fd, msgName, ext)
}
for _, fld := range msg.Field {
s.processField(fd, msgName, fld)
}
for _, oneof := range msg.OneofDecl {
oneofName := fqn(msgName, oneof.GetName())
s.symbols[oneofName] = fd
}
}
func (s *serverReflectionServer) processEnum(fd *dpb.FileDescriptorProto, prefix string, en *dpb.EnumDescriptorProto) {
enName := fqn(prefix, en.GetName())
s.symbols[enName] = fd
for _, val := range en.Value {
valName := fqn(enName, val.GetName())
s.symbols[valName] = fd
}
}
func (s *serverReflectionServer) processField(fd *dpb.FileDescriptorProto, prefix string, fld *dpb.FieldDescriptorProto) {
fldName := fqn(prefix, fld.GetName())
s.symbols[fldName] = fd
}
func fqn(prefix, name string) string {
if prefix == "" {
return name
}
return prefix + "." + name
}
// fileDescForType gets the file descriptor for the given type.
// The given type should be a proto message.
func (s *serverReflectionServer) fileDescForType(st reflect.Type) (*dpb.FileDescriptorProto, error) {
m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(protoMessage)
if !ok {
return nil, fmt.Errorf("failed to create message from type: %v", st)
}
enc, _ := m.Descriptor()
return decodeFileDesc(enc)
}
// decodeFileDesc does decompression and unmarshalling on the given
// file descriptor byte slice.
func decodeFileDesc(enc []byte) (*dpb.FileDescriptorProto, error) {
raw, err := decompress(enc)
if err != nil {
return nil, fmt.Errorf("failed to decompress enc: %v", err)
}
fd := new(dpb.FileDescriptorProto)
if err := proto.Unmarshal(raw, fd); err != nil {
return nil, fmt.Errorf("bad descriptor: %v", err)
}
return fd, nil
}
// decompress does gzip decompression.
func decompress(b []byte) ([]byte, error) {
r, err := gzip.NewReader(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("bad gzipped descriptor: %v", err)
}
out, err := ioutil.ReadAll(r)
if err != nil {
return nil, fmt.Errorf("bad gzipped descriptor: %v", err)
}
return out, nil
}
func typeForName(name string) (reflect.Type, error) {
pt := proto.MessageType(name)
if pt == nil {
return nil, fmt.Errorf("unknown type: %q", name)
}
st := pt.Elem()
return st, nil
}
func fileDescContainingExtension(st reflect.Type, ext int32) (*dpb.FileDescriptorProto, error) {
m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(proto.Message)
if !ok {
return nil, fmt.Errorf("failed to create message from type: %v", st)
}
var extDesc *proto.ExtensionDesc
for id, desc := range proto.RegisteredExtensions(m) {
if id == ext {
extDesc = desc
break
}
}
if extDesc == nil {
return nil, fmt.Errorf("failed to find registered extension for extension number %v", ext)
}
return decodeFileDesc(proto.FileDescriptor(extDesc.Filename))
}
func (s *serverReflectionServer) allExtensionNumbersForType(st reflect.Type) ([]int32, error) {
m, ok := reflect.Zero(reflect.PtrTo(st)).Interface().(proto.Message)
if !ok {
return nil, fmt.Errorf("failed to create message from type: %v", st)
}
exts := proto.RegisteredExtensions(m)
out := make([]int32, 0, len(exts))
for id := range exts {
out = append(out, id)
}
return out, nil
}
// fileDescWithDependencies returns a slice of serialized file descriptors in
// wire format ([]byte). The result includes fd and all transitive dependencies
// of fd whose names are not already in sentFileDescriptors.
func fileDescWithDependencies(fd *dpb.FileDescriptorProto, sentFileDescriptors map[string]bool) ([][]byte, error) {
r := [][]byte{}
queue := []*dpb.FileDescriptorProto{fd}
for len(queue) > 0 {
currentfd := queue[0]
queue = queue[1:]
if sent := sentFileDescriptors[currentfd.GetName()]; len(r) == 0 || !sent {
sentFileDescriptors[currentfd.GetName()] = true
currentfdEncoded, err := proto.Marshal(currentfd)
if err != nil {
return nil, err
}
r = append(r, currentfdEncoded)
}
for _, dep := range currentfd.Dependency {
fdenc := proto.FileDescriptor(dep)
fdDep, err := decodeFileDesc(fdenc)
if err != nil {
continue
}
queue = append(queue, fdDep)
}
}
return r, nil
}
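// Illustrative walk (hypothetical files): if a.proto imports b.proto and
// b.proto imports c.proto, the first call for a.proto returns the encodings of
// a, b and c and marks all three as sent; a later call for b.proto re-sends
// only b itself (the root is always included because len(r) == 0 on the first
// iteration) and skips the already-sent dependency c.
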
// fileDescEncodingByFilename finds the file descriptor for the given filename,
// collects all of its previously unsent transitive dependencies, marshals
// them, and returns the marshalled result.
func (s *serverReflectionServer) fileDescEncodingByFilename(name string, sentFileDescriptors map[string]bool) ([][]byte, error) {
enc := proto.FileDescriptor(name)
if enc == nil {
return nil, fmt.Errorf("unknown file: %v", name)
}
fd, err := decodeFileDesc(enc)
if err != nil {
return nil, err
}
return fileDescWithDependencies(fd, sentFileDescriptors)
}
// parseMetadata finds the file descriptor bytes specified by meta.
// For SupportPackageIsVersion4, meta is the name of the proto file, and
// proto.FileDescriptor is called to get the descriptor byte slice.
// For SupportPackageIsVersion3, meta is the byte slice itself.
func parseMetadata(meta interface{}) ([]byte, bool) {
// Check if meta is the file name.
if fileNameForMeta, ok := meta.(string); ok {
return proto.FileDescriptor(fileNameForMeta), true
}
// Check if meta is the byte slice.
if enc, ok := meta.([]byte); ok {
return enc, true
}
return nil, false
}
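// Illustrative only (hypothetical values): with SupportPackageIsVersion4 the
// generated metadata is the file name, e.g. "helloworld.proto", and the
// descriptor bytes are looked up via proto.FileDescriptor; with
// SupportPackageIsVersion3 the metadata already is the gzip-compressed
// descriptor byte slice and is returned unchanged.
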
// fileDescEncodingContainingSymbol finds the file descriptor containing the
// given symbol, collects all of its previously unsent transitive dependencies,
// marshals them, and returns the marshalled result. The given symbol can be a
// type, a service or a method.
func (s *serverReflectionServer) fileDescEncodingContainingSymbol(name string, sentFileDescriptors map[string]bool) ([][]byte, error) {
_, symbols := s.getSymbols()
fd := symbols[name]
if fd == nil {
// Check if it's a type name that was not present in the
// transitive dependencies of the registered services.
if st, err := typeForName(name); err == nil {
fd, err = s.fileDescForType(st)
if err != nil {
return nil, err
}
}
}
if fd == nil {
return nil, fmt.Errorf("unknown symbol: %v", name)
}
return fileDescWithDependencies(fd, sentFileDescriptors)
}
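// Illustrative only (hypothetical names): the symbol may be a service
// ("helloworld.Greeter"), a method ("helloworld.Greeter.SayHello") or a
// message ("helloworld.HelloRequest"); when the name was not indexed from the
// registered services, the typeForName fallback above only helps for message
// types known to the global registry.
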
// fileDescEncodingContainingExtension finds the file descriptor containing the
// given extension, collects all of its previously unsent transitive
// dependencies, marshals them, and returns the marshalled result.
func (s *serverReflectionServer) fileDescEncodingContainingExtension(typeName string, extNum int32, sentFileDescriptors map[string]bool) ([][]byte, error) {
st, err := typeForName(typeName)
if err != nil {
return nil, err
}
fd, err := fileDescContainingExtension(st, extNum)
if err != nil {
return nil, err
}
return fileDescWithDependencies(fd, sentFileDescriptors)
}
// allExtensionNumbersForTypeName returns all extension numbers for the given type.
func (s *serverReflectionServer) allExtensionNumbersForTypeName(name string) ([]int32, error) {
st, err := typeForName(name)
if err != nil {
return nil, err
}
extNums, err := s.allExtensionNumbersForType(st)
if err != nil {
return nil, err
}
return extNums, nil
}
// ServerReflectionInfo is the reflection service handler.
func (s *serverReflectionServer) ServerReflectionInfo(stream rpb.ServerReflection_ServerReflectionInfoServer) error {
sentFileDescriptors := make(map[string]bool)
for {
in, err := stream.Recv()
if err == io.EOF {
return nil
}
if err != nil {
return err
}
out := &rpb.ServerReflectionResponse{
ValidHost: in.Host,
OriginalRequest: in,
}
switch req := in.MessageRequest.(type) {
case *rpb.ServerReflectionRequest_FileByFilename:
b, err := s.fileDescEncodingByFilename(req.FileByFilename, sentFileDescriptors)
if err != nil {
out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{
ErrorResponse: &rpb.ErrorResponse{
ErrorCode: int32(codes.NotFound),
ErrorMessage: err.Error(),
},
}
} else {
out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{
FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: b},
}
}
case *rpb.ServerReflectionRequest_FileContainingSymbol:
b, err := s.fileDescEncodingContainingSymbol(req.FileContainingSymbol, sentFileDescriptors)
if err != nil {
out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{
ErrorResponse: &rpb.ErrorResponse{
ErrorCode: int32(codes.NotFound),
ErrorMessage: err.Error(),
},
}
} else {
out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{
FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: b},
}
}
case *rpb.ServerReflectionRequest_FileContainingExtension:
typeName := req.FileContainingExtension.ContainingType
extNum := req.FileContainingExtension.ExtensionNumber
b, err := s.fileDescEncodingContainingExtension(typeName, extNum, sentFileDescriptors)
if err != nil {
out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{
ErrorResponse: &rpb.ErrorResponse{
ErrorCode: int32(codes.NotFound),
ErrorMessage: err.Error(),
},
}
} else {
out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{
FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: b},
}
}
case *rpb.ServerReflectionRequest_AllExtensionNumbersOfType:
extNums, err := s.allExtensionNumbersForTypeName(req.AllExtensionNumbersOfType)
if err != nil {
out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{
ErrorResponse: &rpb.ErrorResponse{
ErrorCode: int32(codes.NotFound),
ErrorMessage: err.Error(),
},
}
} else {
out.MessageResponse = &rpb.ServerReflectionResponse_AllExtensionNumbersResponse{
AllExtensionNumbersResponse: &rpb.ExtensionNumberResponse{
BaseTypeName: req.AllExtensionNumbersOfType,
ExtensionNumber: extNums,
},
}
}
case *rpb.ServerReflectionRequest_ListServices:
svcNames, _ := s.getSymbols()
serviceResponses := make([]*rpb.ServiceResponse, len(svcNames))
for i, n := range svcNames {
serviceResponses[i] = &rpb.ServiceResponse{
Name: n,
}
}
out.MessageResponse = &rpb.ServerReflectionResponse_ListServicesResponse{
ListServicesResponse: &rpb.ListServiceResponse{
Service: serviceResponses,
},
}
default:
return status.Errorf(codes.InvalidArgument, "invalid MessageRequest: %v", in.MessageRequest)
}
if err := stream.Send(out); err != nil {
return err
}
}
}
const (
// ReflectV1ServiceName is the fully-qualified name of the v1 version of the reflection service.
ReflectV1ServiceName = "grpc.reflection.v1.ServerReflection"
// ReflectServiceURLPathV1 is the URL path prefix for the v1 reflection service endpoints.
ReflectServiceURLPathV1 = "/" + ReflectV1ServiceName + "/"
// ReflectMethodName is the name of the reflection service method.
ReflectMethodName = "ServerReflectionInfo"
)
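
For orientation, here is a minimal client-side sketch of how this handler can be exercised once a server is running. It is an illustration, not part of the change set: the address localhost:12345 is borrowed from the skipped test below, and it assumes the v1alpha reflection protocol is served (for the v1 path named in the constants above, the google.golang.org/grpc/reflection/grpc_reflection_v1 package exposes the same client API).

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	rpb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha"
)

func main() {
	// Hypothetical address, matching the skipped test below.
	conn, err := grpc.Dial("localhost:12345", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// Open the bidirectional reflection stream and request the service list.
	stream, err := rpb.NewServerReflectionClient(conn).ServerReflectionInfo(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	if err := stream.Send(&rpb.ServerReflectionRequest{
		MessageRequest: &rpb.ServerReflectionRequest_ListServices{},
	}); err != nil {
		log.Fatal(err)
	}
	resp, err := stream.Recv()
	if err != nil {
		log.Fatal(err)
	}
	for _, svc := range resp.GetListServicesResponse().GetService() {
		log.Printf("service: %s", svc.GetName())
	}
}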

62
reflection_test.go Normal file
View File

@@ -0,0 +1,62 @@
package grpc
import (
"fmt"
"testing"
"go.unistack.org/micro/v4/server"
"google.golang.org/grpc"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
)
type reflector struct{}
func (r *reflector) FindFileByPath(path string) (protoreflect.FileDescriptor, error) {
fd, err := protoregistry.GlobalFiles.FindFileByPath(path)
if err != nil {
fmt.Printf("err: %v\n", err)
return nil, err
}
return fd, nil
}
func (r *reflector) FindDescriptorByName(name protoreflect.FullName) (protoreflect.Descriptor, error) {
fd, err := protoregistry.GlobalFiles.FindDescriptorByName(name)
if err != nil {
return nil, err
}
return fd, nil
}
func (r *reflector) GetServiceInfo() map[string]grpc.ServiceInfo {
fmt.Printf("GetServiceInfo\n")
return nil
}
func (r *reflector) FindExtensionByName(field protoreflect.FullName) (protoreflect.ExtensionType, error) {
fmt.Printf("FindExtensionByName field %#+v\n", field)
return nil, nil
}
func (r *reflector) FindExtensionByNumber(message protoreflect.FullName, field protoreflect.FieldNumber) (protoreflect.ExtensionType, error) {
fmt.Printf("FindExtensionByNumber message %#+v field %#+v\n", message, field)
return nil, nil
}
func (r *reflector) RangeExtensionsByMessage(message protoreflect.FullName, f func(protoreflect.ExtensionType) bool) {
fmt.Printf("RangeExtensionsByMessage\n")
}
func TestReflector(t *testing.T) {
t.Skip()
srv := NewServer(Reflection(&reflector{}), server.Address(":12345"))
if err := srv.Init(); err != nil {
t.Fatal(err)
}
if err := srv.Start(); err != nil {
t.Fatal(err)
}
t.Logf("addr %s", srv.Options().Address)
select {}
}
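
With the t.Skip() removed and the server left running, the endpoint can also be probed externally, for example (assuming grpcurl is installed) with "grpcurl -plaintext localhost:12345 list", which issues the same ListServices request as the sketch above.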

View File

@@ -1,20 +1,14 @@
package grpc
import (
"io"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/server"
)
var (
_ server.Request = &rpcRequest{}
_ server.Message = &rpcMessage{}
)
var _ server.Request = &rpcRequest{}
type rpcRequest struct {
rw io.ReadWriter
payload interface{}
codec codec.Codec
header metadata.Metadata
@@ -25,14 +19,6 @@ type rpcRequest struct {
stream bool
}
type rpcMessage struct {
payload interface{}
codec codec.Codec
header metadata.Metadata
topic string
contentType string
}
func (r *rpcRequest) ContentType() string {
return r.contentType
}
@@ -58,11 +44,7 @@ func (r *rpcRequest) Header() metadata.Metadata {
}
func (r *rpcRequest) Read() ([]byte, error) {
f := &codec.Frame{}
if err := r.codec.ReadBody(r.rw, f); err != nil {
return nil, err
}
return f.Data, nil
return nil, nil
}
func (r *rpcRequest) Stream() bool {
@@ -72,23 +54,3 @@ func (r *rpcRequest) Stream() bool {
func (r *rpcRequest) Body() interface{} {
return r.payload
}
func (r *rpcMessage) ContentType() string {
return r.contentType
}
func (r *rpcMessage) Topic() string {
return r.topic
}
func (r *rpcMessage) Body() interface{} {
return r.payload
}
func (r *rpcMessage) Header() metadata.Metadata {
return r.header
}
func (r *rpcMessage) Codec() codec.Codec {
return r.codec
}

View File

@@ -1,17 +1,14 @@
package grpc
import (
"io"
"go.unistack.org/micro/v3/codec"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v4/codec"
"go.unistack.org/micro/v4/metadata"
"go.unistack.org/micro/v4/server"
)
var _ server.Response = &rpcResponse{}
type rpcResponse struct {
rw io.ReadWriter
header metadata.Metadata
codec codec.Codec
}
@@ -27,8 +24,5 @@ func (r *rpcResponse) WriteHeader(hdr metadata.Metadata) {
}
func (r *rpcResponse) Write(b []byte) error {
return r.codec.Write(r.rw, &codec.Message{
Header: r.header,
Body: b,
}, nil)
return nil
}

View File

@@ -14,7 +14,7 @@ import (
"unicode"
"unicode/utf8"
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v4/server"
)
// Precompute the reflect type for error. Can't use error directly
@@ -47,8 +47,8 @@ type rServer struct {
// Is this an exported - upper case - name?
func isExported(name string) bool {
rune, _ := utf8.DecodeRuneInString(name)
return unicode.IsUpper(rune)
r, _ := utf8.DecodeRuneInString(name)
return unicode.IsUpper(r)
}
// Is this type exported or a builtin?
@@ -123,43 +123,43 @@ func prepareEndpoint(method reflect.Method) (*methodType, error) {
return &methodType{method: method, ArgType: argType, ReplyType: replyType, ContextType: contextType, stream: stream}, nil
}
func (server *rServer) register(rcvr interface{}) error {
server.mu.Lock()
defer server.mu.Unlock()
if server.serviceMap == nil {
server.serviceMap = make(map[string]*service)
func (s *rServer) register(rcvr interface{}) error {
s.mu.Lock()
defer s.mu.Unlock()
if s.serviceMap == nil {
s.serviceMap = make(map[string]*service)
}
s := &service{}
s.typ = reflect.TypeOf(rcvr)
s.rcvr = reflect.ValueOf(rcvr)
sname := reflect.Indirect(s.rcvr).Type().Name()
srv := &service{}
srv.typ = reflect.TypeOf(rcvr)
srv.rcvr = reflect.ValueOf(rcvr)
sname := reflect.Indirect(srv.rcvr).Type().Name()
if sname == "" {
return fmt.Errorf("rpc: no service name for type %v", s.typ.String())
return fmt.Errorf("rpc: no service name for type %v", srv.typ.String())
}
if !isExported(sname) {
return fmt.Errorf("rpc Register: type %s is not exported", sname)
}
if _, present := server.serviceMap[sname]; present {
return fmt.Errorf("rpc: service already defined: " + sname)
if _, present := s.serviceMap[sname]; present {
return fmt.Errorf("rpc: service already defined: %s", sname)
}
s.name = sname
s.method = make(map[string]*methodType)
srv.name = sname
srv.method = make(map[string]*methodType)
// Install the methods
for m := 0; m < s.typ.NumMethod(); m++ {
method := s.typ.Method(m)
for m := 0; m < srv.typ.NumMethod(); m++ {
method := srv.typ.Method(m)
mt, err := prepareEndpoint(method)
if mt != nil && err == nil {
s.method[method.Name] = mt
srv.method[method.Name] = mt
} else if err != nil {
return err
}
}
if len(s.method) == 0 {
if len(srv.method) == 0 {
return fmt.Errorf("rpc Register: type %s has no exported methods of suitable type", sname)
}
server.serviceMap[s.name] = s
s.serviceMap[srv.name] = srv
return nil
}

View File

@@ -3,7 +3,7 @@ package grpc
import (
"context"
"go.unistack.org/micro/v3/server"
"go.unistack.org/micro/v4/server"
"google.golang.org/grpc"
)

View File

@@ -1,231 +0,0 @@
package grpc
import (
"context"
"fmt"
"reflect"
"runtime/debug"
"strings"
"go.unistack.org/micro/v3/broker"
"go.unistack.org/micro/v3/errors"
"go.unistack.org/micro/v3/logger"
"go.unistack.org/micro/v3/metadata"
"go.unistack.org/micro/v3/register"
"go.unistack.org/micro/v3/server"
)
type handler struct {
reqType reflect.Type
ctxType reflect.Type
method reflect.Value
}
type subscriber struct {
topic string
rcvr reflect.Value
typ reflect.Type
subscriber interface{}
handlers []*handler
endpoints []*register.Endpoint
opts server.SubscriberOptions
}
func newSubscriber(topic string, sub interface{}, opts ...server.SubscriberOption) server.Subscriber {
options := server.NewSubscriberOptions(opts...)
var endpoints []*register.Endpoint
var handlers []*handler
if typ := reflect.TypeOf(sub); typ.Kind() == reflect.Func {
h := &handler{
method: reflect.ValueOf(sub),
}
switch typ.NumIn() {
case 1:
h.reqType = typ.In(0)
case 2:
h.ctxType = typ.In(0)
h.reqType = typ.In(1)
}
handlers = append(handlers, h)
endpoints = append(endpoints, &register.Endpoint{
Name: "Func",
Request: register.ExtractSubValue(typ),
Metadata: map[string]string{
"topic": topic,
"subscriber": "true",
},
})
} else {
hdlr := reflect.ValueOf(sub)
name := reflect.Indirect(hdlr).Type().Name()
for m := 0; m < typ.NumMethod(); m++ {
method := typ.Method(m)
h := &handler{
method: method.Func,
}
switch method.Type.NumIn() {
case 2:
h.reqType = method.Type.In(1)
case 3:
h.ctxType = method.Type.In(1)
h.reqType = method.Type.In(2)
}
handlers = append(handlers, h)
endpoints = append(endpoints, &register.Endpoint{
Name: name + "." + method.Name,
Request: register.ExtractSubValue(method.Type),
Metadata: map[string]string{
"topic": topic,
"subscriber": "true",
},
})
}
}
return &subscriber{
rcvr: reflect.ValueOf(sub),
typ: reflect.TypeOf(sub),
topic: topic,
subscriber: sub,
handlers: handlers,
endpoints: endpoints,
opts: options,
}
}
func (g *Server) createSubHandler(sb *subscriber, opts server.Options) broker.Handler {
return func(p broker.Event) (err error) {
defer func() {
if r := recover(); r != nil {
if g.opts.Logger.V(logger.ErrorLevel) {
g.opts.Logger.Error(g.opts.Context, "panic recovered: ", r)
g.opts.Logger.Error(g.opts.Context, string(debug.Stack()))
}
err = errors.InternalServerError(g.opts.Name+".subscriber", "panic recovered: %v", r)
}
}()
msg := p.Message()
// if we don't have headers, create empty map
if msg.Header == nil {
msg.Header = make(map[string]string)
}
ct := msg.Header["Content-Type"]
if len(ct) == 0 {
msg.Header["Content-Type"] = DefaultContentType
ct = DefaultContentType
}
cf, err := g.newCodec(ct)
if err != nil {
return err
}
hdr := make(map[string]string, len(msg.Header))
for k, v := range msg.Header {
if k == "Content-Type" {
continue
}
hdr[k] = v
}
ctx := metadata.NewIncomingContext(sb.opts.Context, hdr)
results := make(chan error, len(sb.handlers))
for i := 0; i < len(sb.handlers); i++ {
handler := sb.handlers[i]
var isVal bool
var req reflect.Value
if handler.reqType.Kind() == reflect.Ptr {
req = reflect.New(handler.reqType.Elem())
} else {
req = reflect.New(handler.reqType)
isVal = true
}
if isVal {
req = req.Elem()
}
if err = cf.Unmarshal(msg.Body, req.Interface()); err != nil {
return err
}
fn := func(ctx context.Context, msg server.Message) error {
var vals []reflect.Value
if sb.typ.Kind() != reflect.Func {
vals = append(vals, sb.rcvr)
}
if handler.ctxType != nil {
vals = append(vals, reflect.ValueOf(ctx))
}
vals = append(vals, reflect.ValueOf(msg.Body()))
returnValues := handler.method.Call(vals)
if rerr := returnValues[0].Interface(); rerr != nil {
return rerr.(error)
}
return nil
}
for i := len(opts.SubWrappers); i > 0; i-- {
fn = opts.SubWrappers[i-1](fn)
}
if g.wg != nil {
g.wg.Add(1)
}
go func() {
if g.wg != nil {
defer g.wg.Done()
}
cerr := fn(ctx, &rpcMessage{
topic: sb.topic,
contentType: ct,
payload: req.Interface(),
header: msg.Header,
})
results <- cerr
}()
}
var errors []string
for i := 0; i < len(sb.handlers); i++ {
if rerr := <-results; rerr != nil {
errors = append(errors, rerr.Error())
}
}
if len(errors) > 0 {
err = fmt.Errorf("subscriber error: %s", strings.Join(errors, "\n"))
}
return err
}
}
func (s *subscriber) Topic() string {
return s.topic
}
func (s *subscriber) Subscriber() interface{} {
return s.subscriber
}
func (s *subscriber) Endpoints() []*register.Endpoint {
return s.endpoints
}
func (s *subscriber) Options() server.SubscriberOptions {
return s.opts
}

View File

@@ -26,6 +26,7 @@ func TestServiceMethod(t *testing.T) {
if err != nil && test.err == true {
continue
}
t.Logf("input %s service %s method %s", test.input, service, method)
// unexpected error
if err != nil && test.err == false {
t.Fatalf("unexpected err %v for %+v", err, test)