Compare commits


No commits in common. "v3" and "v3.11.35" have entirely different histories.
v3 ... v3.11.35

7 changed files with 59 additions and 150 deletions


@@ -26,24 +26,24 @@ jobs:
       - name: test coverage
         run: |
-          go test -v -cover ./... -covermode=count -coverprofile coverage.out -coverpkg ./...
+          go test -v -cover ./... -coverprofile coverage.out -coverpkg ./...
           go tool cover -func coverage.out -o coverage.out
       - name: coverage badge
-        uses: tj-actions/coverage-badge-go@v2
+        uses: tj-actions/coverage-badge-go@v1
         with:
           green: 80
           filename: coverage.out
       - uses: stefanzweifel/git-auto-commit-action@v4
-        name: autocommit
+        id: auto-commit-action
         with:
           commit_message: Apply Code Coverage Badge
           skip_fetch: true
           skip_checkout: true
           file_pattern: ./README.md
-      - name: push
+      - name: Push Changes
         if: steps.auto-commit-action.outputs.changes_detected == 'true'
         uses: ad-m/github-push-action@master
         with:


@@ -1,10 +1,5 @@
 # Micro
 ![Coverage](https://img.shields.io/badge/Coverage-44.8%25-yellow)
-[![License](https://img.shields.io/:license-apache-blue.svg)](https://opensource.org/licenses/Apache-2.0)
-[![Doc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/go.unistack.org/micro/v3?tab=overview)
-[![Status](https://git.unistack.org/unistack-org/micro/actions/workflows/job_tests.yml/badge.svg?branch=v3)](https://git.unistack.org/unistack-org/micro/actions?query=workflow%3Abuild+branch%3Av3+event%3Apush)
-[![Lint](https://goreportcard.com/badge/go.unistack.org/micro/v3)](https://goreportcard.com/report/go.unistack.org/micro/v3)
-![Coverage](https://img.shields.io/badge/coverage-44.6%25-yellow)
 Micro is a standard library for microservices.
@@ -16,20 +11,30 @@ Micro provides the core requirements for distributed systems development includi
 Micro abstracts away the details of distributed systems. Here are the main features.
+- **Authentication** - Auth is built in as a first class citizen. Authentication and authorization enable secure
+zero trust networking by providing every service an identity and certificates. This additionally includes rule
+based access control.
 - **Dynamic Config** - Load and hot reload dynamic config from anywhere. The config interface provides a way to load application
-level config from any source such as env vars, cmdline, file, consul, vault... You can merge the sources and even define fallbacks.
+level config from any source such as env vars, file, etcd. You can merge the sources and even define fallbacks.
 - **Data Storage** - A simple data store interface to read, write and delete records. It includes support for memory, file and
-s3. State and persistence becomes a core requirement beyond prototyping and Micro looks to build that into the framework.
+CockroachDB by default. State and persistence becomes a core requirement beyond prototyping and Micro looks to build that into the framework.
 - **Service Discovery** - Automatic service registration and name resolution. Service discovery is at the core of micro service
 development. When service A needs to speak to service B it needs the location of that service.
+- **Load Balancing** - Client side load balancing built on service discovery. Once we have the addresses of any number of instances
+of a service we now need a way to decide which node to route to. We use random hashed load balancing to provide even distribution
+across the services and retry a different node if there's a problem.
 - **Message Encoding** - Dynamic message encoding based on content-type. The client and server will use codecs along with content-type
 to seamlessly encode and decode Go types for you. Any variety of messages could be encoded and sent from different clients. The client
 and server handle this by default.
-- **Async Messaging** - Pub/Sub is built in as a first class citizen for asynchronous communication and event driven architectures.
+- **Transport** - gRPC or http based request/response with support for bidirectional streaming. We provide an abstraction for synchronous communication. A request made to a service will be automatically resolved, load balanced, dialled and streamed.
+- **Async Messaging** - PubSub is built in as a first class citizen for asynchronous communication and event driven architectures.
 Event notifications are a core pattern in micro service development.
 - **Synchronization** - Distributed systems are often built in an eventually consistent manner. Support for distributed locking and
@@ -38,6 +43,10 @@ leadership are built in as a Sync interface. When using an eventually consistent
 - **Pluggable Interfaces** - Micro makes use of Go interfaces for each system abstraction. Because of this these interfaces
 are pluggable and allows Micro to be runtime agnostic.
+## Getting Started
+To be created.
 ## License
 Micro is Apache 2.0 licensed.
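
The README diff above describes each system abstraction as a pluggable Go interface. Purely as an illustration of that pattern, the sketch below defines a hypothetical Store interface with an in-memory backend; the names Store and memoryStore are invented for this example and are not the go.unistack.org/micro/v3 API.

// Illustrative sketch only: Store and memoryStore are hypothetical names used
// to show the "pluggable interface" idea from the feature list; they are not
// the go.unistack.org/micro/v3 API.
package main

import (
	"errors"
	"fmt"
)

// Store is a minimal data store abstraction: read, write and delete records.
type Store interface {
	Read(key string) ([]byte, error)
	Write(key string, val []byte) error
	Delete(key string) error
}

// memoryStore is one pluggable backend; a file- or database-backed type
// could satisfy the same interface and be swapped in without code changes.
type memoryStore struct {
	data map[string][]byte
}

func (m *memoryStore) Read(key string) ([]byte, error) {
	v, ok := m.data[key]
	if !ok {
		return nil, errors.New("not found")
	}
	return v, nil
}

func (m *memoryStore) Write(key string, val []byte) error {
	m.data[key] = val
	return nil
}

func (m *memoryStore) Delete(key string) error {
	delete(m.data, key)
	return nil
}

func main() {
	var s Store = &memoryStore{data: make(map[string][]byte)}
	_ = s.Write("greeting", []byte("hello"))
	v, _ := s.Read("greeting")
	fmt.Println(string(v)) // hello
}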


@@ -14,7 +14,6 @@ import (
 	"github.com/google/uuid"
 	"go.unistack.org/micro/v3/logger"
 	"go.unistack.org/micro/v3/metadata"
-	"go.unistack.org/micro/v3/util/buffer"
 )
 // always first to have proper check
@@ -31,30 +30,11 @@ func TestStacktrace(t *testing.T) {
 	l.Error(ctx, "msg1", errors.New("err"))
-	if !bytes.Contains(buf.Bytes(), []byte(`slog_test.go:32`)) {
+	if !bytes.Contains(buf.Bytes(), []byte(`slog_test.go:31`)) {
 		t.Fatalf("logger error not works, buf contains: %s", buf.Bytes())
 	}
 }
-func TestDelayedBuffer(t *testing.T) {
-	ctx := context.TODO()
-	buf := bytes.NewBuffer(nil)
-	dbuf := buffer.NewDelayedBuffer(100, 100*time.Millisecond, buf)
-	l := NewLogger(logger.WithLevel(logger.ErrorLevel), logger.WithOutput(dbuf),
-		WithHandlerFunc(slog.NewTextHandler),
-		logger.WithAddStacktrace(true),
-	)
-	if err := l.Init(logger.WithFields("key1", "val1")); err != nil {
-		t.Fatal(err)
-	}
-	l.Error(ctx, "msg1", errors.New("err"))
-	time.Sleep(120 * time.Millisecond)
-	if !bytes.Contains(buf.Bytes(), []byte(`key1=val1`)) {
-		t.Fatalf("logger delayed buffer not works, buf contains: %s", buf.Bytes())
-	}
-}
 func TestTime(t *testing.T) {
 	ctx := context.TODO()
 	buf := bytes.NewBuffer(nil)

util/buf/buf.go (new file)

@@ -0,0 +1,27 @@
+package buf
+
+import (
+	"bytes"
+	"io"
+)
+
+var _ io.Closer = &Buffer{}
+
+// Buffer is a bytes.Buffer wrapper that satisfies the io.Closer interface
+type Buffer struct {
+	*bytes.Buffer
+}
+
+// Close resets the buffer contents
+func (b *Buffer) Close() error {
+	b.Buffer.Reset()
+	return nil
+}
+
+// New creates a new buffer that satisfies the io.Closer interface
+func New(b *bytes.Buffer) *Buffer {
+	if b == nil {
+		b = bytes.NewBuffer(nil)
+	}
+	return &Buffer{b}
+}
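
For context on the new util/buf package above: Buffer embeds *bytes.Buffer and adds a Close that simply resets the contents, so a plain buffer can be handed to code expecting an io.Closer. A minimal usage sketch follows; the import path is an assumption derived from the module path used elsewhere in this diff.

// Usage sketch for the Buffer type added above; the import path is an
// assumption based on the module path seen elsewhere in this diff.
package main

import (
	"fmt"

	"go.unistack.org/micro/v3/util/buf"
)

func main() {
	b := buf.New(nil) // nil is allowed: New allocates an empty bytes.Buffer

	// The embedded *bytes.Buffer provides the usual Write/Len/String methods.
	_, _ = b.WriteString("hello")
	fmt.Println(b.Len()) // 5

	// Close satisfies io.Closer by resetting the contents rather than
	// releasing any resource.
	_ = b.Close()
	fmt.Println(b.Len()) // 0
}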


@@ -1,85 +0,0 @@
-package buffer
-
-import (
-	"io"
-	"sync"
-	"time"
-)
-
-var _ io.WriteCloser = (*DelayedBuffer)(nil)
-
-// DelayedBuffer is a buffer that holds items until either the buffer is filled or a specified time limit is reached
-type DelayedBuffer struct {
-	mu        sync.Mutex
-	maxWait   time.Duration
-	flushTime time.Time
-	buffer    chan []byte
-	ticker    *time.Ticker
-	w         io.Writer
-	err       error
-}
-
-func NewDelayedBuffer(size int, maxWait time.Duration, w io.Writer) *DelayedBuffer {
-	b := &DelayedBuffer{
-		buffer:    make(chan []byte, size),
-		ticker:    time.NewTicker(maxWait),
-		w:         w,
-		flushTime: time.Now(),
-		maxWait:   maxWait,
-	}
-	b.loop()
-	return b
-}
-
-func (b *DelayedBuffer) loop() {
-	go func() {
-		for range b.ticker.C {
-			b.mu.Lock()
-			if time.Since(b.flushTime) > b.maxWait {
-				b.flush()
-			}
-			b.mu.Unlock()
-		}
-	}()
-}
-
-func (b *DelayedBuffer) flush() {
-	bufLen := len(b.buffer)
-	if bufLen > 0 {
-		tmp := make([][]byte, bufLen)
-		for i := 0; i < bufLen; i++ {
-			tmp[i] = <-b.buffer
-		}
-		for _, t := range tmp {
-			_, b.err = b.w.Write(t)
-		}
-		b.flushTime = time.Now()
-	}
-}
-
-func (b *DelayedBuffer) Put(items ...[]byte) {
-	b.mu.Lock()
-	for _, item := range items {
-		select {
-		case b.buffer <- item:
-		default:
-			b.flush()
-			b.buffer <- item
-		}
-	}
-	b.mu.Unlock()
-}
-
-func (b *DelayedBuffer) Close() error {
-	b.mu.Lock()
-	b.flush()
-	close(b.buffer)
-	b.ticker.Stop()
-	b.mu.Unlock()
-	return b.err
-}
-
-func (b *DelayedBuffer) Write(data []byte) (int, error) {
-	b.Put(data)
-	return len(data), b.err
-}


@@ -1,22 +0,0 @@
-package buffer
-
-import (
-	"bytes"
-	"testing"
-	"time"
-)
-
-func TestTimedBuffer(t *testing.T) {
-	buf := bytes.NewBuffer(nil)
-	b := NewDelayedBuffer(100, 300*time.Millisecond, buf)
-	for i := 0; i < 100; i++ {
-		_, _ = b.Write([]byte(`test`))
-	}
-	if buf.Len() != 0 {
-		t.Fatal("delayed write not worked")
-	}
-	time.Sleep(400 * time.Millisecond)
-	if buf.Len() == 0 {
-		t.Fatal("delayed write not worked")
-	}
-}


@@ -35,11 +35,11 @@ func TestUnmarshalYAML(t *testing.T) {
 		t.Fatalf("invalid duration %v != 10000000", v.TTL)
 	}
-	err = yaml.Unmarshal([]byte(`{"ttl":"1d"}`), v)
+	err = yaml.Unmarshal([]byte(`{"ttl":"1y"}`), v)
 	if err != nil {
 		t.Fatal(err)
-	} else if *(v.TTL) != 86400000000000 {
-		t.Fatalf("invalid duration %v != 86400000000000", *v.TTL)
+	} else if *(v.TTL) != 31622400000000000 {
+		t.Fatalf("invalid duration %v != 31622400000000000", v.TTL)
 	}
 }
@@ -68,11 +68,11 @@ func TestUnmarshalJSON(t *testing.T) {
 		t.Fatalf("invalid duration %v != 10000000", v.TTL)
 	}
-	err = json.Unmarshal([]byte(`{"ttl":"1d"}`), v)
+	err = json.Unmarshal([]byte(`{"ttl":"1y"}`), v)
 	if err != nil {
 		t.Fatal(err)
-	} else if v.TTL != 86400000000000 {
-		t.Fatalf("invalid duration %v != 86400000000000", v.TTL)
+	} else if v.TTL != 31622400000000000 {
+		t.Fatalf("invalid duration %v != 31622400000000000", v.TTL)
 	}
 }
@@ -87,11 +87,11 @@ func TestParseDuration(t *testing.T) {
 	if td.String() != "340h0m0s" {
 		t.Fatalf("ParseDuration 14d != 340h0m0s : %s", td.String())
 	}
-	td, err = ParseDuration("1d")
+	td, err = ParseDuration("1y")
 	if err != nil {
 		t.Fatalf("ParseDuration error: %v", err)
 	}
-	if td.String() != "24h0m0s" {
-		t.Fatalf("ParseDuration 1d != 24h0m0s : %s", td.String())
+	if td.String() != "8784h0m0s" {
+		t.Fatalf("ParseDuration 1y != 8784h0m0s : %s", td.String())
 	}
 }
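
For reference on the updated expectations: the tests previously parsed "1d" as 24h (86400000000000 ns) and now parse "1y" as 366 days, i.e. 8784h (31622400000000000 ns). The 366-day year is inferred from the expected values themselves, not from the package documentation; the sketch below just reproduces the arithmetic with the standard library.

// Quick check of the duration arithmetic behind the updated expectations.
package main

import (
	"fmt"
	"time"
)

func main() {
	day := 24 * time.Hour
	year := 366 * day // 366-day year inferred from the expected test values

	fmt.Println(int64(day))    // 86400000000000, the old "1d" expectation
	fmt.Println(int64(year))   // 31622400000000000, the new "1y" expectation
	fmt.Println(year.String()) // 8784h0m0s
}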