# EonaCat.LogStack
**EonaCat.LogStack** is a flow-based logging library for .NET, designed for zero-allocation logging paths and superior memory efficiency.

It features a rich fluent API for routing log events to dozens of destinations - from console and file to Slack, Discord, Redis, Elasticsearch, and beyond.
---

## Features
- **Flow-based architecture** - route log events to one or many output destinations simultaneously
- **Booster system** - enrich every log event with contextual metadata (machine name, process ID, thread info, memory, uptime, correlation IDs, and more)
- **Pre-build modifiers** - intercept and mutate log events before they are written
- **Zero-allocation hot path** - `AggressiveInlining` throughout, `StringBuilder` pooling, and a `ref`-based builder pattern
- **Async-first** - all flows implement `IAsyncDisposable` and support `FlushAsync`
- **Resilience built-in** - retry, failover, throttling, and rolling-buffer flows
- **Tamper-evident audit trail** - SHA-256 hash-chained audit files
- **Encrypted file logging** - AES-encrypted log files with a built-in decrypt utility
- **Compression** - GZip-compressed rolled log files
- **Category routing** - split logs into separate files per category or log level
- **Diagnostics** - live counters (total logged, total dropped, per-flow stats)
---

## Supported Targets

- .NET Standard 2.1
- .NET 8.0
- .NET Framework 4.8

---

## Installation
```bash
dotnet add package EonaCat.LogStack
```

---

## Quick Start
```csharp
await using var logger = LogBuilder.CreateDefault("MyApp");

logger.Information("Application started");
logger.Warning("Low memory warning");
logger.Error(ex, "Unexpected error occurred");
```

`CreateDefault` creates a logger that writes to both the console and a `./logs` directory, enriched with the machine name and process ID.
---

## Fluent Configuration

Build a fully customised logger using `LogBuilder`:
```csharp
await using var logger = new LogBuilder("MyApp")
    .WithMinimumLevel(LogLevel.Debug)
    .WithTimestampMode(TimestampMode.Utc)
    .WriteToConsole(useColors: true)
    .WriteToFile("./logs", filePrefix: "app", maxFileSize: 50 * 1024 * 1024)
    .WriteToSlack("https://hooks.slack.com/services/...")
    .BoostWithMachineName()
    .BoostWithProcessId()
    .BoostWithCorrelationId()
    .Build();
```

---

## Logging Methods
```csharp
logger.Trace("Verbose trace message");
logger.Debug("Debug detail");
logger.Information("Something happened");
logger.Warning("Potential problem");
logger.Warning(ex, "Warning with exception");
logger.Error("Something failed");
logger.Error(ex, "Error with exception");
logger.Critical("System is going down");
logger.Critical(ex, "Critical failure");

// With structured properties
logger.Log(LogLevel.Information, "User logged in",
    ("UserId", 42),
    ("IP", "192.168.1.1"));
```

---
## Available Flows

Flows can be extended with custom implementations of `IFlow`, but these are the built-in options:
| Flow | Method | Description |
|------|--------|-------------|
| Console | `WriteToConsole()` | Colored console output |
| File | `WriteToFile()` | Batched, rotated, compressed file output |
| Encrypted File | `WriteToEncryptedFile()` | AES-encrypted log files |
| Memory | `WriteToMemory()` | In-memory ring buffer |
| Audit | `WriteToAudit()` | Tamper-evident hash-chained audit trail |
| Database | `WriteToDatabase()` | ADO.NET database sink |
| HTTP | `WriteToHttp()` | Generic HTTP endpoint (batched) |
| Webhook | `WriteToWebhook()` | Generic webhook POST |
| Email | `WriteToEmail()` | HTML digest emails via SMTP |
| Slack | `WriteToSlack()` | Slack incoming webhooks |
| Discord | `WriteToDiscord()` | Discord webhooks |
| Microsoft Teams | `WriteToMicrosoftTeams()` | Teams incoming webhooks |
| Telegram | `WriteToTelegram()` | Telegram bot messages |
| SignalR | `WriteToSignalR()` | Real-time SignalR hub push |
| Redis | `RedisFlow()` | Redis Pub/Sub + optional List persistence |
| Elasticsearch | `WriteToElasticSearch()` | Elasticsearch index |
| Splunk | `WriteToSplunkFlow()` | Splunk HEC |
| Graylog | `WriteToGraylogFlow()` | GELF over UDP or TCP |
| Syslog UDP | `WriteToSyslogUdp()` | RFC 5424 Syslog over UDP |
| Syslog TCP | `WriteToSyslogTcp()` | RFC 5424 Syslog over TCP (with optional TLS) |
| TCP | `WriteToTcp()` | Raw TCP (with optional TLS) |
| UDP | `WriteToUdp()` | Raw UDP datagrams |
| SNMP Trap | `WriteToSnmpTrap()` | SNMP v2c traps |
| Zabbix | `WriteToZabbixFlow()` | Zabbix trapper protocol |
| EventLog | `WriteToEventLogFlow()` | Remote event log forwarding |
| Rolling Buffer | `WriteToRollingBuffer()` | Circular buffer with trigger-based flush |
| Throttled | `WriteToThrottled()` | Token-bucket rate limiting + deduplication |
| Retry | `WriteToRetry()` | Automatic retry with exponential back-off |
| Failover | `WriteToFailover()` | Primary/secondary failover |
| Diagnostics | `WriteDiagnostics()` | Periodic diagnostic snapshots |
| Status | `WriteToStatusFlow()` | Service health monitoring |
---

## Available Boosters

Boosters enrich every log event with additional properties before it reaches any flow.
```csharp
new LogBuilder("MyApp")
    .BoostWithMachineName()                  // host name
    .BoostWithProcessId()                    // PID
    .BoostWithThreadId()                     // managed thread ID
    .BoostWithThreadName()                   // thread name
    .BoostWithUser()                         // current OS user
    .BoostWithApp()                          // app name and base directory
    .BoostWithApplication("MyApp", "2.0.0")  // explicit name + version
    .BoostWithEnvironment("Production")
    .BoostWithOS()                           // OS description
    .BoostWithFramework()                    // .NET runtime description
    .BoostWithMemory()                       // working set in MB
    .BoostWithUptime()                       // process uptime in seconds
    .BoostWithProcStart()                    // process start time
    .BoostWithDate()                         // current date (yyyy-MM-dd)
    .BoostWithTime()                         // current time (HH:mm:ss.fff)
    .BoostWithTicks()                        // current timestamp ticks
    .BoostWithCorrelationId()                // Activity.Current correlation ID
    .BoostWithCustomText("env", "prod")      // arbitrary key/value
    .Boost("myBooster", () => new Dictionary<string, object?> { ["key"] = "val" })
    ...
```
---

## Pre-Build Modifiers

Modifiers run after boosters and can mutate or cancel a log event before it is dispatched to flows:
```csharp
logger.AddModifier((ref LogEventBuilder builder) =>
{
    builder.WithProperty("RequestId", Guid.NewGuid().ToString());
});
```
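The cancellation side depends on the builder API; as a rough sketch only, assuming a hypothetical `Cancel()` method and `Message` property on `LogEventBuilder` (neither is confirmed against the actual API):

```csharp
// Hypothetical sketch - check the real LogEventBuilder members before use.
logger.AddModifier((ref LogEventBuilder builder) =>
{
    // Drop noisy health-check entries before they reach any flow
    if (builder.Message.Contains("/healthz"))
    {
        builder.Cancel(); // hypothetical cancellation call
    }
});
```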
---

## Resilience Patterns

### Retry with exponential back-off

```csharp
.WriteToRetry(
    primaryFlow: new HttpFlow("https://logs.example.com"),
    maxRetries: 5,
    initialDelay: TimeSpan.FromMilliseconds(200),
    exponentialBackoff: true)
```
### Primary / secondary failover

```csharp
.WriteToFailover(
    primaryFlow: new ElasticSearchFlow("https://es-prod:9200"),
    secondaryFlow: new FileFlow("./fallback-logs"))
```
### Token-bucket throttling with deduplication

```csharp
.WriteToThrottled(
    inner: new SlackFlow(webhookUrl),
    burstCapacity: 10,
    refillPerSecond: 1.0,
    deduplicate: true,
    dedupWindow: TimeSpan.FromSeconds(60))
```
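As background, a token bucket admits short bursts while enforcing an average rate. A minimal self-contained sketch of the idea (not the library's implementation):

```csharp
using System;

// Token-bucket sketch: up to 'capacity' events may burst through at once,
// after which events are admitted at roughly 'refillPerSecond' on average.
class TokenBucket
{
    private readonly double _capacity;
    private readonly double _refillPerSecond;
    private double _tokens;
    private DateTime _last = DateTime.UtcNow;

    public TokenBucket(double capacity, double refillPerSecond)
    {
        _capacity = capacity;
        _refillPerSecond = refillPerSecond;
        _tokens = capacity; // start full so an initial burst passes
    }

    public bool TryConsume()
    {
        var now = DateTime.UtcNow;
        _tokens = Math.Min(_capacity, _tokens + (now - _last).TotalSeconds * _refillPerSecond);
        _last = now;
        if (_tokens < 1) return false; // no token left -> the event would be dropped
        _tokens -= 1;
        return true;
    }
}
```

With `burstCapacity: 10` and `refillPerSecond: 1.0` as above, the first ten events pass immediately and sustained traffic is limited to about one event per second.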
### Rolling buffer - flush context on error

```csharp
.WriteToRollingBuffer(
    capacity: 500,
    triggerLevel: LogLevel.Error,
    triggerTarget: new FileFlow("./error-context"))
```

---
## Encrypted File Logging

```csharp
.WriteToEncryptedFile("./secure-logs", password: "s3cr3t")
```

To decrypt later:

```csharp
LogBuilder.DecryptFile(
    encryptedPath: "./secure-logs/log.enc",
    outputPath: "./secure-logs/log.txt",
    password: "s3cr3t");
```

---
## Audit Trail

The audit flow produces a tamper-evident file in which every entry is SHA-256 hash-chained. Deleting or modifying any past entry invalidates all subsequent hashes.

```csharp
.WriteToAudit(
    directory: "./audit",
    auditLevel: AuditLevel.WarningAndAbove,
    includeProperties: true)
```

Verify integrity at any time:

```csharp
bool intact = AuditFlow.Verify("./audit/audit.audit");
```
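Conceptually, the chain works by making each entry's hash cover the previous hash plus its own payload. A self-contained sketch of the idea (not the library's actual file format):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hash-chaining sketch: altering any past payload changes its hash,
// which changes every later hash, so tampering is detectable.
static string ChainHash(string previousHash, string payload)
{
    byte[] digest = SHA256.HashData(Encoding.UTF8.GetBytes(previousHash + payload));
    return Convert.ToHexString(digest);
}

string h0 = new string('0', 64);              // genesis value
string h1 = ChainHash(h0, "user logged in");
string h2 = ChainHash(h1, "user logged out"); // depends on h1, and therefore on h0
```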
---

## Log Message Template

Both `ConsoleFlow` and `FileFlow` accept a customisable template string:

```
[{ts}] [{tz}] [Host: {host}] [Category: {category}] [Thread: {thread}] [{logtype}] {message}{props}
```
| Token | Description |
|-------|-------------|
| `{ts}` | Timestamp (yyyy-MM-dd HH:mm:ss.fff) |
| `{tz}` | Timezone (UTC or local name) |
| `{host}` | Machine name |
| `{category}` | Logger category |
| `{thread}` | Managed thread ID |
| `{pid}` | Process ID |
| `{logtype}` | Log level label (INFO, WARN, ERROR, …) |
| `{message}` | Log message text |
| `{props}` | Structured properties as key=value pairs |
| `{newline}` | Line break |
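For example, a compact console line could be configured along these lines (the `template:` parameter name is an assumption for illustration; check the actual `WriteToConsole`/`WriteToFile` overloads):

```csharp
// Parameter name 'template' is illustrative - see the actual overloads.
.WriteToConsole(template: "[{ts}] [{logtype}] {message}{props}{newline}")
```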
---

## Diagnostics

```csharp
var diag = logger.GetDiagnostics();
Console.WriteLine($"Logged: {diag.TotalLogged}, Dropped: {diag.TotalDropped}");
```
---

## Flushing and Disposal

```csharp
// Flush all pending events
await logger.FlushAsync();

// Dispose (flushes automatically)
await logger.DisposeAsync();
```

---
## Events

```csharp
logger.OnLog += (sender, msg) =>
{
    // Fired for every log event that passes filters
    Console.WriteLine($"[Event] {msg.Level}: {msg.Message}");
};
```

---
## Custom Flows

Implement `IFlow` (or extend `FlowBase`) to create your own destination:

```csharp
public class MyFlow : FlowBase
{
    public MyFlow() : base("MyFlow", LogLevel.Trace) { }

    public override Task<WriteResult> BlastAsync(LogEvent logEvent, CancellationToken ct = default)
    {
        // Write logEvent somewhere
        return Task.FromResult(WriteResult.Success);
    }

    public override Task FlushAsync(CancellationToken ct = default) => Task.CompletedTask;
}

// Register with:
new LogBuilder("App").WriteTo(new MyFlow()).Build();
```

---
# EonaCat.LogStack.Server

A lightweight, multi-transport log server for the EonaCat LogStack ecosystem.

## Quick start
```csharp
// Minimal - UDP on port 5555
var server = new Server();
await server.Start();

// Full control via ServerOptions
var server2 = new Server(new ServerOptions
{
    UseTcp = true,
    UseUdp = true,
    UseHttp = true,  // enables POST /ingest + GET /metrics
    Port = 5555,
    HttpPort = 5556,
    MinimumLevel = ServerLogLevel.Information,  // drop Debug/Trace
    RateLimitPerSecond = 100,                   // per remote endpoint
    LogRetentionDays = 30,
    MaxLogDirectorySize = 10L * 1024 * 1024 * 1024,  // 10 GB
    LogsRootDirectory = "logs",
});

server2.LogWritten += line => Console.WriteLine("[written] " + line);
server2.LogDropped += line => Console.WriteLine("[dropped] " + line);

await server2.Start();
```
## Transports

| Transport | Default | Notes |
|-----------|---------|-------|
| TCP | enabled | Streams until the connection closes |
| UDP | enabled | Max 65,507 bytes per packet |
| HTTP | disabled | Enable via `UseHttp = true` |

TCP and UDP can run simultaneously on the same port.
## HTTP endpoints (when `UseHttp = true`)

| Method | Path | Description |
|--------|------|-------------|
| `POST` | `/ingest` | Accept a JSON log entry or array |
| `GET` | `/metrics` | Return live server metrics as JSON |
### POST /ingest - single entry

```json
{ "level": "info", "message": "Hello world", "source": "MyApp", "host": "srv-01" }
```

### POST /ingest - batch

```json
[
  { "level": "warn", "message": "Disk at 80%", "source": "monitor" },
  { "level": "error", "message": "DB timeout", "source": "api", "exception": "TimeoutException…" }
]
```
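A client can post these payloads with plain `HttpClient`; a sketch assuming the server runs locally with `UseHttp = true` and `HttpPort = 5556` (host and port are assumptions for illustration):

```csharp
using System.Net.Http;
using System.Text;

using var http = new HttpClient();
const string json = "{ \"level\": \"info\", \"message\": \"Hello world\", \"source\": \"MyApp\" }";

// POST a single entry to the ingest endpoint
var response = await http.PostAsync(
    "http://localhost:5556/ingest",
    new StringContent(json, Encoding.UTF8, "application/json"));
```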
### GET /metrics response

```json
{
  "totalReceived": 12345,
  "totalWritten": 12300,
  "totalDropped": 45,
  "totalBytes": 4096000,
  "activeTcpConnections": 3,
  "uptimeSeconds": 3600,
  "startedAt": "2026-03-27T08:00:00Z"
}
```
## Structured JSON parsing

If the incoming payload is valid JSON, the server parses it and formats each entry before writing:

```
[2026-03-27T09:15:00Z] [ERROR] [MyApp] host=srv-01 trace=abc123 Something went wrong
EXCEPTION: System.TimeoutException: The operation timed out.
```

Recognised JSON fields:
| Field | Aliases | Description |
|-------|---------|-------------|
| `timestamp` | - | ISO-8601 timestamp |
| `level` | `Level`, `severity`, `Severity` | Log level string |
| `message` | `Message` | Log message |
| `source` | `application` | App / service name |
| `host` | - | Hostname |
| `traceId` | - | Distributed trace ID |
| `exception` | - | Exception string |

Plain-text payloads are written as-is and always bypass the level filter.
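Given the aliases above, a payload using the alternate field names should parse to the same kind of entry (illustrative):

```json
{ "Severity": "error", "Message": "DB timeout", "application": "api", "traceId": "abc123" }
```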
## Log level filtering

```csharp
MinimumLevel = ServerLogLevel.Warning  // only Warning / Error / Critical are stored
```

Levels in order: `Trace → Debug → Information → Warning → Error → Critical`
## Rate limiting

```csharp
RateLimitPerSecond = 100  // per remote IP:port, 0 = disabled
```

Dropped messages are counted in `Metrics.TotalDropped` and raise the `LogDropped` event.
## Metrics

```csharp
var m = server.GetMetrics();
Console.WriteLine($"Written={m.TotalWritten} Dropped={m.TotalDropped} Uptime={m.Uptime}");
```
## Log file layout

```
logs/
  20260327/
    EonaCatLogs.log      ← active file (≤ 200 MB)
    EonaCatLogs_1.log    ← rolled over
  20260326/
    EonaCatLogs.log
```

Daily directories older than `LogRetentionDays` are deleted automatically, and the total log directory is capped at `MaxLogDirectorySize`.
## Events

```csharp
server.LogWritten += line => NotifyDashboard(line);
server.LogDropped += line => Metrics.Increment("dropped");
```
## Graceful shutdown

```csharp
Console.CancelKeyPress += (_, e) => { e.Cancel = true; server.Stop(); };
```

`Stop()` prints a throughput summary and disposes all listeners cleanly.