Logging Configuration
ProActions Hub provides comprehensive logging capabilities powered by Winston. This guide covers configuration, log types, and best practices for monitoring and compliance.
Quick Start
Minimal Configuration
```yaml
logging:
  format: json
  defaultLogLevel: info
  types:
    access:
      enabled: true
      console: true
    error:
      enabled: true
      console: true
```
This enables basic access and error logging to console in JSON format.
Logging Configuration Structure
```yaml
logging:
  format: string           # Log format: 'json' or 'text'
  text_format: string      # Custom text format template
  defaultLogLevel: string  # Default log level
  logDirectory: string     # Directory for log files
  filePrefix: string       # Prefix for log filenames
  fileRotation: boolean    # Enable file rotation
  maxFileSize: string      # Max size per log file
  maxFiles: string         # Retention period
  sensitiveData: array     # Keywords to filter
  logTrace: boolean        # Include stack traces
  types: object            # Per-type configurations
```
Log Formats
JSON Format (Recommended)
```yaml
logging:
  format: json
```
Example output:
```json
{
  "timestamp": "2024-01-15T10:30:00.123Z",
  "level": "info",
  "message": "AI request completed",
  "module": "aiLink",
  "target": "openai",
  "model": "gpt-4o-mini",
  "promptTokens": 15,
  "completionTokens": 25,
  "totalTokens": 40
}
```
Advantages:
- Easy to parse and query
- Works well with log aggregation tools
- Structured data for analytics
- Better for production environments
Text Format
```yaml
logging:
  format: text
  text_format: "${timestamp} [${level}] [${module}]: ${message}"
```
Example output:
```text
2024-01-15T10:30:00.123Z [info] [aiLink]: AI request completed
```
Advantages:
- Human-readable
- Good for development
- Easier to read in terminal
Log Levels
Supported log levels (highest to lowest priority):
| Level | Description | Use Case |
|---|---|---|
| `error` | Critical errors requiring attention | Production issues, exceptions |
| `warn` | Warning conditions | Deprecated features, recoverable errors |
| `info` | Informational messages | Request tracking, business events |
| `debug` | Detailed debugging information | Development, troubleshooting |
| `verbose` | Very detailed information | Deep debugging |
| `silly` | Most granular logs | Rarely used |
Default Log Level
```yaml
logging:
  defaultLogLevel: info
```
This sets the level for log types that don't specify their own level.
Use info or warn in production. Use debug only for troubleshooting.
Log Types
ProActions Hub supports six log types, each configurable independently:
1. Access Logs
HTTP request/response logs.
```yaml
logging:
  types:
    access:
      enabled: true
      console: true
      file: true
      level: info
```
Logged information:
- HTTP method and URL
- Status code
- Response time
- Client IP address
- User agent
- Request ID
- Username (if authenticated)
Example:
```json
{
  "timestamp": "2024-01-15T10:30:00.123Z",
  "level": "info",
  "message": "POST /v1/chat/completions 200 1234ms",
  "method": "POST",
  "url": "/v1/chat/completions",
  "statusCode": 200,
  "responseTime": 1234,
  "ip": "192.168.1.100",
  "userAgent": "ProActions-Client/1.0",
  "requestId": "abc123",
  "username": "john.doe"
}
```
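Because access logs are structured JSON, operational questions reduce to one-line `jq` queries. As a sketch, the sample records below are made up; in practice you would pipe `logs/app-access-*.log` through the filter instead of a heredoc:

```bash
# Average response time (ms) across access-log records.
jq -s 'map(.responseTime) | add / length' <<'EOF'
{"statusCode": 200, "responseTime": 120}
{"statusCode": 200, "responseTime": 80}
EOF
```

Expected result: `100`.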
2. Error Logs
Application errors and exceptions.
```yaml
logging:
  types:
    error:
      enabled: true
      console: true
      file: true
      level: error
```
Logged information:
- Error message
- Stack trace (if `logTrace: true`)
- Request context
- Error code
- Module where error occurred
Example:
```json
{
  "timestamp": "2024-01-15T10:30:00.123Z",
  "level": "error",
  "message": "Failed to connect to AI provider",
  "error": "ECONNREFUSED",
  "stack": "Error: ECONNREFUSED...",
  "module": "aiLink",
  "target": "openai"
}
```
3. AI-Link Logs
AI operations and token usage tracking.
```yaml
logging:
  types:
    aiLink:
      enabled: true
      console: true
      file: false
      level: info
      logPrompt: false   # Include prompts in logs
      logResponse: false # Include responses in logs
```
Logged information:
- Target and provider
- Model used
- Token usage (prompt, completion, total)
- Request duration
- Optional: Full prompt and response
Example:
```json
{
  "timestamp": "2024-01-15T10:30:00.123Z",
  "level": "info",
  "message": "AI request completed",
  "module": "aiLink",
  "target": "openai",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "promptTokens": 150,
  "completionTokens": 250,
  "totalTokens": 400,
  "duration": 1234
}
```
Only enable logPrompt and logResponse for debugging. These logs can contain sensitive data and consume significant storage.
4. Proxy Logs
HTTP proxy operations.
```yaml
logging:
  types:
    proxy:
      enabled: true
      console: true
      file: false
      level: info
```
Logged information:
- Proxy target ID and URL
- Request path
- Response status
- Request duration
Example:
```json
{
  "timestamp": "2024-01-15T10:30:00.123Z",
  "level": "info",
  "message": "Proxy request completed",
  "module": "proxy",
  "target": "deepl",
  "targetUrl": "https://api.deepl.com/v2/translate",
  "path": "/",
  "statusCode": 200,
  "duration": 456
}
```
5. YouTube Logs
YouTube API operations.
```yaml
logging:
  types:
    youtube:
      enabled: true
      console: true
      file: false
      level: info
```
Logged information:
- Account ID
- Operation type (auth, upload, etc.)
- Success/failure status
- Video ID (for uploads)
Example:
```json
{
  "timestamp": "2024-01-15T10:30:00.123Z",
  "level": "info",
  "message": "Video uploaded successfully",
  "module": "youtube",
  "account": "main",
  "videoId": "dQw4w9WgXcQ",
  "title": "My Video"
}
```
6. Tools Logs
Tool operations (content extraction, etc.).
```yaml
logging:
  types:
    tools:
      enabled: true
      console: true
      file: false
      level: info
```
Logged information:
- Tool ID and type
- Operation parameters
- Success/failure status
- Processing time
Example:
```json
{
  "timestamp": "2024-01-15T10:30:00.123Z",
  "level": "info",
  "message": "Content extraction completed",
  "module": "tools",
  "tool": "extract",
  "url": "https://example.com/article",
  "duration": 789
}
```
File Logging
Configuration
```yaml
logging:
  logDirectory: logs
  filePrefix: app
  fileRotation: true
  maxFileSize: 10m
  maxFiles: 14d
```
File Structure
When file logging is enabled for a log type, files are created with this pattern:
```text
logs/
├── app-access-2024-01-15.log
├── app-error-2024-01-15.log
├── app-aiLink-2024-01-15.log
└── ...
```
File Rotation
With rotation enabled:
- Files rotate daily
- Files rotate when they reach `maxFileSize`
- Old files are deleted after the `maxFiles` period
- Files are compressed after rotation (gzip)
Example `maxFileSize` values:
- `10m` - 10 megabytes
- `100m` - 100 megabytes
- `1g` - 1 gigabyte
Example `maxFiles` values:
- `7d` - 7 days
- `14d` - 14 days
- `30d` - 30 days
- `10` - keep 10 files
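With a numeric `maxFiles`, the worst-case disk footprint per log type (before gzip compression) is simply `maxFileSize` multiplied by the file count; with a date-based value, the bound depends on your daily log volume instead. A quick sanity check, assuming `maxFileSize: 100m` and `maxFiles: 10`:

```bash
# Worst-case retained size for one log type, in MB: 10 files x 100 MB each.
echo $((10 * 100))  # prints 1000
```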
Sensitive Data Filtering
ProActions Hub automatically filters sensitive information from logs.
Configuration
```yaml
logging:
  sensitiveData:
    - password
    - token
    - apiKey
    - authorization
    - key
    - secret
```
How It Works
Sensitive values are replaced with [FILTERED]:
Before filtering:
```json
{
  "apiKey": "sk-abc123xyz",
  "password": "myPassword123"
}
```
After filtering:
```json
{
  "apiKey": "[FILTERED]",
  "password": "[FILTERED]"
}
```
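The masking behavior can be sketched with a `jq` one-liner. This is purely an illustration of keyword-based filtering, not ProActions Hub's internal implementation (which runs inside Winston):

```bash
# Replace the value of any top-level field whose name matches a sensitive keyword.
jq 'with_entries(if .key | test("password|token|apiKey"; "i")
    then .value = "[FILTERED]" else . end)' <<'EOF'
{"apiKey": "sk-abc123xyz", "password": "myPassword123", "model": "gpt-4o-mini"}
EOF
```

Fields like `model` pass through untouched.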
Custom Filters
Add custom patterns:
```yaml
logging:
  sensitiveData:
    - password
    - token
    - apiKey
    - ssn
    - creditCard
    - privateKey
```
Stack Trace Logging
Configuration
```yaml
logging:
  logTrace: false
```
When true, error logs include full stack traces.
Stack traces may expose sensitive information about your application's internal structure. Only enable for debugging.
Complete Configuration Example
Production Configuration
```yaml
logging:
  format: json
  defaultLogLevel: info
  logDirectory: logs
  filePrefix: proactions
  fileRotation: true
  maxFileSize: 100m
  maxFiles: 30d
  logTrace: false
  sensitiveData:
    - password
    - token
    - apiKey
    - authorization
    - key
    - secret
  types:
    access:
      enabled: true
      console: false
      file: true
      level: info
    error:
      enabled: true
      console: true
      file: true
      level: error
    aiLink:
      enabled: true
      console: false
      file: true
      level: info
      logPrompt: false
      logResponse: false
    proxy:
      enabled: true
      console: false
      file: true
      level: info
    youtube:
      enabled: true
      console: false
      file: true
      level: info
    tools:
      enabled: true
      console: false
      file: true
      level: info
```
Development Configuration
```yaml
logging:
  format: text
  text_format: "${timestamp} [${level}] [${module}]: ${message}"
  defaultLogLevel: debug
  logTrace: true
  types:
    access:
      enabled: true
      console: true
      file: false
      level: debug
    error:
      enabled: true
      console: true
      file: false
      level: error
    aiLink:
      enabled: true
      console: true
      file: false
      level: debug
      logPrompt: true
      logResponse: true
    proxy:
      enabled: true
      console: true
      file: false
      level: debug
    youtube:
      enabled: true
      console: true
      file: false
      level: debug
    tools:
      enabled: true
      console: true
      file: false
      level: debug
```
Viewing Logs
Console Logs
Podman:
```bash
podman logs -f proactionshub
```
Docker:
```bash
docker compose logs -f
```
File Logs
```bash
# View all logs
tail -f logs/app-*.log

# View specific log type
tail -f logs/app-error-*.log

# View logs with jq (for JSON)
tail -f logs/app-aiLink-*.log | jq '.'
```
Filter Logs by Level
Using `jq` (JSON format):
```bash
tail -f logs/app-access-*.log | jq 'select(.level == "error")'
```
Using `grep` (text format):
```bash
tail -f logs/app-access-*.log | grep '\[error\]'
```
Log Aggregation
ELK Stack
For JSON logs, configure Filebeat to ship logs to Elasticsearch:
```yaml
# filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/logs/app-*.log
    json.keys_under_root: true
    json.add_error_key: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```
Splunk
Configure Splunk Universal Forwarder to monitor log directory:
```ini
[monitor:///path/to/logs]
sourcetype = _json
index = proactions
```
CloudWatch
For AWS deployments, use CloudWatch Logs agent to ship logs.
Performance Considerations
High-Traffic Environments
1. Disable console logging - Write to files only
   ```yaml
   console: false
   file: true
   ```
2. Use JSON format - More efficient to parse
   ```yaml
   format: json
   ```
3. Increase the log level - Reduce log volume
   ```yaml
   defaultLogLevel: warn
   ```
4. Disable verbose AI logging - Don't log prompts/responses
   ```yaml
   logPrompt: false
   logResponse: false
   ```
Storage Management
1. Configure rotation - Prevent disk space issues
   ```yaml
   fileRotation: true
   maxFileSize: 100m
   maxFiles: 14d
   ```
2. Archive old logs - Move to cold storage
   ```bash
   find logs/ -name "*.gz" -mtime +30 -exec mv {} /archive/ \;
   ```
Monitoring Token Usage
Enable AI-Link logging to track token consumption:
```yaml
logging:
  types:
    aiLink:
      enabled: true
      file: true
```
Query logs for token statistics:
```bash
# Total tokens across all requests (jq)
cat logs/app-aiLink-*.log | jq -s 'map(.totalTokens) | add'

# Tokens by target
cat logs/app-aiLink-*.log | jq -s 'group_by(.target) | .[] | {target: .[0].target, total: ([.[].totalTokens] | add)}'
```
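Before pointing a query at real files, it can be sanity-checked against inline sample records (values below are made up):

```bash
# Sum totalTokens over a small sample of aiLink records.
jq -s 'map(.totalTokens) | add' <<'EOF'
{"target": "openai", "totalTokens": 400}
{"target": "openai", "totalTokens": 100}
EOF
```

Expected result: `500`.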
Compliance and Auditing
Audit Trail
Access logs provide a complete audit trail:
- Who made requests (username)
- When requests were made (timestamp)
- What endpoints were accessed (URL)
- Request outcomes (status code)
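For example, a per-user request count falls out of the access logs directly. The records below are illustrative; point the query at `logs/app-access-*.log` in practice:

```bash
# Requests per authenticated user.
jq -s 'group_by(.username) | map({user: .[0].username, requests: length})' <<'EOF'
{"username": "john.doe", "url": "/v1/chat/completions"}
{"username": "john.doe", "url": "/v1/models"}
{"username": "jane.doe", "url": "/v1/chat/completions"}
EOF
```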
Data Retention
Configure retention based on compliance requirements:
```yaml
logging:
  maxFiles: 365d  # 1 year retention
```
Sensitive Data
Ensure sensitive data filtering is enabled:
```yaml
logging:
  sensitiveData:
    - password
    - token
    - apiKey
    - ssn
    - creditCard
```
Best Practices
Configuration
- Use JSON in production - Easier to parse and query
- Set appropriate log levels - Balance detail vs. volume
- Enable file rotation - Prevent disk space issues
- Configure retention - Meet compliance requirements
Security
- Filter sensitive data - Prevent credential leaks
- Secure log files - Appropriate file permissions
- Disable stack traces - Don't expose internal structure
- Don't log prompts/responses - Unless debugging
Performance
- Disable console in production - Reduce overhead
- Use async logging - Winston handles this automatically
- Limit log levels - Higher levels for production
- Monitor log volume - Adjust levels if needed
Operations
- Monitor disk usage - Ensure logs don't fill disk
- Set up log rotation - Automatic cleanup
- Integrate with aggregation - Central log management
- Create dashboards - Visualize metrics
Troubleshooting
Logs Not Appearing
Issue: No logs in console or files
Solutions:
- Check log type is enabled
- Verify log level allows messages through
- Check file permissions for log directory
- Ensure log directory exists and is writable
Too Many Logs
Issue: Excessive log volume
Solutions:
- Increase log level (info → warn → error)
- Disable verbose AI logging
- Disable access logs for health checks
- Filter noisy endpoints
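As a quick illustration of filtering noisy endpoints at read time, health-check entries can be dropped with `grep -v`. The `/health` path is an assumption; substitute your deployment's health endpoint and real log files:

```bash
# Hide health-check records when reviewing JSON access logs.
grep -v '"url": "/health"' <<'EOF'
{"url": "/health", "statusCode": 200}
{"url": "/v1/chat/completions", "statusCode": 200}
EOF
```

Only the `/v1/chat/completions` record survives the filter.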
Log Files Growing Too Large
Issue: Disk space consumed by logs
Solutions:
- Enable file rotation
- Reduce maxFileSize
- Reduce retention period
- Compress old logs
Can't Parse JSON Logs
Issue: Invalid JSON in logs
Solutions:
- Verify `format` is set to `json`
- Check for multi-line stack traces (set `logTrace: false`)
- Ensure Winston is properly initialized
Next Steps
- Configure Security Hardening for production
- Review Operations Guide for log monitoring procedures