Platform Logging System
The MyNATCA Platform implements a structured JSON logging system designed for OpenSearch compatibility and centralized log aggregation across the entire ecosystem.
Overview
As of October 2025, a comprehensive logging overhaul has replaced all of the Platform's console.log calls with structured JSON logging. This enables:
- OpenSearch Integration: Clean JSON output for log aggregation and analysis
- Consistent Format: Uniform log structure across all Platform modules
- Contextual Information: Rich context objects for filtering and debugging
- Error Tracking: Full stack traces with error codes and metadata
- Multi-Service Correlation: Service field enables cross-service log analysis
Logger Implementation
Core Logger Module
The Platform uses a centralized logger module (lib/logger.js) that all routes and modules import:
```js
// lib/logger.js
// Centralized structured logger: each call emits a single JSON line.
const logger = {
  info: (message, context = {}) => {
    console.log(JSON.stringify({
      timestamp: new Date().toISOString(),
      level: 'info',
      service: 'platform',
      message,
      context
    }));
  },

  error: (message, error = null) => {
    const logEntry = {
      timestamp: new Date().toISOString(),
      level: 'error',
      service: 'platform',
      message
    };
    // Capture the full error shape (message, stack, code, name) when provided.
    if (error) {
      logEntry.error = {
        message: error.message,
        stack: error.stack,
        code: error.code,
        name: error.name
      };
    }
    console.error(JSON.stringify(logEntry));
  },

  warn: (message, context = {}) => {
    console.warn(JSON.stringify({
      timestamp: new Date().toISOString(),
      level: 'warn',
      service: 'platform',
      message,
      context
    }));
  },

  debug: (message, context = {}) => {
    // Debug output is suppressed outside development.
    if (process.env.NODE_ENV === 'development') {
      console.debug(JSON.stringify({
        timestamp: new Date().toISOString(),
        level: 'debug',
        service: 'platform',
        message,
        context
      }));
    }
  }
};

module.exports = logger;
```

Usage in Routes
All Platform routes and middleware use the structured logger:
```js
// routes/verify.js
const logger = require('../lib/logger');

app.post('/api/verify/complete', async (req, res) => {
  // Pull the ID out once so both log calls can reference it.
  const { verificationId } = req.body;
  try {
    logger.info('Verification completion started', {
      verificationId,
      userId: req.session.user?.sub
    });

    // ... verification logic (loads member and discordUser)

    logger.info('Verification completed successfully', {
      verificationId,
      memberId: member.id,
      discordId: discordUser.id
    });
    res.json({ success: true });
  } catch (error) {
    logger.error('Verification completion failed', error);
    res.status(500).json({ error: 'Verification failed' });
  }
});
```

Log Structure
Standard Log Format
All logs follow a consistent JSON structure:
```json
{
  "timestamp": "2025-10-01T12:34:56.789Z",
  "level": "info",
  "service": "platform",
  "message": "Descriptive log message",
  "context": {
    "key": "value",
    "additionalData": "optional"
  }
}
```

Field Definitions
| Field | Type | Description |
|---|---|---|
| `timestamp` | string (ISO 8601) | Exact time the log entry was created |
| `level` | string | Log level: `info`, `warn`, `error`, or `debug` |
| `service` | string | Always `"platform"` for Platform logs |
| `message` | string | Human-readable log message |
| `context` | object | Optional contextual data (user IDs, request IDs, etc.) |
| `error` | object | Present only on error logs; contains `message`, `stack`, `code`, and `name` |
Log Examples
Successful Authentication
```json
{
  "timestamp": "2025-10-01T10:15:30.123Z",
  "level": "info",
  "service": "platform",
  "message": "User authenticated successfully",
  "context": {
    "userId": "auth0|123456",
    "memberNumber": "12345",
    "natcaId": "67890",
    "loginMethod": "oauth2"
  }
}
```

API Request Handling
```json
{
  "timestamp": "2025-10-01T10:16:45.456Z",
  "level": "info",
  "service": "platform",
  "message": "Member data fetched",
  "context": {
    "memberId": "67890",
    "requestPath": "/api/member/67890",
    "responseTime": 125
  }
}
```

Error with Stack Trace
```json
{
  "timestamp": "2025-10-01T10:17:20.789Z",
  "level": "error",
  "service": "platform",
  "message": "Database query failed",
  "error": {
    "message": "Connection timeout",
    "stack": "Error: Connection timeout\n at Timeout._onTimeout...",
    "code": "ETIMEDOUT",
    "name": "Error"
  }
}
```

Sync Operation
```json
{
  "timestamp": "2025-10-01T02:00:00.000Z",
  "level": "info",
  "service": "platform",
  "message": "Data sync completed",
  "context": {
    "syncType": "members",
    "recordsProcessed": 5000,
    "recordsUpdated": 47,
    "duration": 58234,
    "status": "success"
  }
}
```

OpenSearch Integration
Log Shipping Configuration
Using Filebeat
```yaml
# filebeat.yml
filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/*/*.log'
    json.keys_under_root: true
    json.add_error_key: true
    json.message_key: message

processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
  - drop_fields:
      fields: ["agent", "ecs", "host.name"]
      ignore_missing: true

output.elasticsearch:
  hosts: ["opensearch:9200"]
  index: "platform-logs-%{+yyyy.MM.dd}"
  username: "admin"
  password: "${OPENSEARCH_PASSWORD}"
```

Using Fluentd
```conf
# fluent.conf
<source>
  @type tail
  path /var/log/platform/*.log
  pos_file /var/log/td-agent/platform.pos
  tag platform.*
  format json
  time_key timestamp
  time_format %Y-%m-%dT%H:%M:%S.%LZ
</source>

<match platform.**>
  @type opensearch
  host opensearch
  port 9200
  index_name platform-logs
  type_name _doc
  logstash_format true
  logstash_prefix platform-logs
</match>
```

OpenSearch Index Mapping
Create an index pattern in OpenSearch with the following field mappings:
```json
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "level": { "type": "keyword" },
      "service": { "type": "keyword" },
      "message": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      },
      "context": { "type": "object", "dynamic": true },
      "error": {
        "properties": {
          "message": { "type": "text" },
          "stack": { "type": "text" },
          "code": { "type": "keyword" },
          "name": { "type": "keyword" }
        }
      }
    }
  }
}
```
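Because Filebeat writes to date-suffixed indices (platform-logs-YYYY.MM.dd), the mapping is most useful as an index template so every new daily index inherits it. A sketch using OpenSearch's standard index template API (host, template name, and credentials are illustrative):

```js
// Hypothetical setup script (Node 18+, ES module syntax for top-level await);
// adjust host and credentials to your environment.
const template = {
  index_patterns: ['platform-logs-*'],
  template: {
    mappings: { /* paste the contents of "mappings" shown above */ }
  }
};

const res = await fetch('http://opensearch:9200/_index_template/platform-logs', {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Basic ' +
      Buffer.from(`admin:${process.env.OPENSEARCH_PASSWORD}`).toString('base64')
  },
  body: JSON.stringify(template)
});
console.log(res.status, await res.json());
```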
OpenSearch Dashboards
Create visualization dashboards for:
- Error Rate Monitoring: Track error frequency by type and endpoint
- Response Time Analysis: Analyze API response times and identify bottlenecks
- Authentication Metrics: Monitor login success/failure rates
- Sync Performance: Track data synchronization duration and success rates
- User Activity: Analyze user authentication and API usage patterns
Example OpenSearch Query - Error Rate
```json
{
  "query": {
    "bool": {
      "must": [
        { "term": { "service": "platform" } },
        { "term": { "level": "error" } },
        { "range": { "timestamp": { "gte": "now-1h" } } }
      ]
    }
  },
  "aggs": {
    "errors_over_time": {
      "date_histogram": {
        "field": "timestamp",
        "fixed_interval": "5m"
      }
    }
  }
}
```
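For the response-time dashboard, a similar query can aggregate percentiles over the `context.responseTime` field shown in the request-handling example above (a sketch; adjust the field path to your context schema):

```json
{
  "query": {
    "bool": {
      "must": [
        { "term": { "service": "platform" } },
        { "exists": { "field": "context.responseTime" } },
        { "range": { "timestamp": { "gte": "now-1h" } } }
      ]
    }
  },
  "aggs": {
    "response_time_percentiles": {
      "percentiles": {
        "field": "context.responseTime",
        "percents": [50, 90, 99]
      }
    }
  }
}
```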
Logging Best Practices
When to Log
DO Log:
- User authentication events (success and failure)
- API requests with response times (see the middleware sketch below)
- Database operations (connections, queries, errors)
- Sync operations with statistics
- Configuration changes
- Error conditions with full context
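Request logging with response times can be centralized in one middleware rather than repeated per route. A minimal Express sketch (the middleware itself is illustrative, not part of the Platform codebase):

```js
// Hypothetical request-logging middleware; register with app.use(requestLogger).
const logger = require('../lib/logger');

function requestLogger(req, res, next) {
  const startTime = Date.now();
  // 'finish' fires once the response has been fully handed off.
  res.on('finish', () => {
    logger.info('Request completed', {
      method: req.method,
      requestPath: req.originalUrl,
      statusCode: res.statusCode,
      responseTime: Date.now() - startTime
    });
  });
  next();
}

module.exports = requestLogger;
```

Registering it early covers every route without adding per-route log calls.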
DON'T Log:
- Sensitive data (passwords, tokens, API keys)
- Personally identifiable information (PII) without sanitization (see the redaction sketch below)
- Full request/response bodies (log IDs instead)
- High-frequency debug statements in production
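A simple way to enforce the PII and secrets rules is to redact known-sensitive keys before a context object reaches the logger. A minimal sketch (the key list and helper name are assumptions):

```js
// Hypothetical redaction helper; extend SENSITIVE_KEYS to match your data.
const SENSITIVE_KEYS = ['password', 'token', 'apiKey', 'authorization', 'ssn'];

function sanitizeContext(context) {
  const clean = {};
  for (const [key, value] of Object.entries(context)) {
    clean[key] = SENSITIVE_KEYS.includes(key) ? '[REDACTED]' : value;
  }
  return clean;
}

// Usage: logger.info('Login attempt', sanitizeContext({ email, password }));
```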
Context Guidelines
Always include relevant context in log entries:
```js
// GOOD: contextual logging
logger.info('Member verification completed', {
  memberId: member.id,
  discordId: discordUser.id,
  rolesAssigned: roles.length,
  duration: Date.now() - startTime
});

// BAD: no context
logger.info('Verification completed');
```

Error Logging
Always pass the error object to capture stack traces:
```js
// GOOD: pass the error object so the stack trace is captured
try {
  await performOperation();
} catch (error) {
  logger.error('Operation failed', error);
  throw error;
}

// BAD: string-only message loses the stack trace
try {
  await performOperation();
} catch (error) {
  logger.error(`Operation failed: ${error.message}`);
  throw error;
}
```

Migration from console.log
The Platform logging system was migrated from unstructured console.log calls to structured JSON logging in October 2025. Key changes:
Before (Unstructured)
```js
console.log('✅ User authenticated:', user.name);
console.error('❌ Database error:', error);
```

After (Structured)
```js
logger.info('User authenticated successfully', {
  userId: user.sub,
  name: user.name
});
logger.error('Database operation failed', error);
```

Migration Checklist
- Replace all `console.log()` calls with `logger.info()`
- Replace all `console.error()` calls with `logger.error()`
- Replace all `console.warn()` calls with `logger.warn()`
- Remove emoji and decorative characters from log messages
- Add contextual information to all log calls
- Ensure error objects are passed to `logger.error()`
- Test that log output is valid JSON (see the sketch below)
- Configure log shipping to OpenSearch
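The "valid JSON" item can be checked by capturing a log line and parsing it back. A minimal sketch using Node's built-in test runner (Node 18+; the interception approach is illustrative, not existing Platform test code):

```js
// test/logger.test.js — adjust the require path to your layout.
const { test } = require('node:test');
const assert = require('node:assert');
const logger = require('../lib/logger');

test('logger.info emits one parseable JSON line', () => {
  const lines = [];
  const original = console.log;
  console.log = (line) => lines.push(line); // intercept the logger's stdout write
  try {
    logger.info('Test message', { requestId: 'abc123' });
  } finally {
    console.log = original; // always restore the real console
  }

  const entry = JSON.parse(lines[0]); // throws if the output is not valid JSON
  assert.strictEqual(entry.level, 'info');
  assert.strictEqual(entry.service, 'platform');
  assert.strictEqual(entry.context.requestId, 'abc123');
});
```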
Performance Considerations
JSON Serialization
Structured logging adds minimal overhead:
- ~0.1ms per log entry for JSON serialization
- Negligible impact on request handling
- stdout writes are cheap, though Node may flush them synchronously or asynchronously depending on whether stdout is attached to a file, TTY, or pipe
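To sanity-check the serialization figure on your own hardware, a rough micro-benchmark (absolute numbers vary by machine and entry size):

```js
// Rough micro-benchmark of JSON.stringify cost for a typical log entry.
const entry = {
  timestamp: new Date().toISOString(),
  level: 'info',
  service: 'platform',
  message: 'Benchmark message',
  context: { userId: 'auth0|123456', requestPath: '/api/member/67890' }
};

const iterations = 100_000;
const start = process.hrtime.bigint();
for (let i = 0; i < iterations; i++) {
  JSON.stringify(entry);
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`avg ${(elapsedMs / iterations).toFixed(4)} ms per entry`);
```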
Log Volume Management
Monitor and manage log volume:
- Set appropriate log levels per environment
- Use the `debug` level sparingly
- Archive old logs to cost-effective storage
Production Configuration
```js
// Assumes the logger is extended with a mutable minLevel that each method
// checks before emitting (see the sketch below); the base module above
// gates only debug output, via NODE_ENV.

// Production: info level and above
if (process.env.NODE_ENV === 'production') {
  logger.minLevel = 'info';
}

// Development: all levels including debug
if (process.env.NODE_ENV === 'development') {
  logger.minLevel = 'debug';
}
```
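One way to honor `minLevel` inside the logger is a numeric rank per level with an early-return guard in each method. A minimal sketch (the ranking and guard are assumptions, not the shipped module):

```js
// Hypothetical level gate for lib/logger.js; lower rank = higher severity.
const LEVEL_RANK = { error: 0, warn: 1, info: 2, debug: 3 };

let minLevel = process.env.NODE_ENV === 'production' ? 'info' : 'debug';

function shouldLog(level) {
  return LEVEL_RANK[level] <= LEVEL_RANK[minLevel];
}

// Each logger method then bails out early, e.g.:
// info: (message, context = {}) => {
//   if (!shouldLog('info')) return;
//   console.log(JSON.stringify({ ... }));
// }
```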
The structured logging system gives operations teams comprehensive observability across the Platform while maintaining performance and security standards.