Deployment Guide

Complete guide for deploying and configuring the Intercom integration in production and development environments.

Prerequisites

Before deploying the Intercom integration, ensure you have:

  • ✅ Intercom account with API access
  • ✅ Intercom Access Token (with read/write permissions)
  • ✅ Supabase project with members, regions, facilities, and positions tables
  • ✅ MySQL database (for webhook fallback and audit script)
  • ✅ DigitalOcean App Platform account (for production)

Environment Variables

Required Variables

Add the following environment variables to your deployment:

# Intercom API
INTERCOM_ACCESS_TOKEN=your_intercom_access_token
 
# Supabase (for member data)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your_supabase_anon_key
 
# MySQL (for webhook fallback and audit script)
MYSQL_HOST=your_mysql_host
MYSQL_USER=your_mysql_user
MYSQL_PASS=your_mysql_password
MYSQL_DB=your_mysql_database
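
If any of these are unset, the sync scripts fail with "undefined" errors (see Troubleshooting Deployment below). As a minimal sketch, a startup guard like the following (a hypothetical helper, not part of the existing codebase) can fail fast instead:

// validate-env.js (hypothetical startup guard; variable names match the list above)
const REQUIRED = [
  'INTERCOM_ACCESS_TOKEN',
  'SUPABASE_URL', 'SUPABASE_KEY',
  'MYSQL_HOST', 'MYSQL_USER', 'MYSQL_PASS', 'MYSQL_DB',
];

const missing = REQUIRED.filter((name) => !process.env[name]);
if (missing.length) {
  console.error(`Missing required environment variables: ${missing.join(', ')}`);
  process.exit(1);
}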

Getting Intercom Access Token

  1. Login to Intercom → Settings → Developers

  2. Create New App (if needed):

    • Name: "MyNATCA Platform Integration"
    • Description: "Member data sync and contact enrichment"
  3. Generate Access Token:

    • Go to Authentication section
    • Click "Access Token"
    • Copy token (starts with dG9r:...)
  4. Required Permissions:

    • ✅ Read contacts
    • ✅ Write contacts
    • ✅ Read conversations
    • ✅ Read users

Storing Secrets

DigitalOcean App Platform

# Using doctl CLI: define INTERCOM_ACCESS_TOKEN as an encrypted env var
# in .do/app.yaml, then push the updated spec
doctl apps update <app-id> --spec .do/app.yaml
 
# Or via dashboard
# Settings → Environment Variables → Add Variable
# Name: INTERCOM_ACCESS_TOKEN
# Value: dG9r:your_token
# Scope: All components
# Encrypt: Yes

Docker/Kubernetes

# docker-compose.yml
services:
  platform:
    environment:
      - INTERCOM_ACCESS_TOKEN=${INTERCOM_ACCESS_TOKEN}
    env_file:
      - .env.production
 
# .env.production (never commit!)
INTERCOM_ACCESS_TOKEN=dG9r:your_token

Heroku

heroku config:set INTERCOM_ACCESS_TOKEN=dG9r:your_token -a mynatca-platform

Production Deployment

In-App Cron Scheduling

The Intercom integration now uses in-app cron scheduling (node-cron) instead of external DigitalOcean job components. This provides:

  • Cost Savings: Eliminates $5/month per job component
  • Simplified Management: No separate job configuration needed
  • Real-Time Monitoring: Web UI for status and manual triggers
  • Automatic Execution: Jobs run within the main application server

Cron Schedule:

  • Run Time: Daily at 3:00 AM EST / 8:00 AM UTC
  • Cron Expression: 0 8 * * *
  • Execution: The sync handler is dispatched asynchronously with setImmediate so the scheduler tick is never blocked (see the sketch below)
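
As a minimal sketch, the in-app registration with node-cron might look like the following (the module path and runIntercomSync wrapper are assumptions, not the platform's actual code; the schedule assumes the server clock is UTC):

// Sketch of in-app scheduling with node-cron; runIntercomSync is a hypothetical wrapper
const cron = require('node-cron');
const { runIntercomSync } = require('./sync/intercom/daily-sync'); // assumed export

cron.schedule('0 8 * * *', () => {
  // Hand off to the event loop so the scheduler tick is never blocked
  setImmediate(async () => {
    try {
      await runIntercomSync();
    } catch (err) {
      console.error('Intercom sync failed:', err);
    }
  });
});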

Deployment Steps

  1. Configure Environment Variables:

    • Go to DigitalOcean App Platform dashboard
    • Settings → Environment Variables
    • Add INTERCOM_ACCESS_TOKEN with encryption enabled
  2. Verify Cron Job Status:

    • Navigate to https://platform.natca.org/cron
    • Log in with Auth0 credentials
    • Verify "Intercom Sync" job is listed with status
    • Check next scheduled run time
  3. Manual Testing:

    • In the /cron UI, click "Trigger" next to "Intercom Sync"
    • Monitor execution status in real-time
    • Verify completion and check for errors
  4. Monitor Logs:

    # View platform logs (includes cron execution)
    doctl apps logs <app-id> --component platform --follow
     
    # Filter for Intercom sync events
    doctl apps logs <app-id> --component platform | grep "Intercom"

Docker Deployment

# Dockerfile
FROM node:18-alpine
 
WORKDIR /app
 
# Install dependencies
COPY package*.json ./
RUN npm ci --production
 
# Copy application code
COPY . .
 
# Expose port
EXPOSE 1300
 
# Start server (includes in-app cron jobs)
CMD ["node", "server.js"]
# docker-compose.yml
version: '3.8'
 
services:
  platform:
    build: .
    ports:
      - "1300:1300"
    environment:
      - NODE_ENV=production
      - INTERCOM_ACCESS_TOKEN=${INTERCOM_ACCESS_TOKEN}
      - SUPABASE_URL=${SUPABASE_URL}
      - SUPABASE_KEY=${SUPABASE_KEY}
      - MYSQL_HOST=${MYSQL_HOST}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASS=${MYSQL_PASS}
      - MYSQL_DB=${MYSQL_DB}
    volumes:
      - ./logs:/tmp
    restart: unless-stopped

Deploy:

docker-compose up -d
docker-compose logs -f platform

Note: Cron jobs are now managed in-app. No separate sync-intercom container needed.

Kubernetes Deployment

# k8s/platform-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform
  namespace: mynatca
spec:
  replicas: 2
  selector:
    matchLabels:
      app: platform
  template:
    metadata:
      labels:
        app: platform
    spec:
      containers:
      - name: platform
        image: mynatca/platform:latest
        ports:
        - containerPort: 1300
        env:
        - name: NODE_ENV
          value: production
        - name: INTERCOM_ACCESS_TOKEN
          valueFrom:
            secretKeyRef:
              name: intercom-secrets
              key: access-token
        - name: SUPABASE_URL
          valueFrom:
            configMapKeyRef:
              name: platform-config
              key: supabase-url
        - name: SUPABASE_KEY
          valueFrom:
            secretKeyRef:
              name: supabase-secrets
              key: anon-key
        - name: MYSQL_HOST
          valueFrom:
            configMapKeyRef:
              name: platform-config
              key: mysql-host
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: user
        - name: MYSQL_PASS
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: password
        - name: MYSQL_DB
          valueFrom:
            configMapKeyRef:
              name: platform-config
              key: mysql-db

Deploy:

kubectl apply -f k8s/platform-deployment.yaml
kubectl get deployments -n mynatca
kubectl logs -f deployment/platform -n mynatca

Note: Cron jobs run in-app automatically when the platform server starts. No separate CronJob resource needed.

Development Setup

Local Development

  1. Clone Repository:

    git clone https://github.com/natca_itc/platform.git
    cd platform
  2. Install Dependencies:

    npm install
  3. Configure Environment:

    cp .env.example .env
     
    # Edit .env with your credentials
    INTERCOM_ACCESS_TOKEN=dG9r:your_dev_token
    SUPABASE_URL=https://your-dev-project.supabase.co
    SUPABASE_KEY=your_dev_anon_key
    MYSQL_HOST=localhost
    MYSQL_USER=root
    MYSQL_PASS=password
    MYSQL_DB=natca_dev
  4. Test Daily Sync:

    # Dry run with limited members
    node sync/intercom/daily-sync.js --dry-run --limit=10
     
    # Actual sync (use dev Intercom workspace)
    npm run sync:intercom
  5. Test Audit Script:

    # Dry run with limited contacts
    node sync/intercom/audit.js --dry-run --limit=10
     
    # Process single contact
    node sync/intercom/audit.js --contact-id=67eb0a6e8f72a828ba21c342
  6. Test Webhook Locally:

    # Start platform server
    npm run dev
     
    # In another terminal, use ngrok
    ngrok http 1300
     
    # Configure Intercom webhook with ngrok URL
    # https://abc123.ngrok.io/api/intercom/webhook
     
    # Trigger test webhook in Intercom dashboard

Development Best Practices

  1. Use Separate Intercom Workspace:

    • Create "MyNATCA Dev" workspace in Intercom
    • Use separate access token
    • Prevents accidental production data changes
  2. Use Test Data:

    # Sync limited members only
    node sync/intercom/daily-sync.js --limit=100
     
    # Use --dry-run to preview changes
    node sync/intercom/audit.js --dry-run
  3. Monitor Logs:

    # Tail audit log
    tail -f /tmp/intercom_audit.log
     
    # Watch platform logs
    npm run dev | bunyan

Intercom Setup Checklist

Before deploying the integration, ensure the following are configured in Intercom:

1. Webhook Configuration

Location: Intercom Developer Hub → Webhooks

  1. Create Webhook:

    • Webhook URL: https://platform.natca.org/api/intercom/webhook
    • Method: POST
    • Version: 2.11
  2. Enable Topics:

    • conversation.user.created - Auto-enrichment when member messages
    • conversation.user.replied - Auto-enrichment on replies
    • conversation_part.tag.created - Manual re-sync via tag
  3. Test Webhook:

    • Click "Send test webhook"
    • Verify Platform logs show "Intercom webhook received" (a minimal handler sketch follows below)
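
For reference, a minimal shape for the /api/intercom/webhook route is sketched below. The topic field comes from Intercom's notification payload; the route wiring and the enrichment entry point are illustrative assumptions, not the platform's actual implementation:

// Illustrative Express handler for the webhook endpoint; helper names are hypothetical
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/intercom/webhook', (req, res) => {
  const topic = req.body && req.body.topic;
  console.log('Intercom webhook received:', topic);

  // Acknowledge immediately; enrichment work continues asynchronously
  res.sendStatus(200);

  if (topic === 'conversation.user.created' || topic === 'conversation.user.replied') {
    // enrichContactFromConversation(req.body); // hypothetical enrichment entry point
  }
});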

2. Conversation Data Attribute

Location: Settings → Data → Conversations

Required Attribute: Member Number Verification

  1. Click Add attribute
  2. Configure:
    • Name: "Member Number Verification"
    • Type: Text
    • Description: "Member number provided by member for verification and manual re-sync"
  3. Click Save

Purpose: Stores member number for tag-based manual re-sync workflow

3. Tag Setup

Location: Settings → Tags

Required Tag: mynatca-sync

  1. Click Add tag
  2. Enter tag name: "mynatca-sync" (lowercase, no spaces)
  3. Click Save

Purpose: Triggers manual member enrichment when added to conversations

4. API Access Token

Location: Settings → Developers → Authentication

  1. Create or verify existing token:

    • Token should start with dG9r:
    • Must have read/write permissions
  2. Required Permissions:

    • ✅ Read contacts
    • ✅ Write contacts
    • ✅ Read conversations
    • ✅ Write conversations (for tag removal)
  3. Store token securely:

    • Add to environment variables as INTERCOM_ACCESS_TOKEN
    • Never commit to git

Initial Setup Checklist

First Time Deployment

  • Environment Variables Configured

    • INTERCOM_ACCESS_TOKEN set
    • SUPABASE_URL and SUPABASE_KEY set
    • MYSQL credentials set
  • Supabase Data Synced

    • Members synced (every 4 hours)
    • Regions synced (daily at 2:00 AM)
    • Facilities synced (daily at 2:00 AM)
    • Positions synced (daily at 2:15 AM)
  • Run Initial Audit (one-time setup)

    # 1. Preview audit with limited contacts
    node sync/intercom/audit.js --dry-run --limit=100
     
    # 2. Review logs and ensure behavior is expected
    cat /tmp/intercom_audit.log | jq '.'
     
    # 3. Run full audit
    node sync/intercom/audit.js
     
    # 4. Monitor progress
    tail -f /tmp/intercom_audit.log
  • Verify Cron Job

    • Access /cron UI (requires Auth0 login)
    • Verify Intercom Sync job is listed
    • Check schedule: "0 8 * * *" (3:00 AM EST / 8:00 AM UTC)
    • Test manual trigger via UI
    • Verify logs after first run
  • Setup Webhooks

    • Configure webhook URL in Intercom
    • Select topics: conversation.user.created, conversation.user.replied, conversation_part.tag.created
    • Test webhook with Intercom test event
    • Verify Platform receives and processes webhook
  • Setup Tag-Based Re-Sync (for support team)

    • Create conversation data attribute: "Member Number Verification" (Text type)
    • Create tag: "mynatca-sync"
    • Enable webhook topic: conversation_part.tag.created
    • Test workflow: Add member number to conversation attribute, add tag
    • Verify contact is updated and tag is removed
  • Verify Integration

    • Check Intercom contacts have member_number
    • Verify member_type, region, facility, positions are set
    • Test email lookup endpoint
    • Test real conversation (create test message)

Monitoring & Maintenance

Health Checks

# Platform health
curl https://platform.natca.org/api/health
 
# Intercom client health (via daily sync logs)
doctl apps logs <app-id> --component platform | grep "Rate limit"

Log Monitoring

Cron Management UI:

  • Navigate to https://platform.natca.org/cron
  • View real-time job status, last run, next run, and duration
  • Monitor error messages directly in the UI

Command Line Logs:

# View platform logs (includes all cron job execution)
doctl apps logs <app-id> --component platform --follow
 
# Filter for errors
doctl apps logs <app-id> --component platform | grep -i error
 
# Filter for Intercom events
doctl apps logs <app-id> --component platform | grep -i intercom

Performance Metrics

Monitor daily sync performance:

📊 Intercom Daily Sync Summary:
⏱️  Duration: 487.23s (target: < 600s)
📈 Total Members: 20145
✨ Created: 23
🔄 Updated: 20089
⏭️  Skipped: 33
🔧 Duplicates Resolved: 12
❌ Failed: 0 (target: < 10)

Key Metrics:

  • Duration: Should be < 10 minutes for ~20,000 members
  • Failed Count: Should be < 10 (< 0.05% failure rate)
  • Duplicates Resolved: Should decrease over time as the initial cleanup completes

Alerting

Set up alerts for:

  • High Failure Rate: Failed > 100 (> 0.5%)
  • Slow Sync: Duration > 15 minutes
  • Cron Job Failure: Exit code != 0
  • Webhook Errors: Error rate > 5%

Example Alert (DigitalOcean):

# .do/app.yaml
alerts:
- rule: DEPLOYMENT_FAILED
- rule: DOMAIN_FAILED
- rule: FUNCTIONS_FAILED  # For cron jobs

Upgrading

Update Intercom Client

# Update dependencies
npm update axios
 
# Test sync
node sync/intercom/daily-sync.js --dry-run --limit=10
 
# Deploy
git add package.json package-lock.json
git commit -m "Update Intercom client dependencies"
git push origin main

Schema Changes

If Intercom schema changes (new fields, deprecated fields):

  1. Update Data Mapping:

    // Update contactData in daily-sync.js
    const contactData = {
      name: `${member.firstname} ${member.lastname}`,
      external_id: member.membernumber.toString(),
      custom_attributes: {
        member_type: memberType,
        region: region,
        facility: facility,
        positions: positions.join(', '),
        // Add new field
        new_field: member.new_field
      }
    };
  2. Test Changes:

    node sync/intercom/daily-sync.js --dry-run --limit=10
  3. Deploy:

    git add sync/intercom/daily-sync.js
    git commit -m "Add new_field to Intercom sync"
    git push origin main

Rollback

Rollback Deployment

# DigitalOcean
doctl apps create-deployment <app-id> --force-rebuild
 
# Docker
docker-compose down
docker-compose up -d --build
 
# Kubernetes
kubectl rollout undo deployment/platform -n mynatca

Rollback Data Changes

If sync causes issues:

  1. Stop Platform Server (to prevent scheduled cron execution):

    # DigitalOcean: Scale down to 0 instances temporarily in dashboard
    # Docker: docker-compose stop platform
    # Kubernetes: kubectl scale deployment platform --replicas=0 -n mynatca
  2. Restore from Intercom Export:

    • Intercom → Settings → Data Export
    • Download contacts CSV
    • Re-import with correct data
  3. Re-run Audit Script:

    node sync/intercom/audit.js --force

Security Considerations

API Token Security

  • Never commit tokens to git
  • Use secrets management: DigitalOcean secrets, Kubernetes secrets, etc.
  • Rotate tokens regularly (every 90 days)
  • Limit token permissions: Only grant required scopes

Network Security

  • Use HTTPS for webhook URLs
  • Verify webhook signatures (if Intercom provides a signing secret; see the sketch below)
  • Restrict API access: Only allow Platform server IPs
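
As a hedged sketch of signature verification: Intercom webhook notifications are typically signed with an X-Hub-Signature header containing an HMAC-SHA1 of the raw request body keyed by the app's client secret. Confirm the header name and algorithm against Intercom's current webhook documentation before relying on this:

// Hypothetical verification helper; assumes X-Hub-Signature = "sha1=" + HMAC-SHA1(raw body, client secret)
const crypto = require('crypto');

function verifyIntercomSignature(rawBody, signatureHeader, clientSecret) {
  if (!signatureHeader) return false;
  const expected = 'sha1=' +
    crypto.createHmac('sha1', clientSecret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}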

Data Privacy

  • Only sync Active/Retired members (not all statuses)
  • Validate phone numbers before sending to Intercom (see the normalization sketch below)
  • Archive stale contacts (no conversations, no member match)
  • Log minimal PII (member number, not full names)
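
As an illustration of the phone-number rule, a minimal US-centric E.164 normalizer is sketched below; the sync's actual validation logic may differ:

// Hypothetical normalizer: returns an E.164 string or null (invalid numbers are skipped, not sent)
function toE164US(raw) {
  const digits = String(raw || '').replace(/\D/g, '');
  if (digits.length === 10) return `+1${digits}`;
  if (digits.length === 11 && digits.startsWith('1')) return `+${digits}`;
  return null;
}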

Troubleshooting Deployment

Cron Job Not Running

Problem: Daily sync not executing

Solutions:

  1. Access /cron UI and verify job status
  2. Check next scheduled run time in the UI
  3. Verify platform server is running: doctl apps list
  4. Check platform logs for cron initialization: doctl apps logs <app-id> --component platform
  5. Manually trigger via /cron UI or command line: npm run sync:intercom

Environment Variables Not Set

Problem: Sync fails with "undefined" errors

Solutions:

  1. Verify variables in dashboard: Settings → Environment Variables
  2. Check variable scope: Should be "All components" or specific component
  3. Redeploy after setting variables: doctl apps create-deployment <app-id>

Webhook Not Receiving Events

Problem: Webhook configured but no events received

Solutions:

  1. Verify webhook URL: https://platform.natca.org/api/intercom/webhook
  2. Check webhook is "Active" in Intercom dashboard
  3. Test with Intercom test webhook
  4. Check Platform server logs: doctl apps logs <app-id> --component platform

High Memory Usage

Problem: Sync job crashes with OOM error

Solutions:

  1. Reduce batch size: --batch-size=50 (default: 100)
  2. Increase instance size: apps-s-2vcpu-1gb (from apps-s-1vcpu-0.5gb)
  3. Process in smaller chunks: --limit=10000 (run twice daily)

Related Documentation