Daily Sync Process
The daily sync process automatically synchronizes Active and Retired member data from Supabase to Intercom nightly at 3:00 AM EST / 8:00 AM UTC.
Overview
Purpose: Keep Intercom contacts up-to-date with latest member information
Schedule: Nightly at 3:00 AM EST / 8:00 AM UTC (15 minutes after positions sync)
Duration: ~20 minutes for ~20,000 members (bounded by Intercom rate limiting; see Expected Performance)
Script: /sync/intercom/daily-sync.js
Cron Job: Configured manually in DigitalOcean dashboard
Process Flow
1. Fetch Members from Supabase
```javascript
const members = await supabase
  .from('members')
  .select('membernumber, firstname, lastname, email, phone, status, membertypeid, region_id, facility_id', { count: 'exact' })
  .in('status', ['Active', 'Retired'])
  .not('email', 'is', null)
  .order('membernumber');
```

Filters:
- Only `Active` and `Retired` members
- Must have an email address
- Ordered by member number for consistent processing
Pagination: The sync uses automatic pagination to handle all members (20,000+), not just the first 1,000:
```javascript
const PAGE_SIZE = 1000;
let allMembers = [];
let page = 0;
let hasMore = true;

while (hasMore) {
  const { data, error, count } = await query
    .range(page * PAGE_SIZE, (page + 1) * PAGE_SIZE - 1);
  allMembers = allMembers.concat(data);
  page++;
  // A page shorter than PAGE_SIZE means we've reached the end
  hasMore = data.length === PAGE_SIZE;
  logger.info(`Fetched page ${page} (${allMembers.length} members so far...)`);
}
```

This ensures all Active/Retired members are synced, regardless of total count. The `--limit` flag still works for testing with smaller datasets.
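The loop's termination logic can be checked in isolation by stubbing out the paged query (a sketch; `fetchRange` stands in for the Supabase `.range()` call):

```javascript
// Pagination sketch with the Supabase call stubbed out, so the stop
// condition (a short final page) can be verified on its own.
const PAGE_SIZE = 1000;

function fetchAllPages(fetchRange) {
  // fetchRange(from, to) returns the rows at positions [from, to] inclusive
  let all = [];
  let page = 0;
  let hasMore = true;
  while (hasMore) {
    const rows = fetchRange(page * PAGE_SIZE, (page + 1) * PAGE_SIZE - 1);
    all = all.concat(rows);
    page++;
    // A page shorter than PAGE_SIZE means the table is exhausted
    hasMore = rows.length === PAGE_SIZE;
  }
  return all;
}

// Stub a table of 2,345 rows: expect 3 fetches (1000 + 1000 + 345)
const table = Array.from({ length: 2345 }, (_, i) => i);
const fetched = fetchAllPages((from, to) => table.slice(from, to + 1));
console.log(fetched.length); // 2345
```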
2. Enrich Member Data
For each member, gather additional data:
```javascript
// Get region code from Supabase
const regionCode = await getRegionCode(member.region_id);

// Get facility code from Supabase
const facilityCode = await getFacilityCode(member.facility_id);

// Get positions from Supabase (already synced at 2:45 AM)
const positions = await getMemberPositions(supabase, member.membernumber);

// Determine region/facility (use "RNAV" for Retired members)
const region = member.status === 'Retired' ? 'RNAV' : (regionCode || '');
const facility = member.status === 'Retired' ? 'RNAV' : (facilityCode || '');

// Determine member type
const memberType = getMemberTypeForIntercom(member.status, member.membertypeid);

// Format phone to E.164
const formattedPhone = formatPhoneForIntercom(member.phone);
```

3. Prepare Contact Data
```javascript
const contactData = {
  name: `${member.firstname} ${member.lastname}`,
  external_id: member.membernumber.toString(),
  custom_attributes: {
    member_type: memberType,
    region: region,
    facility: facility,
    positions: positions.join(', ')
  }
};

// Add phone if valid E.164 format
if (formattedPhone) {
  contactData.phone = formattedPhone;
}
```

4. Upsert to Intercom
```javascript
try {
  // Create or update contact by email
  const result = await intercomClient.upsertContact(member.email, contactData);

  // Determine if created or updated
  const wasCreated = result.role === 'user' && !result.custom_attributes?.member_number;
  if (wasCreated) {
    logger.info(`Created contact for member #${member.membernumber}`);
    stats.created++;
  } else {
    logger.info(`Updated contact for member #${member.membernumber}`);
    stats.updated++;
  }
} catch (error) {
  // Handle errors (duplicates, validation, etc.)
}
```

5. Handle Duplicate, Archived, and Blocked Contacts
The sync handles multiple edge cases when upserting contacts to Intercom:
A. Duplicate Contacts (409 Conflict)
If Intercom returns a 409 Conflict error, the sync extracts the contact ID and resolves the conflict:
```javascript
if (error.message.includes('409') && error.message.includes('conflict')) {
  // Extract contact ID from error message
  const idMatch = error.message.match(/id=([a-f0-9]+)/);
  const contactId = idMatch ? idMatch[1] : null;

  // Update the specific contact
  await intercomClient.updateContact(contactId, customAttributes, builtInFields);
}
```

B. Archived Contacts
If the conflict is with an archived contact, the sync automatically unarchives it:
```javascript
const isArchivedConflict = error.message.includes('archived contact');
if (isArchivedConflict && contactId) {
  await intercomClient.unarchiveContact(contactId);
  // Then update the contact
}
```

C. Blocked Contacts (NEW)
Contacts that have been blocked in Intercom cannot be unarchived until they're unblocked. The sync now handles this automatically:
```javascript
try {
  await intercomClient.unarchiveContact(contactId);
} catch (unarchiveError) {
  // Detect blocked contact
  const isBlocked = unarchiveError.message.includes('not_restorable') ||
                    unarchiveError.message.includes('blocked');
  if (isBlocked) {
    // Unblock first
    await intercomClient.unblockContact(contactId);
    logger.info(`Unblocked contact ${contactId}`);

    // Then unarchive
    await intercomClient.unarchiveContact(contactId);
    logger.info(`Unarchived contact ${contactId} after unblocking`);

    // Finally update with member data
    await intercomClient.updateContact(contactId, customAttributes, builtInFields);
  }
}
```

Automatic Recovery Flow:
1. Detect 409 conflict with an archived contact
2. Try to unarchive → fails with `not_restorable` (contact is blocked)
3. Unblock the contact
4. Unarchive the contact (now succeeds)
5. Update with current member data
This resolves a previously unrecoverable state in which blocked contacts could never be updated.
6. Process in Batches
```javascript
const batchSize = 100; // Process 100 members at a time
for (let i = 0; i < members.length; i += batchSize) {
  const batch = members.slice(i, i + batchSize);
  for (const member of batch) {
    await syncMember(member);
  }
  // Small delay between batches (200ms)
  await sleep(200);
}
```

Performance: With automatic pagination, the sync processes all 20,000+ members in approximately 20 minutes, limited primarily by Intercom rate limiting (166 requests per 10 seconds).
7. Log Summary
```javascript
logger.info('Intercom Daily Sync Summary:');
logger.info(`Duration: ${duration.toFixed(2)}s`);
logger.info(`Total Members: ${stats.totalMembers}`);
logger.info(`Created: ${stats.created}`);
logger.info(`Updated: ${stats.updated}`);
logger.info(`Skipped: ${stats.skipped}`);
logger.info(`Duplicates Resolved: ${stats.duplicatesResolved}`);
logger.info(`Failed: ${stats.failed}`);
```

Data Sources
Primary: Supabase
All member data comes from Supabase:
| Data | Source Table | Notes |
|---|---|---|
| Member info | members | Name, email, phone, status, membertypeid |
| Region code | regions | Joined by region_id |
| Facility code | facilities | Joined by facility_id |
| Positions | positions | Filtered by membernumber |
Why Supabase?
- All data already synced (no MySQL dependency)
- Positions synced at 2:45 AM (fresh data)
- Faster queries, better performance
- Modern data access patterns
Retired Members
Special handling for Retired members:
- `region` = "RNAV" (Retired NATCA)
- `facility` = "RNAV"
- `member_type` = "Retired Member" (regardless of `membertypeid`)
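The rules above suggest a shape like the following for `getMemberTypeForIntercom` (a sketch: the Retired override matches this document, but the `membertypeid` mapping here is a hypothetical placeholder, not the script's actual table):

```javascript
// Sketch of getMemberTypeForIntercom. The 'Retired' override is the
// documented behavior; the ID-to-label table below is illustrative only.
const MEMBER_TYPE_LABELS = {
  1: 'Active Member',    // assumed ID
  2: 'Associate Member'  // assumed ID
};

function getMemberTypeForIntercom(status, membertypeid) {
  // Retired members always map to "Retired Member", regardless of membertypeid
  if (status === 'Retired') return 'Retired Member';
  return MEMBER_TYPE_LABELS[membertypeid] || 'Active Member';
}

console.log(getMemberTypeForIntercom('Retired', 2)); // "Retired Member"
```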
Command Line Options
Basic Usage
```bash
# Sync all Active/Retired members
npm run sync:intercom

# Or directly with Node
node sync/intercom/daily-sync.js
```

Testing & Development
```bash
# Test with limited members
node sync/intercom/daily-sync.js --limit=10

# Preview changes without updating Intercom (dry run)
node sync/intercom/daily-sync.js --dry-run

# Dry run with limited members
node sync/intercom/daily-sync.js --dry-run --limit=5

# Custom batch size (default: 100)
node sync/intercom/daily-sync.js --batch-size=50
```

Dry Run Mode
When the `--dry-run` flag is used, the sync:
- Fetches members from Supabase normally
- Prepares contact data
- Logs what would be done
- Does NOT call Intercom API
- Does NOT update contacts
- Stats show simulated results
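A minimal shape for this guard (a sketch: the function and option names here are assumptions, not the script's actual API):

```javascript
// Dry-run guard sketch: skip the Intercom write and record a simulated stat.
async function upsertMember(client, member, contactData, { dryRun, logger, stats }) {
  if (dryRun) {
    logger.info(`🔍 [DRY RUN] Would upsert member #${member.membernumber} (${member.email})`);
    stats.updated++; // counted as simulated
    return null;     // no API call made
  }
  return client.upsertContact(member.email, contactData);
}
```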
Example output:
```
🔍 [DRY RUN] Would upsert member #12345 (john.doe@example.com)
🔍 [DRY RUN] Would archive 2 duplicate(s) for jane.doe@example.com
📊 Intercom Daily Sync Summary:
⏱️ Duration: 15.23s
📈 Total Members: 10
✨ Created: 3 (simulated)
🔄 Updated: 7 (simulated)
⏭️ Skipped: 0
🔧 Duplicates Resolved: 2 (simulated)
❌ Failed: 0
```

Duplicate Resolution Strategy
Detection
Duplicates are detected when:
- Intercom returns 409 Conflict on upsert
- Error message contains "conflict" keyword
- Contact ID is extracted from error message
Resolution Process
The sync now uses an optimized approach:
1. Extract contact ID from the 409 error message
   - Format: "A contact matching those details already exists with id=68dfdac9271b2c0f1b6573b2"
   - Or: "An archived contact matching those details already exists with id=66208121143d787bca426074"
2. Check if archived - if the error mentions "archived contact":
   - Unarchive the contact
   - If unarchiving fails with `not_restorable`, unblock first
3. Update the specific contact using the extracted ID
This approach is more efficient than searching for all duplicates and archiving older ones.
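The extraction step can be isolated into a small helper (the helper name is an assumption; the regex is the one used in step 5A above):

```javascript
// Pull the Intercom contact ID out of a 409 error message.
function extractContactId(message) {
  const idMatch = message.match(/id=([a-f0-9]+)/);
  return idMatch ? idMatch[1] : null;
}

const msg = 'A contact matching those details already exists with id=68dfdac9271b2c0f1b6573b2';
console.log(extractContactId(msg));              // "68dfdac9271b2c0f1b6573b2"
console.log(extractContactId('unrelated error')); // null
```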
Edge Cases
Blocked and Archived Contact
If a contact is both blocked and archived:
```javascript
try {
  await intercomClient.unarchiveContact(contactId);
} catch (unarchiveError) {
  if (unarchiveError.message.includes('not_restorable')) {
    // Unblock first
    await intercomClient.unblockContact(contactId);
    // Then unarchive
    await intercomClient.unarchiveContact(contactId);
    // Finally update
    await intercomClient.updateContact(contactId, data);
  }
}
```

Contact ID Not Found in Error
If the ID cannot be extracted from the error message, fall back to search:
```javascript
if (!contactId) {
  logger.info('Could not extract contact ID from error, searching for duplicates...');
  contactId = await handleDuplicateContact(email);
}
```

Contact Already Archived During Update
If a contact becomes archived between detection and update:
```javascript
if (error.message.includes('404') || error.message.includes('archived')) {
  // Try to unarchive first, then update
  await intercomClient.unarchiveContact(contactId);
  await intercomClient.updateContact(contactId, data);
}
```

Rate Limiting
Strategy
- Intercom Limit: 10,000 requests/minute
- Our Limit: 166 requests per 10 seconds (~1,000/minute, roughly 10% of the cap)
- Reason: Evenly distribute load, prevent bursts
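The strategy can be modeled as a fixed-window counter. This standalone sketch (with an injected clock so it can be exercised without real waiting) is illustrative only, not the client's actual code:

```javascript
// Fixed-window limiter sketch: allow maxRequests per windowMs, and report
// how long a caller would need to wait once the budget is exhausted.
class RateLimiter {
  constructor({ maxRequests = 166, windowMs = 10000, now = Date.now } = {}) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.now = now;
    this.windowStart = now();
    this.requestsInWindow = 0;
  }

  // Returns 0 if the request may go now, else the ms to wait.
  reserve() {
    const elapsed = this.now() - this.windowStart;
    if (elapsed >= this.windowMs) {
      // Window expired: start a fresh one
      this.windowStart = this.now();
      this.requestsInWindow = 0;
    }
    if (this.requestsInWindow >= this.maxRequests) {
      return this.windowMs - elapsed;
    }
    this.requestsInWindow++;
    return 0;
  }
}

// Fake clock: two requests allowed per 10s window
let t = 0;
const limiter = new RateLimiter({ maxRequests: 2, windowMs: 10000, now: () => t });
console.log(limiter.reserve()); // 0
console.log(limiter.reserve()); // 0
console.log(limiter.reserve()); // 10000 (budget exhausted)
t = 10001;
console.log(limiter.reserve()); // 0 (new window)
```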
Implementation
The Intercom client automatically handles rate limiting:
```javascript
if (this.requestsInWindow >= this.maxRequestsPer10Seconds) {
  const waitTime = this.windowDuration - windowElapsed;
  logger.info(`Rate limit reached, waiting ${Math.ceil(waitTime / 1000)}s...`);
  await sleep(waitTime);
  this.requestsInWindow = 0;
  this.windowStart = Date.now();
}
```

Monitoring
Watch for rate limit messages in logs:
```
⏱️ Rate limit reached, waiting 5s...
requests_in_window: 166
max_requests: 166
wait_seconds: 5
```

Error Handling
Member Sync Errors
Errors are logged and tracked but don't stop the sync:
```javascript
try {
  await syncMember(member);
  stats.processed++;
} catch (error) {
  stats.failed++;
  stats.errors.push({
    member_number: member.membernumber,
    email: member.email,
    error: error.message,
    stack: error.stack // NEW: Stack traces for debugging
  });
  logger.error(`Failed to sync member #${member.membernumber}`, {
    error: error.message,
    email: member.email,
    name: `${member.firstname} ${member.lastname}`, // NEW: Member context
    stack: error.stack
  });
}
```

Common Errors
| Error | Cause | Resolution |
|---|---|---|
| 409 Conflict (archived) | Duplicate with archived contact | Automatic: Extract ID, unarchive, update |
| 409 Conflict (active) | Duplicate with active contact | Automatic: Extract ID, update directly |
| 400 Not Restorable | Contact is blocked | Automatic: Unblock → unarchive → update |
| 404 Not Found | Contact was archived/deleted | Automatic: Unarchive if possible, or skip |
| 422 Validation | Invalid phone | Skip phone field, log warning |
| 429 Rate Limit | Too many requests | Automatic: Wait and retry |
| 500 Server Error | Intercom API issue | Log error, continue sync |
| Network timeout | Connection issue | Retry with exponential backoff |
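For the 422 case, the phone number is normalized up front. A hypothetical sketch of `formatPhoneForIntercom` for US numbers (the real helper may accept more formats):

```javascript
// Hypothetical formatPhoneForIntercom: normalize US numbers to E.164,
// returning null for anything that can't be normalized so the phone
// field is skipped instead of triggering a 422 from Intercom.
function formatPhoneForIntercom(phone) {
  if (!phone) return null;
  const digits = phone.replace(/\D/g, '');
  if (digits.length === 10) return `+1${digits}`;                          // bare 10-digit US number
  if (digits.length === 11 && digits.startsWith('1')) return `+${digits}`; // leading country code
  return null; // ambiguous or invalid: skip the field
}

console.log(formatPhoneForIntercom('(202) 555-0134')); // "+12025550134"
console.log(formatPhoneForIntercom('not a phone'));    // null
```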
Enhanced Error Detection
The sync now detects and handles multiple error types with improved logging:
Blocked Contact Detection:
- Error code: `not_restorable`
- Error message contains: "blocked" or "User has been blocked"
- HTTP status: 400

Archived Contact Detection:
- Error code: `conflict` with "archived contact" in the message
- HTTP status: 409
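These detection rules can be collected into a single classifier (a sketch; the helper name is an assumption):

```javascript
// Classify an Intercom error from its message, per the detection rules above.
function classifyIntercomError(message) {
  const m = message.toLowerCase();
  if (m.includes('not_restorable') || m.includes('blocked')) return 'blocked';
  if (m.includes('conflict') && m.includes('archived contact')) return 'archived';
  if (m.includes('conflict')) return 'duplicate';
  return 'unknown';
}

console.log(classifyIntercomError('409 conflict: An archived contact matching those details already exists'));
// "archived"
```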
Example Error Flow:
1. Try upsert → 409 conflict (archived contact with id=abc123)
2. Try unarchive → 400 not_restorable (contact is blocked)
3. Unblock contact → 200 success
4. Unarchive contact → 200 success
5. Update contact → 200 success

Supabase Connection Failures
Positions lookup includes retry logic:
```javascript
async function getMemberPositions(supabase, memberNumber, retryCount = 0) {
  try {
    const { data, error } = await supabase
      .from('positions')
      .select('positiontype')
      .eq('membernumber', memberNumber);
    if (error) throw new Error(`Failed: ${error.message}`);
    // Map codes to names, dropping any unknown position codes
    return data.map(p => POSITION_CODE_TO_NAME[p.positiontype]).filter(Boolean);
  } catch (error) {
    // Retry network failures up to 3 times with exponential backoff
    if (retryCount < 3 && isNetworkError(error)) {
      const delay = Math.pow(2, retryCount) * 1000; // 1s, 2s, 4s
      await sleep(delay);
      return getMemberPositions(supabase, memberNumber, retryCount + 1);
    }
    throw error;
  }
}
```

Monitoring & Logs
Log Levels
- `info` - Normal operations, progress updates
- `warn` - Duplicates, skipped members, retries
- `error` - Failed syncs, API errors
Key Log Messages
Start:
```
🚀 Starting Intercom daily sync...
📊 Processing limited to 10 members (if --limit used)
🔍 DRY RUN MODE - No changes will be made (if --dry-run)
```

Progress:
```
📊 Progress: 100/20000 members synced (every 20 members)
```

Duplicates:
```
⚠️ Duplicate detected for member #12345 (john.doe@example.com), resolving...
🔍 Found 3 contacts with email john.doe@example.com:
✅ PRIMARY (keeping): 67eb0a6e8f72a828ba21c342 - John Doe - john.doe@example.com - created: 2025-10-01T12:00:00Z
🗑️ DUPLICATE 1 (archiving): 66a13ba93686d4c0e4fdb04e - John Doe - john.doe@example.com - created: 2025-09-15T08:30:00Z
🗑️ DUPLICATE 2 (archiving): 65f2c1a8d5e3f123456789ab - J. Doe - john.doe@example.com - created: 2025-08-20T14:15:00Z
📦 Archived older duplicate contact 66a13ba93686d4c0e4fdb04e
📦 Archived older duplicate contact 65f2c1a8d5e3f123456789ab
✅ Resolved duplicates and updated contact for member #12345 (John Doe)
```

Summary:
```
📊 Intercom Daily Sync Summary:
⏱️ Duration: 487.23s
📈 Total Members: 20145
✨ Created: 23
🔄 Updated: 20089
⏭️ Skipped: 33
🔧 Duplicates Resolved: 12
❌ Failed: 0
```

Accessing Logs
Development:
```bash
node sync/intercom/daily-sync.js 2>&1 | tee sync.log
```

Production (DigitalOcean):
```bash
# View cron job logs
doctl apps logs <app-id> --component sync-intercom --follow

# Or via dashboard
# https://cloud.digitalocean.com/apps/<app-id>/logs
```

Scheduling
DigitalOcean App Platform
Configured manually in the DigitalOcean dashboard:
Job Configuration:
- Name: `sync-intercom`
- Run Command: `npm run sync:intercom`
- Kind: CRON
- Schedule: `0 8 * * *`
- Instance Size: Basic (512 MB RAM)
Cron Schedule Breakdown:
- `0` - Minute (00)
- `8` - Hour (8 AM UTC / 3 AM EST)
- `*` - Day of month (every day)
- `*` - Month (every month)
- `*` - Day of week (every day)
Why 3:00 AM EST / 8:00 AM UTC?
- Runs 15 minutes after positions sync (2:45 AM EST / 7:45 AM UTC)
- Ensures fresh position data
- Low Intercom API traffic
- Before US business hours
Manual Trigger
```bash
# SSH into production server
ssh platform.natca.org

# Run sync manually
cd /app
npm run sync:intercom

# Or with options
node sync/intercom/daily-sync.js --limit=100
```

Performance Optimization
Batch Processing
- Process 100 members at a time
- 200ms delay between batches
- Prevents memory spikes
- Allows garbage collection
Parallel Queries
Member data fetched in parallel:
```javascript
const [regionCode, facilityCode, positions] = await Promise.all([
  getRegionCode(member.region_id),
  getFacilityCode(member.facility_id),
  getMemberPositions(supabase, member.membernumber)
]);
```

Rate Limit Optimization
- 166 requests per 10 seconds (not bursts of 10,000/min)
- Prevents rate limit errors
- Smooth, consistent API usage
- Headroom for webhook spikes
Expected Performance
| Members | Duration | Requests/sec | Rate Limit Hits |
|---|---|---|---|
| 1,000 | ~60s | ~17 | 0 |
| 10,000 | ~600s | ~17 | 0 |
| 20,000 | ~1200s | ~17 | 0 |
Performance & Scale
Member Count Handling
Previous Limitation: The sync was limited to 1,000 members due to Supabase's default row limit.
Current Implementation: Automatic pagination fetches all members:
- Fetches in pages of 1,000 members
- Continues until all Active/Retired members are retrieved
- Logs progress: "Fetched page X (Y members so far...)"
- Total count displayed at start
Example Output:
```
Total members to fetch: 20123 (fetching in pages of 1000)
Fetched page 1 (1000 members so far...)
Fetched page 2 (2000 members so far...)
...
Fetched page 20 (20000 members so far...)
Found 20123 members with email addresses
```
Bottleneck: Intercom rate limiting (166 requests/10 seconds) is the primary performance constraint.
Troubleshooting
See Troubleshooting Guide for detailed solutions.
Common Issues
Sync Not Running
- Check cron job status in DigitalOcean dashboard
- Verify `npm run sync:intercom` works manually
- Check environment variables are set
High Failure Rate
- Review error logs for patterns
- Check Intercom API status
- Verify Supabase connection
"Contact is Blocked" Errors
- Now handled automatically
- Sync will unblock → unarchive → update
- Check logs for "Successfully unblocked contact" messages
Only 1,000 Members Syncing
- Fixed in current version with pagination
- Check logs for "Total members to fetch: X" message
- Should show full count (20,000+), not just 1,000
Slow Sync Times
- Check rate limit logs (should not hit limit)
- Monitor Supabase query performance
- Verify batch size (default: 100)