💃 Crack the Code: Salesforce Observability🕺
See deeper. Fix faster. Sleep better.
Good morning, Salesforce Nerds! Ever feel like Salesforce is gaslighting you?
A Flow fails quietly. Apex errors get swallowed. Users report slowness, but the debug logs look clean. ✨
This is what low observability looks like. A world where problems exist but leave no trail.
Observability isn’t just a fancy word for logging or monitoring. 🚫
It’s the science (and art) of understanding what's happening inside your system just by examining its outputs. It answers the why, not just the what.
In the Salesforce world - where you're juggling Flows, Apex, Scheduled Jobs, Platform Events, and the occasional governor tantrum - observability isn’t optional. ❎
It’s how you get ahead of outages, diagnose issues faster, and ship with confidence.

OBSERVABILITY IS MORE THAN JUST LOGS
LOGS, METRICS, TRACES, OH MY
First, let’s clear the fog with the Three Pillars of Observability. 🏦
AKA, the Holy Trinity of “what broke this time?”
📝 Logs – Granular, timestamped records of events. Think System.debug(), Flow fault messages, or Platform Event payloads. You’ll need structure, correlation IDs, and a way to find them when it hits the fan.
📈 Metrics – Quantitative data over time. Apex queue depths, Flow error counts, CPU time per transaction. Great for spotting trends (or explosions) before they escalate.
🧩 Traces – The hardest and most valuable: the full journey of a request across components. In Salesforce, this means tracking a user action through Flow, Apex, async jobs, and even outbound messages—tied together by a correlation ID.
Observability connects all three. 🔗
Logging alone tells you something happened. Observability tells you why it happened, where it happened, and what to fix. 🛠️
SPAGHETTI FLOWS AND SILENT FAILURES ABOUND
WHY SALESFORCE IS SO TRICKY
Salesforce makes observability hard because of how it abstracts execution. ⚡️
💨 Flow Errors Disappear – Unless fault paths are configured and logged intentionally, they vanish without a trace.
🙉 Async Jobs Fail Quietly – Queueables and Scheduled Jobs don’t always scream when they die. Sometimes they whisper... into a void.
💥 Governor Limits Obfuscate Root Cause – You’ll see “Too many SOQL queries” but not what caused it unless you had breadcrumbs in place.
✏️ Debug Logs ≠ Observability – You only get logs after the fact, and only if someone thought to check the right user or job. That’s not observability. That’s digital archaeology.
Your solution? A framework. 👇️
STRUCTURE NOW. SANITY LATER.
BUILD THE FRAMEWORK FIRST
To achieve observability in Salesforce, you need to architect for it. Here's how:
🧱 1. Structured Logging
Start by ditching the spaghetti System.debug() statements. Your logs should be machine-readable and human-friendly, ideally in JSON format. This makes it easier to ingest into tools like Gearset, Splunk, or even a custom Lightning component dashboard.
Structured logs allow filtering by context, severity, or component. This turns chaos into clarity during root cause analysis. 👓️
System.debug(JSON.serializePretty(new Map<String, Object>{
    'level' => 'ERROR',
    'correlationId' => correlationId,
    'context' => 'AccountTriggerHandler.updateAccounts',
    'message' => 'Too many SOQL queries',
    'timestamp' => DateTime.now()
}));
🧪 2. Correlation IDs
Without a correlation ID, you're looking at logs like puzzle pieces from different boxes. Generate a UUID per transaction. Store it at the start of your user interaction (like in a Lightning Controller or Flow variable) and pass it everywhere: Apex, Flows, Platform Events. 🧠
This allows you to trace a user action across components, even when they’re decoupled or asynchronous. It’s your best friend when debugging “phantom errors” that jump across layers. 👍️
String correlationId = UUID.randomUUID().toString();
// Store in Custom Setting or Static var for reuse
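Here’s a minimal sketch of that idea as a small helper class. The name CorrelationContext, and the lazy-generation approach, are illustrations rather than a standard library class:

public with sharing class CorrelationContext {
    // Hypothetical helper: one correlation ID per transaction, shared by
    // Apex, Flows (via invocable methods), and Platform Event payloads.
    private static String correlationId;

    public static String getId() {
        // Lazily generate the ID the first time it's needed, then reuse it.
        if (correlationId == null) {
            correlationId = UUID.randomUUID().toString();
        }
        return correlationId;
    }

    public static void setId(String id) {
        // Let a caller (e.g. a Lightning controller) push in an ID it already created.
        correlationId = id;
    }
}

Anywhere you log, enqueue a job, or publish a Platform Event, call CorrelationContext.getId() and include the value in the payload. Suddenly the pieces of a single transaction are stitchable again.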
🧼 3. Observable Flows
Flows often fail silently if you don't wire up fault connectors. Always add error handling branches and route them to a common Apex logging method or custom object. Include the Record.Id, Flow name, user ID, and that sweet, sweet correlationId. 😋
Flows are great for no-code automation, but that doesn’t mean they should be no-trace. Logging from Flows helps you close the observability gap without writing extra Apex.
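One way to wire this up is an @InvocableMethod that every fault connector can call. Everything here (the FlowLogger class, the Flow_Log__c object, and its fields) is a hypothetical sketch you’d adapt to your own logging object:

public with sharing class FlowLogger {
    // Shape of the data a Flow fault path passes in.
    public class LogRequest {
        @InvocableVariable(required=true) public String flowName;
        @InvocableVariable public Id recordId;
        @InvocableVariable public String errorMessage;
        @InvocableVariable public String correlationId;
    }

    // Callable from any Flow fault connector via an Action element.
    @InvocableMethod(label='Log Flow Fault')
    public static void log(List<LogRequest> requests) {
        List<Flow_Log__c> logs = new List<Flow_Log__c>();
        for (LogRequest req : requests) {
            logs.add(new Flow_Log__c(              // assumed custom object and fields
                Flow_Name__c      = req.flowName,
                Record_Id__c      = req.recordId,
                Message__c        = req.errorMessage,
                Correlation_Id__c = req.correlationId,
                Running_User__c   = UserInfo.getUserId()
            ));
        }
        insert logs;
    }
}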
🚨 4. Threshold Alerts
Logging is only half the battle. Alerting is what wakes you up (ideally before your stakeholders do). Define acceptable error rates, execution times, or queue lengths. When thresholds are exceeded, trigger an email, Slack message, or incident ticket. 🔥
Don’t go overboard. Alert fatigue is real. Focus on actionable thresholds that reflect real business risk. Like "5 Flow errors in 10 minutes" or "Apex CPU time over 9,000ms".
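As a rough illustration, a Schedulable like the one below could enforce that “5 errors in 10 minutes” rule against the hypothetical Flow_Log__c object from the sketch above. The class name, object, fields, and email address are all placeholders:

public with sharing class ErrorRateAlert implements Schedulable {
    private static final Integer THRESHOLD = 5;    // "5 errors in 10 minutes"

    public void execute(SchedulableContext ctx) {
        DateTime windowStart = DateTime.now().addMinutes(-10);
        Integer recentErrors = [
            SELECT COUNT()
            FROM Flow_Log__c                       // assumed logging object
            WHERE CreatedDate >= :windowStart
        ];
        if (recentErrors >= THRESHOLD) {
            // Swap this for a Slack webhook callout or a ticketing integration as needed.
            Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
            mail.setToAddresses(new List<String>{ 'oncall@example.com' });
            mail.setSubject(recentErrors + ' Flow errors in the last 10 minutes');
            mail.setPlainTextBody('Check the recent Flow_Log__c records for details.');
            Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });
        }
    }
}

Schedule it with System.schedule() and a cron expression that matches how often you want the check to run.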
🔁 5. Job & Batch Monitoring
Scheduled Jobs and Queueables often fail with little evidence. Wrap them in try/catch blocks and log both outcomes, success and failure, along with job duration and impacted records.
For extra credit, store this info in a custom “Job Execution” object with a dashboard view. You'll have historical insight, trend data, and a great tool for post-mortems. 💯
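A minimal sketch of that wrapper for a Queueable, assuming a Job_Execution__c custom object with the fields shown and the CorrelationContext helper sketched earlier (the business logic itself is elided):

public with sharing class AccountCleanupJob implements Queueable {

    public void execute(QueueableContext ctx) {
        Long startedAt = System.currentTimeMillis();
        Job_Execution__c log = new Job_Execution__c(        // assumed custom object and fields
            Job_Name__c       = 'AccountCleanupJob',
            Correlation_Id__c = CorrelationContext.getId()  // helper sketched earlier
        );
        try {
            // ... actual business logic goes here ...
            log.Status__c = 'Success';
        } catch (Exception e) {
            log.Status__c = 'Failed';
            log.Error_Message__c = e.getMessage() + '\n' + e.getStackTraceString();
        } finally {
            // Record duration and outcome whether the job succeeded or blew up.
            log.Duration_Ms__c = System.currentTimeMillis() - startedAt;
            insert log;
        }
    }
}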
💡 6. Pattern: Decorator Logging
Use the Decorator Pattern (or Service Wrapper) to inject logging behavior into your business logic. Instead of scattering System.debug() calls everywhere, centralize your logging logic around method entry/exit, exception handling, and performance metrics. 💹
This makes your codebase more maintainable, and your logs more predictable. You’ll spend less time hunting through noise and more time understanding what actually happened.
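Here’s what that can look like as a thin wrapper, assuming an existing AccountService class with an updateAccounts(List<Account>) method and reusing the structured-log shape from earlier:

public with sharing class LoggedAccountService {
    private final AccountService inner = new AccountService();  // assumed existing service

    public void updateAccounts(List<Account> accounts) {
        Long startedAt = System.currentTimeMillis();
        logEvent('INFO', 'Entered with ' + accounts.size() + ' records');
        try {
            inner.updateAccounts(accounts);   // delegate to the real business logic
            logEvent('INFO', 'Completed in ' + (System.currentTimeMillis() - startedAt) + ' ms');
        } catch (Exception e) {
            logEvent('ERROR', e.getMessage() + '\n' + e.getStackTraceString());
            throw e;   // rethrow so callers still see the failure
        }
    }

    // One place to change how logging works, instead of hunting down debug statements.
    private void logEvent(String level, String message) {
        System.debug(JSON.serializePretty(new Map<String, Object>{
            'level'         => level,
            'correlationId' => CorrelationContext.getId(),   // helper sketched earlier
            'context'       => 'LoggedAccountService.updateAccounts',
            'message'       => message,
            'timestamp'     => DateTime.now()
        }));
    }
}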
YOU DESERVE BETTER THAN DEBUG LOGS
OBSERVABILITY, THE GEARSET WAY
Once you’ve built the foundation, Gearset Observability can take it to the next level. 🔼
Gearset’s observability platform hooks into your Salesforce org(s) and captures:
👉️ Flow errors and their fault paths
👉️ Apex exceptions (with context!)
👉️ Governor limit overages
👉️ Deployment-related issues
👉️ Correlation of logs to org metadata
Even better, it provides visual dashboards, anomaly detection, and alerting without forcing your team to spelunk through CSV logs or reinvent Splunk. 🎉
It’s observability done for you, with Salesforce-specific context that generic tools (like CloudWatch or Datadog) just don’t have. 💥
YOU CAN’T FIX WHAT YOU CAN’T SEE
KNOW BEFORE YOU GO BOOM
Observability isn’t about preventing every error.
It’s about knowing the moment something breaks, understanding why, and fixing it fast. 🏁
With a solid framework in place and tools like Gearset to amplify your efforts, you can:
✂️ Cut mean time to recovery (MTTR)
❌ Prevent cascading failures
🔎 Uncover silent performance degraders
🫠 Prove issues weren’t “just user error”
In short: you stop reacting and start anticipating. 🔄
So next time someone says, “Salesforce is acting weird,” you won’t be scrambling through debug logs that expired before anyone thought to look.
You’ll have structured data, visual traces, and alerts that do the yelling for you.
Now that’s observable. 🔍️
SOUL FOOD
Today’s Principle
"Observability is the degree to which the results of an innovation are visible to others."
and now....Salesforce Memes